Abstract
Purpose
To improve shared decision making (SDM) with advanced cancer patients, communication skills training for oncologists is needed. The purpose of this study was to examine the effects of blended online learning (i.e. an e-learning and an online training session) for oncologists about SDM in palliative oncological care and to compare this blended format with a more extensive, fully in-person face-to-face training format.
Methods
A one-group pre-posttest design was adopted. Before (T0) and after (T2) the training, participants conducted simulated consultations (SPAs) and completed surveys; after the e-learning (T1), an additional survey was filled out. The primary outcome was observed SDM (OPTION12 and 4SDM). Secondary outcomes included observed SDM per stage, SPA duration and decision made, as well as oncologists’ self-reported knowledge, clinical behavioural intentions, satisfaction with the communication and evaluation of the training. Additionally, outcomes of the blended learning were compared with those of the face-to-face training cohort. Analyses were conducted in SPSS using linear mixed models.
Results
Oncologists (n = 17) showed significantly higher SDM scores after the blended online learning. Scores on the individual SDM stages, the frequency of postponed decisions, and oncologists’ beliefs about capabilities, knowledge and satisfaction also increased after the blended learning. Consultation duration was unchanged. The training was evaluated as satisfactory. Compared with the face-to-face training, the effects of the blended learning were smaller.
Conclusion
Blended online SDM training for oncologists was effective. However, the effects were smaller compared to face-to-face training. The availability of different training formats provides opportunities for tailoring training to the wishes and needs of learners.
Introduction
For most patients with metastatic cancer, the primary goals of anti-cancer treatment are maintaining quality of life and prolonging survival. However, treatment options have uncertain, possibly limited benefits with high burden. Alternatively, patients may choose to forego anti-cancer treatment. Often, no single best treatment strategy exists. In this setting, shared decision-making (SDM) is required to provide care that best matches patients’ values and preferences. SDM involves four steps: (1) introducing choice, (2) explaining options with related pros and cons, (3) elucidating patients’ values and constructing preferences and (4) jointly making or postponing the decision [1]. SDM is advocated because of respect for patient autonomy [1, 2], reports of positive patient outcomes, including improved satisfaction and less decisional conflict [3], and patients’ wish to be involved in SDM [4].
Although physicians value SDM [5], observational studies show that SDM is not always visible in palliative cancer care [6,7,8,9,10]. Often, limited awareness is created about available treatment options and the option to refrain from chemotherapy [6, 7]. Patients do not always receive clear information about the survival benefit of palliative chemotherapy [8], nor are their values and appraisals of treatment option characteristics explicitly addressed [6, 9]. Lastly, patients’ preferred decision-making role is infrequently elicited, and the decision-making process is not matched accordingly [10].
Physician training is proposed to facilitate the implementation of SDM. Several communication skills training (CST) programs on SDM have been developed [11] and have been shown to improve SDM [12,13,14]. Blended learning formats, i.e. online learning with some level of learner control (e.g. over time, place or pace) combined with more traditional instructor-led synchronous learning [15], are increasingly adopted for CST because of their flexibility, richness and cost-effectiveness [16]. Online and blended CST, both with and without participant interaction, benefits cancer and palliative healthcare professionals [17], and its completion rate can be up to six times higher compared to traditional training [18]. Although a review comparing e-learning or blended learning with conventional learning suggests that e-learning may be at least as effective as conventional training, no definite conclusions can be drawn given the large heterogeneity across studies [19].
In response to the call for more research into the effects of different formats of CST about SDM [14, 20], a blended online learning format (4 hours) was developed based on a previously evaluated, highly effective and intensive in-person face-to-face training (10 hours) on SDM in palliative oncological care [12, 13]. The aim of this study is to examine the effects of this blended online learning. We hypothesise that the blended online learning will improve observed SDM about palliative systemic treatment in simulated consultations. Secondary outcomes include observed SDM per stage, knowledge, clinical behavioural intentions, satisfaction with communication, consultation duration, decision made and evaluation of the blended learning. Additionally, we aimed to compare the effect of the blended online format with that of a more extensive in-person face-to-face training, which was evaluated in a similar design.
Materials and methods
The Human Ethics Committee at the Amsterdam UMC, location AMC, provided ethical clearance for the study, and local permission was obtained at all participating hospitals. The STROBE guidelines [21] were followed in this report.
Design
The study adopted a one-group pre-posttest design (Fig. 1). Participants engaged in standardised patient assessments (SPAs), i.e. simulated consultations with actors, at baseline (T0) and after the training (T2). In addition, participants filled out surveys at baseline (T0), after completing the e-learning (T1) and after the second SPA (T2).
Setting and participants
Participants were medical oncologists (in training), who regularly have decision-making conversations with advanced cancer patients regarding starting, continuing or changing palliative systemic treatment.
Sample size
Based on previously reported effect sizes [12, 13], the study was powered to detect a large effect (Cohen’s d = 0.8). This required a sample size of fifteen oncologists (G*Power 3.1.9.2, α = 0.05, power (1 − β) = 0.80; paired t-test).
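The reported sample size can be reproduced outside G*Power; the sketch below, an illustration using Python's statsmodels rather than the software the authors used, solves the same paired t-test power equation (d = 0.8, two-sided α = 0.05, power = 0.80):

```python
# Sample-size calculation for a paired t-test, mirroring the reported
# G*Power settings: Cohen's d = 0.8, two-sided alpha = 0.05, power = 0.80.
from math import ceil
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.8, alpha=0.05,
                             power=0.80, alternative='two-sided')
print(ceil(n))  # 15 -> fifteen oncologists, matching the reported sample size
```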
Recruitment
Potential participants were contacted via the medical oncology departments of hospitals until at least fifteen oncologists were recruited. Interested oncologists were informed about the study by e-mail and received an information letter and an informed consent form, which was signed by all participants before the baseline SPA was performed. After attending the blended learning, oncologists received accreditation from the Netherlands Association of Internal Medicine.
Training
The blended online learning consisted of two parts: an asynchronous component (e-learning) and a synchronous component with an instructor (online training session). We originally planned an in-person training session but were compelled to switch to an online modality due to COVID-19 restrictions. Both training parts addressed SDM knowledge, attitude (i.e. motivation and personal barriers) and skills (i.e. the ability to apply the four stages of SDM). The e-learning consisted of three obligatory modules: (1) theory of SDM, (2) applying SDM and (3) SDM in palliative care (e.g. communication about prognosis and incorporating advance care planning), which were estimated to take 1 hour in total. The content of the training session was based on the previously evaluated face-to-face training [12, 13]. It adopted behaviour change techniques [22], among which providing instruction and prompting practice through role-play with professional actors according to the fishbowl working format, in which one learner practiced one of the stages of SDM with an actor while the other participants observed and provided feedback [23]. The online training sessions were provided in small groups (n = 2–5) by an experienced trainer in a 3-hour session. Afterwards, participants received a pocket-size card with the four SDM steps and example phrases as a follow-up prompt [22]. In total, the training was estimated to take 4.5 hours on average. The blended learning was piloted in an in-person setting with six oncologists (in training) from three hospitals, after which small modifications were made.
SPAs
Two different standardised patient assessment (SPA) cases, adopted from the previous trial [12], reflected a patient with either metastatic gastric or oesophageal cancer who met the oncologist to discuss the start of first-line palliative chemotherapy. For each participant, the cases were randomly assigned to either T0 or T2. Participants received a simulated medical file. Three experienced professional male actors (aged 57–64 years) played both roles; two of the three actors had also participated in the previous trial [12]. The SPAs took place online due to COVID-19 restrictions and were video recorded (August 2020 to May 2021).
Measurements
The outcomes were assessed at levels one (reaction, i.e. evaluation of training) and two (learning, i.e. self-reported changes or observed changes in simulated settings) of Kirkpatrick’s Model of Training Evaluation [24].
Sample characteristics
Participants
Oncologists reported their age, sex, whether or not they were in training, years of experience in medical oncology (including residency), the number of palliative cancer patients in their care per month and receipt of communication skills training during medical school, residency and postgraduate education (yes/no). Besides these background characteristics, both oncologists’ perception of their patients’ attitude towards SDM and their own attitude were assessed with the Control Preference Scale (CPS), a 1-item measure with five different treatment decision-making roles [25]. The items were rearranged to reflect an active, shared or passive role of patients [26] or an informative, SDM or paternalistic role of oncologists [5].
SPAs
After each SPA, oncologists were asked how realistic and comparable to their clinical practice the simulated consultation was using four study-specific items with Likert scale responses (1–10).
Primary outcome
The primary outcome was the level of SDM as assessed from the video-recorded SPAs using two instruments. First, the Observing Patient Involvement scale (OPTION12) was used, a widely adopted 12-item instrument scoring physician communicative behaviour associated with SDM [27, 28]. Items are rated on a 5-point Likert scale (0: not observed–4: very high standard), and the sum score is transformed to a total out of 100. Next to the general OPTION12 manual, a study-specific manual from the previous evaluation study was used [12]. Second, the 4SDM was used, an instrument developed by Henselmans et al. [12] based on the four-stage SDM model [1]. The 4SDM has eight items, which are coded on a 4-point Likert scale (0: not observed–3: observed and of high quality). Two blinded assessors rated the video-recorded consultations. The coding process consisted of training, calibration to achieve sufficient interrater reliability and independent coding. Since the ICCs and kappas were not considered sufficient for fully independent coding, all SPAs were double coded and scores were averaged or discussed until consensus was reached (Appendix 1).
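The OPTION12 rescaling described above (twelve items scored 0–4, so a raw sum of 0–48, rescaled to 0–100) is a simple linear transformation; a minimal sketch, with a hypothetical function name not taken from the paper, could look like:

```python
def option12_total(item_scores):
    """Rescale a raw OPTION12 sum (12 items rated 0-4, raw range 0-48)
    to the 0-100 scale reported above. Function name is illustrative."""
    if len(item_scores) != 12 or any(not 0 <= s <= 4 for s in item_scores):
        raise ValueError("expected 12 item scores, each between 0 and 4")
    return sum(item_scores) / 48 * 100

print(option12_total([2] * 12))  # 50.0: a consultation rated 2 on every item
```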
Secondary outcomes
See Table 1 for a description of the secondary outcomes and how they were assessed.
Comparison of training formats
For the comparison of training formats, data (n = 31 oncologists) from a previously evaluated face-to-face training conducted in 2016 were used [12]. This training took 10 hours, including preparatory reading (1.5 hours), two small-group training sessions with mainly role-play (3.5 hours each) and a booster session (1–1.5 hours, 6 weeks after the last training session). The face-to-face training was evaluated in a randomised controlled trial, in which both the intervention and the control group participated in SPAs and questionnaires. The eligibility criteria, SPAs, actors, coding instruments and questionnaire items (except for the items regarding clinical behavioural intentions and knowledge) were similar to those of the current study. Apart from the training format, there were additional differences between the two trials: the previous trial (1) involved a different trainer, (2) involved different observers, (3) did not randomly assign the SPA cases to T0 or T2, (4) had SPAs taking place in person instead of online, (5) had a shorter average time between training and T2 (on average 11 days as opposed to 41 days) and (6) took place 5 years earlier. These differences warrant a cautious interpretation of the comparison.
Statistical analyses
Linear mixed models (LMMs) were conducted in IBM SPSS Statistics 26 (IBM Corporation, Armonk, NY, USA) with time as an independent fixed effect. Separate analyses were conducted for the outcomes observed SDM (OPTION12 and 4SDM), the stages of SDM (4SDM), satisfaction with the conversation (PSQ), clinical behavioural intentions (CPD) and knowledge. For the dichotomous outcome decision made, a generalised estimating equation (GEE) model was used with time as an independent fixed effect. For each model, different repeated covariance types were compared, and the model with the lowest AIC was used. Cohen’s d was presented as a measure of effect size (d = 0.20 small, d = 0.50 medium, d = 0.80 large) [29]. The comparison between the two training formats was assessed in LMMs with time, condition and time*condition as fixed factors, using the same outcomes as described above except for clinical behavioural intentions (CPD) and knowledge. First, the control group of the face-to-face training trial was used as the reference category; second, the blended learning group was used as the reference category to compare the face-to-face training with the blended learning.
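The models above were fitted in SPSS; as an illustration only (not the authors' code), the pre-post LMM for a primary outcome, with time as a fixed effect and a random intercept per oncologist, can be expressed equivalently in Python with statsmodels. The data below are synthetic, with an assumed training effect of about 12 points on an OPTION12-like 0–100 scale:

```python
# Illustrative only: a linear mixed model with time as fixed effect and a
# random intercept per oncologist, fitted on synthetic OPTION12-like data
# (17 oncologists, two timepoints, simulated +12-point training effect).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 17
baseline = rng.normal(35, 8, n)  # synthetic T0 scores on the 0-100 scale
data = pd.DataFrame({
    "oncologist": np.repeat(np.arange(n), 2),
    "time": np.tile(["T0", "T2"], n),
    "option12": np.concatenate(
        [[b, b + 12 + rng.normal(0, 4)] for b in baseline]),
})

model = smf.mixedlm("option12 ~ time", data, groups=data["oncologist"])
result = model.fit()
print(result.params["time[T.T2]"])  # estimated pre-post change, near the simulated 12
```

In a balanced two-timepoint design like this, the fixed effect of time recovers the mean within-oncologist change, which is what the reported F-tests evaluate.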
Results
After contacting 25 hospitals, seventeen oncologists from two academic and five non-academic hospitals participated in the evaluation. For two participants, the T0 SPA recording was lost due to technical issues, and one oncologist missed the T1 survey after the e-learning (see Table 2 for participant and SPA characteristics).
Effect of the blended online learning
The oncologists demonstrated significantly more SDM after the blended online learning as measured with both the OPTION12 (F(1, 26.436) = 17.181, p < 0.001) and the 4SDM (F(1, 28.818) = 20.544, p < 0.001) (Table 3). The effect size was large for both primary outcomes. In addition, SDM in all four stages (stage 1: F(1, 24.962) = 18.323, p < 0.001; stage 2: F(1, 15.000) = 24.380, p < 0.001; stage 3: F(1, 15.811) = 18.318, p = 0.001; stage 4: F(1, 16.130) = 5.283, p = 0.035), oncologists’ knowledge about SDM (F(1, 28.420) = 7.180, p = 0.012) and oncologists’ satisfaction with the conversation (F(1, 17.000) = 24.362, p < 0.001) improved after the blended online learning. Of the measures relating to clinical behavioural intentions, only oncologists’ beliefs about capabilities significantly improved after the blended learning (F(2, 37.668) = 5.593, p = 0.007); intention (F(2, 33.077) = 1.525, p = 0.233), social influence (F(2, 20.273) = 1.198, p = 0.322), moral norm (F(2, 33.493) = 1.517, p = 0.234) and beliefs about consequences (F(2, 31.150) = 0.398, p = 0.675) did not. The SPA duration did not change (F(1, 16.183) = 0.352, p = 0.561), and the decision was almost eight times more likely to be postponed after the blended learning (OR = 7.76, p = 0.039).
Evaluation of training
Except for three oncologists, all participants completed the three required e-learning modules. On average, oncologists rated the e-learning 7.3 and the online training session 8.5 (Table 4). About 60% would recommend the e-learning to colleagues, and about 90% would recommend the training session. Most participants indicated it took 15–30 min to complete an e-learning module, adding up to a total of 45–90 min for all three modules. When asked about the online modality of the training session, most respondents indicated that its quality, usefulness and enjoyment were equal to an in-person modality and that it was more practical.
Comparison between different training formats
Table 5 presents the raw means of the current blended online training group (n = 17) as well as of the face-to-face training group (n = 15) and the control group (n = 16) of the previous trial [12]. Except for stage 3 of SDM (F(2, 84.537) = 2.232, p = 0.114) and satisfaction (F(2, 48.000) = 2.430, p = 0.099), the interaction between time and condition (previous control group, previous face-to-face training group and current blended learning group) was significant for all outcomes, including the primary outcomes (OPTION12: F(2, 88.210) = 6.396, p = 0.003; 4SDM: F(2, 84.132) = 7.681, p = 0.001) and the three other stages of SDM (stage 1: F(2, 90.014) = 5.829, p = 0.004; stage 2: F(2, 73.276) = 6.203, p = 0.003; stage 4: F(2, 46.313) = 5.301, p = 0.008). Post hoc comparisons showed that the group which received the blended learning did not differ significantly from the control group of the previous study on any of the outcomes (Table 6). The differences between the blended learning group and the previous control group on the primary outcomes were of small to medium size, while the differences between the face-to-face training and the control group were large. When comparing the two formats with each other, the blended learning format showed a significantly smaller effect than the face-to-face format on the primary outcomes. Except for stage 4, the two formats did not differ significantly on the other individual SDM stages or on oncologists’ satisfaction with the conversation.
Discussion
By means of a one-group pre-posttest design, we showed a large and significant effect of the blended learning on observed SDM in standardised patient assessments. To the best of our knowledge, this is the first positively evaluated blended online learning for oncologists about SDM. In addition, the blended learning improved oncologists’ skills in all four SDM stages, their knowledge about SDM, beliefs about capabilities, satisfaction with the consultation and increased the frequency of postponing the decision. The blended learning did not increase the consultation duration. Oncologists evaluated the blended online learning as satisfactory and did not clearly express a preference for either an online or a face-to-face modality. Secondly, we compared CST formats by contrasting the 4-hour blended online training to a previously evaluated 10-hour face-to-face training. Taking limitations into account when comparing the two training formats, the effect of the blended learning on SDM appears to be smaller compared to the face-to-face training.
As stated in the Introduction, several SDM training programs for oncologists (in training) and internal medicine residents have shown large training effects. This study shows that SDM skills can also improve with training in a blended online format, partly without an instructor. Although the low response rate might suggest little enthusiasm for the blended learning, participating oncologists graded the blended online learning with an average of 7.9 (scale 1–10). The low response rate was probably due to the emergency situation during the first months of the COVID-19 pandemic, during which the study was performed. The online modality was well appreciated, especially from a practical perspective. All this is promising from an efficiency and implementation point of view, especially taking into account the physical restrictions during the COVID-19 pandemic era [30].
The results tentatively suggest that the more intensive 10-hour face-to-face training format is more effective than the 4-hour blended online learning format. Previous research on training duration has yielded mixed results: while some research shows that longer CST, for example of at least 1 [31] or 3 [32] days, is most successful, other research demonstrates that training of less than 10 hours is as successful as longer training [33]. Moreover, a review concluded that blended learning formats may be more effective than traditional learning [19]. Strong evidence for effective features of CST regarding format, intensity and content is not yet available [20]. Nevertheless, as both the face-to-face and the blended learning evaluations showed large effects on SDM skills, albeit in different study designs, the results call for a personalised training approach, using the right ingredients in different situations and for different learners.
A first issue in the comparison of training formats may be the changing SDM zeitgeist. The OPTION12 scores at baseline were significantly higher in the blended learning evaluation (2020/2021) than in the face-to-face training evaluation (2015/2016). This might imply that, over time, SDM has become better incorporated in clinical practice, whether because physicians apply SDM better or because patients are more aware of SDM principles. Secondly, the duration between the last training moment and the follow-up SPA was significantly longer in the blended format. When adjusting for this duration, the differences between the two formats decreased. This may indicate that training effects decrease over time, probably hindering the transfer of skills into clinical practice. Furthermore, it has yet to be established how online SPAs, as conducted in the current study, rather than in-person SPAs, as conducted in the previous study, affect observed SDM skills. Possibly, participants can demonstrate the learnt skills better in live SPAs than in online SPAs. In line with this, a study on Objective Structured Clinical Examinations found that participants examined online performed worse than those examined onsite [34].
Despite all inherent limitations, the comparison of training formats may be regarded as a strength of the current study. Such comparisons are rare in the literature and contribute to better use of research data. Another strength is the evaluation of training outcomes on different levels of Kirkpatrick’s model, i.e. the levels of reaction and learning. On the level of learning, we evaluated both whether the participant ‘knows how’ (e.g. the knowledge test) and ‘shows how’ (the SPA) in terms of Miller’s model of clinical competence [35]. The design of the current study has limitations as well. Different training intensities (10 versus 4 hours) and formats (face-to-face versus e-learning and online training session) were compared simultaneously, which hinders understanding of which ingredient has which effect. Secondly, the blended online learning was not evaluated in a randomised controlled trial; given the lack of randomisation and the absence of a true parallel control group, confounding explanations for its effect cannot be excluded. Also, participants may have learned unintentionally from the baseline SPA. Indeed, in the previous face-to-face training evaluation, the control group also significantly improved their SDM skills. Thirdly, the study population may not be completely representative, as possibly only highly motivated oncologists participated in this COVID-19 era. Lastly, the trial was powered to establish large training effects, which were demonstrated in this study design. However, when comparing the blended learning with the control group of the previous trial, small to medium effect sizes were found, for which the trial was not powered. Nevertheless, these effects may imply a clinically relevant change in SDM behaviour.
Next research steps should be to conduct non-inferiority trials in robust study designs, comparing different intensities and formats of SDM training to find the ideal dose–response balance. Ideally, research would also establish the effects of training on oncologists’ behaviour in the clinical setting and on patient outcomes [14, 20], including both observer- and patient-reported outcomes. Since patients may experience more involvement than observers recognise [36], different methods, e.g. conversation analysis [37], or different instruments, e.g. the MAPPIN’SDM, which includes observers’ as well as physicians’ and patients’ perspectives [38], could be deployed to gain insight into patient experiences. Future research should also demonstrate whether the acquired skills are retained over time and whether differences between training formats persist. Additionally, SDM increasingly takes place in multiple conversations with multiple healthcare professionals, also referred to as interprofessional SDM [39]. This is supported by this study’s finding that, after training, significantly more decisions were postponed, suggesting that patients would need another conversation about the treatment decision, either with the oncologist or with another healthcare professional. It has previously been stated that for optimal implementation of SDM in practice, the interprofessional nature of SDM should be acknowledged [40]. Given that current as well as previous research has shown that oncologists’ SDM skills can be improved through training, other healthcare professionals in the SDM process may benefit from such training.
In conclusion, the blended online SDM training for oncologists was found to be effective. This is promising given the flexible, rich and cost-effective nature of blended learning, especially in pandemic times. These findings are not entirely conclusive, since a pre-posttest evaluation design was adopted and the comparison with data from a previous study of a face-to-face training showed smaller effect sizes for the blended online training. Nevertheless, opportunities arise for tailoring training formats to the wishes and needs of learners.
Data availability
Data, material and/or codes are available upon reasonable request, except for the video-recorded SPAs (i.henselmans@amsterdamumc.nl).
References
Stiggelbout AM, Pieterse AH, De Haes JC (2015) Shared decision making: concepts, evidence, and practice. Patient Educ Couns 98(10):1172–1179
Gulbrandsen P, Clayman ML, Beach MC et al (2016) Shared decision-making as an existential journey: aiming for restored autonomous capacity. Patient Educ Couns 99(9):1505–1510
Shay LA, Lafata JE (2015) Where is the evidence? A systematic review of shared decision making and patient outcomes. Med Decis Making 35(1):114–131
Brom L, Pasman HR, Widdershoven GA et al (2014) Patients’ preferences for participation in treatment decision-making at the end of life: qualitative interviews with advanced cancer patients. PLoS One 9(6):e100435
Driever EM, Stiggelbout AM, Brand PLP (2020) Shared decision making: physicians’ preferred role, usual role and their perception of its key components. Patient Educ Couns 103(1):77–82
Brom L, De Snoo-Trimp JC, Onwuteaka-Philipsen BD et al (2017) Challenges in shared decision making in advanced cancer care: a qualitative longitudinal observational and interview study. Health Expect 20(1):69–84
Koedoot CG, Oort FJ, de Haan RJ et al (2004) The content and amount of information given by medical oncologists when telling patients with advanced cancer what their treatment options are: palliative chemotherapy and watchful waiting. Eur J Cancer 40(2):225–235
Audrey S, Abel J, Blazeby JM et al (2008) What oncologists tell patients about survival benefits of palliative chemotherapy and implications for informed consent: qualitative study. BMJ 337:a752
Henselmans I, Van Laarhoven HW, Van der Vloodt J et al (2017) Shared decision making about palliative chemotherapy: a qualitative observation of talk about patients’ preferences. Palliat Med 31(7):625–633
Bieber C, Nicolai J, Gschwendtner K et al (2018) How does a shared decision-making (SDM) intervention for oncologists affect participation style and preference matching in patients with breast and colon cancer? J Cancer Educ 33(3):708–715
Diouf NT, Menear M, Robitaille H et al (2016) Training health professionals in shared decision making: update of an international environmental scan. Patient Educ Couns 99(11):1753–1758
Henselmans I, van Laarhoven HWM, de Haes H et al (2019) Training for medical oncologists on shared decision-making about palliative chemotherapy: a randomized controlled trial. Oncologist 24(2):259–265
Henselmans I, Laarhoven HWM, Maarschalkerweerd P et al (2019) Effect of a skills training for oncologists and a patient communication aid on shared decision making about palliative systemic treatment: a randomized clinical trial. The Oncologist 25(3):e578–e588
Coates D, Clerke T (2020) Training interventions to equip health care professionals with shared decision-making skills: a systematic scoping review. J Contin Educ Health Prof 40(2):100–119
Hrastinski S (2019) What do we mean by blended learning? TechTrends 63(5):564–569
Maloney S, Nicklen P, Rivers G et al (2015) A cost-effectiveness analysis of blended versus face-to-face delivery of evidence-based medicine to medical students. J Med Internet Res 17(7):1–11
Berg MN, Ngune I, Schofield P et al (2021) Effectiveness of online communication skills training for cancer and palliative care health professionals: a systematic review. Psychooncology 30(9):1405–1419
Pelayo-Alvarez M, Perez-Hoyos S, Agra-Varela Y (2013) Clinical effectiveness of online training in palliative care of primary care physicians. J Palliat Med 16(10):1188–1196
Vallee A, Blacher J, Cariou A, Sorbets E (2020) Blended learning compared to traditional learning in medical education: systematic review and meta-analysis. J Med Internet Res 22(8):e16504
Bos-van den Hoek DW, Visser LNC, Brown RF et al (2019) Communication skills training for healthcare professionals in oncology over the past decade: a systematic review of reviews. Curr Opin Support Palliat Care 13(1):33–45
von Elm E, Altman DG, Egger M et al (2007) The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med 4(1):1623–1627
Agbadje TT, Elidor H, Perin MS et al (2020) Towards a taxonomy of behavior change techniques for promoting shared decision making. Implement Sci 15(1):67
Bank I, Rasenberg EMC, Makkenze-Mangold SH et al (2021) Fifteen simulated patient working formats to use in communication skills training: report of a survey. Med Teach 43(12):1391–1397
Kirkpatrick D, Kirkpatrick J (2006) Evaluating training programs: The four levels. Berrett-Koehler Publishers
Degner LF, Sloan JA, Venkatesh P (1997) The Control Preference Scale. Can J Nursing Res 29(3):21–43
Brom L, Hopmans W, Pasman HR et al (2014) Congruence between patients’ preferred and perceived participation in medical decision-making: a review of the literature. BMC Med Inform Decis Mak 14:25
Couet N, Desroches S, Robitaille H et al (2015) Assessments of the extent to which health-care providers involve patients in decision making: a systematic review of studies using the OPTION instrument. Health Expect 18(4):542–561
Elwyn G, Hutchings H, Edwards A et al (2005) The OPTION scale: measuring the extent that clinicians involve patients in decision-making tasks. Health Expect 8:34–42
Cohen J (2013) Statistical power analysis for the behavioral sciences. Routledge
Malik M, Valiyaveettil D, Joseph D (2021) Optimizing e-learning in oncology during the COVID-19 pandemic and beyond. Radiat Oncol J 39(1):1–7
Berkhof M, van Rijssen HJ, Schellart AJ et al (2011) Effective training strategies for teaching communication skills to physicians: an overview of systematic reviews. Patient Educ Couns 84(2):152–162
Barth J, Lannen P (2011) Efficacy of communication skills training courses in oncology: a systematic review and meta-analysis. Ann Oncol 22(5):1030–1040
Dwamena F, Holmes-Rovner M, Gaulden CM et al (2012) Interventions for providers to promote a patient-centred approach in clinical consultations. Cochrane Database Syst Rev 12:CD003267
Novack DH, Cohen D, Peitzman SJ et al (2002) A pilot test of WebOSCE: a system for assessing trainees’ clinical skills via teleconference. Med Teach 24(5):483–487
Miller GE (1990) The assessment of clinical skills/competence/performance. Acad Med 65(9):S63–S67
Diendere G, Farhat I, Witteman H, Ndjaboue R (2021) Observer ratings of shared decision making do not match patient reports: an observational study in 5 family medicine practices. Med Decis Making 41(1):51–59
Landmark AMD, Ofstad EH, Svennevig J (2017) Eliciting patient preferences in shared decision-making (SDM): comparing conversation analysis and SDM measurements. Patient Educ Couns 100(11):2081–2087
Kasper J, Hoffmann F, Heesen C et al (2012) MAPPIN’SDM–the multifocal approach to sharing in shared decision making. PLoS ONE 7(4):e34849
Legare F, Stacey D, Pouliot S et al (2011) Interprofessionalism and shared decision-making in primary care: a stepwise approach towards a new model. J Interprof Care 25(1):18–25
Legare F, Stacey D, Turcotte S et al (2014) Interventions for improving the adoption of shared decision making by healthcare professionals. Cochrane Database Syst Rev (9):CD006732
Legare F, Borduas F, Freitas A et al (2014) Development of a simple 12-item theory-based instrument to assess the impact of continuing professional development on clinical behavioral intentions. PLoS ONE 9(3):e91013
Ong LML, Visser MRM, Lammes FB, De Haes JCJM (2000) Doctor-patient communication and cancer patients’ quality of life and satisfaction. Patient Educ Couns 41:145–156
Zandbelt LC, Smets EMA, Oort FJ et al (2004) Satisfaction with the outpatient encounter. A comparison of patients’ and physicians’ views. J Gen Intern Med 19:1088–1095
Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33(1):159–174
Sim J, Wright CC (2005) The kappa statistic in reliability studies: use, interpretation, and sample size requirements. Phys Ther 85(3):257–268
Acknowledgements
We would like to express our gratitude to all oncologists (in training) for participating in this training evaluation. Furthermore, we would like to thank Marius Schalkwijk, Hans van Dijk and Paul Vermeulen for their consistent and careful acting in the SPAs and training sessions. Lastly, we would like to thank Miek Crouzen and Emma ten Brink for their coding efforts.
Funding
This work was financially supported by the Netherlands Organization of Health Research and Development (ZonMw, #844001514).
Author information
Contributions
Conceptualization: Inge Henselmans, Ellen Smets, Hanneke van Laarhoven, Danique Bos, Dorien Tange. Methodology: Danique Bos, Rania Ali, Inge Henselmans. Formal analysis and investigation: Danique Bos, Inge Henselmans. Writing—original draft preparation: Danique Bos, Inge Henselmans. Writing—review and editing: all authors. Funding acquisition: Inge Henselmans, Ellen Smets, Hanneke van Laarhoven, Dorien Tange. Resources: Sandra Bakker, Anniek Goosens, Mathijs Hendriks, Manon Pepels, Filip de Vos, Yes van de Wouw. Supervision: Inge Henselmans, Ellen Smets.
Ethics declarations
Ethics approval
The Human Ethics Committee at the Amsterdam UMC, location AMC, provided ethical clearance for the study, and local permission was obtained at all participating hospitals. The study was performed in accordance with the principles of the Declaration of Helsinki.
Consent to participate
Informed consent was signed by all participants before the start of the study.
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix 1
Assessor training
Two psychologists experienced in using the OPTION12 and 4SDM [13] and in providing communication skills training in a medical setting restudied the manuals and discussed them with two researchers (IH, DB). They independently rated three video-recorded SPAs from a previous evaluation study [12]. After rating these video recordings, the assessors compared their scores and discussed inconsistencies to reach a common understanding of the items and response categories. One of the researchers (DB) facilitated these discussions.
Assessor calibration
The assessors repeatedly double coded sets of five SPAs with both the OPTION12 and 4SDM. Interrater reliability (IRR) was calculated after each set. The IRR of the OPTION12 and 4SDM was considered sufficient if the intraclass correlation (ICC) and the average weighted kappa (κ) across items were higher than 0.60 for each item (reflecting substantial agreement) [44]. When row or column totals contained zeroes, due to the low number of coded consultations and skewed distributions of ratings within items, kappas were prevalence-adjusted by balancing the matrix [45]. When IRR was insufficient, scores of items with low κ were discussed and the study-specific manuals were extended if needed. After the first set of SPAs (n = 5), the IRR was considered moderate for the OPTION12 (ICC = 0.76, κ = 0.58) and substantial for the 4SDM (ICC = 0.94, κ = 0.63). After coding the second set of SPAs (n = 5), the IRR was considered moderate to sufficient (OPTION12: ICC = 0.65, κ = 0.49; 4SDM: ICC = 0.86, κ = 0.50). The third set showed no improvement: the IRR was still moderate to sufficient (OPTION12: ICC = 0.87, κ = 0.55; 4SDM: ICC = 0.71, κ = 0.50) (see Appendix Table 7 for more details).
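To illustrate the item-level agreement statistic used above, the following is a minimal sketch of weighted Cohen's kappa for two raters of ordinal items. The ratings are hypothetical (not the study's data), and the function name and 0–4 scale are our assumptions for illustration; the prevalence adjustment by matrix balancing [45] is not implemented here.

```python
from collections import Counter

def weighted_kappa(r1, r2, n_cats, weight="linear"):
    """Weighted Cohen's kappa for two raters scoring the same items
    on an ordinal scale 0..n_cats-1 (linear or quadratic weights)."""
    n = len(r1)
    obs = Counter(zip(r1, r2))   # observed joint ratings
    m1, m2 = Counter(r1), Counter(r2)  # marginal distributions per rater
    disagree_obs = disagree_exp = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            if weight == "linear":
                w = abs(i - j) / (n_cats - 1)
            else:
                w = ((i - j) / (n_cats - 1)) ** 2
            disagree_obs += w * obs.get((i, j), 0) / n
            disagree_exp += w * (m1.get(i, 0) / n) * (m2.get(j, 0) / n)
    # kappa = 1 - weighted observed disagreement / weighted chance disagreement
    return 1.0 - disagree_obs / disagree_exp

# Hypothetical example: two raters scoring ten items on a 0-4 scale
rater1 = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]
rater2 = [0, 1, 2, 3, 3, 4, 2, 0, 2, 2]
print(round(weighted_kappa(rater1, rater2, 5), 3))  # → 0.783
```

With linear weights, off-by-one disagreements are penalised only a quarter as much as maximal disagreements on a five-point scale, which is why the raters above still reach substantial agreement despite three discrepant items.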
Double coding SPAs
As the ICCs and kappas were not considered sufficient for independent coding after three calibration rounds, the remaining SPAs (n = 17) were double coded. After every sixth consultation, items with scores differing by more than 1 point were discussed until consensus was reached, and the study-specific manuals were extended if required. Scores differing by 1 point between the assessors were averaged.
Overall IRR
The overall ICC between the assessors across the 32 SPAs was 0.868 (OPTION12) and 0.915 (4SDM). The overall average kappas of the OPTION12 and the 4SDM were both 0.62, reflecting substantial agreement. Five items of the OPTION12 and two items of the 4SDM had κ < 0.60. The observed percentage agreement was 64.6% for the OPTION12 and 66.0% for the 4SDM. One assessor appeared stricter than the other in scoring the OPTION12 (e.g. T0: μ1 = 41.39, SD1 = 13.20; μ2 = 44.44, SD2 = 12.62). However, paired-sample t-tests between the assessors showed no significant differences (two-sided p-values) in the total scores of the OPTION12 (T0: p = 0.153; T2: p = 0.089) or the 4SDM (T0: p = 0.935; T2: p = 0.079), indicating no assessor bias.
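The assessor-bias check above compares the two raters' total scores on the same consultations with a paired-sample t-test. A minimal stdlib-only sketch, using hypothetical total scores rather than the study's data, computes the t statistic and degrees of freedom; the two-sided p-value would then be read from the t-distribution (e.g. with scipy.stats).

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-sample t statistic for two raters' total scores on the
    same consultations: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))
    return t, n - 1

# Hypothetical OPTION12 total scores from two assessors on eight SPAs
a1 = [41, 38, 52, 45, 36, 48, 44, 39]
a2 = [44, 40, 51, 47, 39, 48, 46, 41]
t, df = paired_t(a1, a2)
print(round(t, 2), df)  # → -3.26 7
```

Because the test is paired, only the within-consultation differences matter, so a consistent offset between assessors (one scoring systematically higher) is exactly what it detects.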
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Bos-van den Hoek, D.W., van Laarhoven, H.W.M., Ali, R. et al. Blended online learning for oncologists to improve skills in shared decision making about palliative chemotherapy: a pre-posttest evaluation. Support Care Cancer 31, 184 (2023). https://doi.org/10.1007/s00520-023-07625-6