Abstract
Purpose
Patient-reported outcome (PRO) measures are increasingly important in evaluating medical care. The growing integration of technology within healthcare systems allows PROs to be collected electronically. The objectives of this study were to (1) implement an electronic assessment of PROs in inpatient cancer care and test its feasibility for patients and (2) determine the equivalence of the paper and electronic assessments.
Methods
We analyzed two arms from a study that was originally designed as an interventional, three-arm, multicenter inpatient trial. A self-administered questionnaire based on validated PRO measures was completed at admission, 1 week after admission, and at discharge. For this analysis, which focuses on the feasibility of the electronic assessment, the following groups were considered: group A (intervention arm) received a tablet version, while group B (control arm) completed the questionnaire on paper. A feasibility questionnaire adapted from Ashley et al. was administered to group A.
Results
We analyzed 103 patients who were recruited in oncology wards. The ePRO assessment was feasible for most patients, with 84% preferring the electronic over the paper-based assessment. The feasibility questionnaire contained questions answered on a scale ranging from “1” (non-achievement) to “5” (goal achieved). The majority reported no difficulties handling the electronic tool (mean 4.24, SD 0.99) and found it relatively easy to find time to fill out the questionnaire (mean 4.15, SD 1.05). There were no significant differences between the paper and the electronic assessment regarding the PROs.
Conclusion
Results indicate that electronic PRO assessment in inpatient cancer care is feasible.
Introduction
The importance of the patients’ perspective on their well-being is increasingly recognized. Patient-reported outcome (PRO) measures are a valid method to provide valuable insight into patients’ experiences. PROs usually ask patients to self-report general well-being, symptoms, and functional status [2] revealing a patient-centered view on their subjective experiences [3].
By integrating PROs into the clinical routine, communication and engagement between healthcare providers and their patients are facilitated [4, 5]. If physicians have access to PROs, they can identify a higher number of symptoms [6]. In addition, PROs have been shown to allow for more efficient use of clinic time [7]. Overall, the integration of PROs has resulted in better treatment quality [8, 9] and patient compliance [10]. Consequently, the collection and use of patient-reported information are extremely valuable.
PROs have been increasingly implemented in clinical practice as well as in research [11]. Many PRO measures were originally completed on paper. However, several issues are associated with paper versions of PROs. Frequent problems arise from incomplete questionnaires, making it difficult for healthcare professionals to use the data accurately [12]. In addition, errors can occur due to manual scoring and data entry [13].
Subsequently, PRO measures have been altered to allow for electronic administration [14] offering a replacement to the traditional use of paper by using, for example, computers, tablets, or smartphones.
Studies have shown that electronic data capture offers several advantages: It reduces the number of data entry errors [15, 16] as well as the amount of missing data [17, 18]. Data are automatically calculated and transferred to a central database, strengthening the accuracy and efficiency of data collection [19]. Furthermore, the end user has immediate access to the data through the centralized database.
In addition, data capture can be improved, since respondents can neither create their own response options nor provide ambiguous responses [15, 17]. However, patients with less technical experience may encounter difficulties operating technical devices, such as the tablets used in this study.
As the quality of data collection has improved and the use of data has become easier through electronic assessment, health care teams increasingly prefer electronic data capture over paper [20].
This has resulted in higher demand for research on the equivalence between electronic and paper-based PRO measures [14]. To target support and understand which mode of PRO administration is most useful, we need to understand patients’ experiences with the electronic assessment.
The purpose of this study was to (1) implement an electronic assessment of PROs in routine inpatient cancer care and test its feasibility. Furthermore, (2) we assessed whether the implementation of an electronic version of the standardized questionnaires results in responses equivalent to the paper version. In addition, (3) we examined the completion rate between paper-based and electronic assessment.
Methods
Ethical approval
The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of the Chamber of Physicians in Berlin, Germany (Eth-48/16). All patients provided written informed consent to participate in the study.
Study design
We evaluated the feasibility of an electronic PRO assessment tool in inpatient oncology care by conducting a multicenter, randomized, controlled trial. Patients were recruited between July 2017 and February 2019 while being admitted to four participating oncology wards (Helios Klinikum Emil von Behring, two centers at Helios Klinikum Berlin Buch, Helios Klinikum Bad Saarow) for planned anticancer treatment, including chemotherapy, radiotherapy, or immunotherapy.
For the primary analysis, we randomized patients into three groups. As the current paper reports a secondary analysis, we analyzed two subgroups of the study: group A (now designated the intervention arm) received a tablet version, while group B (control arm) completed the questionnaire on paper. In the third arm, PRO results were graphically displayed and presented to the treating physicians to explore before the next patient encounter; this arm will be discussed in a separate paper. Randomization was carried out using a computerized routine by a staff member not further involved in the study.
Both groups received a set of standardized PRO questionnaires. While group B was given a paper version, group A completed the PROs in an electronic survey using an Apple iPad. Participants completed measurements independently but were allowed to ask for assistance.
There were three different points of measurement: T0: admission, T1: 1 week after admission (if applicable), and T2: discharge. Following T2, a feasibility questionnaire was administered to arm A to assess their experience using the electronic tool. If patients remained in the hospital less than 1 week, T1 did not take place.
A summary of the study design is depicted in the CONSORT diagram.
Questionnaires
The composition of PROs used for the study was developed by a multi-professional expert team and consisted of different instruments used at different assessment points (see Fig. 1). Standardized questionnaires totaling 102 items were applied.
EORTC QLQ-C30. The Quality of Life Questionnaire C30 (QLQ-C30) [21] developed by the European Organization for Research and Treatment of Cancer (EORTC) assesses the global health status, five functional scales (physical, role, emotional, cognitive, social), and nine common symptoms in cancer patients. The QLQ-C30 consists of 30 items that relate to the state of health and well-being with scores ranging from 0 to 100. Higher symptom scores indicate higher symptom burden; however, higher scores in the global health status and the functional scales imply a better functioning.
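The 0–100 scoring described above follows the linear transformation of the EORTC scoring manual. The sketch below is a minimal illustration of that transformation, not the study’s actual scoring code; the function name and interface are our own:

```python
def score_scale(items, item_range, scale_type):
    """Score one QLQ-C30 scale on the 0-100 metric.

    items: raw item responses (1-4 for most items, 1-7 for global health)
    item_range: max minus min response (3 for 1-4 items, 6 for 1-7 items)
    scale_type: "functional", "symptom", or "global"
    """
    rs = sum(items) / len(items)  # raw score: mean of the scale's items
    if scale_type == "functional":
        # functional scales are reversed so that higher = better functioning
        return (1 - (rs - 1) / item_range) * 100
    # symptom scales (higher = greater burden) and global health status
    # use the direct linear transformation
    return ((rs - 1) / item_range) * 100
```

For example, responses of all 1s on a functional scale score 100 (best functioning), while a response of 4 on a 1–4 symptom item scores 100 (maximal burden).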
IN-PATSAT32. The EORTC cancer inpatient satisfaction with care measure (IN-PATSAT 32) [22] holds 32-items assessing patients’ satisfaction with care by physicians and nurses. Higher values indicate greater satisfaction in the respective area.
SDM-Q-9. The German translation of the Shared Decision-making Questionnaire (SDM-Q-9) [23] assesses the extent of participatory decision-making. Patients indicate on a scale how appropriate individual elements of participation were at their last doctor’s visit. Higher values indicate a higher degree of participation.
PRO-CTCAE. Patient-reported adverse events are measured using the Common Terminology Criteria for Adverse Events (CTCAE) [24], which has also been adapted for patient self-report (PRO-CTCAE) [25]. Patients rate the frequency, severity, and impairment of symptoms. For the present work, we only evaluated severity. Modules were created to suit the patients’ tumor entities and treatments.
Feasibility questionnaire (shown in supplement). The feasibility questionnaire assesses the acceptability of the electronic assessment to patients. It presents 10 questions that were adapted from Ashley et al. [1], applying goal attainment scaling [26].
Questions were answered on a scale ranging from “1” (non-achievement) to “5” (goal achieved). Additionally, three items were answered with “yes” or “no”: “Did a staff member help you how to use the questionnaire today?,” “Did you need help from a staff member while answering the questions?,” and “Would you have preferred to answer the questions with pencil and paper?.” The feasibility questionnaire was administered to the intervention group (group A).
Participants
Patients (aged 18 years or older) diagnosed with hematological or oncological cancer were eligible to enter the study. Hematological cancers relate to malignant hematological neoplasm, while we categorized any solid tumors as oncological. Eligibility was restricted to patients with a planned hospital stay for ≥ 3 days to undergo anticancer therapy.
Statistical analysis
Scales of the questionnaires were calculated according to the respective scoring manuals. For categorical variables, absolute frequencies and percentages are presented. For comparisons of the distributions of categorical variables between groups, the chi-squared test [27] or Fisher’s exact test [28] (in case of counts of 5 or lower) was used. For continuous variables, the arithmetic mean and standard deviation are presented. Continuous parameters were compared between groups using the Wilcoxon-Mann-Whitney U test [29, 30]. The overall significance level was 10%, two-sided. Statistical analyses were conducted using SPSS version 27 and R version 4.0.1 and were pre-specified in a detailed statistical analysis plan.
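The decision rule between the chi-squared test and Fisher’s exact test can be illustrated for a 2×2 frequency table. The sketch below is a standard-library illustration of these classical tests, not the study’s actual code (the analyses were run in SPSS and R); the function names are our own:

```python
from math import comb, erfc, sqrt

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the observed
    margins that is no more likely than the observed table."""
    r1, r2, c1 = a + b, c + d, a + c
    denom = comb(r1 + r2, c1)

    def prob(x):  # probability that the top-left cell equals x, margins fixed
        return comb(r1, x) * comb(r2, c1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

def chi2_p(a, b, c, d):
    """Pearson chi-squared p-value for a 2x2 table (1 degree of freedom)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return erfc(sqrt(stat / 2))  # survival function of chi-squared, df = 1

def compare_categorical(a, b, c, d):
    """Apply the rule stated above: Fisher's exact test when any cell count
    is 5 or lower, otherwise the chi-squared test."""
    if min(a, b, c, d) <= 5:
        return "fisher", fisher_exact_p(a, b, c, d)
    return "chi-squared", chi2_p(a, b, c, d)
```

For example, `compare_categorical(20, 30, 25, 25)` uses the chi-squared test, while `compare_categorical(2, 10, 8, 9)` falls back to Fisher’s exact test because one cell is 5 or lower.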
Results
Between July 2017 and February 2019, a total of n = 125 patients admitted for inpatient care were included in this study. After n = 12 dropouts (6 in group A, 6 in group B), the target sample comprised n = 113 patients (100%). Patients were randomly assigned to two groups: group A consisted of 56 patients and group B of 57 patients.
After 1 week, 34% of patients in group A and 35% in group B who remained in the hospital participated in T1. The median duration of hospital stay was 6 days in group A and 7 days in group B. All patients included in the study participated at the time of discharge (T2).
Study population
Demographic data for patients is shown in Table 1.
On average, patients in group A were 61.7 (SD 12.5) years old and patients in group B were 65.7 (SD 14.4) years old. More than half in group A (62%, n = 35) and 49% (n = 28) in group B were male. The majority (82% in group A and 75% in group B) were educated with an apprenticeship or university degree. More than half in group A (55%) as well as 45% in group B were employed before diagnosis. Most patients (66% in group A and 61% in group B) were treated for oncological disease.
A stage IV tumor applied to 28% in group A and 35% in group B.
Completion rate
Table 2 shows completion rates at admission (T0), 1 week after admission (T1), and discharge (T2). We evaluated differences between the assessment modes as well as among the questionnaires.
For the PRO-CTCAE and the SDM-Q-9, completion rates at T2 were higher with the electronic than with the paper-based assessment. However, the QLQ-C30 showed better completion rates for the paper-based assessment at all assessment points, with significant differences between ePRO and paper: 64% (group A) versus 93% (group B) at T0 (chi-squared test, p = 0.0005) and 59% (group A) versus 82% (group B) at T2 (chi-squared test, p = 0.011).
There was no difference regarding the 100% completion rate for the IN-PATSAT32.
For the PRO-CTCAE, the completion rate was higher at admission (T0) than at discharge (T2).
The PRO-CTCAE was more often fully completed than the other questionnaires.
Differences in PROs at T0 and T2
Results regarding QoL and symptom burden at admission (T0) are reported in Tables 3 and 4. High concordances were noted between the paper and the electronic version. Throughout all items, we did not observe significant differences between paper and electronic assessment of PROs. This indicates that electronic and paper-based assessment collect comparable information.
Feasibility
Feasibility results are reported in Table 5. Almost 79% of participants reported not needing support for answering the questionnaire, indicating that the electronic assessment was broadly acceptable for participants. However, 51.8% required help to operate the questionnaire.
Respondents needing help reported relatively high satisfaction with the support they received (mean 3.75, SD 1.31). This question was posed to all patients, including those who did not actively ask for help. Staff were expected to give an introduction to the study rationale, procedure, and device handling; high satisfaction with help can therefore also be interpreted as staff supporting patients satisfactorily whenever assistance was needed. Furthermore, patients commonly reported that it was relatively easy to find time to fill out the questionnaire (mean 4.15, SD 1.05).
The majority reported no difficulties handling the electronic assessment (mean 4.24, SD 0.99).
Patients were generally satisfied with the completion of the questionnaire (mean 3.78, SD 0.88). More than 59% of participants found the number of questions adequate, and the majority would have continued answering more questions with the system (mean 3.59, SD 1.60).
The majority (84%) would not have preferred to complete the questionnaires in a paper version, indicating a preference for the electronic assessment.
Discussion
As previously discussed, self-assessment and external assessment of patients’ symptoms tend to diverge [31,32,33], and patients’ subjective views cannot be reliably represented by anyone but the patients themselves. PROs and their efficient assessment are therefore of major importance.
However, as previously suggested [34], incorporating ePRO measures into existing workloads is a significant barrier.
Feasibility
We found that the electronic tool was feasible as most patients reported no difficulties handling the electronic assessment indicating that the tool was broadly acceptable for patients.
However, there was a difference between operating the questionnaire and answering the questions themselves. While 78.6% reported not needing help with answering the questions, more than half (51.8%) did require help operating the questionnaire. Possibly, this is related to patients’ age, as older age is associated with lower computer and internet use [35]; as internet use becomes established almost universally, this barrier is likely to diminish. We still consider the tool feasible, as the ePRO assessment was well received, with the majority reporting relatively high satisfaction with the tool. Furthermore, patients would not have preferred to complete the questionnaires in a paper version, which indicates a preference for the electronic assessment.
Those findings suggest good feasibility, suggesting that the electronic capture of PROs provides a reliable replacement for the paper form. This is consistent with other research findings comparing electronic and paper assessment of PROs [17, 18] and showing good concordance between the surveys across a wide variety of diseases [36].
Multiple benefits are associated with ePRO implementation. If ePROs were implemented in routine care, data could be stored efficiently in one location and would be immediately available for healthcare professionals to review in the database. In addition to these improvements, our results have shown that patients are satisfied with electronic capture. This is consistent with previous studies comparing electronic forms to paper forms [37,38,39,40]. As already stated, our patients had few problems handling the electronic tool, which aligns with previous studies suggesting high acceptability among patients using tablet-based assessment tools [41, 42].
In addition, electronic completion may be easier for patients with limited manual dexterity [43].
Electronic surveys are also perceived as more anonymous than paper-based surveys [44], potentially leading to greater honesty on the patients’ side.
While there are advantages associated with electronic assessment of PROs, it should be kept in mind that studies that require an electronic device risk excluding patients whose familiarity and experience with technology are limited.
Completion rate
We included four questionnaires with a total of 102 items in the study. For the PRO-CTCAE and the SDM-Q-9, completion rates at T2 were higher with the electronic than with the paper-based assessment. However, the QLQ-C30 showed better completion rates for the paper-based assessment at all assessment points. Thus, neither the electronic nor the paper-based assessment consistently resulted in better completion rates.
For the QLQ-C30, nearly half of those in group A (47%) had no entries in the questionnaire at T1. This is quite surprising, especially since at all other time points a minimum of 85% of patients in both groups reached at least a 96% completion rate. We could not fully clarify the reason for this deviation; most likely, it was a technical issue arising from study staff not administering the survey appropriately. This shows that even with electronic administration, manual intervention can still be necessary when problems occur.
Overall, a high number of questionnaires were not fully completed. This raises the question of whether an unsuitable number of items was chosen for this study. Considering that patients undertook the survey more than once, it is conceivable that the number of items was too high. This is supported by a previous study that indicated a negative correlation between completion rate and survey length [45]. Symptom burden could also affect questionnaire completion, as described in further studies [46, 47]: if patients are unwell, they may have difficulties completing PROs, regardless of whether these are electronic or paper based.
If PROs are seen to be useful for clinical care, then patients are more likely to overcome barriers to achieve completion.
Limitations
There were some limitations that should be considered when interpreting these findings. The main limitation of this study was that the feasibility questionnaire was only given to the intervention group that used the electronic assessment. Therefore, we cannot report on patients’ experiences with the paper form.
We also assessed differences within the PROs. However, each questionnaire was completed in either the electronic or the paper version only; respondents were not exposed to both modes, making direct comparison somewhat difficult.
The relatively short median duration of hospitalization (6 days in group A and 7 days in group B) could have affected the survey completion rates. A longer stay, such that all patients could have participated in T1, would have increased the informative value regarding differences between electronic and paper-based assessment.
Conclusion
Our study concludes that electronic assessment of PRO measures is feasible in routine inpatient cancer care. The majority (84%) would not have preferred to complete the questionnaires in a paper version, indicating a preference for the electronic assessment. Patients were therefore receptive to the electronic assessment and had few problems handling the tool. Our findings support electronic assessment as a valuable replacement for paper versions. The results of this study can foster a better understanding of the complexity of ePRO implementation and help in creating strategies for further application.
References
Ashley L, Jones H, Thomas J, Newsham A, Downing A, Morris E et al (2013) Integrating patient reported outcomes with clinical cancer registry data: a feasibility study of the electronic patient-reported outcomes from cancer survivors (ePOCS) system. J Med Internet Res 15(10):e230
Dawson J, Doll H, Fitzpatrick R, Jenkinson C, Carr AJ (2010) Routine use of patient reported outcome measures in healthcare settings. BMJ 340(jan18 1):c186–c186
Zagadailov E, Fine M, Shields A (2013) Patient-reported outcomes are changing the landscape in oncology care: challenges and opportunities for payers. Am Heal Drug Benefits 6(5):264–274
Black N (2013) Patient reported outcome measures could help transform healthcare. BMJ 346(jan 18 1):f167–f167
Santana MJ, Feeny D (2014) Framework to assess the effects of using patient-reported outcome measures in chronic care management. Qual Life Res 23(5):1505–1513
Detmar SB, Muller MJ, Schornagel JH, Wever LDV, Aaronson NK (2002) Health-related quality-of-life assessments and patient-physician communication: a randomized controlled trial. J Am Med Assoc 288(23):3027–3034
Velikova G, Booth L, Smith AB, Brown PM, Lynch P, Brown JM et al (2004) Measuring quality of life in routine oncology practice improves communication and patient well-being: a randomized controlled trial. J Clin Oncol 22(4):714–724
Velikova G, Brown JM, Smith AB, Selby PJ (2002) Computer-based quality of life questionnaires may contribute to doctor-patient interactions in oncology. Br J Cancer 86(1):51–59
Wagner AK, Vickrey BG (1995) The routine use of health-related quality of life measures in the care of patients with epilepsy: rationale and research agenda. Qual Life Res 4(2):169–177
Judson TJ, Bennett AV, Rogak LJ, Sit L, Barz A, Kris MG et al (2013) Feasibility of long-term patient self-reporting of toxicities from home via the Internet during routine chemotherapy. J Clin Oncol 31(20):2580–2585
Basch E, Abernethy AP (2011) Supporting clinical practice decisions with real-time patient-reported outcomes. J Clin Oncol 29(8):954–956
Dale O, Hagen KB (2007) Despite technical problems personal digital assistants outperform pen and paper when collecting patient diary data. J Clin Epidemiol 60(1):8–17
Kaushal R, Shojania KG, Bates DW (2003) Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 163(12):1409–1416
Coons SJ, Gwaltney CJ, Hays RD, Lundy JJ, Sloan JA, Revicki DA et al (2009) Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO good research practices task force report. Value Heal 12(4):419–429
Bates DW, Cohen M, Leape LL, Overhage JM, Shabot MM, Sheridan T (2001) Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc 8(4):299–308
Agrawal A (2009) Medication errors: prevention using information technology systems. Br J Clin Pharmacol 67(6):681–686
VanDenKerkhof EG, Goldstein DH, Blaine WC, Rimmer MJ (2005) A comparison of paper with electronic patient-completed questionnaires in a clinic. Anesth Analg 101(4):1075–1080
Coons SJ, Eremenco S, Lundy JJ, O’Donohoe P, O’Gorman H, Malizia W (2015) Capturing Patient-reported outcome (PRO) data electronically: the past, present, and promise of ePRO measurement in clinical trials. Patient 8(4):301–309
Hernar I, Graue M, Richards D, Strandberg RB, Nilsen RM, Tell GS et al (2019) Electronic capturing of patient-reported outcome measures on a touchscreen computer in clinical diabetes practice (the DiaPROM trial): a feasibility study. Pilot Feasibility Stud 5(1)
Le Jeannic A, Quelen C, Alberti C, Durand-Zaleski I (2014) Comparison of two data collection processes in clinical studies: electronic and paper case report forms. BMC Med Res Methodol 14(1)
Aaronson NK, Ahmedzai S, Bergman B, Bullinger M, Cull A, Duez NJ et al (1993) The European organization for research and treatment of cancer QLQ-C30: a quality-of-life instrument for use in international clinical trials in oncology. J Natl Cancer Inst 85(5):365–376
Brédart A, Bottomley A, Blazeby JM, Conroy T, Coens C, D’Haese S et al (2005) An international prospective study of the EORTC cancer in-patient satisfaction with care measure (EORTC IN-PATSAT32). Eur J Cancer 41(14):2120–2131
Kriston L, Scholl I, Hölzel L, Simon D, Loh A, Härter M (2010) The 9-item Shared Decision Making Questionnaire (SDM-Q-9). Development and psychometric properties in a primary care sample. Patient Educ Couns 80(1):94–99
Basch E, Reeve BB, Mitchell SA, Clauser SB, Minasian LM, Dueck AC et al (2014) Development of the national cancer institute’s patient-reported outcomes version of the common terminology criteria for adverse events (PRO-CTCAE). J Natl Cancer Inst 106(9):dju244
Dueck AC, Mendoza TR, Reeve BB, Sloan JA, Cleeland CS, Hay J et al (2010) Validation study of the patient-reported outcomes version of the common terminology criteria for adverse events (PRO-CTCAE). J Clin Oncol 28(15_suppl):TPS274
Kiresuk TJ, Sherman RE (1968) Goal attainment scaling: a general method for evaluating comprehensive community mental health programs. Community Ment Health J 4(6):443–453
Pearson K (1900) On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. London, Edinburgh Dublin Philos Mag J Sci, pp 11–28
Fisher RA (1922) On the interpretation of χ2 from contingency tables, and the calculation of P. J R Stat Soc 85(1):87
Wilcoxon F (1945) Individual comparisons by ranking methods. Biometrics Bull 1(6):80
Mann HB, Whitney DR (1947) On a test of whether one of two random variables is stochastically larger than the other. Ann Math Stat 18(1):50–60
Atkinson TM, Ryan SJ, Bennett AV, Stover AM, Saracino RM, Rogak LJ et al (2016) The association between clinician-based common terminology criteria for adverse events (CTCAE) and patient-reported outcomes (PRO): a systematic review. Support Care Cancer 24(8):3669–3676
Efficace F, Rosti G, Aaronson N, Cottone F, Angelucci E, Molica S et al (2014) Patient- versus physician-reporting of symptoms and health status in chronic myeloid leukemia. Haematologica 99(4):788–793
Fares CM, Williamson TJ, Theisen MK, Cummings A, Bornazyan K, Carroll J et al (2018) Low concordance of patient-reported outcomes with clinical and clinical trial documentation. JCO Clin Cancer Inform 2:1–12
Macnair A, Sharkey A, Le Calvez K, Walters R, Smith L, Nelson A et al (2020) The trigger project: the challenge of introducing electronic patient-reported outcome measures into a radiotherapy service. Clin Oncol 32(2):e76–e79
Paul CL, Carey ML, Hall AE, Lynagh MC, Sanson-Fisher RW, Henskens FA (2011) Improving access to information and support for patients with less common cancers: hematologic cancer patients’ views about web-based approaches. J Med Internet Res 13(4):e112
Muehlhausen W, Doll H, Quadri N, Fordham B, O’Donohoe P, Dogar N et al (2015) Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013. Health Qual Life Outcomes 13(1)
Richter JG, Becker A, Koch T, Nixdorf M, Widers R, Monser R et al (2008) Self-assessments of patients via Tablet PC in routine patient care: comparison with standardised paper questionnaires. Ann Rheum Dis 67(12):1739–1741
Recinos PF, Dunphy CJ, Thompson N, Schuschu J, Urchek JL, Katzan IL (2017) Patient satisfaction with collection of patient-reported outcome measures in routine care. Adv Ther 34(2):452–465
Schamber EM, Takemoto SK, Chenok KE, Bozic KJ (2013) Barriers to completion of patient reported outcome measures. J Arthroplasty 28(9):1449–1453
Salaffi F, Gasparini S, Ciapetti A, Gutierrez M, Grassi W (2013) Usability of an innovative and interactive electronic system for collection of patient-reported data in axial spondyloarthritis: comparison with the traditional paper-administered format. Rheumatol 52(11):2062–2070
Hess R, Santucci A, McTigue K, Fischer G, Kapoor W (2008) Patient difficulty using tablet computers to screen in primary care. J Gen Intern Med 23(4):476–480
Bliven BD, Kaufman SE, Spertus JA (2001) Electronic collection of health-related quality of life data: validity, time benefits, and patient preference. Qual Life Res 10:15–21
Schefte DB, Hetland ML (2010) An open-source, self-explanatory touch screen in routine care. Validity of filling in the bath measures on ankylosing spondylitis disease activity index, function index, the health assessment questionnaire and visual analogue scales in comparison with paper versions. Rheumatology 49(1):99–104
Trau RNC, Härtel CEJ, Härtel GF (2013) Reaching and hearing the invisible: organizational research on invisible stigmatized groups via Web surveys. Br J Manag 24(4):532–541
Liu M, Wronski L (2018) Examining completion rates in Web surveys via over 25,000 real-world surveys. Soc Sci Comput Rev 36(1):116–124
Valderas JM, Kotzeva A, Espallargues M, Guyatt G, Ferrans CE, Halyard MY et al (2008) The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Qual Life Res 17(2):179–193
Chen J, Ou L, Hollis SJ (2013) A systematic review of the impact of routine collection of patient reported outcome measures on patients, providers and health organisations in an oncologic setting. BMC Health Serv Res 13(1)
Funding
Open Access funding enabled and organized by Projekt DEAL. The study was funded by the Stiftung Oskar-Helene-Heim, Walterhöferstr. 11, 14165 Berlin, Germany.
Author information
Contributions
Hanna Salm wrote the main manuscript text. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
ESM 1
(DOCX 75 kb)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Salm, H., Hentschel, L., Eichler, M. et al. Evaluation of electronic patient–reported outcome assessment in inpatient cancer care: a feasibility study. Support Care Cancer 31, 575 (2023). https://doi.org/10.1007/s00520-023-08014-9