Abstract
Purpose
A well-defined and reliable patient-reported outcome instrument for COVID-19 is important for assessing symptom severity and supporting research studies. The inFLUenza Patient-Reported Outcome (FLU-PRO) instrument has been expanded into the FLU-PRO Plus, which adds loss of taste and smell to comprehensively cover COVID-19 symptoms. Our studies were designed to evaluate and validate the FLU-PRO Plus among patients with COVID-19.
Methods
Two studies were conducted: (1) a qualitative, non-interventional, cross-sectional study of patients with COVID-19 involving hybrid concept elicitation and cognitive debriefing interviews; (2) a psychometric evaluation of the measurement properties of FLU-PRO Plus, using data from COMET-ICE (COVID-19 Monoclonal antibody Efficacy Trial—Intent to Care Early).
Results
In the qualitative interviews (n = 30), all 34 items of the FLU-PRO Plus were considered relevant to COVID-19, and participants found the questionnaire easy to understand, well written, and comprehensive. In the psychometric evaluation (n = 845), internal consistency reliability of the FLU-PRO Plus total score was 0.94, with domain scores ranging from 0.71 to 0.90. Reproducibility (Day 20–21) was 0.83 for the total score, with domain scores of 0.67–0.89. Confirmatory factor analysis including the novel smell/taste domain demonstrated an acceptable fit to the data.
Conclusion
The content, reliability, validity, and responsiveness of the FLU-PRO Plus in the COVID-19 population were supported. Our results suggest that FLU-PRO Plus is a content- and psychometrically-valid, fit-for-purpose measure which is easily understood by patients. FLU-PRO Plus is a suitable PRO measure for evaluating symptoms of COVID-19 and treatment benefit directly from the patient perspective.
Trial Registration: ClinicalTrials.gov: NCT04545060, September 10, 2020; retrospectively registered.
Plain English summary
To assess how COVID-19 affects the lives of patients, researchers need to develop standard ways to measure its impact. An example of one of these measures is the FLU-PRO questionnaire, which was developed to assess the intensity and duration of symptoms in viral respiratory tract illnesses, such as influenza. Questions about loss of smell and taste, which are common symptoms of COVID-19, have been added to the FLU-PRO questionnaire, in an updated version named FLU-PRO Plus. In this study, we performed interviews to explore the symptom burden of COVID-19 and evaluate how relevant, important, and easily understood all the questions included in the FLU-PRO Plus are to patients who have recently tested positive for COVID-19. We also performed statistical analyses to determine the reliability and validity of the questionnaire. Our results show the COVID-19 symptoms measured by the FLU-PRO Plus are important and relevant to patients, and questions were easy to comprehend and covered all their symptoms, allowing an accurate depiction of the disease’s symptoms. We also found the FLU-PRO Plus was reliable, valid, and responsive to change. Findings from this study support the use of the FLU-PRO Plus in clinical trials and other research to assess COVID-19 symptoms and the impact of treatments on the disease, directly from the patient perspective.
Introduction
A significant health burden is associated with coronavirus disease 2019 (COVID-19) [3]. To fully capture the symptom burden of COVID-19 on patients, comprehensive, standardized, and valid patient-reported outcome (PRO) instruments that reliably assess symptom severity are essential. In addition, well-defined and reliable PRO measures support therapy and vaccine effectiveness studies, alongside other research activities [4].
Epidemiological studies show that viral respiratory diseases have common symptom profiles, including cough, shortness of breath, fatigue, sore throat, muscle pain or body aches, headache, vomiting, and diarrhea [5]. The original inFLUenza Patient-Reported Outcome (FLU-PRO) measure was developed to assess core symptoms of influenza and other viral respiratory diseases, based on a recall period of the past 24 h [6, 7]. FLU-PRO consists of 32 items across six body systems (nose, throat, eyes, chest/respiratory, gastrointestinal, and body/systemic) which are predominantly scored on a five-point severity scale [6], and has been validated among patients with influenza and influenza-like illness [7, 8]. An extended version, the FLU-PRO Plus, has since been developed, which includes the commonly reported COVID-19 symptoms of loss of taste and smell [9]. FLU-PRO Plus has recently been shown to have good construct validity, known-groups validity, and responsiveness to change among patients with COVID-19 [4].
At initiation of this work, the psychometric properties of the FLU-PRO Plus had not yet been evaluated among the COVID-19 population. Two studies were therefore conducted to assess the measurement properties of FLU-PRO Plus among patients with COVID-19. First, a qualitative, cross-sectional, descriptive, non-interventional study was conducted, using an adapted grounded theory approach, to explore and gain insight into how patients describe their experience of COVID-19, and to identify COVID-19 symptoms that are important and clinically relevant to patients. In addition, the study aimed to elicit patient feedback on the relevance, comprehensiveness, and understandability of the FLU-PRO Plus when measuring COVID-19 symptoms. The second was a quantitative study that evaluated the factor structure and psychometric properties (reliability, construct and known-groups validity, responsiveness, and responder definition) of the FLU-PRO Plus for use in patients with COVID-19 in the COVID-19 Monoclonal antibody Efficacy Trial—Intent to Care Early (COMET-ICE) study population. COMET-ICE was a randomized, double-blind, multi-center, placebo-controlled trial that investigated the efficacy and safety of sotrovimab for the prevention of progression of mild/moderate COVID-19 in a high-risk adult population (NCT04545060).
Methods
Qualitative study
Hybrid concept elicitation and cognitive debriefing interviews were conducted among symptomatic adults with a confirmed diagnosis of COVID-19. Patients with mild, moderate, and severe COVID-19 were included in the sample. One-to-one, 90-min interviews were conducted via webcam or telephone. In the concept elicitation segment, participants described their experience of COVID-19, including its symptoms and impacts. In the cognitive debriefing segment, participants completed the FLU-PRO Plus questionnaire using a retrospective think-aloud method. A full description of the sample, procedure, and analysis of qualitative data is included in Online Resource 1.
Quantitative analysis
Blinded PRO data from the COMET-ICE randomized clinical trial were used for the psychometric analysis, which was conducted over 12 weeks, using a data cut taken when all patients had reached Day 29 (full details of the COMET-ICE trial protocol are published elsewhere [10, 11]). Using an electronic device or paper questionnaire, participants completed the FLU-PRO Plus daily during the first 3 weeks following trial enrollment, and then on a single day at Weeks 4, 8, and 12. Participants also completed the 12-item Short Form (SF-12) hybrid survey [12] (the 12 items of the SF-12 plus the full Vitality and Role Physical domains of the 36-item Short Form survey), which consists of eight domains and two summary scores. An additional pre-COVID health supplemental question (“Compared to before the COVID-19 crisis, how would you rate your health in general now?”) was added for the purpose of this study. Scores are transformed to a norm-based metric, with scores > 50 indicating better physical or mental health than the population mean. The Work Productivity and Activity Impairment Questionnaire: General Health (WPAI-GH) [13] was also completed, which assesses absenteeism, presenteeism, work productivity loss, and activity impairment across six items. WPAI-GH scores are expressed as impairment percentages, with higher numbers indicating greater impairment and lower productivity. Both questionnaires were distributed at key timepoints (Weeks 1 [Day 1], 2 [Day 15], 4 [Day 29], 8, 12, 16, 20, and 24).
Psychometric evaluation
The psychometric evaluation of the FLU-PRO Plus was performed in accordance with classical test theory [14] and comprised two phases. Phase I involved confirmation of factor structure, including item evaluation and scaling, and evaluation of the instrument’s fit with the inclusion of additional COVID-19-specific items (i.e., loss of taste and loss of smell). Phase II involved assessment of the reliability, validity, and responsiveness of the measure.
Descriptive statistics (N, mean, standard deviation [SD]) were calculated for FLU-PRO Plus items across all 21 days in which the FLU-PRO Plus was administered, and for the SF-12 and WPAI-GH. Both FLU-PRO and FLU-PRO Plus scores are reported for the analyses below, in order to assess consistency between the measures.
Confirmatory factor analysis (CFA) models were run to evaluate the fit of the original FLU-PRO and FLU-PRO Plus single-factor and multi-factor conceptual models for use in the COVID-19 population (factor structure). CFA was conducted using Mplus software [15]; CFA model fit was assessed with the χ2 test, comparative fit index (CFI), root mean squared error of approximation (RMSEA), and standardized root mean square residual (SRMR). A low χ2 value relative to the degrees of freedom indicates a better fit [16]. Acceptable model fit is indicated when CFI > 0.90, and RMSEA or SRMR < 0.07 [17, 18].
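As a minimal illustration of how these fit criteria operate (the study's own CFA was run in Mplus, not in Python), the sketch below computes RMSEA from a χ² statistic and applies the cutoffs stated above; all numeric inputs are hypothetical and for illustration only.

```python
import math

def rmsea(chi_sq, df, n):
    """Root mean squared error of approximation.

    A standard formulation: RMSEA = sqrt(max(chi2/df - 1, 0) / (n - 1)),
    where n is the sample size. Values below ~0.07 suggest acceptable fit.
    """
    return math.sqrt(max(chi_sq / df - 1.0, 0.0) / (n - 1))

def acceptable_fit(cfi, rmsea_val, srmr):
    """Apply the cutoffs used in the text: CFI > 0.90 and RMSEA or SRMR < 0.07."""
    return cfi > 0.90 and (rmsea_val < 0.07 or srmr < 0.07)

# Hypothetical chi-square, degrees of freedom, and the study's n = 845
r = rmsea(chi_sq=300.0, df=150, n=845)
print(round(r, 4))                              # small value relative to df -> better fit
print(acceptable_fit(cfi=0.93, rmsea_val=r, srmr=0.05))
```

The `acceptable_fit` helper simply encodes the thresholds quoted from [17, 18]; it is not part of any published scoring algorithm.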
Reliability of the FLU-PRO Plus was assessed for internal consistency and reproducibility. Internal consistency was evaluated using Cronbach’s coefficient alpha at Day 1, with descriptive scores from 0 to 1.0 and higher scores indicating a more homogenous instrument [14, 19]. The reproducibility of FLU-PRO domain and total scores was evaluated post hoc among participants with no change in hospitalization status between Days 20 and 21, and who had FLU-PRO Plus data on both days. Reproducibility was assessed using an estimation of an intraclass correlation coefficient (ICC) and a calculation of effect size from Days 20 to 21 with a two-way mixed-effect analysis of variance model. ICC values range from 0 to 1, with ICC > 0.6 generally considered acceptable [20, 21].
Construct validity assessed the relationship between FLU-PRO Plus and other PRO measures (SF-12 [mental and physical component summary scores, role physical, and vitality domains and general health question], and WPAI-GH [percent impairment, work productivity, and regular activities]). Construct validity of the FLU-PRO Plus was assessed at Days 1, 15, and 29 using Spearman correlations. A correlation coefficient > 0.3 indicates convergent validity [22].
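A Spearman correlation is simply the Pearson correlation of the rank vectors; the following self-contained sketch (with tie-averaged ranks, and invented example values rather than trial data) shows how the > 0.3 convergent-validity check could be applied.

```python
def _average_ranks(xs):
    """1-based ranks, averaging ranks within ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical FLU-PRO Plus total scores vs. WPAI-GH activity impairment (%)
flu_pro = [0.4, 1.1, 1.8, 2.5, 3.0]
wpai = [10, 30, 20, 80, 90]
r = spearman(flu_pro, wpai)
print(round(r, 2), r > 0.3)  # positive and above the 0.3 convergent-validity cutoff
```

In practice an off-the-shelf routine such as `scipy.stats.spearmanr` would be used; the hand-rolled version is only to make the rank-based computation explicit.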
Known-groups validity was assessed through an analysis of variance, using the WPAI-GH activity impairment score to define groups. Responsiveness was assessed through an analysis of covariance, comparing changes in FLU-PRO Plus scores from Day 1 to Days 7, 14, and 21 against changes in WPAI-GH scores.
A pre-specified responder definition for the FLU-PRO Plus, developed through discussion with regulatory authorities, comprised key COVID-19 symptoms as understood at the time. This definition allowed for persistent cough and fatigue (responses up to “Somewhat”) and loss of taste and smell to continue but required other symptoms to resolve (full details of the responder definition in Supplementary Table 1 [Online Resource 1]). To explore the value of the responder definition, comparisons between FLU-PRO Plus responders and non-responders were made using the SF-12 scales, pre-COVID health supplemental question, and WPAI scores at Days 15 and 29, and Weeks 8 and 12.
Statistical methods
All statistical tests used a significance level of 0.05 unless otherwise stated. Statistical tests involving multiple comparisons were adjusted for multiplicity to reduce the possibility of Type I error. For continuous variables, mean, median, SD, and range are described. For categorical variables, frequency and percentage are described.
PRO data missing due to early withdrawal from the study, a missed entry, device failure, or non-compliance were not included in the psychometric analyses. No item-level missing data were expected for data collected with electronic devices, as participants were required to select a response before advancing to subsequent items. Due to the rapid initiation of sites, paper completion of questionnaires was sometimes required. Paper records with missing data were scored according to the original FLU-PRO user manual. To score the novel “Smell/taste” domain of FLU-PRO Plus, data on at least one of the two items were required.
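The rule for the novel Smell/taste domain can be sketched as follows; the 0/1 (No/Yes) item coding and the mean-of-available-items scoring are assumptions for illustration, and the FLU-PRO user manual governs the actual algorithm.

```python
def score_smell_taste(loss_of_smell, loss_of_taste):
    """Illustrative scoring of the novel Smell/taste domain.

    Per the rule described in the text, at least one of the two items must be
    non-missing; here the domain is scored as the mean of available responses.
    Item values are assumed to be 0/1 (No/Yes). Returns None if unscorable.
    """
    available = [v for v in (loss_of_smell, loss_of_taste) if v is not None]
    if not available:
        return None  # domain cannot be scored with both items missing
    return sum(available) / len(available)

print(score_smell_taste(1, 0))        # both items present -> mean of the two
print(score_smell_taste(1, None))     # one missing item is tolerated
print(score_smell_taste(None, None))  # not scorable
```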
Results
Qualitative analysis
Study participants
A total of 30 symptomatic patients with a confirmed COVID-19 diagnosis participated in the interviews, which were conducted an average of 34.9 (SD = 15.0; range: 12–66) days after testing. Participants were evenly split in terms of sex and severity of symptoms (mild or moderate/severe) (Table 1). Mean age (range) was 49.8 (22–70) years and most participants (70%) were White. Of the 30 patients included in the sample, 50.0% had mild COVID-19, 33.3% had moderate COVID-19, and 16.7% had severe COVID-19, with 36.7% of participants also having comorbid conditions including diabetes, asthma, and Crohn’s disease. The saturation analysis did not identify any new symptoms in the last code sets, suggesting sufficient interviews were conducted to reach saturation.
COVID-19 symptoms and participant feedback on the FLU-PRO Plus
During qualitative interviews, participants described experiencing a wide array of COVID-19-related symptoms, either spontaneously or following probes, confirming the systemic and variable presentation of the disease. Participants described variability in which symptoms were experienced, when symptoms occurred, and how long they lasted, highlighting the heterogeneous nature of COVID-19. Figure 1 shows the proportion of participants who described experiencing each item of the FLU-PRO Plus questionnaire across different disease severities. Overall, the most common symptoms were feeling weak or tired (100.0%), sleeping more than usual (86.7%), congested/stuffy nose (83.3%), and lack of appetite (83.3%). There was no clear pattern of symptom variation by severity, other than respiratory-related symptoms, which were reported by a higher percentage of participants with severe disease. All participants with severe disease (n = 5; 100.0%) reported trouble breathing and chest congestion, and the majority (n = 4; 80.0%) also reported chest tightness and shortness of breath. Similarly, those with comorbid conditions reported a dry or hacking cough (n = 10; 90.9%), trouble breathing (n = 8; 72.7%), and shortness of breath (n = 7; 63.6%) more often than those without comorbid conditions. Given the variability of symptoms experienced, no individual participant reported that all symptoms in the FLU-PRO Plus were relevant to their personal experience. However, all 34 items received high levels of endorsement, with at least 60% of participants reporting each item as relevant to their experience of COVID-19 (Fig. 2). All items of the instrument mapped directly to the COVID-19 symptoms reported by participants (see Supplementary Table 2 [Online Resource 1] for exemplary quotes).
Overall, participants found the FLU-PRO Plus to be well written and comprehensible (Table 2). Only one item, pertaining to the symptom “head congestion”, was reported as difficult to comprehend by four participants (13.3%), since those participants could misinterpret the item as a symptom similar to “brain fog”.
All 34 items were considered relevant and important to capture the heterogeneity of COVID-19 symptoms. While most participants (73.3%) indicated that the FLU-PRO Plus comprehensively captured their experience, 14 participants (46.7%) mentioned experiencing a disturbance in their thinking and cognitive capacity while ill, which was termed “brain fog” by some. This impact of COVID-19 was not covered by the FLU-PRO Plus.
Participants agreed the FLU-PRO Plus instructions were simple and easy to understand (n = 27; 90%), and most reported that the length of the questionnaire was appropriate for capturing COVID-19 symptoms (n = 26; 86%). There was general agreement that the 24-h recall period would make COVID-19 symptoms easy and useful to report. The majority of participants (n = 23; 76%) found the response options (Not at all, A little bit, Somewhat, Quite a bit, Very much) used for most FLU-PRO Plus items to be appropriate. The Yes/No response option for the loss of smell and taste items was also reported as adequate by 90% of participants; one participant suggested a scale could be more useful if the response options reflected a partial loss, decrease, or change in these senses.
Quantitative analysis
Patients
Of the 1057 patients enrolled in COMET-ICE at the time of the analyses, 845 had FLU-PRO Plus score data at Day 1 and at least one follow-up visit, and were therefore included in the psychometric analyses. Mean (SD) age was 52.3 (14.9) years, 55.3% of patients were female, and the majority were White (87.1%) (Table 3). Of the 845 patients included in the analyses, the proportion who completed the FLU-PRO Plus questionnaire was 75.0% at Day 2, 53.7% at Day 21, 62.8% at Day 29, 58.2% at Week 8, and 48.2% at Week 12.
FLU-PRO Plus item evaluation and scaling
For the quantitative analysis, mean FLU-PRO Plus item scores ranged from 0.3 (vomiting) to 2.1 (weak or tired, and coughing) at Day 1, and most items were experienced by more than 50% of patients (Fig. 3). There was good use of the range of response options, and at least some participants endorsed each response option for every symptom (Fig. 4). In addition, there were some expected floor effects due to the heterogeneity of COVID-19 symptoms. Mean item scores declined over time, but a range of response options continued to be selected and floor effects were maintained, with less severe response options selected more frequently at later trial timepoints.
At Day 1, WPAI-GH scores were high and SF-12 scores were low, indicating a substantial impact of COVID-19 on patients, but were followed by notable improvements by Day 15 and through Week 12.
Confirmatory analysis
The multi-factor models (defined based on the conceptual models for the original FLU-PRO [six factors] and FLU-PRO Plus [seven factors]) yielded an acceptable fit to the data, with factor loadings > 0.4 for all items, and > 0.7 for most items (Table 4).
FLU-PRO Plus assessment of reliability, validity, and responsiveness
At Day 1, internal consistency reliability (Cronbach’s coefficient alpha) for the original FLU-PRO and FLU-PRO Plus total scores was 0.95 and 0.94, respectively. Cronbach’s coefficient alpha for the FLU-PRO Plus domain scores ranged from 0.71 (gastrointestinal) to 0.90 (body/systemic); the smell/taste domain score was 0.86.
In the post hoc reproducibility analysis conducted using data from Day 20 to 21, ICCs were good for both total scores and all domains. For total scores, ICCs were 0.82 and 0.83 for FLU-PRO and FLU-PRO Plus, respectively. For domain scores, ICCs were 0.89 for smell/taste, 0.84 for throat, 0.82 for nose and chest/respiratory, 0.81 for body/systemic, 0.68 for eyes, and 0.67 for gastrointestinal.
An analysis of construct validity at Day 1 is presented in Table 5. Moderate correlations between FLU-PRO Plus total score and SF-12 (mental and physical components, and role physical domain) scores were observed at Days 1, 7, and 15 (r range: − 0.37 to − 0.55). Moderate correlations were observed between FLU-PRO Plus total score and the WPAI-GH (r range: 0.41 to 0.58).
Known-groups validity and responsiveness were also demonstrated. Analysis using the WPAI-GH—Activity Impairment showed significant known-groups validity for the original FLU-PRO and FLU-PRO Plus total scores (p < 0.0001) at all timepoints, and the domains showed similar results (Supplementary Tables 3–7 [Online Resource 1]). FLU-PRO Plus total score was responsive to change in WPAI-GH score from Day 1 to Day 29 (n = 173; p < 0.05 for all) (Supplementary Table 8 [Online Resource 1]). The a priori responder definition performed well. For all comparisons of all variables at each timepoint, responders had better scores than non-responders (p < 0.05), except for missed work time at Week 12 (Supplementary Tables 9–12 [Online Resource 1]).
Discussion
Findings from this qualitative study and psychometric evaluation support the content validity and conceptual structure of the FLU-PRO Plus in the setting of COVID-19, indicating that the validity of the measure in COVID-19 is consistent with its demonstrated validity in other viral respiratory diseases. Furthermore, our study is among the first to explore COVID-19 symptoms and experience qualitatively, directly from the patient perspective, and we extend previous quantitative work to a greater number of participants with more severe disease.
Both the qualitative and quantitative studies tested and confirmed that FLU-PRO Plus is an appropriate and comprehensive tool for measuring COVID-19 symptoms and their improvement. We provide valuable qualitative evidence of high levels of participant endorsement of all FLU-PRO Plus items as relevant. Participants also confirmed they experienced symptoms in the manner described in the FLU-PRO Plus items and understood the meaning of each item appropriately.
The addition of two new items (loss of taste and loss of smell) to assess COVID-19 symptoms was supported by the data. These symptoms were endorsed during interviews and fit into the adapted conceptual framework of the FLU-PRO instrument as a separate domain (smell/taste), thus supporting their inclusion in the total FLU-PRO Plus score. The possibility of including an additional item to account for the “brain fog” noted by some participants may warrant further study, of both wording/response options and the investigation of “brain fog” as a multi-dimensional concept resulting from COVID-19 symptoms (rather than a symptom of COVID-19 itself).
Our analyses support the reliability, reproducibility, construct validity, known-groups validity, and responsiveness of the FLU-PRO Plus. FLU-PRO Plus scores declined throughout the trial as patients' illness improved, and scores were responsive to changes in the WPAI-GH activity impairment score. This supports the overall validity of the FLU-PRO Plus questionnaire for assessing COVID-19 symptoms. This conclusion is supported by another recent study, which showed FLU-PRO Plus was reliable, valid, and responsive to change in patients with COVID-19 [4].
Recently, FLU-PRO Plus was endorsed as an outcome measure by the International Consortium for Health Outcomes Measurement COVID-19 Working Group, further supporting its integration into research activities and in assessing COVID-19 symptoms [23]. Based on our findings and use of the FLU-PRO Plus in the COMET-ICE trial, the instrument is suitable for use in observational studies, clinical trials of COVID-19 treatments, and clinical practice with the purpose of evaluating COVID-19 symptoms and improvements, directly from the patient perspective. FLU-PRO Plus can also be used as an outcomes assessment in COVID-19 studies.
This work is consistent with good scientific principles and those articulated in the US Food and Drug Administration (FDA) Patient-Reported Outcome guidance [24, 25]. The FDA has issued guidance for assessing COVID-19 in clinical trials, which includes 14 common symptoms [26]. FLU-PRO Plus consists of 34 items and encompasses all the common symptoms contained in the guidance. All items of the FLU-PRO Plus questionnaire received high levels of endorsement by participants during the qualitative interviews, and quantitative analyses demonstrated a good distribution across all item response categories. These data underscore the known symptom heterogeneity reported among patients with COVID-19 [27]. Furthermore, they highlight the importance of a comprehensive tool which covers the diversity of symptoms experienced, and the loss of information which will occur when using measures that focus solely on the clinician-identified “core symptoms” of COVID-19.
Other PRO instruments have since been designed to evaluate COVID-19 symptoms, such as the 23-item Symptoms Evolution of COVID-19 (SE-C19) [28]. The SE-C19 also uses a recall period of the past 24 h, and its response options include No symptoms, Mild, Moderate, and Severe. To validate the instrument, 30 non-hospitalized patients with COVID-19 participated in concept elicitation and cognitive debriefing interviews. Minor improvements to SE-C19 were suggested to improve conceptual clarity, including separating loss of smell/taste into two items, as in FLU-PRO Plus. The Symptom-Burden Questionnaire for Long COVID (SBQ™-LC) has also been recently developed to assess the symptom burden of “long” COVID-19 [29].
There are some potential limitations to the studies described here. In the qualitative analysis, all interviews had to be conducted remotely, rather than in person, where nonverbal and behavioral nuances important for interpreting cognitive interviews can be detected. To mitigate this, interviewers were trained to listen for lengthy pauses, recognize changes in tone and inflection, and detect verbal indications of confusion, which could indicate challenges in understanding and/or responding to an item. Webcams were used whenever possible to facilitate face-to-face interaction. While limited technical familiarity prevented some participants from participating by webcam, no differences in responses across these two modes were noted. Due to the acute nature of COVID-19, most participants had largely recovered by the time of interview, so symptoms were discussed retrospectively. However, all interviews were conducted within 66 days of a positive COVID-19 diagnosis, and participants did not experience difficulties remembering and discussing details of their illness. Additionally, purposive sampling methods were used to ensure that at least 20–30% of participants had pre-existing conditions that placed them in a higher risk category. While this was achieved, the primary comorbidities documented were obesity or diabetes, so future studies should capture a greater range of pre-existing conditions. In addition, there were few participants aged ≥ 65 years, who are likely to be most severely impacted by COVID-19. Future analyses should investigate symptom burden among this group. In the quantitative analysis, use of a population with mild-to-moderate COVID-19 symptoms may limit the generalizability of the results to patients with more severe symptoms. Finally, due to the dynamic nature of the pandemic, COMET-ICE trial sites were initiated rapidly under aggressive study-initiation timelines.
As a result, there was insufficient time to build electronic PRO instruments, and many patients completed questionnaires on paper before transitioning to electronic PRO measures, which affected the response rate. However, only completed, available data were used in this psychometric evaluation, and the impact of missing data on these findings is therefore assumed to be minimal. In addition, these time constraints precluded the inclusion of typical anchors used in a psychometric analysis such as this one, with the exception of the WPAI.
Conclusion
The qualitative analysis supports the content validity of FLU-PRO Plus, in that the concepts measured are relevant and important to patients with COVID-19, and the questions and response options are understandable. The results of the psychometric analyses support the reliability, validity, and responsiveness of FLU-PRO Plus in individuals with symptoms of COVID-19. FLU-PRO Plus is a well-defined, reliable, and psychometrically sound measure with proven construct and content validity. These findings therefore indicate that FLU-PRO Plus is an appropriate tool for measuring symptoms of COVID-19.
Data availability
Anonymized individual participant data and study documents can be requested for further research from https://www.clinicalstudydatarequest.com.
References
Keeley, T., Gelhorn, H. L., Andrews, H., Chen, W. H., Birch, H., Satram, S., Reyes, C., & Lopuski, A. (2021). (3005) Psychometric validation of the FLU-PRO Plus among patients with COVID-19: Measurement properties, challenges, and opportunities. Quality of Life Research, 30(Suppl 1), S85.
Raymond, K., Keeley, T., Birch, H., Satram, S., Reyes, C., Saucier, C., Tipple, C., Foster, A., Lovely, A., & Kosinski, M. (2021). (B203.4) Qualitative interviews with COVID-19 patients: Content validity of the FLU-PRO Plus for use in COVID-19 clinical research. Quality of Life Research, 30(Suppl 1), S63.
Gebru, A. A., Birhanu, T., Wendimu, E., Ayalew, A. F., Mulat, S., Abasimel, H. Z., Kazemi, A., Tadesse, B. A., Gebru, B. A., Deriba, B. S., Zeleke, N. S., Girma, A. G., Munkhbat, B., Yusuf, Q. K., Luke, A. O., & Hailu, D. (2021). Global burden of COVID-19: Situational analyis and review. Human Antibodies, 29(2), 139–148. https://doi.org/10.3233/hab-200420.
Richard, S. A., Epsi, N. J., Pollett, S., Lindholm, D. A., Malloy, A. M. W., Maves, R., Utz, G. C., Lalani, T., Smith, A. G., Mody, R. M., Ganesan, A., Colombo, R. E., Colombo, C. J., Chi, S. W., Huprikar, N., Larson, D. T., Bazan, S., Madar, C., Lanteri, C., ... & Epidemiology, Immunology, and Clinical Characteristics of Pandemic Infectious Diseases (EPICC) COVID-19 Cohort Study Group. (2021). Performance of the inFLUenza Patient-Reported Outcome Plus (FLU-PRO Plus) instrument in patients with coronavirus disease 2019. Open Forum Infectious Diseases, 8(12), 517. https://doi.org/10.1093/ofid/ofab517.
CDC. (2020). Similarities and differences between flu and COVID-19. Retrieved September 15, 2021, from https://www.cdc.gov/flu/symptoms/flu-vs-covid19.htm.
Powers, J. H., Guerrero, M. L., Leidy, N. K., Fairchok, M. P., Rosenberg, A., Hernández, A., Stringer, S., Schofield, C., Rodríguez-Zulueta, P., Kim, K., Danaher, P. J., Ortega-Gallegos, H., Bacci, E. D., Stepp, N., Galindo-Fraga, A., St Clair, K., Rajnik, M., McDonough, E. A., Ridoré, M., ... & Ruiz-Palacios, G. M. (2016). Development of the Flu-PRO: A patient-reported outcome (PRO) instrument to evaluate symptoms of influenza. BMC Infectious Diseases, 16, 1. https://doi.org/10.1186/s12879-015-1330-0.
Yu, J., Powers, J. H., 3rd., Vallo, D., & Falloon, J. (2020). Evaluation of efficacy endpoints for a Phase IIb study of a respiratory syncytial virus vaccine in older adults using patient-reported outcomes with laboratory confirmation. Value Health, 23(2), 227–235. https://doi.org/10.1016/j.jval.2019.09.2747.
Powers, J. H., 3rd., Bacci, E. D., Leidy, N. K., Poon, J. L., Stringer, S., Memoli, M. J., Han, A., Fairchok, M. P., Coles, C., Owens, J., Chen, W. J., Arnold, J. C., Danaher, P. J., Lalani, T., Burgess, T. H., Millar, E. V., Ridore, M., Hernández, A., Rodríguez-Zulueta, P., ... & Guerrero, M. L. (2018). Performance of the inFLUenza Patient-Reported Outcome (FLU-PRO) diary in patients with influenza-like illness (ILI). PLoS ONE, 13(3), e0194180. https://doi.org/10.1371/journal.pone.0194180.
Tong, J. Y., Wong, A., Zhu, D., Fastenberg, J. H., & Tham, T. (2020). The prevalence of olfactory and gustatory dysfunction in COVID-19 patients: A systematic review and meta-analysis. Otolaryngology Head and Neck Surgery, 163(1), 3–11. https://doi.org/10.1177/0194599820926473.
Gupta, A., Gonzalez-Rojas, Y., Juarez, E., Crespo Casal, M., Moya, J., Falci, D. R., Sarkis, E., Solis, J., Zheng, H., Scott, N., Cathcart, A. L., Hebner, C. M., Sager, J., Mogalian, E., Tipple, C., Peppercorn, A., Alexander, E., Pang, P. S., Free, A., ... & COMET-ICE Investigators. (2021). Early treatment for Covid-19 with SARS-CoV-2 neutralizing antibody sotrovimab. New England Journal of Medicine, 385(21), 1941–1950. https://doi.org/10.1056/NEJMoa2107934.
Gupta, A., Gonzalez-Rojas, Y., Juarez, E., Crespo Casal, M., Moya, J., Rodrigues Falci, D., Sarkis, E., Solis, J., Zheng, H., Scott, N., Cathcart, A. L., Parra, S., Sager, J. E., Austin, D., Peppercorn, A., Alexander, E., Yeh, W. W., Brinson, C., Aldinger, M., ... & COMET-ICE Investigators. (2022). Effect of sotrovimab on hospitalization or death among high-risk patients with mild to moderate COVID-19: A randomized clinical trial. JAMA, 327(13), 1236–1246. https://doi.org/10.1001/jama.2022.2832.
Ware, J., Jr., Kosinski, M., & Keller, S. D. (1996). A 12-Item Short-Form Health Survey: Construction of scales and preliminary tests of reliability and validity. Medical Care, 34(3), 220–233. https://doi.org/10.1097/00005650-199603000-00003.
Reilly, M. C., Zbrozek, A. S., & Dukes, E. M. (1993). The validity and reproducibility of a work productivity and activity impairment instrument. PharmacoEconomics, 4(5), 353–365. https://doi.org/10.2165/00019053-199304050-00006.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.
Muthén, L. K., & Muthén, B. O. (1998–2017). Mplus user’s guide (8th ed.). Muthén & Muthén.
Wheaton, B., Muthén, B., Alwin, D. F., & Summers, G. F. (1977). Assessing reliability and stability in panel models. Sociological Methodology, 8, 84–136. https://doi.org/10.2307/270754.
Steiger, J. H. (2007). Understanding the limitations of global fit assessment in structural equation modeling. Personality and Individual Differences, 42(5), 893–898. https://doi.org/10.1016/j.paid.2006.09.017.
Yu, C. Y. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes [Doctoral dissertation]. University of California, Los Angeles.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555.
Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use. Oxford University Press.
Leidy, N. K., Revicki, D. A., & Genesté, B. (1999). Recommendations for evaluating the validity of quality of life claims for labeling and promotion. Value in Health, 2(2), 113–127. https://doi.org/10.1046/j.1524-4733.1999.02210.x.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
Seligman, W. H., Fialho, L., Sillett, N., Nielsen, C., Baloch, F. M., Collis, P., Demedts, I. K. M., Fleck, M. P., Floriani, M. A., Gabriel, L. E. K., Gagnier, J. J., Keetharuth, A., Londral, A., Ludwig, I. I. L., Lumbreras, C., Moscoso Daza, A., Muhammad, N., Nader Bastos, G. A., Owen, C. W., ... & Brinkman, K. (2021). Which outcomes are most important to measure in patients with COVID-19 and how and when should these be measured? Development of an international standard set of outcomes measures for clinical use in patients with COVID-19: A report of the International Consortium for Health Outcomes Measurement (ICHOM) COVID-19 Working Group. BMJ Open, 11(11), e051065. https://doi.org/10.1136/bmjopen-2021-051065.
Patrick, D. L., Burke, L. B., Gwaltney, C. J., Leidy, N. K., Martin, M. L., Molsen, E., & Ring, L. (2011). Content validity–establishing and reporting the evidence in newly developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO good research practices task force report: Part 1–eliciting concepts for a new PRO instrument. Value in Health, 14(8), 967–977. https://doi.org/10.1016/j.jval.2011.06.014.
US Food and Drug Administration (FDA). (2009). Patient-reported outcome measures: Use in medical product development to support labeling claims. Retrieved March 3, 2022, from https://www.fda.gov/regulatory-information/search-fda-guidance-documents/patient-reported-outcome-measures-use-medical-product-development-support-labeling-claims.
US Food and Drug Administration (FDA). (2020). Assessing COVID-19-related symptoms in outpatient adult and adolescent subjects in clinical trials of drugs and biological products for COVID-19 prevention or treatment. Guidance for industry. Retrieved March 3, 2022, from https://www.fda.gov/media/142143/download.
Huang, C., Wang, Y., Li, X., Ren, L., Zhao, J., Hu, Y., Zhang, L., Fan, G., Xu, J., Gu, X., Cheng, Z., Yu, T., Xia, J., Wei, Y., Wu, W., Xie, X., Yin, W., Li, H., Liu, M., ... & Cao, B. (2020). Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet (London, England), 395(10223), 497–506. https://doi.org/10.1016/s0140-6736(20)30183-5.
Rofail, D., McGale, N., Im, J., Rams, A., Przydzial, K., Mastey, V., Sivapalasingam, S., & Podolanczuk, A. J. (2022). Development and content validation of the Symptoms Evolution of COVID-19: A patient-reported electronic daily diary in clinical and real-world studies. Journal of Patient Reported Outcomes, 6(1), 41. https://doi.org/10.1186/s41687-022-00448-9.
Hughes, S. E., Haroon, S., Subramanian, A., McMullan, C., Aiyegbusi, O. L., Turner, G. M., Jackson, L., Davies, E. H., Frost, C., McNamara, G., Price, G., Matthews, K., Camaradou, J., Oremerod, J., Walker, A., & Calvert, M. J. (2022). Development and validation of the symptom burden questionnaire for long covid (SBQ-LC): Rasch analysis. BMJ, 377, e070230. https://doi.org/10.1136/bmj-2022-070230.
Acknowledgements
Editorial support (in the form of writing assistance, including preparation of the draft manuscript under the direction and guidance of the authors, collating and incorporating authors’ comments for each draft, assembling tables and figures, grammatical editing, and referencing) was provided by Kathryn Wardle and Tony Reardon of Aura, a division of Spirit Medical Communications Group Limited (Manchester, UK), and was funded by GlaxoSmithKline.
Funding
This study was funded by GlaxoSmithKline in collaboration with Vir (GSK study 215031 and GSK study 215032, Vir-7831-5001, ClinicalTrials.gov ID: NCT04545060).
Author information
Contributions
TJHK, SS, AL, PG, CR, KR, MK, CDS, and AMF contributed to the conception or design of the study, acquisition of the data, and data analysis or interpretation. HJB contributed to the conception and design of the study and data interpretation. HLG and JHP contributed to data analysis and interpretation. All authors contributed to the writing, provided critical review, gave final approval for publication, and take responsibility for its content. All authors meet the criteria for authorship set forth by the International Committee of Medical Journal Editors.
Ethics declarations
Conflict of interest
TJHK, PG, HJB, and AL are employees of, and hold stocks/shares in, GSK. CR and SS are employees of, and hold stocks/shares in, Vir Biotechnology. KR, MK, CDS, and AMF are employees of QualityMetric Incorporated; QualityMetric Incorporated received funding from GSK to conduct part of this research and did not receive funding for manuscript development. HLG is an employee of Evidera; Evidera received funding from GSK to conduct part of this research and did not receive funding for manuscript development. JHP has served as a consultant for Arrevus, Eicos, Eli Lilly, Evofem, Eyecheck, Gilead, GlaxoSmithKline, Johnson & Johnson, Microbion, OPKO, Otsuka, Resolve, Romark, Shionogi, SpineBioPharma, UTIlity, and Vir.
Ethical approval
Ethics approval for the qualitative study was granted by the New England Independent Review Board (NEIRB). NEIRB reviewed and approved all qualitative study materials prior to the start of recruitment. Ethics approval for the psychometric evaluation was covered by the COMET-ICE trial (NCT04545060) IRB.
Consent to participate
All subjects who participated in the qualitative interviews were required to sign an informed consent form prior to interview. Use of patient data in the psychometric evaluation was covered under the COMET-ICE trial's ethical approval (NCT04545060).
Consent to publish
Consent to publish was included in the informed consent form that subjects were required to sign before participating in the qualitative interviews. Publication of patient data analyzed in the psychometric evaluation was covered under the COMET-ICE trial's ethical approval (NCT04545060).
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These data have previously been presented in abstract/poster form at the International Society for Quality of Life Research, virtual conference, October 12–28, 2021 [1, 2].
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Keeley, T.J.H., Satram, S., Ghafoori, P. et al. Content validity and psychometric properties of the inFLUenza Patient-Reported Outcome Plus (FLU-PRO Plus©) instrument in patients with COVID-19. Qual Life Res 32, 1645–1657 (2023). https://doi.org/10.1007/s11136-022-03336-3