Abstract
Uncertainty assessment is a cornerstone in model-based health economic evaluations (HEEs) that inform reimbursement decisions. No comprehensive overview of available uncertainty assessment methods currently exists. We aimed to review methods for uncertainty assessment for use in model-based HEEs, by conducting a snowballing review. We categorised all methods according to their stage of use relating to uncertainty assessment (identification, analysis, communication). Additionally, we classified identification methods according to sources of uncertainty, and subdivided analysis and communication methods according to their purpose. The review identified a total of 80 uncertainty methods: 30 identification, 28 analysis, and 22 communication methods. Uncertainty identification methods exist to address uncertainty from different sources. Most identification methods were developed with the objective to assess related concepts such as validity, model quality, and relevance. Almost all uncertainty analysis and communication methods required uncertainty to be quantified and inclusion of uncertainties in probabilistic analysis. Our review can help analysts and decision makers in selecting uncertainty assessment methods according to their aim and purpose of the assessment. We noted a need for further clarification of terminology and guidance on the use of (combinations of) methods to identify uncertainty and related concepts such as validity and quality. A key finding is that uncertainty assessment relies heavily on quantification, which may necessitate increased use of expert elicitation and/or the development of methods to assess unquantified uncertainty.
1 Introduction
Health economic evaluation (HEE) can support reimbursement decision making by comparing the cost and effects of health technologies. These evaluations are often based on decision-analytic models, and the results of these models always contain uncertainty [1]. Consequently, reimbursement decisions are often uncertain and are potentially associated with a risk of suboptimal decisions [2]. There is pressure on reimbursement institutions to grant market access at an early stage when evidence is immature [3]. As a result, decision makers are forced to make decisions given considerable uncertainty [4], which increases the potential benefit of assessing and managing uncertainty and risks adequately. Considering uncertainty in decision-making processes is challenging: its quantification and communication can be difficult [5] and it is sometimes met with uncertainty intolerance [6]. However, when uncertainties are quantified and communicated, they can be considered more easily and precisely, leading to better decisions.
Uncertainty assessment is considered good practice in HEE [1, 7, 8]. Uncertainty assessments can indicate the robustness of the results to changes in model inputs, the consequences of uncertainty associated with a decision, and the value of collecting additional information to support that decision. The latter can be calculated through value of information (VOI) analyses [9, 10]. The results of uncertainty assessments can inform the design of managed entry agreements (in the form of pricing schemes and/or evidence collection schemes) that minimise the opportunity loss associated with a reimbursement decision [11]. While uncertainty assessment is thus clearly relevant, it has been recognised that, in particular, uncertainty relating to structural aspects of the model is often not included in uncertainty analysis [12].
There is a lack of consistency in terminology around uncertainty assessment. The ISPOR-SMDM taskforce 7 paper on uncertainty assessment and parameter estimation [1] classified uncertainty into four types: stochastic uncertainty, parameter uncertainty, heterogeneity, and structural uncertainty. The TRansparent Uncertainty Assessment Tool (TRUST) [2], which was developed by our research group, classified uncertainty according to its source and model aspects. Sources are transparency, methods, imprecision, bias and indirectness, and unavailability. Model aspects are context/scope, model structure, selection of evidence, inputs, implementation, and outcomes. In addition, the concept of uncertainty is related to concepts such as risk, validity, quality, and relevance. In the context of reimbursement decision making, risk can be defined as the probability of making a wrong reimbursement decision combined with the consequences of making that decision. Model validity is defined as a model’s ability to reproduce reality [13]. In the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system, quality of evidence is equated to the level of confidence in the results of the research [14]. The relevance of a model relates to how closely the problem addressed in the model applies to the problem faced by decision makers [15].
There are methods and method guides for performing assessment of uncertainty and related concepts [1, 16,17,18]. Before an uncertainty can be analysed and communicated to the stakeholders involved, it must first be identified; this requires identifying all uncertainty in an HEE [2]. A comprehensive overview covering methods at the different stages of uncertainty assessment (identification, analysis, and communication) is lacking. In this article, we aimed to comprehensively review methods for uncertainty identification, analysis, and communication for use in model-based HEEs.
2 Methods
We performed a snowballing review in which we identified articles that may be relevant to the research question by screening references and citations as described by Wohlin [19]. We chose eight articles [1, 2, 9,10,11, 13, 15, 20] as starting points for the snowballing search. These articles were chosen because we considered them to be key papers for the topic of uncertainty assessment in HEEs. We conducted two snowballing iterations. For the first iteration we used both reference and citation searches of the initial eight articles, and for the second iteration we used only a reference search of articles that we identified in the first iteration. We conducted the snowballing searches in the Web of Science, Google Scholar and Embase databases on 18 January 2022 using their ‘cited by’ functions and lists of references. The relevant references were then imported for screening through Web of Science and deduplicated automatically there (further details about the literature search are reported in Appendix 1).
We included methodological articles that explicitly described methods to identify, assess or communicate uncertainty for use in model-based HEE. We excluded non-English and non-peer-reviewed articles, as well as articles that were unrelated to HEE. We extracted data on authors, year, and aim/description of the method as presented in the article.
All identified articles were subject to title and abstract screening and, if eligible, to a full-text review by one of the authors (TO). In case of ambiguity, we discussed the inclusion of articles with all authors. We categorised the identified methods into three categories, according to their stage of use: uncertainty identification, analysis, and communication. Furthermore, we subdivided the methods according to their purpose, which emerged from the review; these purpose groups were discussed with all co-authors.
Additionally, we classified all uncertainty identification methods according to the sources of uncertainty that they covered, following the TRUST tool. TRUST describes the following sources of uncertainty: transparency in the reporting; the appropriateness of the methods used; and imprecision, bias and indirectness, and unavailability in the evidence used to develop the model. We considered that a method addressed a source of uncertainty when the method could be used to identify at least a partial lack of knowledge related to that source: for example, a checklist may ask a precise question about one common cause of non-transparent reporting without covering every possible lapse in transparency, and we still considered that checklist to cover transparency. This means that when a method addresses a source of uncertainty (Table 1), the method can help to identify uncertainties from that source, but not necessarily all uncertainties related to that source. The classification was conducted through extensive discussions between the authors.
3 Results
3.1 Literature Search
The search strategy identified 6018 references in total. After deduplication, a total of 3551 references were subject to title and abstract screening (Fig. 1). The first iteration identified a total of 5040 (2945 after deduplication) articles, and 150 were selected for full-text review. Of these, we included 34 articles (+8 starting point articles). Through a reference search for the second iteration of the snowballing review, we found 954 references (606 after deduplication). After title and abstract screening, 99 articles were selected for full-text review, of which 13 met the inclusion criteria. Together with the eight starting point articles, this adds up to a total of 55 included articles. Within these articles we identified 80 methods, of which we categorised 30 as uncertainty identification methods, 28 as uncertainty analysis methods and 22 as uncertainty communication methods (further details on identification and communication methods are reported in Appendices 2 and 3). Figure 2 gives an overview of the stages of uncertainty assessment and the broad aims of the methods that we identified. A more complete figure including all methods can be found in Appendix 4.
3.2 Uncertainty Identification Methods
The majority (19) of the 30 identification methods assessed model quality and transparency of reporting [2, 15, 20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35, 46]. Three methods aimed specifically at determining model validity [36,37,38] and eight at assessing the quality of evidence used to develop the model [14, 39,40,41,42,43,44,45] (Table 1). Methods to assess model quality, transparency and validity were all specific to health economic modelling, whereas methods to assess quality of evidence either examined evidence in general [14, 45] or the quality of specific research designs such as indirect treatment comparisons and network meta-analyses [39], systematic reviews [43], observational studies [40, 42], non-randomised studies [44], or real-world evidence [41]. Knowing the quality of the evidence used to develop an economic model is essential for assessing the uncertainty in the model outputs. With the exception of TRUST [2], none of the methods were created with the aim of identifying uncertainties. Because TRUST [2] (classified as a model quality and transparency of reporting method) provided the framework for our categorisation, it by definition covered all sources of uncertainty. With regard to sources of uncertainty, most methods could identify uncertainties caused by lack of transparency (26) [2, 15, 20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36, 39,40,41,42,43,44, 46] and/or methodological uncertainty (18) [2, 15, 20,21,22,23,24,25,26,27,28, 30, 33,34,35,36,37,38]. Besides TRUST, imprecision was considered by five quality-of-evidence methods [39,40,41,42,43,44] and unavailability was considered by two quality-of-evidence methods [39, 42].
Descriptions and details about how each individual method was categorised can be found in Appendix 2.
3.3 Uncertainty Analysis Methods
We identified 28 uncertainty analysis methods, 16 of which were variations of one of the following five overarching methods: expected acceptability analysis [1], deterministic sensitivity analysis (DSA) [1], importance analysis [53], VOI analysis [9], and opportunity loss analysis [56, 57]. We subdivided the uncertainty analysis methods into four groups according to their purpose: quantify uncertain inputs when quantitative estimates are unavailable or indirect (2); generate base-case results using the most plausible assumption (7); explore uncertainty not included in the base-case (11); analyse consequences of uncertainty such as risk and VOI (8). All methods in the latter three groups (generating base-case results, uncertainty exploration, consequences of uncertainty) can analyse uncertainties from all sources, provided the uncertainty is quantified. Only threshold analysis can be used to analyse an uncertainty that is not quantified. Thirteen methods require a probabilistic analysis (expected acceptability analysis [1], analysis for policy acceptability [49], fuzzy expected acceptability [50], probabilistic DSA [52], all six VOI methods [9, 55], HTA risk analysis [57], expected loss analysis [56], and real-options analysis [59]).
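To make the link between probabilistic analysis, expected acceptability, and VOI concrete, the following minimal Monte Carlo sketch computes the probability that a new strategy is cost-effective and a per-decision expected value of perfect information (EVPI). All distributions, the willingness-to-pay threshold, and the two-strategy setup are illustrative assumptions, not values taken from the reviewed articles:

```python
import numpy as np

# Minimal probabilistic analysis sketch (all distributions hypothetical).
rng = np.random.default_rng(seed=1)
n_sim = 10_000
wtp = 20_000  # assumed willingness-to-pay threshold per QALY

# Incremental costs and QALYs of a new strategy vs. standard care
inc_cost = rng.normal(5_000, 1_500, n_sim)
inc_qaly = rng.normal(0.30, 0.15, n_sim)

# Incremental net monetary benefit per simulation
inmb = wtp * inc_qaly - inc_cost

# Expected acceptability: probability the new strategy is cost-effective
p_ce = float(np.mean(inmb > 0))

# Per-decision EVPI: expectation of the per-simulation best choice
# minus the best choice under current expectations
evpi = float(np.mean(np.maximum(inmb, 0.0)) - max(np.mean(inmb), 0.0))

print(f"P(cost-effective at WTP {wtp}): {p_ce:.2f}")
print(f"EVPI per decision: {evpi:,.0f}")
```

In practice the simulated net benefits would come from the decision model itself rather than from closed-form distributions; the EVPI line is the standard two-option special case of E[max over options] minus max of expectations.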
3.4 Uncertainty Communication Methods
We identified 22 uncertainty communication methods, nine of which were a variation of one of the following four overarching methods: cost-effectiveness planes [60], efficiency frontiers [63], expected acceptability curves [65], and tornado diagrams [1] (see the expanded table in Appendix 3). We used the same subdivisions as for the analysis methods, except that there were no communication methods in the ‘quantification of uncertainty’ group. The groups of uncertainty communication methods according to their purpose were communicate base-case results (11), communicate the results of uncertainty exploration (5), and communicate the consequences of uncertainty (6). All uncertainty communication methods, except for the Assessment of Risk Table (ART) and the Assessment of Risk Chart (ARCH), can include uncertainties from all sources only when those uncertainties are quantified. The majority (17) of the communication methods require probabilistic analysis. Only the methods range of results [1], standard tornado diagram [1], stepwise tornado diagram [52], distributional tornado diagram [52], and threshold analysis figure [1] do not require a probabilistic analysis.
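As an illustration of one of the overarching communication methods, an expected (cost-effectiveness) acceptability curve plots the probability of being cost-effective across a range of willingness-to-pay thresholds. A minimal sketch using hypothetical probabilistic analysis output (all values illustrative):

```python
import numpy as np

# Cost-effectiveness acceptability curve from hypothetical PSA output.
rng = np.random.default_rng(seed=2)
n_sim = 5_000
inc_cost = rng.normal(5_000, 1_500, n_sim)   # illustrative incremental costs
inc_qaly = rng.normal(0.30, 0.15, n_sim)     # illustrative incremental QALYs

# Probability of being cost-effective at each willingness-to-pay threshold
wtp_grid = np.arange(0, 50_001, 1_000)
ceac = [float(np.mean(w * inc_qaly - inc_cost > 0)) for w in wtp_grid]

# Print a few points of the curve (it would normally be plotted)
for w, p in list(zip(wtp_grid, ceac))[::10]:
    print(f"WTP {w:>6}: P(cost-effective) = {p:.2f}")
```

The same simulated incremental costs and effects also underpin the cost-effectiveness plane, which simply scatters the (incremental effect, incremental cost) pairs.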
4 Discussion
We identified 80 uncertainty assessment methods, of which 30 were uncertainty identification, 28 were uncertainty analysis, and 22 were uncertainty communication methods. All but one uncertainty identification method focus on specific sources of uncertainty. Nearly all methods for uncertainty analysis and communication require uncertainties to be quantified and the majority (30 of 50) also require inclusion of uncertainty in a probabilistic analysis. Of note, most of the uncertainty analysis and communication methods serve different purposes and are therefore not interchangeable. Our categorisation, division by purpose and description can help provide clarity as to what method may be appropriate for the desired purpose and stage of assessment. The desired purpose may depend on local requirements, for example guidance by health technology assessment (HTA) agencies. The analysis method used also depends on whether the uncertainty was quantified or not.
Briggs et al. [1] and Bilcke et al. [18] also provide an overview of uncertainty assessment methods. Briggs et al. [1] presented seven uncertainty analysis and five communication methods, describing their use and best practices in detail. Bilcke et al. [18] provided a flowchart with steps of uncertainty analysis depending on the type of uncertainty faced. Our article complements this literature by presenting an up-to-date and comprehensive review of 80 uncertainty assessment methods. In addition, our categorisation of methods by stage of assessment (identification, analysis and communication of uncertainty) and purpose may help in further clarifying the use of these methods within uncertainty assessment. Furthermore, we highlight which sources of uncertainty are covered by the different uncertainty identification methods (Table 1). This classification can be used to guide the selection of identification methods. The overlap in sources of uncertainty treated throughout different categories of uncertainty identification methods highlights the lack of conceptual clarity of different concepts.
Our study has some limitations. The starting point papers were predominantly focused on the field of HTA and health economics. We attempted a broader search and review but abandoned it as infeasible: it would have required a much larger search and review, as well as an assessment of how any additional methods identified could be applied in the present context. Such a broader review could be an interesting topic for a separate article.
For feasibility reasons, we chose a snowballing review approach, and although 80 methods were identified, relevant methods may have been missed. Another challenge was that methods classified as uncertainty identification methods in the current review included methods that were developed to assess concepts such as validity, model quality, quality of evidence or model transparency. These were also able to identify uncertainties. Similarly, methods to assess uncertainties can identify issues regarding validity or model quality. This highlights that these concepts are related and that their boundaries are not always clear. Therefore, we had to review a broad range of literature to identify potentially useful methods for uncertainty assessment in health economic models. The lack of conceptual clarity for terms relating to uncertainty complicates methodological research in this area [14].
Another limitation of this study is the subjectivity in the classification of identification methods according to the sources of uncertainty that they assess. To mitigate this, all classifications were extensively discussed between the authors to obtain consensus on potentially ambiguous classifications. Because there was no objective threshold at which a source of uncertainty could be considered ‘fully’ covered, we considered a source of uncertainty to be covered even if this was only partially the case. As a result, in practical use the uncertainty identification methods may not fully cover some sources of uncertainty.
The comprehensive set of methods presented in Tables 1, 2 and 3 can help analysts to select methods for an uncertainty assessment, and decision makers in the interpretation of results, by giving them a brief explanation of how a method works.
Within the group of identification methods, the quality and transparency of reporting methods are mostly checklists with similar aims, but with different levels of detail and different comprehensiveness in the sources of uncertainty that they may identify. The framework for quality assessment of decision-analytic models proposed by Philips et al. [20], and CHEERS 2022 [27], have a high level of detail, with the former focused on model-based evaluations and the latter not specific to them. CHEERS has been widely endorsed as a reporting tool for HEEs. The detailed items of CHEERS make it easy to use for less experienced analysts and improve reproducibility. However, the framework by Philips et al. and CHEERS 2022 cover only two sources of uncertainty: transparency and methods. TRUST [2] is the most comprehensive tool that we found, covering all sources of uncertainty, but it does not contain detailed questions about each item. This shows that checklists for uncertainty identification are complementary, and we recommend that both level of detail and comprehensiveness are considered when selecting methods.
Within the categories of methods for uncertainty analysis and communication, the selection of a method will depend on the specific aim and audience of the analysis. For instance, the National Institute for Health and Care Excellence (NICE), National Health Care Institute (Zorginstituut Nederland; ZIN) and Health Information and Quality Authority (HIQA) require the use of probabilistic analyses, cost-effectiveness acceptability curves, simple tornado diagrams, and scenario analyses [17, 68, 69]. Furthermore, selection of methods will depend on whether uncertainty is expressed as a probability, and constraints in time and skills of the analysts. We consider that an application and comparison of methods, with the outlining of advantages and disadvantages, could be an interesting topic for further research. However, we note that most analysis and communication methods serve different purposes and can therefore not be used interchangeably.
Additionally, further research is necessary on how to analyse and communicate uncertainty that is not quantified. We found only two methods to analyse unquantified uncertainty (expert elicitation and threshold analysis [1]) and three methods to communicate it (the threshold analysis figure [1], the ART [67] and the ARCH [67]). Threshold analyses and the related figure estimate how sensitive model outcomes are to an input even when its uncertainty is not quantified. Both ART and ARCH ask analysts to state information about unquantified uncertainty, risk, and their impact on decision making. Given imperfect evidence and the challenges of quantifying uncertainties, it is unlikely that all uncertainties will ever be quantified [5].
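A threshold analysis needs no probability distribution for the uncertain input: it scans a plausible range and reports where the decision would change. A sketch with a hypothetical one-parameter net-benefit model (the linear form and all values are assumptions for illustration only):

```python
import numpy as np

# Threshold analysis sketch: scan a plausible range for one unquantified
# input and locate where the decision (sign of incremental net benefit)
# would change. The linear model and all values are illustrative assumptions.
wtp = 20_000  # willingness-to-pay per QALY

def inmb(relative_effect):
    """Incremental net monetary benefit as a hypothetical linear function
    of one uncertain input."""
    inc_qaly = 0.5 * relative_effect  # assumed translation into QALY gain
    inc_cost = 4_000                  # assumed fixed incremental cost
    return wtp * inc_qaly - inc_cost

grid = np.linspace(0.0, 1.0, 10_001)         # plausible range for the input
flipped = inmb(grid) > 0
threshold = float(grid[np.argmax(flipped)])  # first value where it flips
print(f"Decision switches at a relative effect of about {threshold:.2f}")
```

Decision makers must then judge, without a stated probability, how likely it is that the true value lies beyond the threshold; this is the implicit judgement discussed below.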
Structured expert elicitation can help to quantify previously unquantified uncertainty. This allows these types of uncertainties to be included in analysis and communication methods when this may otherwise not be possible [70]. Moreover, compared with other methods of obtaining new evidence, expert elicitation can be used to quickly and efficiently quantify inputs and uncertainty [71]. While expert elicitation is a versatile method, it is limited by the availability of experts with sufficient expertise [71]. Furthermore, HTA agencies making reimbursement decisions based on model-based economic analyses offer limited guidance on the conduct and reporting of expert elicitation [69, 72]. As a possible result of this, a review of company submissions to NICE found significant variations in the quality of conduct and reporting of expert elicitation [73]. The endorsement and use of reporting guidelines [70, 74, 75], developed specifically for structured expert elicitation in HTA, may help improve quality and transparency.
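As one illustration of how structured elicitation output can feed a probabilistic analysis, an elicited best estimate and 95% credible interval for a probability can be converted into a beta distribution by moment matching. The figures and the normal approximation of the interval width are illustrative assumptions, not prescriptions from the cited elicitation guidance:

```python
import numpy as np

# Sketch: convert an elicited best estimate and 95% credible interval for a
# probability into a beta distribution by moment matching (values hypothetical).
best_estimate = 0.30
lower, upper = 0.15, 0.45  # elicited 95% credible interval

sd = (upper - lower) / 3.92  # interval width under a normal approximation
var = sd ** 2

# Method-of-moments beta parameters from mean and variance
common = best_estimate * (1 - best_estimate) / var - 1
alpha = best_estimate * common
beta = (1 - best_estimate) * common

# The fitted distribution can then feed a probabilistic analysis
rng = np.random.default_rng(seed=3)
draws = rng.beta(alpha, beta, 10_000)
print(f"alpha = {alpha:.1f}, beta = {beta:.1f}, sample mean = {draws.mean():.2f}")
```

In a full elicitation exercise, judgements from several experts would typically be pooled and the fitted distribution checked back with the experts before use.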
All methods except for expert elicitation rely on implicit judgements: the interpretation of the results of threshold analyses and figures requires decision makers to judge the likelihood of the threshold being reached, and ART and ARCH require analysts or decision makers to state information on unquantified uncertainty. Current assessments of uncertainties that are not quantified are therefore not transparent. Furthermore, detailed guidance from reimbursement agencies on how to assess unquantified uncertainty is currently missing. Further research is necessary to find out how to address unquantified uncertainty adequately and transparently.
5 Conclusion
Uncertainty assessment methods exist that can address uncertainty from different sources. The overview presented here will help analysts and decision makers in the choice of methods for uncertainty assessments. Quantification of uncertainty is key to uncertainty analysis and communication. Where not all uncertainty is quantified, efforts need to be undertaken to analyse and communicate the unquantified uncertainty and resulting consequences. Further clarification of terminology and guidance on the use of (combinations of) methods to identify uncertainty, and related concepts such as validity and quality, could improve uncertainty assessment.
References
Briggs AH, Weinstein MC, Fenwick EAL, Karnon J, Sculpher MJ, Paltiel AD. Model parameter estimation and uncertainty: a report of the ISPOR-SMDM modeling good research practices task force-6. Value Health. 2012;15(6):835–42. https://doi.org/10.1016/j.jval.2012.04.014.
Grimm SE, et al. Development and validation of the TRansparent Uncertainty ASsessmenT (TRUST) tool for assessing uncertainties in health economic decision models. Pharmacoeconomics. 2020;38(2):205–16. https://doi.org/10.1007/s40273-019-00855-9.
Davis C, Naci H, Gurpinar E, Poplavska E, Pinto A, Aggarwal A. Availability of evidence of benefits on overall survival and quality of life of cancer drugs approved by European Medicines Agency: retrospective cohort study of drug approvals 2009–13. BMJ. 2017. https://doi.org/10.1136/bmj.j4530.
Sabry-Grant C, Malottki K, Diamantopoulos A. The cancer drugs fund in practice and under the new framework. Pharmacoeconomics. 2019;37(7):953–62. https://doi.org/10.1007/s40273-019-00793-6.
Petersohn S, Grimm SE, Ramaekers BLT, ten Cate-Hoek AJ, Joore MA. Exploring the feasibility of comprehensive uncertainty assessment in health economic modeling: a case study. Value Health. 2021;24(7):983–94. https://doi.org/10.1016/j.jval.2021.01.004.
Grutters JPC, Van Asselt MBA. Healthy decisions: towards uncertainty tolerance in healthcare policy. Pharmacoeconomics. 2015. https://doi.org/10.1007/s40273-014-0201-7.
NICE. Guide to the methods of technology appraisal 2013. London: National Institute for Health and Care Excellence; 2013.
Russell LB. Comparing model structures in cost-effectiveness analysis. Med Decis Mak. 2005;25(5):485–6. https://doi.org/10.1177/0272989X05281155.
Fenwick E, et al. Value of information analysis for research decisions—an introduction: report 1 of the ISPOR value of information analysis emerging good practices task force. Value Health. 2020;23(2):139–50. https://doi.org/10.1016/j.jval.2020.01.001.
Rothery C, et al. Value of information analytical methods: report 2 of the ISPOR value of information analysis emerging good practices task force. Value Health. 2020;23(3):277–86. https://doi.org/10.1016/j.jval.2020.01.004.
Garrison LP, et al. Performance-based risk-sharing arrangements—Good practices for design, implementation, and evaluation: report of the ISPOR good practices for performance-based risk-sharing arrangements task force. Value Health. 2013;16(5):703–19. https://doi.org/10.1016/j.jval.2013.04.011.
Bojke L, Claxton K, Sculpher M, Palmer S. Characterizing structural uncertainty in decision analytic models: a review and application of methods. Value Health. 2009;12(5):739–49. https://doi.org/10.1111/j.1524-4733.2008.00502.x.
Eddy DM, Hollingworth W, Caro JJ, Tsevat J, McDonald KM, Wong JB. Model transparency and validation: a report of the ISPOR-SMDM modeling good research practices task force-7. Value Health. 2012;15(6):843–50. https://doi.org/10.1016/j.jval.2012.04.012.
Guyatt GH, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. Chinese J Evidence-Based Med. 2009;9(1):8–11.
Caro J, et al. Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC good practice task force report. Value Health. 2014;17(2):174–82. https://doi.org/10.1016/j.jval.2014.01.003.
Briggs A, Sculpher M, Claxton K. Decision modelling for health economic evaluation. Oxford: Oxford University Press; 2006.
Zorginstituut Nederland. Guideline for economic evaluations in healthcare. 2016. https://english.zorginstituutnederland.nl/publications/reports/2016/06/16/guideline-for-economic-evaluations-in-healthcare.
Bilcke J, Beutels P, Brisson M, Jit M. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide. Med Decis Mak. 2011;31(4):675–92. https://doi.org/10.1177/0272989X11409240.
Wohlin C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. EASE '14: Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering. 2014. https://doi.org/10.1145/2601248.2601268.
Philips Z, Bojke L, Sculpher M, Claxton K, Golder S. Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics. 2006;24(4):355–71. https://doi.org/10.2165/00019053-200624040-00006.
Evers S, Goossens M, De Vet H, Van Tulder M, Ament A. Criteria list for assessment of methodological quality of economic evaluations: Consensus on Health Economic Criteria. Int J Technol Assess Health Care. 2005;21(2):240–5. https://doi.org/10.1017/s0266462305050324.
Ungar WJ, Santos MT. The pediatric quality appraisal questionnaire: an instrument for evaluation of the pediatric health economics literature. Value Health. 2003;6(5):584–94. https://doi.org/10.1046/j.1524-4733.2003.65253.x.
Ofman JJ, et al. Examining the value and quality of health economic analyses: implications of utilizing the QHES. J Manag Care Pharm. 2003;9(1):53–61. https://doi.org/10.18553/jmcp.2003.9.1.53.
Ades AE, Caldwell DM, Reken S, Welton NJ, Sutton AJ, Dias S. Evidence synthesis for decision making 7: a reviewer’s checklist. Med Decis Mak. 2013;33(5):679–91. https://doi.org/10.1177/0272989X13485156.
Kearns B, et al. Good practice guidelines for the use of statistical regression models in economic evaluations. Pharmacoeconomics. 2013;31(8):643–52. https://doi.org/10.1007/s40273-013-0069-y.
Adarkwah CC, van Gils PF, Hiligsmann M, Evers SMAA. Risk of bias in model-based economic evaluations: the ECOBIAS checklist. Expert Rev Pharmacoeconomics Outcomes Res. 2016;16(4):513–23. https://doi.org/10.1586/14737167.2015.1103185.
Husereau D, et al. Consolidated health economic evaluation reporting standards (CHEERS) 2022 explanation and elaboration: a report of the ISPOR CHEERS II good practices task force. Value Health. 2022;25(1):10–31. https://doi.org/10.1016/j.jval.2021.10.008.
Zimovetz E, Wolowacz S. PMC45 Reviewer's checklist for assessing the quality of decision models. Value Health. 2009;12(7):A395. https://doi.org/10.1016/s1098-3015(10)74947-0.
Sacristán JA, Soto J, Galende I. Evaluation of pharmacoeconomic studies: utilization of a checklist. Ann Pharmacother. 1993;27(9):1126–33. https://doi.org/10.1177/106002809302700919.
Soto J. Health economic evaluations using decision analytic modeling. Principles and practices–utilization of a checklist to their development and appraisal. Int J Technol Assess Health Care. 2002;18(1):94–111.
Chiou CF, Hay JW, Wallace JF, Bloom BS, Neumann PJ, Sullivan SD, et al. Development and validation of a grading system for the quality of cost-effectiveness studies. Med Care. 2003;41(1):32–44.
Zhang X, Lhachimi SK, Rogowski WH. Reporting quality of discrete event simulations in healthcare—results from a generic reporting checklist. Value Health. 2020;23(4):506–14. https://doi.org/10.1016/j.jval.2020.01.005.
Sculpher M, Fenwick E, Claxton K. Assessing quality in decision analytic cost-effectiveness models: a suggested framework and example of application. Pharmacoeconomics. 2000;17(5):461–77. https://doi.org/10.2165/00019053-200017050-00005.
Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. BMJ. 1996;313(7052):275–83. https://doi.org/10.1136/bmj.313.7052.275.
Vemer P, Corro Ramos I, van Voorn GAK, Al MJ, Feenstra TL. AdViSHE: a validation-assessment tool of health-economic models for decision makers and model users. Pharmacoeconomics. 2016;34(4):349–61.
McManus E, Turner D, Sach T. Can you repeat that? Exploring the definition of a successful model replication in health economics. Pharmacoeconomics. 2019;37(11):1371–81. https://doi.org/10.1007/s40273-019-00836-y.
Büyükkaramikli NC, Rutten-van Mölken MPMH, Severens JL, Al M. TECH-VER: a verification checklist to reduce errors in models and improve their credibility. Pharmacoeconomics. 2019;37(11):1391–408. https://doi.org/10.1007/s40273-019-00844-y.
Corro Ramos I, van Voorn GAK, Vemer P, Feenstra TL, Al MJ. A new statistical method to determine the degree of validity of health economic model outcomes against empirical data. Value Health. 2017;20(8):1041–7. https://doi.org/10.1016/j.jval.2017.04.016.
Jansen JP, et al. Indirect treatment comparison/network meta-analysis study questionnaire to assess relevance and credibility to inform health care decision making: an ISPOR-AMCP-NPC good practice task force report. Value Health. 2014;17(2):157–73. https://doi.org/10.1016/j.jval.2014.01.004.
Berger ML, et al. A questionnaire to assess the relevance and credibility of observational studies to inform health care decision making: An ISPOR-AMCP-NPC good practice task force report. Value Health. 2014. https://doi.org/10.1016/j.jval.2013.12.011.
Campbell JD, et al. The REal Life EVidence AssessmeNt Tool (RELEVANT): development of a novel quality assurance asset to rate observational comparative effectiveness research studies. Clin Transl Allergy. 2019;9(1):1–11. https://doi.org/10.1186/s13601-019-0256-9.
Dreyer NA, Bryant A, Velentgas P. The GRACE checklist: a validated assessment tool for high quality observational studies of comparative effectiveness. J Manag Care Spec Pharm. 2016;22(10):1107–13. https://doi.org/10.18553/jmcp.2016.22.10.1107.
Shea BJ, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017. https://doi.org/10.1136/bmj.j4008.
Sterne JA, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:1–7. https://doi.org/10.1136/bmj.i4919.
Rawlins M. De Testimonio: on the evidence for decisions about the use of therapeutic interventions. Clin Med (Lond). 2008;8(6):579–88. https://doi.org/10.7861/clinmedicine.8-6-579.
Stevens GA, et al. Guidelines for accurate and transparent health estimates reporting: the GATHER statement. Lancet. 2016;388(10062):e19–23. https://doi.org/10.1016/S0140-6736(16)30388-9.
Bojke L, Claxton K, Bravo-Vergel Y, Sculpher M, Palmer S, Abrams K. Eliciting distributions to populate decision analytic models. Value Health. 2010;13(5):557–64. https://doi.org/10.1111/j.1524-4733.2010.00709.x.
Grigore B, Peters J, Hyde C, Stein K. A comparison of two methods for expert elicitation in health technology assessments. BMC Med Res Methodol. 2016;16(1):1–11. https://doi.org/10.1186/s12874-016-0186-3.
Chen Q, Ayer T, Chhatwal J. Sensitivity analysis in sequential decision models. Med Decis Mak. 2017;37(2):243–52. https://doi.org/10.1177/0272989X16670605.
Jakubczyk M, Kamiński B. Fuzzy approach to decision analysis with multiple criteria and uncertainty in health technology assessment. Ann Oper Res. 2017;251(1–2):301–24. https://doi.org/10.1007/s10479-015-1910-9.
Strong M, Oakley JE, Chilcott J. Managing structural uncertainty in health economic decision models: a discrepancy approach. J R Stat Soc Ser C Appl Stat. 2012;61(1):25–45. https://doi.org/10.1111/j.1467-9876.2011.01014.x.
Vreman RA, Geenen JW, Knies S, Mantel-Teeuwisse AK, Leufkens HGM, Goettsch WG. The application and implications of novel deterministic sensitivity analysis methods. Pharmacoeconomics. 2021;39(1):1–17. https://doi.org/10.1007/s40273-020-00979-3.
Coyle D, Buxton MJ, O’Brien BJ. Measures of importance for economic analysis based on decision modeling. J Clin Epidemiol. 2003;56(10):989–97. https://doi.org/10.1016/S0895-4356(03)00176-8.
Chen JV, Higle JL, Hintlian M. A systematic approach for examining the impact of calibration uncertainty in disease modeling. Comput Manag Sci. 2018;15(3–4):541–61. https://doi.org/10.1007/s10287-018-0329-6.
Boncompte M. The expected value of perfect information in unrepeatable decision-making. Decis Support Syst. 2018;110:11–9. https://doi.org/10.1016/j.dss.2018.03.003.
Alarid-Escudero F, Enns EA, Kuntz KM, Michaud TL, Jalal H. ‘Time traveling is just too dangerous’ but some methods are worth revisiting: the advantages of expected loss curves over cost-effectiveness acceptability curves and frontier. Value Health. 2019;22(5):611–8. https://doi.org/10.1016/j.jval.2019.02.008.
Grimm S, Strong M, Brennan A, Wailoo AJ. The HTA risk analysis chart: visualising the need for and potential value of managed entry agreements in health technology assessment. Pharmacoeconomics. 2017;35(12):1287–96. https://doi.org/10.1007/s40273-017-0562-9.
Fornaro G, Federici C, Rognoni C, Ciani O. Broadening the concept of value: a scoping review on the option value of medical technologies. Value Health. 2021;24(7):1045–58. https://doi.org/10.1016/j.jval.2020.12.018.
Grutters JPC, Abrams K, De Ruysscher D, Joore MA. When to wait for more evidence? Real options analysis in proton therapy. Oncologist. 2012;17(1):46–54. https://doi.org/10.1634/theoncologist.2011-0029.
Briggs A, Fenn P. Confidence intervals or surfaces? Uncertainty on the cost-effectiveness plane. Health Econ. 1998;7(8):723–40. https://doi.org/10.1002/(SICI)1099-1050(199812)7:8%3c723::AID-HEC392%3e3.3.CO;2-F.
Geenen JW, Vreman RA, Boersma C, Klungel OH, Hövels AM, ten Ham RMT. Increasing the information provided by probabilistic sensitivity analysis: the relative density plot. Cost Eff Resour Alloc. 2020;18(1):1–10. https://doi.org/10.1186/s12962-020-00251-7.
Mayorga A, Gleicher M. Splatterplots: overcoming overdraw in scatter plots. IEEE Trans Vis Comput Graph. 2013;19(9):1526–38. https://doi.org/10.1109/TVCG.2013.65.
Mühlbacher AC, Sadler A. The Probabilistic efficiency frontier: a framework for cost-effectiveness analysis in Germany put into practice for hepatitis C treatment options. Value Health. 2017;20(2):266–72. https://doi.org/10.1016/j.jval.2016.12.015.
Eckermann S, Briggs A, Willan AR. Health technology assessment in the cost-disutility plane. Med Decis Mak. 2008;28(2):172–81. https://doi.org/10.1177/0272989X07312474.
Van Hout BA, Al MJ, Gordon GS, Rutten FFH. Costs, effects and C/E-ratios alongside a clinical trial. Health Econ. 1994. https://doi.org/10.1002/hec.4730030505.
Stinnett AA, Mullahy J. Net health benefits: a new framework for the analysis of uncertainty in cost-effectiveness analysis. Med Decis Mak. 1998;18(2 Suppl):S68–80. https://doi.org/10.1177/0272989X98018002S09.
Grimm SE, et al. State of the ART? Two new tools for risk communication in health technology assessments. Pharmacoeconomics. 2021;39(10):1185–96. https://doi.org/10.1007/s40273-021-01060-3.
Health Information and Quality Authority. Guidelines for the Economic Evaluation of Health Technologies in Ireland. Health Information and Quality Authority; 2020. p. 108. https://www.hiqa.ie/sites/default/files/2020-09/HTA-Economic-Guidelines-2020.pdf.
National Institute for Health and Care Excellence. NICE health technology evaluations: the draft manual. January 2021.
Bojke L, et al. Developing a reference protocol for structured expert elicitation in health-care decision-making: a mixed-methods study. Health Technol Assess. 2021;25(37):v–124. https://doi.org/10.3310/HTA25370.
O’Hagan A. Expert knowledge elicitation: subjective but scientific. Am Stat. 2019;73(Suppl 1):69–81. https://doi.org/10.1080/00031305.2018.1518265.
Linderstrøm-Lang K. Allgemeine Methoden [General methods]. Fresenius' Zeitschrift für Analytische Chemie. 2009;76(5–6):236–7. https://doi.org/10.1007/bf01388372.
Van Hest N, Upton E, Ader J, Woodhouse F, O'Connor ME. PDG85 trust the experts? Acceptance of expert elicitation in the National Institute for Health and Care Excellence (NICE) single technology appraisal (STA) process. Value Health. 2019;22(November):S611. https://doi.org/10.1016/j.jval.2019.09.1098.
Iglesias CP, Thompson A, Rogowski WH, Payne K. Reporting guidelines for the use of expert judgement in model-based economic evaluations. Pharmacoeconomics. 2016;34(11):1161–72. https://doi.org/10.1007/s40273-016-0425-9.
Bojke L, Grigore B, Jankovic D, Peters J. Informing reimbursement decisions using cost-effectiveness modelling: a guide to the process of generating elicited priors to capture model uncertainties. Pharmacoeconomics. 2017;35(9):867–77. https://doi.org/10.1007/s40273-017-0525-1.
Ethics declarations
Funding
No funding was received for the work carried out in the preparation of this manuscript. The views and opinions expressed in the study are those of the individual authors and should not be attributed to a specific organisation.
Conflicts of interest/competing interests
Thomas M. Otten, Sabine E. Grimm, Bram Ramaekers, and Manuela A. Joore have published on the topic of uncertainty assessment. In addition, the authors have received funding for research on this topic, which has been provided to their organisation (MUMC+).
Ethics approval
Not applicable.
Consent for publication
Not applicable.
Data availability
Not applicable.
Code availability
Not applicable.
Author contributions
All authors contributed to the study conception, design, and execution. TO conducted the data collection. All authors participated in the analysis and writing of the manuscript, and read and approved the final manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
Cite this article
Otten, T.M., Grimm, S.E., Ramaekers, B. et al. Comprehensive Review of Methods to Assess Uncertainty in Health Economic Evaluations. PharmacoEconomics 41, 619–632 (2023). https://doi.org/10.1007/s40273-023-01242-1