Math Modeling: Policy & Populations

Research Priority

To merge simulation methods from across disciplines to improve the transparency, utility, and adoption of mathematical modeling as an accessible tool for evidence-based decision making in health and policy.


Specialties

Economics

Risk Assessment

Health Economics

Social Protection Policy

Epidemiology

Population Modeling

Mixed Methods Analysis

Recent Projects

A Domain Ontology for Population Simulation Methods

There is growing recognition among simulation modelers that our most exciting methodological breakthroughs are having limited effect because the field is fragmented across disciplines. This fragmentation has severely hampered our ability to communicate, validate, and replicate simulations, and it has slowed the uptake of simulation in policy making. My dissertation attempts to address this problem by providing an ontology of the ‘building blocks’ of simulations designed for population modeling. This work is intended to give modelers the language to communicate precisely how they built their models, using terms with formal definitions and robust mathematical analysis that aid comparison across model designs. Such an ontology can contribute to the development of simulation modeling as an interdisciplinary field in its own right, one that can adopt and combine methods from all fields of interest.
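As a purely illustrative sketch of what this enables (the component names below are invented for illustration, not terms from the actual ontology), annotating models with formally defined building-block terms reduces comparing two designs to comparing their annotations:

```r
# Hypothetical building-block annotations for two model designs;
# every term name here is invented, not drawn from the real ontology.
model_a <- list(unit = "individual", time = "discrete_monthly",
                stochastics = "first_order_monte_carlo", interaction = "none")
model_b <- list(unit = "cohort", time = "discrete_monthly",
                stochastics = "deterministic", interaction = "none")

# Compare two annotated designs component by component
compare_models <- function(a, b) {
  keys <- union(names(a), names(b))
  data.frame(
    component = keys,
    model_a = sapply(keys, function(k) ifelse(is.null(a[[k]]), NA, a[[k]])),
    model_b = sapply(keys, function(k) ifelse(is.null(b[[k]]), NA, b[[k]]))
  )
}
compare_models(model_a, model_b)
```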


Using microsimulation to impute risk-adjusted population distributions from mixed-data sources

While doing an early economic evaluation of stem-cell therapy for chronic lung disease in extremely preterm infants, we ran into a common problem: the existing population data lacked the granularity for us to model risk-adjusted probabilities of outcomes in a risk-heterogeneous population. This posed an exciting epidemiological challenge that we felt could be overcome through careful application of first-order microsimulation methods. The published paper can be found through the title link above, but in a nutshell, we had to generate plausible risk stratifications for patients at birth so we could estimate the true proportion of high-risk infants that could be saved by improved treatment, a proportion that was not already known because not all patients survive long enough to be assessed. Working out that distribution in a clinically validatable way was exciting work that combined my epidemiology and microsimulation training in a way I thought merited its own project post.
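To give a flavor of the underlying selection problem (all parameters here are invented for illustration; the paper documents the real inputs and methods), a first-order microsimulation makes the bias visible: survivors systematically under-represent high-risk infants, which is why the at-birth risk distribution must be imputed rather than read off the observed data:

```r
set.seed(42)
n <- 1e5

# Hypothetical latent risk at birth: high vs. low, with an assumed prevalence
p_high    <- 0.25
high_risk <- rbinom(n, 1, p_high)   # 1 = high risk, 0 = low risk

# First-order (individual-level) Monte Carlo: survival depends on latent risk
p_survive <- ifelse(high_risk == 1, 0.55, 0.92)   # assumed probabilities
survived  <- rbinom(n, 1, p_survive)

# Survivors under-represent high-risk infants, so the at-birth proportion
# cannot be estimated directly from data on assessable patients
mean(high_risk)                  # true at-birth proportion (~0.25)
mean(high_risk[survived == 1])   # proportion among survivors (lower)
```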


Early Economic Evaluation: The future of health technology assessment

A solid chunk of my modeling work has been in health economics. I have done many economic evaluations of new treatments, technologies, and protocols that health practitioners want policy makers to adopt, and I have served as an independent evaluator for CADTH, reviewing several pharmaceutical models arguing for their treatments to be reimbursed in Canada’s health systems. If I can stand on a paragraph-sized soapbox for a moment, I would like to argue that there is a tremendous amount of waste in the form of one-off disease models used in health technology assessment.

Health economists are often brought into a clinical trial or policy project (frequently far too late to advise on the data collection needed for them to be as useful as they can be) and asked to build a model custom to the clinical research question, to determine whether the intervention is cost-effective. This usually requires building a toy model of the disease progression and/or patient trajectory to establish a standard-of-care baseline, after which treatment effects specific to this one research project are applied to derive the necessary incremental cost-effectiveness ratio (or better yet, net monetary benefit; #NMB>ICER). This practice is repeated over and over for every grant-funded project and for every country, province, and hospital. It is at high risk of producing externally invalid results, since the modeler often has to work with only the data this one project’s grant allows the research team to collect; it is dependent on the relative expertise of the modeler themselves (and health economists are in short supply relative to demand, at least here in Canada); and it is an astonishingly wasteful allocation of money, data-manager time, and modeling effort to rebuild bespoke disease models from scratch every time we want to test whether we can improve population health.
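For readers unfamiliar with the two summary measures, the arithmetic is simple; here is a minimal sketch with hypothetical numbers:

```r
# Hypothetical inputs: incremental cost and effect of an intervention
# versus standard of care, plus a willingness-to-pay threshold.
delta_cost <- 12000    # incremental cost ($)
delta_qaly <- 0.30     # incremental effect (QALYs gained)
lambda     <- 50000    # willingness to pay ($ per QALY)

icer <- delta_cost / delta_qaly           # $40,000 per QALY gained
nmb  <- lambda * delta_qaly - delta_cost  # $3,000; NMB > 0 = cost-effective
```

Part of why I prefer NMB is that it is linear in costs and effects, so it stays interpretable when incremental effects are small or negative, which is exactly where the ICER becomes hard to read.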

Early economic evaluation (E3 or EHTA) has many great added features that I won’t go into here (see Value of Information), but the main benefit I’ll argue for is that it moves the decision-modeling stage up to before the core clinical questions have even been decided. An E3 simulates a disease in the population, a type of patient trajectory, or current system dynamics, and then asks what the potential NMB gains are from marginal improvements in whatever output measures are of interest (e.g., mortality over time horizon T, risk of event X, time to event Y). Done well, an E3 model can not only be reused as a robust, generalized model across any number of interventions; it can also help decision makers identify the highest-potential health gains and quantify how much they should be willing to pay if an enterprising clinical team can design an intervention to achieve those gains. Taken further, E3 creates a better incentive to develop large, robust, externally valid models for major health priorities. These can be used as a ‘quick test’ of whether the expected clinical outcomes would be worth significant investment in development and trials; if they are, a more bespoke model that addresses the intricacies of that treatment can be built as well, if necessary. This is my general proposition to health economists and decision makers: adopt E3. I have built a couple myself (here is one on stem-cell therapy for sepsis patients) and think these can be enormously powerful and cost-effective as a next step in evidence-based health policy decision making.
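To make the ‘marginal improvement’ idea concrete, here is a minimal sketch of an E3-style what-if in R (the model structure and every number are hypothetical, not taken from my sepsis model): run a standard-of-care cohort, rerun it with a hypothesized mortality reduction, and the NMB difference is the most a decision maker should be willing to pay for an intervention achieving that gain:

```r
# Sketch of an E3-style what-if: how much is a marginal mortality
# reduction worth? All probabilities, costs, and utilities are hypothetical.
run_cohort <- function(p_die, horizon = 60, cost_cycle = 800,
                       utility = 0.75, lambda = 50000) {
  alive <- 1                      # proportion of cohort still alive
  cost <- 0; qaly <- 0
  for (t in seq_len(horizon)) {   # monthly cycles
    cost  <- cost + alive * cost_cycle
    qaly  <- qaly + alive * utility / 12
    alive <- alive * (1 - p_die)
  }
  lambda * qaly - cost            # net monetary benefit of this scenario
}

nmb_soc <- run_cohort(p_die = 0.020)   # standard-of-care baseline
nmb_new <- run_cohort(p_die = 0.015)   # hypothetical 25% mortality reduction

# Maximum a decision maker should pay for an intervention achieving this gain
nmb_new - nmb_soc
```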

Resources

Lab: Learning R for Data Analysis

Annotated R script that guides users from basic commands to performing standard database analysis. Includes a list of packages I have found useful for statistical analysis, data visualization, and simulation modeling.

Lab: Basic Simulation Modeling in R

Annotated R script that illustrates the basic architecture for designing Markov chains and microsimulations. Running it requires several packages from the ‘Intro to R’ script above.
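For a flavor of the architecture the script walks through, a minimal three-state Markov cohort trace looks like this (the transition probabilities are illustrative only):

```r
# Minimal three-state Markov cohort trace: Healthy, Sick, Dead
# (transition probabilities are illustrative only)
P <- matrix(c(0.90, 0.08, 0.02,
              0.00, 0.85, 0.15,
              0.00, 0.00, 1.00),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("H", "S", "D"), c("H", "S", "D")))

n_cycles <- 10
trace <- matrix(NA, nrow = n_cycles + 1, ncol = 3,
                dimnames = list(0:n_cycles, c("H", "S", "D")))
trace[1, ] <- c(1, 0, 0)               # everyone starts Healthy
for (t in seq_len(n_cycles)) {
  trace[t + 1, ] <- trace[t, ] %*% P   # cohort distribution each cycle
}
round(trace, 3)
```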