Emily Fox
Distinguished Engineer, Apple
Professor, University of Washington
Emily Fox leads the Health AI team at Apple, where she is a Distinguished Engineer. She is also the Amazon Professor of Machine Learning in the Paul G. Allen School of Computer Science & Engineering and the Department of Statistics at the University of Washington. She received an S.B., M.Eng., and Ph.D. from EECS at MIT. Emily has been awarded a Presidential Early Career Award for Scientists and Engineers (PECASE), a Sloan Research Fellowship, an ONR Young Investigator award, and an NSF CAREER award. Her research interests are in large-scale Bayesian dynamic modeling, interpretability, and computation, with applications in health and computational neuroscience.
Technical Vision Talk: The Joys and Perils of Leveraging Mechanistic Models in Health ML: From Type 1 Diabetes to COVID-19
We are increasingly faced with the need to analyze complex data streams; for example, sensor measurements from wearable devices have the potential to transform healthcare. Machine learning, and deep learning in particular, has brought many recent success stories to the analysis of complex sequential data sources, including speech, text, and video. In health, however, we are often limited to observing only a small subset of the variables at play in an intricate physiological process. The result can be under-identifiability of the system, even in the presence of large amounts of data. Applying standard machine learning approaches in these cases can then yield clinically implausible predictions and inferences.
In some cases, aspects of the underlying mechanism are well understood and approximately modeled via a mechanistic model, such as a set of differential equations or a compartmental model. Unfortunately, mechanistic models often likewise bake in unrealistic assumptions that do not generalize well to broad populations or scenarios.
We explore hybrid approaches that combine the domain knowledge of mechanistic models with the flexibility and expressivity of machine learning methods. We also discuss when relying on the inductive bias provided by a mechanistic model can break down.
We explore these ideas through two use cases: glucose forecasting in Type 1 diabetes, and modeling the relationship between mobility and transmission in the COVID-19 pandemic.
Meet-the-Speaker:
Emily will also participate in a Meet the Speaker session, where you will be able to have a deeper-dive conversation about topics covered on the main stage. Accessible to WiDS Worldwide registrants.