Over the course of the pandemic, many companies have touted AI systems that promise to catch potential cases of Covid-19 early or predict which patients will get sickest. But some of those flashy claims aren't substantiated, and even the credible ones warrant a critical eye.
In a situation as fast-changing as a pandemic, it can be difficult to build a model that actually works. Changes in vaccination rates, mask mandates and the rise of new variants can all degrade a model's performance.
Duke Health decided not to build a risk prediction tool for Covid-19 for these reasons, said Dr. Erich Huang. A physician by background, he heads up Duke University’s center for health data science, Duke Forge, and also serves as Duke Health’s chief data officer for quality.
Just as importantly, algorithms can quickly be scaled to thousands of patients, giving them a far broader impact than any single clinical decision. That raises the stakes for any automated system, not just those related to Covid-19.
He joined me as a guest on our MedCity Pivot podcast to talk about how clinicians should vet these tools, and where he thinks the Food and Drug Administration is headed as it takes a closer look at how AI is regulated as a medical device.
This podcast is part of a series on algorithms in healthcare at MedCity News. You can read the first part of the series here.
Elise Reuter reported this story while participating in the USC Annenberg Center for Health Journalism’s 2020 Data Fellowship.
The Center for Health Journalism supports journalists as they investigate health challenges and solutions in their communities. Its online community hosts an interdisciplinary conversation about community health, social determinants, child and family well-being and journalism.