The human-technology relationship is still evolving in healthcare, but AI and machine learning are here to help doctors, not replace them. Thanks to faster data transfer and improved computational capabilities, we can now answer questions that we couldn't answer before.
A recent study by Ziad Obermeyer and colleagues in Science identified a racial bias in a risk stratification algorithm used to prioritize patients for care management. Like most algorithms currently in use, it considers past cost to identify the individuals most in need of help. Because white people tend to have higher medical expenses, they are prioritized over sicker black patients. The researchers show that correcting for this bias would raise the proportion of black patients prioritized for care from 17.7 percent to 46.0 percent.
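The mechanism is easy to see in miniature. The sketch below uses entirely invented numbers (not data from the study) to show how ranking patients by past cost, rather than by illness, can skew who gets prioritized even though the algorithm never sees race:

```python
# Illustrative sketch with hypothetical data: how choosing "past cost" as
# the prediction target can encode bias even when race is never an input.
# Numbers are invented for demonstration; unequal access means the black
# patients here generate less cost at the same level of illness.

# Each patient: (group, chronic_conditions, past_cost_in_dollars)
patients = [
    ("white", 2, 12000), ("white", 1, 9000), ("white", 3, 15000),
    ("black", 4, 8000),  ("black", 3, 7000), ("black", 1, 3000),
]

def prioritized(patients, key, k=3):
    """Return the top-k patients under a given ranking key."""
    return sorted(patients, key=key, reverse=True)[:k]

# Proxy target: past cost (what the flawed algorithms optimize).
by_cost = prioritized(patients, key=lambda p: p[2])
# Direct target: illness burden (one possible correction).
by_health = prioritized(patients, key=lambda p: p[1])

def frac_black(chosen):
    return sum(p[0] == "black" for p in chosen) / len(chosen)

print(f"Black share when ranked by cost:   {frac_black(by_cost):.2f}")
print(f"Black share when ranked by health: {frac_black(by_health):.2f}")
```

In this toy example, ranking by cost selects no black patients at all, while ranking by illness burden selects a majority, mirroring the direction of the shift the researchers report.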
Less than a week later, news of the study appeared in Nature, Business Insider, the Wall Street Journal, and Wired. The State of New York is investigating and threatening suit against UnitedHealthcare and others that employ such approaches.
While this recent discovery is rightfully gaining attention, it is just one of many known biases and shortcomings of the health care system's current approach to risk stratification. Obermeyer and his colleagues' study, and the concerns it raised, offer an opportunity to carefully consider the unintended consequences of prevalent risk-stratification approaches and to find a new way forward.
Flaws And Unintended Consequences
The algorithms in question are decades-old adaptations of actuarial models. They rely mostly on claims (that is, billing) data as input. With the introduction of managed care in the 1980s, health plans needed a way to predict future costs.
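A minimal sketch of the kind of actuarial-style model described above, using invented claims figures: fit a simple least-squares line predicting next-year cost from prior-year cost, then rank members by predicted cost. Everything here is hypothetical and intended only to show the shape of the approach:

```python
# Toy actuarial-style risk model (hypothetical data): predict next-year
# cost from prior-year claims cost with ordinary least squares, then
# rank members by predicted cost.

prior = [2000.0, 5000.0, 12000.0, 800.0, 7500.0]   # last year's claims
nxt   = [2500.0, 6000.0, 11000.0, 1200.0, 9000.0]  # this year's claims

n = len(prior)
mean_x = sum(prior) / n
mean_y = sum(nxt) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(prior, nxt)) \
        / sum((x - mean_x) ** 2 for x in prior)
intercept = mean_y - slope * mean_x

def risk_score(past_cost):
    """Predicted next-year cost -- the 'risk' such models assign."""
    return intercept + slope * past_cost

# Members with the highest predicted cost are prioritized for care
# management -- which is exactly where the cost-as-proxy problem enters.
ranked = sorted(range(n), key=lambda i: risk_score(prior[i]), reverse=True)
```

Because the only input is past spending, whoever spent the most is scored as the sickest, regardless of actual illness burden or access to care.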