Building state-of-the-art models on a regular basis is much harder and slower without a robust, repeatable pipeline for processing your data. It gets even more unwieldy when you’re trying to do it as a team, where one giant notebook doesn’t cut it.
Do you think Amazon purchases a third-party machine learning tool to tell them what books you might want to order? No. So why do we expect third-party algorithms to solve our core business problems in healthcare?
AI is used as an example of a capability hindered by the lack of access. But of course, lack of access causes greater harm than just slowing AI adoption.
Grand Rounds by Cyft CEO Dr. Leonard D'Avolio at Children's Mercy Kansas City.
Painfully little has been written for non-technical healthcare leaders whose job it is to successfully execute in the real world with real returns. It’s time to address that gap for two reasons.
We usually deal with smaller sets of rich but messy data (sample sizes in the hundreds or thousands). For most problems, 10k rows and 10M rows of claims data tend to be equally useful (or useless).
In this episode of Creating a New Healthcare, Dr. Zeev Neuwirth interviews Len D’Avolio, CEO and founder of Cyft – an organization that uses data and Artificial Intelligence (AI) to make value-based care wildly successful.
I asked LinkedIn friends to submit their questions related to AI in healthcare in preparation for an upcoming keynote at this year’s HIMSS in Vegas. I promised to try to answer the questions they submitted.
The healthcare AI space is frothy. Billions in venture capital are flowing, nearly every writer on the healthcare beat has at least an article or two on the topic, and there isn’t a medical conference that doesn’t have at least a panel, if not a dedicated day, to discuss it. The promise and potential are very real.
Leonard D’Avolio, Harvard professor and CEO of Cyft, explains how to work with “scruffy” data to improve healthcare.