
By Lisa Suennen, CEO, Venture Valkyrie

July 2023 – Yesterday I was chatting with an interesting person who works closely with young companies and who is himself considering whether to further his own education in health tech.  He asked me this question:  What are the biggest challenges in the adoption of AI in healthcare, and what would have to happen to overcome those barriers to adoption?

I have to say that I have an immediate kneejerk reaction when anyone says “AI” to me these days.  A thousand fleece vests flash before my eyes and my ears snap shut.  I have heard that acronym so many times over the last 15-20 years that it now sounds like what Ginger the dog hears in that famous Far Side cartoon: “Blah blah blah…”  And the topic has become far more complex over time.

It used to be that you just needed to say “AI” to get a venture capitalist to write you an absurdly large check.  Now you are required to add a fancy-sounding modifier to the term to prove you’re totally hip and fresh:  Generative AI, Explainable AI, Responsible AI, Theory of Mind AI, etc.    As I have said and written elsewhere, we are soon going to need an AI system to tell us which AI product to use to best effect.

But back to the question at hand, what are the biggest challenges and, more pointedly, what must happen for adoption to reach the so-called hockey stick inflection point that so many entrepreneurs show their investors?  It is a question worth considering given that we are clearly not going to escape this jargon bingo word as quickly as we did “bitcoin.”

Let’s start with the fact that AI, or at least a variety of forms of it, is already being used in the healthcare world.  The two most prevalent examples are in drug discovery and image analytics. Numerous companies are applying forms or subsets of AI to medical imaging to find abnormalities that may have been missed by the human eye.  This is a great application, as it allows human doctors to perform better by giving them more actionable knowledge.  Granted, this can also lead to overuse of diagnostic interventions, but fundamentally, finding a cancer that may otherwise have been missed is a worthwhile endeavor.

On the drug discovery front, the jury is still out.  Lots of amazing effort is happening here to find molecules to hit targets that people have yet to see on their own.  The idea is to get better, faster, cheaper drug discovery.  We may get to the holy land–there are several AI-generated new molecules now in Phase I clinical trials–but those in the know are still a ways off from deciding if these are safe and effective, and some recent attempts have not borne fruit.  We all know that the devil is in the FDA’s details and, usually, when deciding between better, faster, and cheaper, you only get to pick two of those outcomes.  It turns out that AI isn’t the answer to every pharma executive’s prayers.

There are many other applications of AI in healthcare now, but mainly they are machine learning-based chatbots and the like – not the fancy stuff of full-on, adjective-modified AI.

But focusing on the question about what must happen to get full-blown AI adoption in the sticky wicket of healthcare, here is my list:

  1. Proof: If healthcare has taught us anything, it is that the introduction of technology into its midst is more difficult than explaining the meaning of life. Vendors of AI products need to demonstrate that the application of this stuff delivers what was promised in the marketing materials and actually makes life better for the buyer.  As to that last one, “better” is defined by the buyer:  for health systems it’s more revenue; for payers it’s less cost; for pharma/medtech manufacturers it’s better products used by more people and greater speed to market.  Everyone has a different definition of “proof.”  Delivering that proof is no small undertaking given how long the cycle of demonstrating it takes.
  2. Understandability: A corollary issue to proof is the ability to understand how the AI product is getting to its answer.  Many of the people who would use AI in healthcare are scientists in reality or at heart.  These people want to know not only that the experiment worked, but that it is reliable and repeatable.  Given that AI is decidedly a black box, the concept of Explainable AI has been born.  What we don’t yet know is whether opening the AI black box will put light where there has been darkness or release what Pandora was worried about all along.
  3. Data without Bias: Many, including me, have written about the fact that the data that feeds our AI systems is basically a mash-up of what humans have written down before.  And we all know what that means:  garbage in, garbage out.  If the majority of the knowledge base of ChatGPT was defined by young white men looking through their own Casper-colored lenses, then we should just start calling it ChadGPT and get it over with.  We need a much wider aperture of perspectives to feed the AI world to be sure the results it brings to healthcare are not biased against large segments of our rainbow-colored world and do not fail to apply to women, etc.  This is a huge problem right now, and one that stands in the way of responsible adoption for clinical purposes, even knowing that AI can already outperform doctors in certain instances.  Doctors make mistakes, but AI that was informed by the medical systems that trained those doctors can also make mistakes, particularly when the data set came from a population that looks like the spectrum of people at the average Rush concert.
  4. Ethics: Data bias is a problem to be addressed, but ethical issues in AI are even more thorny. Thankfully, enough people care about the ethical use of AI that it has become a serious conversation and not entirely an afterthought that ends in saying, “Oops, we accidentally eliminated the human race.” We must figure out how to guard against the misuse of AI to intentionally drive false conclusions that benefit a particular group, party, evil dictator, or greedy technology billionaire who may or may not be setting up a cage-fight with his counterpart.
  5. Self-Interest: As the famous Upton Sinclair quote goes, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”  To be very direct, what rational physician, scientist, or insert-healthcare-job-title-here will elect to advance AI initiatives if doing so ultimately serves to disintermediate them?  AI solutions may get imposed on organizations top down, but that is a pretty rough row to hoe in healthcare. While it does happen (see: Epic EHR), the process is loud, painful and lengthy, and many workarounds ensue.  There is going to have to be a much clearer alignment of incentives between wo/man and machine for us to achieve broad-based adoption.
  6. Simplicity: Systems must be easy enough for regular humans to use to meaningful effect if they are to be fully adopted, even if said humans are soon-to-be-replaced-by-AI. Many of the AI products take hard-core data scientists and engineers to put into practice.  Not every healthcare enterprise has loads of those people hanging around.  Yes, it is true that ChatGPT (or ChadGPT) has made it sort of easy for regular people to use Generative AI, but consider this:  a whole new job category has now been created for people who are exceptionally good at writing Generative AI queries in order to ensure that the right answers come out.  That’s yet another specialty expertise, and we are in the first inning of this game.  I think a good analogy is WordPress, the internet-based software originally created so any idiot (aka, me) can make their own website.  Except it’s not really that simple to use, and eventually, when the website gets mildly complex, we idiots must hire WordPress experts to help us get to the next level.  Then, when the next level needs to rise to full-blown enterprise grade, WordPress just doesn’t cut it anymore and we must hire experts who build and maintain complex websites.  I suspect that AI will prove to be far more complex and expensive than this for businesses to utilize effectively, thus creating a weird dichotomy between businesses that can afford to apply AI and those that can’t.  And that will drive even more industry consolidation, which drives up cost, which drives me crazy.
  7. Workforce Issues: It’s interesting, but with some exceptions (and you will, I’m sure, tell me who you are), deep tech people are not healthcare people and healthcare people are not deep tech people. The healthcare industry is, as you may have surmised, filled with healthcare people and a bit light on deep tech people.  And the deep tech people are usually reluctant to flood into the healthcare world because…logic doesn’t enter into healthcare.  Without a healthy, integrated workforce of people who get both sides of this nerdy pair of worlds, we cannot achieve AI’s full positive potential.  Right now, the playing field tends to look like that time when Michael Jordan did a stint as a minor league baseball player.  Nice idea, and he might have achieved greatness outside his typical environment, but the rest of the sport was confused.  It’s going to take a while to bring the deep tech and healthcare cultures and expertise together to achieve AI nirvana. Note to self: ask ChatGPT how to achieve AI nirvana.

I’m sure there are other issues to consider, but this list will keep the healthcare world busy for the next several dozen years while a whole generation or two exits the workforce, thus opening the doors much wider for our robot overlords to take over.  Or for AI to deliver great improvements.  Or for there to be 52 new adjectives to put in front of AI to specify its particular use.  Or all of the above.


About the Author

Lisa Suennen has spent over 35 years in healthcare and technology as entrepreneur, operating executive, venture capitalist and strategy consultant. She has worked broadly across healthcare, including digital health, medical devices, health services, and especially at the convergence between these sectors.

Lisa currently leads Venture Valkyrie Consulting, advising both large and emerging companies across the healthcare spectrum. Previously she held C-suite-level operating roles at Canary Medical, Merit Behavioral Care and Manatt, Phelps & Phillips.  Lisa has also held General Partner roles at several venture funds, including Manatt Ventures, Psilos Group and GE Ventures, where she led the healthcare venture fund and sat on the overall GE Ventures Investment Committee. Lisa was Co-founder and CEO of CSweetener, a company focused on matching mentors with rising healthcare leaders (sold to HLTH Foundation).  Lisa chairs the Scientific Advisory Board of the NASA-funded Translational Research Institute for Space Health and the International Investment Committee of the Australian & Victorian government-funded ANDHealth Digital Health Fund.  She is a Fellow of the Aspen Institute’s Health Innovators Fellowship and on the faculty at the UC Berkeley Haas School of Business, where she teaches the annual class on healthcare innovation and investment.  Lisa writes the Venture Valkyrie blog and is an internationally recognized author and speaker.

Lisa resides in the San Francisco Bay Area and is looking for her next opportunity to lead an emerging healthcare company as CEO.

