Rick Newell, MD MPH is CEO of Inflect Health, Chief Transformation Officer at Vituity, and passionate about driving change in healthcare.
I’ve talked about what artificial intelligence can do to improve healthcare for patients, doctors, other healthcare providers and payers. Just as important is understanding what AI cannot do, to keep tech entrepreneurs and investors from pouring time and resources down rabbit holes that don’t lead to better care, better working conditions or lower costs. Based on my experience as a practicing emergency medicine physician and an executive at a healthcare innovation and investment center, here are my thoughts and observations about what AI can’t do for healthcare:
Patients need a human connection.
The first step a doctor takes in healing is actively listening to the patient and making them feel seen, heard and understood. While technology can help people communicate, patients in need typically want a kind eye and a comforting presence, not a thumbs-up from a chatbot.
As AI inevitably plays a larger role in detecting, analyzing and treating disease, I think we should resist the urge to push patients to self-service apps before anything else. On the contrary, we should use AI to free doctors and other healthcare providers to devote even more time and attention to the human face of medicine.
AI solutions are only as good as the data.
Much of AI works by crunching huge data sets. But often the data is contradictory, or missing altogether. Patients routinely arrive in my emergency department as undifferentiated blank slates, yet the patients themselves know that something is wrong. An elderly man who comes to the emergency room and says, “I know something is wrong,” is usually right. Even if all the available evidence says he’s okay, we’ve learned to keep digging, and more often than not, we find out he’s right.
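To make the data problem concrete, here is a toy sketch, with entirely invented values and a crude rule standing in for a trained model: the model can only act on the columns it is given, so when the decisive signal (the patient’s own report) never enters the data set, it confidently answers “looks okay.”

```python
# Toy sketch (invented data): a model only "knows" the columns it is given.
import pandas as pd

# The vitals on record look unremarkable.
visit = pd.DataFrame({
    "temp_f":      [98.4],
    "heart_rate":  [78],
    "systolic_bp": [124],
})

# The decisive signal never made it into the data set:
patient_report = "I know something is wrong."  # not a column the model can see

def triage(row):
    # Crude rule standing in for a trained model: it flags abnormal vitals only.
    return "work up further" if row.temp_f > 100.4 or row.heart_rate > 100 else "looks okay"

print(visit.apply(triage, axis=1).iloc[0])  # -> looks okay, despite the report
```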
Machine logic cannot think like a human being.
Machine learning has proven superior to the human mind at detecting impending disease early and at correlating symptoms with probable diagnoses. But it cannot replace human thinking, which is often nonlinear and creative, such as recognizing when things just don’t add up. I think it would be wrong to assume that AI will ultimately do the doctors’ job rather than help them do it better.
In the emergency department, we often see non-textbook cases that AI, in its current form, could mismanage. Automation has already proven its ability to analyze digitized images and information, but it has not yet successfully mastered tactile tasks such as joint reductions, suturing or placing central lines. Those procedures still require a skilled practitioner to guide the work by hand.
Engineers are not doctors.
In addition, there is a specific problem I keep seeing with healthcare AI systems in development: machine learning systems need to be trained on very large data sets, and a human expert has to tell the AI, “This is a positive reading. This is a negative reading. This value indicates that possibility.” Too often that training is delivered without the full involvement of physicians with relevant experience. As the saying goes in software: Garbage in, garbage out.
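As a minimal sketch (using scikit-learn and invented readings; none of this comes from any particular product), here is how literally the training step takes the human expert’s word: the model inherits whatever labels it is given, right or wrong.

```python
# Minimal sketch (invented data): a supervised model learns whatever labels
# its human trainers supply, with no independent notion of the truth.
from sklearn.linear_model import LogisticRegression

# Hypothetical readings: [temperature F, heart rate].
readings = [[98.6, 72], [101.3, 110], [99.1, 80], [103.0, 125]]

expert_labels = [0, 1, 0, 1]  # an experienced physician's calls: 1 = positive reading
sloppy_labels = [0, 0, 0, 1]  # a borderline case mislabeled by a non-expert

good_model = LogisticRegression().fit(readings, expert_labels)
bad_model  = LogisticRegression().fit(readings, sloppy_labels)

new_patient = [[101.0, 108]]
print(good_model.predict(new_patient))  # likely flags the reading
print(bad_model.predict(new_patient))   # may wave it through: garbage in, garbage out
```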
Solving the challenges of healthcare requires human framing.
While AI can solve the most challenging math problems, it cannot determine which mental model to apply to a situation, nor recognize when to replace that model with a better one. The book Framers: Human Advantage in an Age of Technology and Turmoil explains that the biggest challenge society faces is framing our problems, not solving them. I have found that much of the current AI in healthcare does not solve a useful problem, because the problem was never framed by practicing healthcare experts.
For example, we could ask AI to determine optimal masking and isolation protocols that minimize the spread of Covid-19 while maximizing economic growth. But a human mental model has to weigh the importance of individual autonomy against group safety. A person must decide how much a life is worth relative to the economic gains of fewer restrictions. A person must also set the time horizon over which the solution is optimized: a month, or 10 years?
Once we’ve framed the problem, we need to set boundaries for how AI can solve it. In this example, race/ethnicity is closely linked to economic opportunity, so a model may be more accurate if the AI is allowed to use race/ethnicity data in its calculations. But this could produce an AI solution that fosters even more inequality. There is a legitimate concern that AI could propagate inequality if the limits are set by data scientists seeking the most accurate model rather than the most fair and equitable one.
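As a simplified sketch (hypothetical column names; the article doesn’t prescribe any implementation), the boundary often comes down to a single modeling decision: whether the sensitive attribute is allowed into the feature set at all.

```python
# Simplified sketch (hypothetical columns): humans set the boundary by deciding
# whether a sensitive attribute may enter the model at all.
import pandas as pd
from sklearn.linear_model import LogisticRegression

records = pd.DataFrame({
    "age":            [34, 61, 47, 55],
    "income_proxy":   [3, 1, 2, 1],
    "race_ethnicity": [0, 1, 1, 0],  # encoded numerically purely for illustration
    "bad_outcome":    [0, 1, 1, 0],
})

target = records["bad_outcome"]
all_features     = records.drop(columns=["bad_outcome"])
bounded_features = all_features.drop(columns=["race_ethnicity"])  # human-imposed limit

# The "most accurate" model is free to lean on the sensitive attribute...
unbounded = LogisticRegression().fit(all_features, target)
# ...while the bounded model trades some accuracy for a fairness constraint.
bounded = LogisticRegression().fit(bounded_features, target)
```

Dropping the column is, of course, a crude boundary: correlated proxies such as zip code can leak the same information, which is exactly why these limits need to be framed by people weighing fairness, not accuracy alone.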
I believe AI has huge untapped potential to improve healthcare for everyone involved. But to get there, we need to clearly keep in mind that there are things AI can’t do and may never do.