Q&A: AI tech can considerably improve healthcare risk adjustment

Posted Jul 24, 2019 by Tim Sandle
Dr. Darren Schulte drew on his experience as a physician to build a solution to the industry's risk adjustment pain point. He tells Digital Journal why AI can make a positive impact on healthcare risk adjustment.
File photo: The healthcare IT industry in Berlin.
The healthcare information ecosystem is complex: data is often inaccurate, and record ownership is unclear among patients, hospitals, and EHR vendors. To tackle this, Dr. Darren Schulte, CEO of healthcare AI company Apixio, is developing technology to address today's pain points in healthcare risk, quality, and prospective care.
Apixio uses machine learning to get ahead of costly future diagnoses by flagging health risks early and recommending that treatment start sooner rather than later. Dr. Schulte explained how to Digital Journal.
Digital Journal: In which ways is AI disrupting healthcare?
Dr. Darren Schulte: AI will alter how, where, and when we diagnose and treat individuals. Text and image recognition algorithms will be able to help physicians make more accurate diagnoses and ensure that evidence-based protocols are used for the right individual at the right time. This will not render the physician obsolete. Rather, physicians in a new AI-assisted healthcare world can spend more time “laying hands”—that is, providing comfort, reassurance, and guidance. With new advancements in technology, tracking devices and wearables, individuals can also interact with healthcare providers in a more flexible way because care isn’t tethered to the hospital or clinic.
DJ: Which forms of AI are the most promising?
Schulte: So far, AI has shown great promise in machine vision, interpreting wearable data, and deciphering medical records for care insights. Machine vision leverages a variety of deep learning algorithms to analyze and classify images to assist radiologists and pathologists in reading images and making diagnoses. There are a few companies that have received FDA approval for their algorithms to interpret EKG tracings from watches to diagnose abnormal heart rhythms and alert patients to seek medical care. Apixio has been able to determine patient conditions actively being treated from the text written in hospital and clinic notes.
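Determining actively treated conditions from free-text notes is, at its simplest, a text-extraction problem. The sketch below is a minimal, purely illustrative keyword matcher in Python; the condition names, indicator phrases, and function are invented for this example and are not Apixio's system, which would involve trained models, negation handling, and medical ontologies.

```python
import re

# Toy dictionary mapping conditions to phrases that may indicate active
# treatment. These phrase lists are invented for illustration only; a real
# clinical NLP pipeline would use trained models and coding systems such
# as ICD-10 or SNOMED CT rather than hand-picked keywords.
CONDITION_PHRASES = {
    "type 2 diabetes": [r"type\s*2\s*diabetes", r"\bT2DM\b", r"metformin"],
    "hypertension": [r"hypertension", r"\bHTN\b", r"lisinopril"],
    "atrial fibrillation": [r"atrial\s+fibrillation", r"\bafib\b"],
}

def flag_conditions(note_text: str) -> list[str]:
    """Return conditions whose indicator phrases appear in a clinical note."""
    found = []
    for condition, patterns in CONDITION_PHRASES.items():
        if any(re.search(p, note_text, re.IGNORECASE) for p in patterns):
            found.append(condition)
    return found

note = "Patient seen for follow-up of HTN; continues lisinopril. A1c stable on metformin."
print(flag_conditions(note))  # ['type 2 diabetes', 'hypertension']
```

Even this toy version shows why free text matters: the note never states a diabetes diagnosis outright, but the medication mention surfaces it.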
DJ: How should health facilities weigh up the risks associated with AI?
Schulte: Risks are related to its security, applicability, and interpretability. As more data is being used by machines for recommendations, any system hacks compromise the results of those algorithms. If training data is not representative of a broader population, then results might be biased in favor of the populations reflected in the training data. For example, if a certain ethnic group is not part of an algorithm’s training data, its conclusions might not be appropriate for that group. One of the biggest risks to AI adoption relates to the lack of interpretability of results.
Machine learning and deep learning algorithms can appear to be a black box: data inputs produce outputs without any clue as to how the result was rendered. If such an algorithm is providing clinical decision support and the physician wants to understand why or how it arrived at a result, there is no real guidance to offer.
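The interpretability gap can be seen by contrast with a linear model, whose output decomposes into one additive contribution per feature. The sketch below is a hypothetical illustration: the features and weights are invented, not clinically validated, and this is not any particular vendor's scoring method.

```python
# Invented feature weights for a toy linear risk score. A linear model is
# "interpretable" because each feature's contribution to the final score
# can be reported separately; a deep network offers no such decomposition.
WEIGHTS = {"age_over_65": 2.0, "smoker": 1.5, "hba1c_high": 3.0}

def risk_score_with_explanation(patient: dict) -> tuple[float, dict]:
    """Return (total score, per-feature contributions) for a linear model."""
    contributions = {f: w * patient.get(f, 0) for f, w in WEIGHTS.items()}
    return sum(contributions.values()), contributions

score, why = risk_score_with_explanation({"age_over_65": 1, "smoker": 0, "hba1c_high": 1})
print(score)  # 5.0
print(why)    # {'age_over_65': 2.0, 'smoker': 0.0, 'hba1c_high': 3.0}
```

Here a physician can see the flag came mostly from the elevated HbA1c; the "black box" concern is precisely that deep models give no analogous breakdown without extra explanation tooling.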
DJ: Will any form of AI totally replace a physician?
Schulte: No. If the technology works as envisioned, AI can improve the efficiency and accuracy of physician work. This has the effect of reducing errors, improving quality and outcomes, and freeing up physician time for meaningful patient interactions.
DJ: How should patients who are fearful of AI weigh up the technology?
Schulte: Patients probably won’t even know that AI is working to assist physicians and other caregivers. AI is not substituting for the physician—a human is still involved. Individuals don’t realize that machine learning is driving recommendations on Netflix or Amazon or Google, or targeted ads on Facebook, or detection of abnormal cardiac rhythms on an Apple Watch. AI becomes something to be feared when it fully replaces a human, such as with autonomous vehicles. In those cases, trust needs to be built through experience.
DJ: How can data privacy concerns be addressed?
Schulte: To safeguard their patients and their data, healthcare organizations should establish vendor requirements for compliance and conduct routine audits. HIPAA is a common compliance framework for use of data, but it only addresses patient privacy, not payment security or other protocols. Organizations should also consider undertaking HITRUST certification, PCI-DSS compliance to manage payment integrity, and SOC 2 compliance for additional backups and security. Additionally, organizations should take vendor compliance seriously by auditing protocols and processes either internally or with third-party assistance.
DJ: How intelligent is AI?
Schulte: Not very intelligent. Computers can be taught to complete straightforward tasks, in some cases better and faster than a human. But there is no ability to explain how it got from inputs to outputs (cause and effect), no intelligence related to social or emotional elements of decision making, no real creativity. AI waits to be trained to do something.
DJ: Which types of AI have been overhyped?
Schulte: AI has been touted as replacing many human activities outright. There are many menial tasks that can be done better by a computer, but complex tasks still require human involvement. Humans are superior at physical tasks, pattern recognition, communication, and creativity, among other things. AI algorithms are trained to do one specific task and can’t bring them all together the way humans can.