Cleveland Clinic neuromuscular specialist shares insights on AI in his field and beyond
Artificial intelligence (AI) is a powerful tool with the potential to transform clinical practice, from faster and more accurate diagnosis of diseases to the ability to flag patients at risk of worse outcomes. But while many providers take a wait-and-see approach, leaving technology development and policymaking to tech experts and government, it’s important for clinicians to have a seat at the table, argues John Morren, MD, a neurologist in Cleveland Clinic’s Neuromuscular Center with a special interest in the technology.
“We have to be in the driver’s seat,” he says. “We need to be AI savvy and engaged in order to direct this powerful tool into alignment with our greater objectives and to advocate for the responsible and strategic integration of AI into clinical applications.”
Dr. Morren is the senior author of a recent review on the role of AI in electrodiagnostic and neuromuscular medicine published in Muscle & Nerve (Epub 2023 Dec 27); the review is freely accessible online.
The rapid growth of AI tools, including machine learning and deep learning applications, has led to technological breakthroughs in healthcare, leveraging large-scale patient data for unprecedented insights and benefits.
Dr. Morren notes that these advancements are quite striking in the field of electrodiagnostic medicine, where AI models analyzing electromyography (EMG) data have shown impressive capabilities in diagnosing conditions like amyotrophic lateral sclerosis and myopathy. “We’re getting accuracies ranging from 67% on the low end, up to 99.5%,” he says.
In the area of neuromuscular ultrasound, AI models have achieved diagnostic accuracy above 90% for nerve entrapment disorders and 87% for inflammatory myopathies, often surpassing human experts.
Dr. Morren encourages stakeholders to think of AI as “augmented intelligence” — not as a substitute for human clinicians but as a tool to improve their effectiveness and efficiency, and to extend their reach.
“It’s the synergistic combination of human expertise with AI,” he says. “It’s augmentation, because there is no way to replace the unique, nuanced, experiential skill set that humans bring.”
Think beyond diagnosis, he suggests, and consider other ways AI can augment healthcare, such as by improving patient experiences. AI tools have been shown to produce an accurate diagnosis from a needle EMG recording of 5 seconds or less, for example, with the potential to significantly shorten an uncomfortable test.
AI can also be used to assist diagnosis at the bedside, Dr. Morren adds, pointing to studies showing that AI clinical video analysis can detect telltale changes in facial expressions of patients with myasthenia gravis, leading to diagnostic accuracy surpassing that of human experts.
Dr. Morren acknowledges ethical challenges with AI, including insufficient patient diversity in datasets used to train early models, which often led to these tools performing poorly among those in underrepresented groups. His team and others have learned from those mistakes and now place a high priority on diversity of data, he says.
“The guiding principle is that if you’re planning on using these tools widely, you need to ensure that the training dataset represents the diverse fabric of the patient population you’re hoping to serve,” he notes. “This is imperative to minimize bias and to promote equity with the advancement of AI in healthcare.”
Patient confidentiality is another potential challenge. While most patients consent to aspects of their healthcare record, including test results, being used in future research, they may not know that their data might be used to train AI models — or they may not fully understand the implications.
“Regulations are lagging behind the technology, so we need to exercise robust ethical principles in the way we protect patient information and confidentiality,” he says.
But it’s not all bad news on the ethics front. Dr. Morren points to a study that found a deep learning model was able to use MRI muscle data to classify facioscapulohumeral muscular dystrophy and inflammatory myopathy with accuracy similar to that of expert radiologists.
“You can imagine how, in resource-poor settings with neuroradiologists or MSK radiologists in short supply, this technology can extend the reach of that expertise to patients,” he says. “So as much as we talk about the negatives and potential bias, here we’re looking at AI being used to create greater equity.”
Looking ahead, Dr. Morren says clinicians should expect to see more automatic integration of AI into regular workflows. Just as users of Microsoft’s Edge web browser are accustomed to its embedded Copilot feature offering help, AI tools will be further integrated into healthcare digital platforms, including electronic medical records.
“For better or for worse, your AI friend will be tapping on your shoulder to say, ‘Hey, you may also want to order this test for your patient, because of these five reasons,’ before you might think of it on your own,” he says. The panoply of mobile and wearable devices, like smartphones and smartwatches, will also constantly feed the big data that drives the development of newer AI innovations, he adds. “These will progressively allow the healthcare ‘space’ to extend beyond the traditional walls of healthcare facilities.”
Additionally, AI tools will become increasingly customizable and up to date, tailoring advice to specific health systems’ datasets and operating procedures in a way that will make them more useful to specific patient populations and individual patients, Dr. Morren says. This will serve to accelerate the growth of personalized medicine.
More broadly, Dr. Morren anticipates that AI will lead to new insights into diseases and advances in diagnosis and treatment, as scientists seek to understand and explain how deep learning tools reach their surprising conclusions with high accuracy. This process, known as explainability, aims to address the “black box” aspects of AI.
“It’s like, ‘Oh, now that you’ve pointed that out, I can see it too,’” Dr. Morren says. “So it may ultimately be that the AI will be teaching us human beings along with continually teaching itself.”