Where is the balance?
By Ardeshir Z. Hashmi, MD, FACP
Among the scarcest resources for physicians is time. In the high-volume, low-cost environment of primary care, we rarely have the luxury of a full cognitive screening for a patient visiting to manage their diabetes, and when we do have time for an assessment, we often lack the time to teach nonpharmacological interventions that address the results. While the prospect of cognitive assessment can cause patients some initial distress, early diagnosis of cognitive impairment can alleviate relational stress, connect patients with appropriate specialists and allow them a more active role in planning for the future.
As the population ages and Alzheimer disease becomes more prevalent, we need more efficient tools to assess cognitive status, particularly tools sensitive enough for older patients. I conducted a study with colleagues from the University of Massachusetts, including Boaz Levy, PhD, to test such a tool, and we reported its convergent validity in the Journal of Geriatric Psychiatry and Neurology.
To address the demands of busy geriatricians and primary care physicians, our study sought to test a cost- and labor-free instrument that met high clinical standards in a short time frame. Rather than testing diagnostic utility, we sought ways to make the screening process more feasible: faster, more automated and more economical. To do so, we used machine learning methods to improve the convergent validity between the Montreal Cognitive Assessment (MoCA) and a less expensive computerized test that assesses cognitive limits with a method sensitive to processing speed. We hypothesized that the machine learning algorithms would accurately classify patients according to the clinical cutoff MoCA score for cognitive impairment using data from the computerized test.
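As a concrete illustration of that classification target, here is a minimal sketch that binarizes MoCA totals at a clinical cutoff. The cutoff of 26 is the commonly cited threshold for possible impairment and is an assumption for illustration; the article does not specify which cutoff the study used.

```python
import numpy as np

# Hypothetical MoCA totals (0-30 scale); values are illustrative only.
moca_totals = np.array([28, 25, 30, 21, 26, 18])

# Assumed clinical cutoff: scores below 26 flagged as possible impairment.
MOCA_CUTOFF = 26
impaired = (moca_totals < MOCA_CUTOFF).astype(int)  # 1 = below cutoff

print(impaired)  # -> [0 1 0 1 0 1]
```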
We tailored the design of the computerized test to the demands and purpose of primary care. Current tools assess impairment against fixed levels: patients who fall below a certain threshold are deemed abnormal, regardless of baseline cognitive score, education level or cognitive reserve. The computerized test had the ability to recognize changes in functioning over time (with repeated evaluations) and thus was highly sensitive to the cognitive changes that distinguish healthy aging from abnormal cognitive decline (processing speed, executive functions such as task switching, motor speed). The test's three tasks included balloon popping, numeric sequencing and even-odd switching.
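To make those measures concrete, the sketch below shows one way per-task features might be derived from response-time logs. The task names mirror the article, but the log format, field names and the switch-cost calculation are illustrative assumptions, not the study's actual scoring.

```python
import statistics

# Hypothetical response-time logs (seconds) for one patient, keyed by task.
# Field names and values are illustrative, not the study's data format.
responses = {
    "balloon_popping":    [0.61, 0.58, 0.64, 0.60],       # motor speed
    "numeric_sequencing": [1.12, 1.05, 1.20, 1.15],       # processing speed
    "even_odd_switching": {"repeat": [0.95, 0.99, 0.97],  # executive function
                           "switch": [1.30, 1.25, 1.35]},
}

def extract_features(resp):
    """Summarize each task as a single feature for downstream classification."""
    motor_speed = statistics.mean(resp["balloon_popping"])
    processing_speed = statistics.mean(resp["numeric_sequencing"])
    # Switch cost: extra time needed on trials where the rule changes.
    switch = resp["even_odd_switching"]
    switch_cost = statistics.mean(switch["switch"]) - statistics.mean(switch["repeat"])
    return {"motor_speed": motor_speed,
            "processing_speed": processing_speed,
            "switch_cost": switch_cost}

print(extract_features(responses))
```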
During the study, 206 participants (mean age = 67; 73 men) were administered the Mini-Mental State Examination and the MoCA according to protocol, followed by a self-administered computerized test in a private, uninterrupted setting. Patients without English proficiency or the capacity to consent, or exhibiting visual impairment, acute delirium, intense pain or high fever, were excluded.
We analyzed results using three machine learning algorithms (Support Vector Machine, Random Forest and Gradient Boosting Trees) to classify subjects according to the MoCA clinical cutoff. We also used the Synthetic Minority Oversampling Technique (SMOTE) to adjust for class imbalance.
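A minimal sketch of that analysis pattern is below, using scikit-learn and imbalanced-learn. The synthetic features and labels, the class proportions and the default hyperparameters are assumptions for illustration, not the study's actual data or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # applies SMOTE only within training folds

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-patient features from the computerized test
# (e.g., processing speed, switch cost, motor speed) and binary labels
# derived from the MoCA clinical cutoff. Purely illustrative.
X = rng.normal(size=(206, 3))
y = (rng.random(206) < 0.35).astype(int)  # imbalanced: roughly 35% "impaired"

classifiers = {
    "svm": SVC(probability=True),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, clf in classifiers.items():
    # Oversample the minority class with SMOTE, then fit the classifier.
    model = Pipeline([("smote", SMOTE(random_state=0)), ("clf", clf)])
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")
```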
The Gradient Boosting Trees algorithm proved best (accuracy = 0.81, specificity = 0.88, sensitivity = 0.74, F1 score = 0.79 and area under the curve = 0.81), and a K-means clustering of the prediction features resulted in three categories that corresponded closely to the three MoCA score ranges (unimpaired, mildly impaired, moderately impaired).
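For readers who want to compute the same metrics and grouping on their own data, the sketch below shows how sensitivity, specificity, F1 and AUC follow from a confusion matrix and predicted scores, and how K-means can partition prediction features into three clusters. The toy arrays are placeholders, not the study's results.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score
from sklearn.cluster import KMeans

# Toy predictions and scores; replace with a fitted model's output.
y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_pred  = np.array([0, 0, 1, 0, 0, 1, 1, 1])
y_score = np.array([0.2, 0.1, 0.8, 0.4, 0.3, 0.9, 0.6, 0.7])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))   # true-positive rate
print("specificity:", tn / (tn + fp))   # true-negative rate
print("F1:", f1_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_score))

# Cluster the prediction features into three groups, analogous to the
# unimpaired / mildly impaired / moderately impaired MoCA ranges.
features = np.random.default_rng(0).normal(size=(206, 3))  # placeholder features
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```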
Overall, we observed only partial convergence between the computerized test and the MoCA. The specificity scores were adequate, but the test's sensitivity was not up to par; the test will need enhancements to identify mild cognitive impairment. Our results do suggest that the computerized test is more sensitive to age-related differences in cognitive function, an important capability for prevention and for assessing change over time, something the MoCA does not do particularly well.
This study brings us closer to understanding how much efficiency is possible without sacrificing validity. Without adequate time, patients receive longer screenings less often. But shorter, more efficient tests must prove sufficiently valid to demonstrate clinical utility. Machine learning algorithms are but one way to address the twin pressures of scarce clinical time and a growing number of patients in need of cognitive screening.
Dr. Hashmi directs the Center for Geriatric Medicine.