December 6, 2019

Deep Learning Models for Automatic Seizure Detection in Epilepsy

Strong performance from early models heralds eventual reshaping of care

By Balu Krishnan, PhD, and Andreas Alexopoulos, MD, MPH

The case for AI in epilepsy

Epilepsy is the second most common neurological disorder, affecting 1% to 2% of the world’s population. Individuals with epilepsy typically undergo long-term monitoring of the brain’s electrical activity, with EEG recorded over several days. The recorded EEG data are manually reviewed by a trained neurologist, neurophysiologist or skilled EEG reader to identify the epileptic seizures or interictal discharges that characterize the individual’s epilepsy. Manual review of long-term temporal and spatial EEG data is cumbersome, time-consuming, error-prone and nonscalable, and it may be suboptimal for adequate diagnosis and localization of epilepsy. A scarcity of trained epilepsy subspecialists limits the availability of epilepsy clinics, increasing the cost of and delay in EEG interpretation and diagnosis.

Computer-aided detection of epileptic seizures has been a research topic within the epilepsy community since the advent of digital EEG. Automatic seizure detection from EEG helps streamline care by promoting rapid and accurate EEG review and interpretation, reducing the need for an expert reviewer and lowering diagnostic and treatment costs.

Advances in deep learning techniques provide new avenues for solving the complex problems inherent in automatic seizure detection. The usefulness of machine learning techniques to detect epileptic lesions on MRI,1 identify the epileptogenic zone2 and classify patients with Rasmussen encephalitis3 has been previously demonstrated by our group. These are just some among many reported applications of artificial intelligence (AI)-mediated diagnostic tools in the management of neurological disorders.

In 2017, Cleveland Clinic’s Epilepsy Center collaborated with Google Inc. to develop an AI system to automatically detect epileptic seizures from long-term EEG recordings in patients undergoing evaluation in Cleveland Clinic’s epilepsy monitoring unit.4 EEG data from approximately 7,000 well-characterized epilepsy patients — corresponding to a total dataset of approximately 20 terabytes — were curated. The capability of an AI system to learn from this rich, unparalleled database can significantly contribute to the clinical care of epilepsy and other neurological disorders. Preliminary evaluation of a subset of data obtained from 1,063 patients (21,655 hours of recording, total of 18,741 seizures) has shown promising results, as we detail below.

The temporal graph convolutional network

Epileptic seizures are characterized by rhythmic electrical oscillations that evolve spatially and temporally, so an appropriate neural network for seizure detection should be able to capture this spatiotemporal evolution. The temporal graph convolutional network (TGCN) is a deep learning model that leverages spatial information in structural time series (Figure 1). TGCNs extract features that are localized and shared over both temporal and spatial dimensions of the input. TGCNs have built-in invariance to when and where a pattern of interest occurs and thus provide an inductive bias for discriminating these patterns. The TGCN model was used for this purpose in the study we profile below.
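
As a concrete illustration of this operation, the sketch below (Python/PyTorch, with a toy adjacency structure and illustrative layer sizes; it is not the study's implementation) shows how a single layer can share one temporal filter across all leads and time steps while aggregating each lead's features over its spatial neighbors on the scalp.

```python
# Minimal sketch of a temporal graph convolution layer (not the authors' code).
# Idea: each EEG lead is a graph node; features are extracted with a filter that
# is localized in time (1-D convolution) and in space (aggregation over the
# lead's graph neighbors), and the same filter is shared across leads and time.
import torch
import torch.nn as nn


class TemporalGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, adjacency):
        super().__init__()
        # adjacency: (num_leads, num_leads) binary matrix encoding which leads
        # are spatial neighbors on the scalp; self-loops are added here.
        adj = adjacency + torch.eye(adjacency.shape[0])
        self.register_buffer("adj", adj / adj.sum(dim=1, keepdim=True))
        self.temporal_conv = nn.Conv1d(
            in_channels, out_channels, kernel_size, padding=kernel_size // 2
        )

    def forward(self, x):
        # x: (batch, num_leads, in_channels, time)
        b, n, c, t = x.shape
        # Shared temporal convolution applied identically to every lead.
        h = self.temporal_conv(x.reshape(b * n, c, t)).reshape(b, n, -1, t)
        # Spatial aggregation over neighboring leads via the adjacency matrix.
        return torch.einsum("ij,bjct->bict", self.adj, h)


# Example: 21 scalp leads with a toy chain adjacency and 33 frequency features.
num_leads = 21
adjacency = torch.zeros(num_leads, num_leads)
for i in range(num_leads - 1):
    adjacency[i, i + 1] = adjacency[i + 1, i] = 1.0
layer = TemporalGraphConv(in_channels=33, out_channels=32, kernel_size=5,
                          adjacency=adjacency)
out = layer(torch.randn(2, num_leads, 33, 599))  # -> (2, 21, 32, 599)
```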

Figure 1. Graphical representations of a structural time series with six sequences. Graph structure is depicted by solid lines, whereas temporal adjacency is denoted by the fainter dashed lines. The red nodes in the bottom series indicate the receptive field of the feature extraction operation in the temporal graph convolutional network (TGCN), which extracts features that are spatially and temporally localized.

Datasets

Scalp EEG data from 1,054 patients were used for the study. EEG data were collected with a Nihon Kohden system using a standard 10-20 montage and sampled at 200 Hz. Two sets of data were used:

  • Clipped EEG data, consisting of baseline and seizure data segments (order of minutes), were acquired from 656 patients.
  • Long-term, continuous EEG data, spanning the duration (days to weeks) of patients’ epilepsy monitoring unit stay, were acquired from 398 patients.

The temporal location of epileptic events, such as seizures and interictal abnormalities, was annotated by a trained epileptologist. EEG time series were segmented into 96-second epochs, each annotated as a “seizure” or “seizure-free” epoch. Data segments with seizures were deemed positive, and those without seizures were deemed negative.
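
For illustration, a minimal sketch of this segmentation step is shown below. The 200 Hz sampling rate and 96-second epoch length come from the study; the overlap-based labeling rule and the function names are assumptions made for the example.

```python
# Sketch of epoch segmentation and labeling (illustrative only). Assumes a
# continuous recording sampled at 200 Hz and a list of annotated seizure
# intervals in seconds; an epoch is labeled positive if it overlaps a seizure.
import numpy as np

FS = 200            # sampling rate in Hz
EPOCH_SEC = 96      # epoch length used in the study


def segment_and_label(eeg, seizure_intervals):
    """eeg: (num_leads, num_samples) array; seizure_intervals: [(start_s, end_s), ...]"""
    epoch_len = EPOCH_SEC * FS
    n_epochs = eeg.shape[1] // epoch_len
    epochs, labels = [], []
    for k in range(n_epochs):
        t0, t1 = k * EPOCH_SEC, (k + 1) * EPOCH_SEC
        # Positive if the epoch overlaps any annotated seizure interval.
        is_seizure = any(start < t1 and end > t0 for start, end in seizure_intervals)
        epochs.append(eeg[:, k * epoch_len:(k + 1) * epoch_len])
        labels.append(1 if is_seizure else 0)
    return np.stack(epochs), np.array(labels)


# Example: one hour of synthetic 21-lead EEG with a seizure from 1200 s to 1290 s.
eeg = np.random.randn(21, 3600 * FS)
X, y = segment_and_label(eeg, [(1200, 1290)])
print(X.shape, y.sum())  # (37, 21, 19200) and the number of seizure epochs
```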

Training dataset. The TGCN model was trained using EEG data from 995 patients — clipped data from 613 and continuous EEG data from 382. Approximately 15,000 hours of data were used for training the model. The sparse nature of epileptic seizures led to class imbalance since seizure-free epochs are more frequent than seizure epochs. Data epochs without seizures were subsampled at 90% to balance the two classes.
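
One plausible reading of this balancing step, keeping every seizure epoch and only a random fraction of seizure-free epochs, is sketched below; the exact fraction and mechanics used in the study may differ.

```python
# Sketch of balancing classes by keeping all seizure epochs and a random
# fraction of seizure-free epochs. The keep fraction is illustrative; the
# study's subsampling of seizure-free epochs may use a different fraction.
import numpy as np

rng = np.random.default_rng(0)


def subsample_negatives(X, y, keep_fraction=0.10):
    """X: (n_epochs, ...) data, y: (n_epochs,) labels with 1 = seizure."""
    pos_idx = np.flatnonzero(y == 1)
    neg_idx = np.flatnonzero(y == 0)
    n_keep = int(len(neg_idx) * keep_fraction)
    keep = np.concatenate([pos_idx, rng.choice(neg_idx, size=n_keep, replace=False)])
    rng.shuffle(keep)
    return X[keep], y[keep]


# Example usage with the epochs from the previous sketch:
# X_bal, y_bal = subsample_negatives(X, y)
```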

Tuning and testing datasets. Hyperparameters associated with the neural network architecture were optimized using a tuning set consisting of 30 patients (2,800 hours of recording). Performance of the classifier was evaluated using a testing set consisting of 38 patients (4,000 hours of recording). Only long-term data were used for tuning and testing the deep learning models.

As a preprocessing step, short-time Fourier transform was used to extract informative time frequency features associated with an epileptic seizure (Figure 2).
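
A minimal sketch of this preprocessing step is shown below. The window length and overlap are assumptions chosen so that one 96-second, 21-lead epoch yields the 599 × 21 × 33 time-frequency array described in Figure 3; the study's exact STFT parameters may differ.

```python
# Sketch of the time-frequency preprocessing step using a short-time Fourier
# transform. Window and overlap values are assumed for illustration so the
# output shape matches the dimensions in Figure 3.
import numpy as np
from scipy.signal import stft

FS = 200  # sampling rate in Hz


def stft_features(epoch, nperseg=64, noverlap=32):
    """epoch: (num_leads, num_samples) raw EEG; returns (time, leads, freq) log power."""
    feats = []
    for lead in epoch:
        f, t, z = stft(lead, fs=FS, nperseg=nperseg, noverlap=noverlap,
                       boundary=None, padded=False)
        feats.append(np.log1p(np.abs(z)).T)   # (time, freq)
    return np.stack(feats, axis=1)            # (time, leads, freq)


epoch = np.random.randn(21, 96 * FS)          # one 96-second, 21-lead epoch
features = stft_features(epoch)
print(features.shape)                         # (599, 21, 33)
```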

Figure 2. (A) Example of a seizure on an EEG from a single patient. Yellow nodes correspond to the location of EEG sensors on the patient’s scalp. (B) Preprocessing of a raw EEG waveform using time frequency transformation.

The neural network architecture consists of a series of spatiotemporal convolutional layers with max pooling along the temporal dimension between the blocks of convolutional layers4 (Figure 3). Five different TGCN architecture configurations were trained and compared, as we have detailed elsewhere.4

Figure 3. Configuration of TGCN architecture. Dimensions of data at each processing step are indicated in parentheses. A raw EEG waveform of dimensions 19,200 × 21 (time × number of EEG leads) is decomposed using a short-time Fourier transform module to extract the time frequency characteristics of the EEG data. The time frequency decomposed signal has dimensions of 599 × 21 × 33 (time × number of EEG leads × frequency). Four blocks of spatiotemporal convolutional (STC) layers extract localized features over both spatial and temporal dimensions. Temporal max pooling is used to leverage information across a wider time window. To achieve a scalar prediction, the output from the TGCN block is spatially averaged using a pooling layer and flattened. Fully connected layers are used to learn the high-level features, and the scalar prediction is achieved using a sigmoid function.
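
The skeleton below mirrors the block structure in Figure 3 (four convolutional blocks with temporal max pooling, spatial averaging over leads, fully connected layers and a sigmoid output). Layer widths and kernel sizes are illustrative assumptions, and plain 2-D convolutions over the lead and time axes stand in for the graph convolutions used in the study.

```python
# Skeleton of a TGCN-style classifier following the block structure in Figure 3.
# Layer widths and kernel sizes are illustrative; standard 2-D convolutions
# stand in for the study's graph convolutions.
import torch
import torch.nn as nn


class SeizureClassifier(nn.Module):
    def __init__(self, n_freq=33, n_leads=21, n_time=599, width=32):
        super().__init__()
        blocks = []
        in_ch = n_freq
        for _ in range(4):  # four spatiotemporal convolution (STC) blocks
            blocks += [
                nn.Conv2d(in_ch, width, kernel_size=(3, 5), padding=(1, 2)),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(1, 2)),  # temporal max pooling
            ]
            in_ch = width
        self.stc = nn.Sequential(*blocks)
        t_out = n_time
        for _ in range(4):
            t_out //= 2
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(width * t_out, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        # x: (batch, freq, leads, time), e.g. the STFT features from preprocessing.
        h = self.stc(x)
        h = h.mean(dim=2)                    # spatial average pooling over leads
        return torch.sigmoid(self.head(h))   # scalar seizure probability


model = SeizureClassifier()
prob = model(torch.randn(2, 33, 21, 599))  # -> (2, 1)
```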

Preliminary performance evaluation

To emulate real-world scenarios for an epileptic seizure alarm system, model performance was evaluated using (1) sensitivity at 95% specificity, (2) sensitivity at 99% specificity and (3) area under the receiver operating characteristic curve (AUROC). A low number of false positives improves the reliability of the seizure detection tool. Performance of the five TGCN configurations was evaluated using the tuning set, and the TGCN configuration exhibiting superior performance was selected.
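
These metrics can be computed directly from per-epoch labels and model scores; the sketch below shows one straightforward way to do so (the labels and scores here are synthetic).

```python
# Sketch of the evaluation metrics: AUROC and sensitivity at a fixed
# specificity, computed from per-epoch labels and model scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve


def sensitivity_at_specificity(y_true, y_score, specificity):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ok = fpr <= (1.0 - specificity)          # operating points meeting the target specificity
    return tpr[ok].max() if ok.any() else 0.0


# Example with synthetic labels and scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)

print("AUROC:", roc_auc_score(y_true, y_score))
print("Sens @ 95% spec:", sensitivity_at_specificity(y_true, y_score, 0.95))
print("Sens @ 99% spec:", sensitivity_at_specificity(y_true, y_score, 0.99))
```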

The selected TGCN configuration had an AUROC of 0.97, 87% sensitivity at 97% specificity, and 75% sensitivity at 99% specificity. Performance of the TGCN was also compared with other deep learning models that leverage spatiotemporal information for making predictions. The five models tested in the study had performance similar to that of the TGCN.

Overall, on the testing dataset the TGCN achieved an AUROC of 0.93, 64% sensitivity at 97% specificity, and 47% sensitivity at 99% specificity.

Model explainability

One of the major factors limiting adoption of deep learning-based diagnostic tools in healthcare is the lack of transparency of AI systems. It is critical for clinicians to be able to interpret how a deep learning system arrived at a prediction, and model explainability therefore has significant implications for real-world deployment of deep learning-based diagnostic tools. Explainability can also yield novel insights into disease diagnosis, opening new research avenues and treatment paradigms. Two separate strategies for model explainability were investigated in this study to extract rich contextual information about the precise time a seizure occurred and the spatial location of the electrodes involved at seizure onset.

Input attribution was the first model explainability tool used in the study. Input attribution assigns an importance level to different input features. Figure 4 shows the input attribution overlaid on an EEG recorded during a single seizure event in a patient. The seizure can be clinically described as follows: The patient has an epileptic arousal around 40 to 45 seconds into the EEG data sample, followed by prominent high-frequency activity in the left hemisphere evolving into high-amplitude sharp waves in the Fp1, F7 and F3 leads. The input attribution map corroborates this description, particularly in its emphasis on the activity beginning at 40 seconds and on the subsequent high-frequency activity in the left hemisphere.

Figure 4. Example of an input attribution map overlaid on EEG data. The attribution score is proportional to the waveform intensity. Reprinted, with permission, from Covert et al.4
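
A minimal gradient-times-input sketch of this idea is shown below; the attribution method used in the study may differ, and the example assumes a differentiable classifier such as the one sketched earlier.

```python
# Gradient-times-input attribution sketch (illustrative; not necessarily the
# attribution method used in the study). It highlights which time-frequency
# inputs most influence the model's seizure score and works with any
# differentiable classifier, such as the SeizureClassifier sketched above.
import torch


def input_attribution(model, x):
    """x: (1, freq, leads, time) input epoch; returns attributions of the same shape."""
    model.eval()
    x = x.clone().requires_grad_(True)
    score = model(x).sum()           # scalar seizure score for this epoch
    score.backward()                 # gradients of the score w.r.t. the input
    return (x.grad * x).detach()     # gradient x input importance map


# Example usage with the classifier sketched earlier:
# attr = input_attribution(model, torch.randn(1, 33, 21, 599))
# per_lead = attr.abs().sum(dim=(0, 1, 3))   # aggregate importance per EEG lead
```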

Sequence dropout is another strategy for model explainability. In sequence dropout, sets of leads are removed from the model’s input and the change in the model’s prediction without those leads is evaluated. Sequence dropout can be applied by removing one lead at a time or by simultaneously removing multiple leads over a region of the brain. Figure 5 is an example of seizure localization using sequence dropout. Sequence dropout identified electrodes in the right frontotemporal region concordant with the clinically identified seizure onset.

Figure 5. Example of seizure localization using sequence dropout. Intensity correlates to the reduction in the model’s prediction when (A) one lead was dropped at a time and (B) a set of leads was dropped at the same time. Reprinted, with permission, from Covert et al.4
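
The sketch below illustrates the single-lead variant, zeroing out one lead at a time (one plausible way to “remove” a lead) and recording how much the model’s seizure score drops; leads whose removal causes the largest drop are candidates for the seizure onset region.

```python
# Sequence dropout sketch: zero out one lead at a time (one plausible way to
# "remove" a lead) and record how much the model's seizure score drops.
import torch


def lead_dropout_scores(model, x):
    """x: (1, freq, leads, time); returns the per-lead reduction in seizure score."""
    model.eval()
    with torch.no_grad():
        baseline = model(x).item()
        drops = []
        for lead in range(x.shape[2]):
            x_drop = x.clone()
            x_drop[:, :, lead, :] = 0.0          # drop this lead's signal
            drops.append(baseline - model(x_drop).item())
    return torch.tensor(drops)


# Example usage with the classifier sketched earlier:
# drops = lead_dropout_scores(model, torch.randn(1, 33, 21, 599))
# print(int(drops.argmax()))  # lead whose removal most reduces the seizure score
```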

Next steps to build on the promise

Our investigations reveal that deep learning models can achieve strong performance in automated detection of epileptic seizures. The performance of the TGCN could be further improved by integrating recurrent neural networks, better managing class imbalance, improving labeling and data cleanup, and training on the larger set of curated data. The model’s ability to extract rich contextual information about seizure onset and spatial localization can further enhance its applicability in real-time automated seizure detection, thereby aiding clinical diagnosis and management.

Finally, electrophysiological information from EEGs could be integrated with results from other epilepsy diagnostic modalities, such as MRI, PET, ictal SPECT, magnetoencephalography and genetic data, along with patient demographics and clinical history to develop an AI-based epilepsy diagnosis and management tool that detects epileptic seizures, recommends appropriate plans of care and optimizes patient management. Such tools have great potential to streamline epilepsy care, improve patient access and reduce the costs associated with epilepsy management. The capability of AI systems to learn from a wealth of clinical data could be harnessed in future studies to facilitate medication management, optimize patients’ stay in the epilepsy monitoring unit, detect and potentially predict epileptic seizures, identify appropriate candidates for resective epilepsy surgery and predict postoperative outcomes.

References

  1. Jin B, Krishnan B, Adler S, et al. Automated detection of focal cortical dysplasia type II with surface‐based magnetic resonance imaging postprocessing and machine learning. Epilepsia. 2018;59(5):982-992. doi:10.1111/epi.14064
  2. Grinenko O, Li J, Mosher JC, et al. A fingerprint of the epileptogenic zone in human epilepsies. Brain. 2018;141(1):117-131. doi:10.1093/brain/awx306
  3. Wang ZI, Krishnan B, Shattuck DW, et al. Automated MRI volumetric analysis in patients with Rasmussen syndrome. AJNR Am J Neuroradiol. 2016;37(12). doi:10.3174/ajnr.A4914
  4. Covert I, Krishnan B, Najm I, et al. Temporal graph convolutional networks for automatic seizure detection. May 2019. http://arxiv.org/abs/1905.01375. Accessed July 11, 2019.

Dr. Krishnan is a staff research scientist and Dr. Alexopoulos is a staff physician, both in Cleveland Clinic’s Epilepsy Center.
