Typically, electroencephalography (EEG) signal activity during a seizure exhibits higher amplitude, more rhythmicity, and less irregularity than that of normal periods (see Fig. 1). Research has demonstrated that the transition from the normal to the seizure state is gradual and accompanied by quantifiable changes in EEG patterns and dynamics. Consequently, many studies have sought to distinguish the preictal stage (the period preceding seizure onset) from the interictal stage (the period between seizures) using machine learning algorithms. Despite many promising Brain-Computer Interfaces (BCIs) being proposed in the literature, most have been implemented in software and remain confined to research laboratories; the need for a bulky computer is one of the primary obstacles to bringing them to the everyday user. A hardware implementation would help bring these technologies closer to the people who need them most. Here, we present a Field Programmable Gate Array (FPGA) implementation of an EEG-based seizure prediction system using the Hilbert-Huang Transform (HHT).

Disadvantages:
- Lower accuracy
- Lower efficiency
The proposed system uses real-time Empirical Mode Decomposition (EMD) for seizure prediction.
The proposed system was developed using the CHB-MIT scalp EEG database. The data were recorded with scalp electrodes at a sampling frequency of 256 Hz from 22 subjects. Each recording consists of 23 EEG channels acquired using the international 10-20 system (a few recordings comprise 24 or 26 channels), and the entire dataset contains 198 seizure events.
Microvolt-range scalp EEG signals captured by electrodes are attenuated by the skull and can be contaminated with muscle noise, eye-blink artifacts, and power-line interference. A bandpass FIR filter with cutoffs of 0.5 Hz and 100 Hz and a 60 Hz notch filter are used at the preprocessing stage to remove baseline wander and suppress power-line noise. The filtered signal x(t) is then decomposed using Empirical Mode Decomposition (EMD).
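The preprocessing stage described above can be sketched as follows. This is a minimal SciPy illustration, not the hardware implementation; the FIR tap count (513) and notch quality factor (Q = 30) are illustrative assumptions not stated in the text:

```python
import numpy as np
from scipy.signal import firwin, iirnotch, filtfilt

FS = 256  # CHB-MIT sampling frequency (Hz)

def preprocess(eeg, fs=FS):
    """Bandpass (0.5-100 Hz) FIR filter plus a 60 Hz notch, as described above."""
    # Linear-phase FIR bandpass; 513 taps is an illustrative choice.
    bp = firwin(513, [0.5, 100.0], pass_zero=False, fs=fs)
    filtered = filtfilt(bp, [1.0], eeg)
    # Second-order IIR notch at 60 Hz to suppress power-line interference.
    b, a = iirnotch(60.0, Q=30.0, fs=fs)
    return filtfilt(b, a, filtered)

# Example: filter ten seconds of synthetic data.
x = np.random.default_rng(0).normal(size=FS * 10)
y = preprocess(x)
```

Zero-phase filtering (`filtfilt`) avoids phase distortion offline; a causal filter would be used in a real-time pipeline.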
EMD decomposes a signal into data sequences of distinct scales and features. Each sequence is called an Intrinsic Mode Function (IMF) and must satisfy the following two conditions:
- Over the whole data set, the number of extrema (local maxima and minima) and the number of zero crossings must be equal or differ by at most one.
- At any point of the signal, the mean of the envelope defined by the local maxima and the envelope defined by the local minima must be zero.
We evaluate a Least-Squares SVM (LS-SVM) with a Radial Basis Function (RBF) kernel and a Logistic Regression (LR) classifier on the extracted features.
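The classification step can be sketched as below. scikit-learn does not ship an LS-SVM, so a standard RBF-kernel SVM stands in for it here; the feature matrix and labels are synthetic placeholders, not CHB-MIT data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: rows = epochs, columns = per-IMF bandwidth features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)  # 0 = interictal, 1 = preictal (synthetic labels)

svm = SVC(kernel="rbf")              # stand-in for the LS-SVM
lr = LogisticRegression(max_iter=1000)

results = {}
for name, clf in [("RBF-SVM", svm), ("LR", lr)]:
    # 5-fold cross-validated accuracy.
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
```

With real per-IMF bandwidth features in place of the random matrix, the same loop compares the two classifiers directly.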
The design uses sawtooth (linear) interpolation instead of the cubic spline interpolation used in software, in order to facilitate continuous processing of incoming data. An S-number termination criterion is used instead of the conventional convergence criteria based on the numbers of zero crossings and local extrema of the signal. We decompose each epoch segment x(t) into 5 IMFs and then extract bandwidth features from each using elements such as accumulators (acc), CORDIC blocks, and multipliers, following the architecture shown in Fig. 1. Because of the recursive nature of the computations and the smaller amplitudes of the components at successive iterations, we use 64-bit fixed-point arithmetic in the design. Calculating the proposed features in hardware consumes a large amount of resources; hence, a separate feature extraction module per EEG channel is not feasible.

Advantages:
- Better efficiency
- Better accuracy
- Implemented using Xilinx ISE
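The sawtooth (linear) envelope interpolation adopted in the hardware design can be sketched as follows. This is a minimal NumPy illustration of the idea, not the FPGA implementation; straight-line segments between extrema replace the cubic splines:

```python
import numpy as np

def linear_envelopes(x):
    """Upper/lower envelopes via straight-line (sawtooth) interpolation
    between extrema -- a hardware-friendly stand-in for cubic splines."""
    t = np.arange(len(x))
    # Strict local maxima / minima of the interior samples.
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    upper = np.interp(t, maxima, x[maxima])
    lower = np.interp(t, minima, x[minima])
    return upper, lower

# Local mean used in one sifting step, on a noisy tone.
t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 8 * t) + 0.3 * np.random.default_rng(1).normal(size=512)
upper, lower = linear_envelopes(x)
mean_env = (upper + lower) / 2.0
```

Linear interpolation needs only the two neighbouring extrema at any sample, which is what permits continuous, streaming processing of incoming data on the FPGA.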