Commit 4ab0aa3

Merge pull request #369 from Mentalab-hub/APIS-1253-We-want-to-have-an-application-space-example-for-a-BCI

APIS-1253 motor-imagery-bci

2 parents fd1df12 + 74f9550

File tree: 2 files changed, +1061 −0 lines changed

examples/motor-imagery-bci/README.md

Lines changed: 131 additions & 0 deletions
## Motor Imagery BCI Application

### Overview
This application is a Brain-Computer Interface (BCI) system designed to classify motor imagery tasks (left hand, right hand, and neutral state) from EEG data. The system connects to an Explore Pro device via the Lab Streaming Layer (LSL) protocol, processes the EEG signals, and classifies the data using machine learning models, including a Transformer-based neural network.

The application supports real-time classification and includes tools for data collection, preprocessing, model training, and evaluation.

### Features
- **Real-Time EEG Data Acquisition:** Connects to an EEG device via LSL for real-time data streaming.
- **Motor Imagery Classification:** Classifies EEG data into three states: left hand movement, right hand movement, and neutral state.
- **Transformer Model:** Implements a Transformer-based neural network for EEG signal classification.
- **Multiple Classifiers:** Supports Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), Random Forest (RF), and the custom Transformer model.
- **Data Preprocessing:** Includes bandpass filtering (mu and beta bands) and feature extraction (band power).
- **Model Training and Evaluation:** Allows training and evaluation of multiple classifiers with visualization of results.
- **Real-Time BCI Operation:** Runs in continuous mode for real-time classification and control.

### Requirements
#### Hardware
- An Explore Pro device.
- A computer with sufficient processing power (a GPU is recommended for Transformer model training).

#### Software
- Python 3.10 or higher
- Required Python packages (install via pip):

```bash
pip install pylsl numpy scipy scikit-learn torch matplotlib seaborn pandas
```

Ensure your EEG device is set up and streaming data via LSL.
### Usage
#### 1. Collect Calibration Data
To train the classifiers, you need to collect calibration data. Run the following command and follow the on-screen instructions:

```bash
python main.py
```

Choose option **1** to train and compare all classifiers. The system will prompt you to perform left hand, right hand, and neutral state tasks.

#### 2. Train Classifiers
The application will automatically preprocess the data, extract features, and train multiple classifiers (LDA, SVM, Random Forest, and Transformer). Training progress and results will be displayed in the console.

#### 3. Run Real-Time BCI
After training, you can run the BCI in real-time mode:

```bash
python main.py
```

Choose option **2** to run the BCI with the best-performing classifier, or option **3** to select a specific classifier.
### File Structure
```
motor-imagery-bci/
├── models/      # Saved classifier models
├── dataset/     # Saved calibration datasets
├── main.py      # Main application script
├── README.md    # This file
```
### Classifiers
The application supports the following classifiers:
- **Linear Discriminant Analysis (LDA)**
- **Support Vector Machine (SVM)**
- **Random Forest (RF)**
- **Transformer Model**
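As a rough sketch of how the three classical classifiers could be trained and compared with scikit-learn (the synthetic features, class shift, and variable names below are illustrative assumptions, not the application's actual data pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for band-power features: 300 windows x 16 features,
# three classes (0 = left, 1 = right, 2 = neutral) with shifted means.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)
X = rng.normal(size=(300, 16)) + 0.8 * y[:, None]

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}

# 5-fold cross-validated accuracy for each classifier
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
best = max(scores, key=scores.get)
```

The Transformer is trained separately with PyTorch; picking `best` by cross-validated score mirrors how a "best-performing classifier" mode could be implemented.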
### Customizing the Transformer Model
The Transformer model is defined in the `EEGTransformer` class. You can customize the following parameters:

- `input_dim`: Input feature dimension.
- `num_classes`: Number of output classes (default: 3).
- `d_model`: Dimension of the model (default: 64).
- `nhead`: Number of attention heads (default: 8).
- `num_layers`: Number of Transformer encoder layers (default: 4).
- `dropout`: Dropout rate (default: 0.2).

Example:

```python
model = EEGTransformer(input_dim=64, num_classes=3, d_model=128, nhead=8, num_layers=6, dropout=0.3)
```
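For a concrete picture of these parameters, here is a minimal sketch of what such an `EEGTransformer` might look like in PyTorch; the internals below are illustrative assumptions, not the exact implementation in `main.py`:

```python
import torch
import torch.nn as nn

class EEGTransformer(nn.Module):
    """Illustrative Transformer classifier for EEG feature sequences."""

    def __init__(self, input_dim, num_classes=3, d_model=64,
                 nhead=8, num_layers=4, dropout=0.2):
        super().__init__()
        self.embed = nn.Linear(input_dim, d_model)   # project features to d_model
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)  # class logits

    def forward(self, x):
        # x: (batch, seq_len, input_dim) -> logits: (batch, num_classes)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))              # average-pool over time

model = EEGTransformer(input_dim=64, num_classes=3, d_model=128,
                       nhead=8, num_layers=6, dropout=0.3)
logits = model(torch.randn(2, 10, 64))  # batch of 2 windows, 10 steps each
```

Note that `d_model` must be divisible by `nhead` for multi-head attention to split evenly.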
### Data Preprocessing
The EEG data is preprocessed as follows:
- **Bandpass Filtering:** Applied to extract the mu (8-12 Hz) and beta (16-24 Hz) bands.
- **Feature Extraction:** Band power is calculated for each frequency band.
- **Dataset Preparation:** Features and labels are saved in a CSV file for training.
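The filtering and band-power steps can be sketched with SciPy; the sampling rate and filter order below are assumptions, not the application's exact settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250.0  # assumed sampling rate in Hz

def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth bandpass filter."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def band_power(x, low, high, fs=FS):
    """Mean power of the signal within the [low, high] Hz band."""
    return float(np.mean(bandpass(x, low, high, fs) ** 2))

# A 10 Hz test tone carries its power in the mu band, not the beta band.
t = np.arange(0, 4, 1 / FS)
tone = np.sin(2 * np.pi * 10 * t)
mu_power = band_power(tone, 8, 12)     # mu band (8-12 Hz)
beta_power = band_power(tone, 16, 24)  # beta band (16-24 Hz)
```

In the application, one such band-power value per channel and band forms the feature vector written to the calibration CSV.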
### Real-Time Operation
In real-time mode, the application:
1. Continuously collects EEG data from the LSL stream.
2. Applies preprocessing and feature extraction.
3. Classifies the data using the selected model.
4. Displays the detected state (left hand, right hand, or neutral) and its certainty level.
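One iteration of that loop can be sketched as follows; the feature extractor, classifier object, and window size are placeholders for the application's actual components, and in the real loop the window array would be filled from the LSL inlet (e.g. via `inlet.pull_chunk()`):

```python
import numpy as np

LABELS = {0: "left hand", 1: "right hand", 2: "neutral"}

def classify_window(window, extract_features, classifier):
    """Classify one EEG window and return (state, certainty)."""
    features = np.asarray(extract_features(window)).reshape(1, -1)
    probs = classifier.predict_proba(features)[0]  # per-class probabilities
    idx = int(np.argmax(probs))
    return LABELS[idx], float(probs[idx])

# Stand-in classifier so the sketch runs without a trained model.
class _DummyClassifier:
    def predict_proba(self, X):
        return np.array([[0.1, 0.7, 0.2]])

state, certainty = classify_window(
    np.zeros((500, 8)),            # 500 samples x 8 channels
    lambda w: w.mean(axis=0),      # placeholder feature extractor
    _DummyClassifier(),
)
```

The certainty reported to the user is simply the probability the model assigns to its top class.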
### Troubleshooting
#### 1. LSL Stream Not Found
- Ensure your device is properly connected and streaming data.
- Verify the stream name in the `MotorImageryBCI` class initialization.

#### 2. Poor Classification Accuracy
- Collect more calibration data.
- Adjust the frequency bands (`mu_band` and `beta_band`) in the `MotorImageryBCI` class.
- Fine-tune the Transformer model parameters.

#### 3. Performance Issues
- Use a GPU for training the Transformer model.
- Reduce the window size or overlap in the `MotorImageryBCI` class.

---
## Future Improvements
While the current system is functional, several enhancements could improve its accuracy, real-time performance, and adaptability:

1. **Advanced Deep Learning Models**
   - Replace traditional classifiers (LDA, SVM, RF) with more powerful architectures such as **EEGNet, ConvLSTM, or Graph Neural Networks (GNNs)**.
   - Optimize the Transformer model by using **EEG-specialized architectures** such as **TS-Transformer** or **EEGFormer** for better time-series representation.

2. **Enhanced Feature Extraction**
   - Incorporate **Common Spatial Patterns (CSP)** to improve the separation of motor imagery classes.
   - Use **Riemannian geometry-based approaches** to extract more robust spatial features.
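For two classes, CSP reduces to a generalized eigendecomposition of the class covariance matrices; a minimal NumPy/SciPy sketch (illustrative only, not part of the current application; a multi-class setup would additionally need e.g. one-vs-rest CSP):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_components=4):
    """CSP spatial filters for two classes of EEG trials.

    X1, X2: (n_trials, n_channels, n_samples) arrays.
    Returns (n_components, n_channels); the first rows maximize variance
    for class 1, the last rows for class 2.
    """
    C1 = np.mean([np.cov(trial) for trial in X1], axis=0)
    C2 = np.mean([np.cov(trial) for trial in X2], axis=0)
    # Solve C1 w = lambda (C1 + C2) w; large lambda favors class 1.
    eigvals, eigvecs = eigh(C1, C1 + C2)
    W = eigvecs[:, np.argsort(eigvals)[::-1]].T
    half = n_components // 2
    return np.vstack([W[:half], W[-half:]])
```

Log-variances of the CSP-projected trials would then replace or augment the band-power features fed to the classifiers.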
3. **Data Augmentation for EEG**
   - Implement **synthetic EEG data generation** techniques such as **time-domain warping, frequency shifting, and GAN-based augmentation** to improve model generalization.

4. **Online & Transfer Learning**
   - Introduce **adaptive learning** to fine-tune the model based on user-specific EEG patterns.
   - Use **few-shot learning** or **domain adaptation techniques** to improve performance across multiple users.
