
# DL_ECG_Classification

Official implementation of the paper ["Deep learning for ECG classification: A comparative study of 1D and 2D representations and multimodal fusion approaches"](https://www.sciencedirect.com/science/article/pii/S174680942400199X).

This project compares how different methods for ECG signal representation perform in ECG classification, and explores a multimodal deep-learning approach that fuses the corresponding models, leveraging the different structures of the signal representations.

## Dataset

Details regarding the dataset are presented here.

Examples of the ECG signal obtained with leads I, II and V2 for the ECG record 17110 (with ground truth label NORM).

## Image Sub-Net: CNNs for ECG classification

Examples of GAF (left), MTF (middle) and RP (right) images for the ECG record 17110 (with ground truth label NORM), corresponding to leads I, II and V2:
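The Gramian Angular Field encoding can be sketched with NumPy as follows. This is a minimal illustration of the GASF variant (rescale the signal to [-1, 1], map amplitudes to angles, then take pairwise angle sums), not necessarily the exact implementation used in `create_images.py`:

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian Angular Summation Field of a 1D signal (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    # Rescale the signal to [-1, 1]
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    # Polar encoding: amplitude -> angle
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # GASF: cos(phi_i + phi_j) for every pair of time steps
    return np.cos(phi[:, None] + phi[None, :])

signal = np.sin(np.linspace(0, 2 * np.pi, 64))
gaf = gramian_angular_field(signal)
print(gaf.shape)  # (64, 64)
```

A signal of length N thus becomes an N×N image whose texture encodes temporal correlations, which is what makes 2D CNNs applicable to 1D ECG data.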

To obtain the images shown above, run the file create_images.py, specifying the partition ('train', 'dev' or 'test'), the directory containing the processed files, and the directory where the images and corresponding labels will be saved.

This will create a directory with the training, validation and test sets with the following tree structure:

```
train_dev_test_dataset
├── train
│   ├── images
│   └── labels
├── dev
│   ├── images
│   └── labels
└── test
    ├── images
    └── labels
```
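The layout above can be reproduced with the standard library; a sketch (the root directory name is arbitrary, and the loaders in this repository may expect specific file names inside each subdirectory):

```python
import tempfile
from pathlib import Path

# Recreate the expected train/dev/test layout under a temporary root
tmp = tempfile.mkdtemp()
root = Path(tmp) / "train_dev_test_dataset"
for split in ("train", "dev", "test"):
    for sub in ("images", "labels"):
        (root / split / sub).mkdir(parents=True, exist_ok=True)

subdirs = sorted(p.relative_to(root).as_posix()
                 for p in root.rglob("*") if p.is_dir())
print(subdirs)
```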

To train a model, use the following command, selecting the model script (AlexNet.py, resnet.py, vggnet.py or alexnetattention.py) and specifying the directory of the dataset (using -data) and the directory in which to save the model (using -path_save_model). Other parameters can be specified as explained here. For instance:

```
python3 AlexNet.py -data '/dev/shm/dataset' -epochs 100 -batch_size 256 -path_save_model '/mnt/2TBData/hemaxi/ProjetoDL/working' -gpu_id 0 -learning_rate 0.01
```

Please note that the optimized configurations for each architecture are based on the findings presented in our paper. We recommend using the parameters specified below and referring to the paper for further details:

- (a) AlexNet: #filters=16, a batch size of 256 (adjust as needed based on GPU memory), a learning rate of 0.01 and a dropout rate of 0;
- (b) ResNet: #filters=16, a batch size of 128 (adjust as needed based on GPU memory), a learning rate of 0.01;
- (c) VGGNet: #filters=16, a batch size of 128 (adjust as needed based on GPU memory), a learning rate of 0.1 and a dropout rate of 0.3;
- (d) MobileNetV2: #filters=32, a batch size of 16 (adjust as needed based on GPU memory), a learning rate of 0.1 and a dropout rate of 0;
- (e) AlexNetAtt: #filters=8, a batch size of 16 (adjust as needed based on GPU memory), a learning rate of 0.01 and a dropout rate of 0.
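For convenience, the recommended settings above can be collected into a small lookup table (a sketch; the actual flag names expected by each training script may differ, and the paper does not report a dropout rate for ResNet):

```python
# Recommended hyperparameters from the paper; batch sizes may need
# lowering on GPUs with less memory.
CONFIGS = {
    "AlexNet":     {"nb_filters": 16, "batch_size": 256, "learning_rate": 0.01, "dropout": 0.0},
    "ResNet":      {"nb_filters": 16, "batch_size": 128, "learning_rate": 0.01, "dropout": None},
    "VGGNet":      {"nb_filters": 16, "batch_size": 128, "learning_rate": 0.1,  "dropout": 0.3},
    "MobileNetV2": {"nb_filters": 32, "batch_size": 16,  "learning_rate": 0.1,  "dropout": 0.0},
    "AlexNetAtt":  {"nb_filters": 8,  "batch_size": 16,  "learning_rate": 0.01, "dropout": 0.0},
}
print(CONFIGS["AlexNet"])
```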

The models that we trained are available here (alexnet, resnet, vggnet and customcnn denote the best AlexNet, ResNet, VGGNet and custom CNN models, respectively, based on their performance on the validation set).

Configuration of the CNN based on the AlexNet model. nb_filters denotes the number of filters in the first layer, from which the number of filters in the following layers is computed; batch denotes the batch size:

| Layer            | Output size                    |
|:----------------:|:------------------------------:|
| Convolutional 2D | (batch, nb_filters, 62, 62)    |
| ReLU             | (batch, nb_filters, 62, 62)    |
| MaxPooling 2D    | (batch, nb_filters, 30, 30)    |
| Dropout 2D       | (batch, nb_filters, 30, 30)    |
| Convolutional 2D | (batch, nb_filters×2, 30, 30)  |
| ReLU             | (batch, nb_filters×2, 30, 30)  |
| MaxPooling 2D    | (batch, nb_filters×2, 14, 14)  |
| Dropout 2D       | (batch, nb_filters×2, 14, 14)  |
| Convolutional 2D | (batch, nb_filters×4, 14, 14)  |
| ReLU             | (batch, nb_filters×4, 14, 14)  |
| Convolutional 2D | (batch, nb_filters×8, 14, 14)  |
| ReLU             | (batch, nb_filters×8, 14, 14)  |
| Convolutional 2D | (batch, 256, 14, 14)           |
| ReLU             | (batch, 256, 14, 14)           |
| MaxPooling 2D    | (batch, 256, 6, 6)             |
| Dropout 2D       | (batch, 256, 6, 6)             |
| Linear           | (batch, 4096)                  |
| ReLU             | (batch, 4096)                  |
| Linear           | (batch, 2048)                  |
| ReLU             | (batch, 2048)                  |
| Linear           | (batch, 4)                     |
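The spatial sizes in the table follow from standard convolution/pooling arithmetic. A quick check, assuming 64×64 input images (consistent with the 62×62 first-layer output), 3×3 convolutions with no padding in the first layer (padding 1 where the size is preserved), and 3×3 max pooling with stride 2:

```python
def conv_out(size, kernel=3, stride=1, padding=0):
    # Standard formula: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=3, stride=2):
    return (size - kernel) // stride + 1

s0 = conv_out(64)   # first conv, no padding: 64 -> 62
s1 = pool_out(s0)   # first max pool:         62 -> 30
s2 = pool_out(s1)   # second max pool:        30 -> 14
s3 = pool_out(s2)   # final max pool:         14 -> 6
print(s0, s1, s2, s3)  # 62 30 14 6
```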

To evaluate the performance of the model, run the file load_alexnet_evaluate.py, specifying the directories of the trained model and of the dataset. This outputs a matrix of dimension (4×4) with the true positives (TP), false negatives (FN), false positives (FP) and true negatives (TN) for each class separately:

|      | TP | FN | FP | TN |
|------|----|----|----|----|
| MI   |    |    |    |    |
| STTC |    |    |    |    |
| CD   |    |    |    |    |
| HYP  |    |    |    |    |
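For multi-label predictions, these per-class counts can be computed as in the following NumPy sketch (load_alexnet_evaluate.py may organize this differently; the toy labels below are illustrative only):

```python
import numpy as np

def per_class_confusion(y_true, y_pred):
    """Return an (n_classes, 4) array of [TP, FN, FP, TN] counts per class,
    given binary multi-label arrays of shape (n_samples, n_classes)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = (y_true & y_pred).sum(axis=0)
    fn = (y_true & ~y_pred).sum(axis=0)
    fp = (~y_true & y_pred).sum(axis=0)
    tn = (~y_true & ~y_pred).sum(axis=0)
    return np.stack([tp, fn, fp, tn], axis=1)

# Toy example: 3 samples, 4 classes ordered as (MI, STTC, CD, HYP)
y_true = [[1, 0, 0, 1], [0, 1, 0, 0], [1, 0, 1, 0]]
y_pred = [[1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
counts = per_class_confusion(y_true, y_pred)
print(counts)
```

Each row sums to the number of samples, since every sample falls into exactly one of the four cells for a given class.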

## Acknowledgements
