Recurrent neural networks, from the plain RNN to more advanced models like the LSTM and GRU, used to be the go-to models for deep-learning practitioners venturing into the time-series domain. NLP, with its abundance of sequence data, provided a willing subject. But transformer architectures like BERT and GPT have since taken over that field. Alongside transformers, CNNs have also made a comeback, or rather an advance, in the time-series domain.
How good are CNNs at modelling time series?
To answer this question, this post replicates the article “ECG Heartbeat Classification: A Deep Transferable Representation” [1], which applies ResNet, a CNN-based architecture, to electrocardiogram (ECG) data. To round it off, transfer learning is applied to the problem.
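The exact architecture from the paper is in the accompanying notebook, but the core idea of a ResNet is a stack of residual blocks whose skip connections let gradients flow through deep networks. A minimal 1D residual block in Keras might look like the sketch below; the filter count, kernel size, and block depth are illustrative assumptions, not the paper's exact values.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=32, kernel_size=5):
    """One 1D residual block: two convolutions plus a skip connection."""
    shortcut = x
    y = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    y = layers.Add()([shortcut, y])      # the residual (skip) connection
    y = layers.Activation("relu")(y)
    return layers.MaxPooling1D(pool_size=2)(y)

# 187 time steps per heartbeat, one channel.
inputs = tf.keras.Input(shape=(187, 1))
x = layers.Conv1D(32, 5, padding="same")(inputs)  # match filter count for the skip
for _ in range(3):                                # illustrative depth
    x = residual_block(x)
x = layers.Flatten()(x)
outputs = layers.Dense(5, activation="softmax")(x)  # 5 heartbeat classes
model = tf.keras.Model(inputs, outputs)
```

Because the skip connection adds the block's input to its output, both must have the same number of channels, which is why the stem convolution already uses 32 filters.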
Keras code is provided in the form of a notebook that can be readily executed, for example with Google Colab, here.
This post is structured as follows:
The article with the original study uses two sets of ECG data:
(Both datasets are available on Kaggle, see the notebook for details.)
Both datasets contain standardized ECG signals, with 187 time steps per heartbeat in each observation. An example observation plotted in 2D looks like this:
2D representation of an observation
In the original MIT-BIH data set one of the following labels is assigned to each observation:
In the Kaggle dataset, which is the data source of the original study, these labels have been fused into 5 categories. The dataset provides both a training set and a test set, of 87,554 and 21,892 observations respectively. Not too shabby!
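Before training, each row has to be split into signal and label, the signal reshaped to add a channel axis for `Conv1D`, and the labels one-hot encoded for a softmax output. A minimal sketch, again on dummy data standing in for the CSV contents:

```python
import numpy as np

# Dummy stand-in for the Kaggle CSV contents: each row holds
# 187 signal values followed by an integer class label (0-4).
rng = np.random.default_rng(0)
data = np.hstack([rng.random((10, 187)),
                  rng.integers(0, 5, (10, 1)).astype(float)])

X = data[:, :-1][..., np.newaxis]   # shape (n, 187, 1), as Conv1D expects
y = data[:, -1].astype(int)
y_onehot = np.eye(5)[y]             # one-hot targets for softmax training
```

The same slicing works on the real arrays loaded from `mitbih_train.csv` and `mitbih_test.csv` (file names as on Kaggle).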