A Convolutional Neural Network for the Detection of Asynchronous Steady State Motion Visual Evoked Potential

Published Jan 1, 2019 in IEEE Transactions on Neural Systems and Rehabilitation Engineering
DOI: 10.1109/TNSRE.2019.2914904
Xin Zhang, Guanghua Xu, + 4 authors, Ning Jiang
Abstract
A key issue in brain-computer interfaces (BCI) is the detection of intentional control (IC) and non-intentional control (NC) states in an asynchronous manner. Furthermore, in steady-state visual evoked potential (SSVEP) BCI systems, multiple sub-states exist within the IC state. Existing recognition methods rely on a threshold technique, with which it is difficult to achieve high accuracy, i.e., a simultaneously high true positive rate and low false positive rate. To address this issue, we proposed, for the first time, a novel convolutional neural network (CNN) to detect IC and NC states in an SSVEP-BCI system. Specifically, the steady-state motion visual evoked potential (SSMVEP) paradigm, which has been shown to induce less visual discomfort, was chosen as the experimental paradigm. Two processing pipelines were proposed for the detection of IC and NC states. The first used the CNN as a multi-class classifier to discriminate among all states across the IC and NC conditions (FFT-CNN). The second used the CNN to discriminate between IC and NC states, and canonical correlation analysis (CCA) to perform classification within the IC state (FFT-CNN-CCA). We demonstrated that both pipelines achieved a significant increase in accuracy for low-performance healthy participants compared with traditional algorithms such as the CCA threshold method. Furthermore, the FFT-CNN-CCA pipeline outperformed the FFT-CNN pipeline on stroke patients' data. In summary, we showed that a CNN can be used for robust detection in an asynchronous SSMVEP-BCI, with great potential for out-of-lab BCI applications.
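To make the CCA step of the second pipeline concrete, the sketch below shows standard CCA-based SSVEP frequency classification: the EEG segment is correlated against sine/cosine reference signals at each candidate stimulation frequency, and the frequency with the largest canonical correlation wins (with a threshold standing in for the NC decision). This is a minimal illustration of the generic technique, not the authors' implementation; the frequencies, threshold, and harmonic count are assumptions.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)  # orthonormal basis for EEG channels
    Qy, _ = np.linalg.qr(Y)  # orthonormal basis for reference signals
    # Canonical correlations are the singular values of Qx^T Qy.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference_signals(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine templates at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def classify_ssvep(eeg, fs, candidate_freqs, threshold=0.3):
    """Return the detected frequency (or None for NC) and all correlations.

    eeg: array of shape (n_samples, n_channels).
    The threshold value is illustrative, not from the paper.
    """
    corrs = [cca_max_corr(eeg, reference_signals(f, fs, eeg.shape[0]))
             for f in candidate_freqs]
    best = int(np.argmax(corrs))
    if corrs[best] < threshold:
        return None, corrs  # below threshold: non-intentional control (NC)
    return candidate_freqs[best], corrs
```

The FFT-CNN-CCA pipeline replaces the fragile threshold-only IC/NC decision with the CNN, and applies a routine like the one above only after the CNN has declared an IC state.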