Title:
MIXED PENALIZATION FOR ENHANCING CLASS SEPARABILITY OF EVOKED RELATED POTENTIALS IN BRAIN-COMPUTER INTERFACES
Author(s):
VICTORIA PETERSON; RUBÉN SPIES
Meeting:
Conference; 9th International Conference "Inverse Problems: Modeling and Simulation"; 2018
Organizing institution:
Eurasian Association on Inverse Problems
Abstract:
A brain-computer interface (BCI) is a system that provides an alternative way of communication between the mind of a person and the outside world by using only measured brain activity [1]. An efficient and non-invasive way of establishing this communication is based on electroencephalography (EEG) and event-related potentials (ERPs). An ERP is an endogenous potential that arises as a consequence of an external, relevant stimulus [2]. Detecting the ERP signal immersed in the ongoing EEG turns out to be an extremely hard and challenging binary pattern recognition problem. The Linear Discriminant Analysis (LDA) criterion is a well-known and widely used dimensionality reduction tool in the context of supervised classification. Although LDA generally yields good classification performance while keeping the solution simple, it fails when the number of features is large relative to the number of observations in the given data. This poor classification performance is due to the poor estimation of the covariance matrices used within LDA, which usually become ill-conditioned. Several authors, both from the BCI and the statistical research communities, have proposed different regularized versions of LDA, consistently showing the advantages of such tools. In this work we present a penalized version of sparse discriminant analysis (SDA) [4], called generalized sparse discriminant analysis (GSDA) [5], for binary classification. This method inherits both the discriminative feature selection and the classification properties of SDA, and it further improves on SDA through the addition of Kullback-Leibler class discrepancy information. The GSDA method is designed to automatically select the optimal regularization parameters by means of the L-hypersurface. Numerical experiments with two real ERP-EEG datasets show that, on the one hand, GSDA outperforms standard SDA in terms of classification performance, sparsity and required computing time, and, on the other hand, it also yields better overall performance for single-trial ERP classification than most state-of-the-art ERP classification algorithms when insufficient training samples are available.
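The ill-conditioning issue that the abstract attributes to plain LDA can be made concrete with a small sketch. The Python/NumPy snippet below is not taken from the original work: the class sizes, feature dimension and ridge parameter lam are arbitrary illustrative choices. It only shows that the pooled within-class covariance is singular when the number of features exceeds the number of trials, and that a ridge-type penalty restores a well-posed discriminant direction; the sparsity-inducing term, the Kullback-Leibler class discrepancy penalty and the L-hypersurface parameter selection that define GSDA [5] are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary problem mimicking single-trial ERP data:
# far more features than observations (p >> n).
n_per_class, n_features = 30, 200
shift = np.r_[0.5 * np.ones(10), np.zeros(n_features - 10)]  # class-mean difference
X0 = rng.normal(size=(n_per_class, n_features))              # "non-target" trials
X1 = rng.normal(size=(n_per_class, n_features)) + shift      # "target" trials
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]

def lda_direction(X, y, lam=0.0):
    """Fisher discriminant direction w = (S_w + lam*I)^{-1} (m1 - m0).

    With lam = 0 this is plain LDA: the pooled within-class covariance
    S_w has rank at most n_samples - 2, so it is singular whenever
    n_features > n_samples and the linear system cannot be solved
    reliably.  A ridge-type penalty lam > 0 makes the problem well-posed.
    """
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Xc = np.vstack([X[y == 0] - m0, X[y == 1] - m1])
    S_w = Xc.T @ Xc / (len(X) - 2)
    return np.linalg.solve(S_w + lam * np.eye(X.shape[1]), m1 - m0)

w = lda_direction(X, y, lam=1.0)   # regularized direction is well-defined
scores = X @ w                     # 1-D projections used to separate the classes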