Language Model-Guided Classifier Adaptation for Brain-Computer Interfaces for Communication

Abstract

Brain-computer interfaces (BCIs), such as the P300 speller, can provide a means of communication for individuals with severe neuromuscular limitations. BCIs interpret electroencephalography (EEG) signals to translate embedded information about a user's intent into executable commands for controlling external devices. However, EEG signals are inherently noisy and nonstationary, posing a challenge to extended BCI use. Conventionally, a BCI classifier is trained via supervised learning in an offline calibration session; once trained, the classifier is deployed for online use and is not updated. As the statistics of a user's EEG data change over time, the performance of a static classifier may decline with extended use. It is therefore desirable to adapt the classifier automatically to the current data statistics without requiring offline recalibration. In an existing semi-supervised learning approach, the classifier is first trained on labeled EEG data and then updated using incoming unlabeled EEG data with classifier-predicted labels. To reduce the risk of learning from incorrect predictions, a confidence threshold excludes unlabeled data with low-confidence label predictions from the expanded training set used to retrain the adaptive classifier. In this work, we propose using a language model for spelling error correction and disambiguation to provide information about label correctness during semi-supervised learning. Results from simulations with multi-session EEG data from P300 speller users demonstrate that our language-guided semi-supervised approach significantly improves spelling accuracy relative to conventional BCI calibration and threshold-based semi-supervised learning.
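To make the adaptation loop concrete, the Python sketch below illustrates the two pseudo-label selection rules the abstract contrasts: threshold-based confidence filtering and language-model-guided filtering. This is a minimal sketch, not the authors' implementation; the choice of classifier, the cutoff `CONF_THRESHOLD`, and the `lm_correct` mask (a hypothetical stand-in for the language model's spelling-correction signal) are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

CONF_THRESHOLD = 0.9  # assumed confidence cutoff; not specified in the paper


def retrain_semisupervised(clf, X_labeled, y_labeled, X_unlabeled, lm_correct=None):
    """Sketch of semi-supervised classifier adaptation for a P300 speller.

    clf          -- a fitted probabilistic classifier (e.g. an sklearn estimator)
    X_labeled    -- EEG features from the offline calibration session
    y_labeled    -- calibration labels
    X_unlabeled  -- EEG features collected during online use
    lm_correct   -- optional boolean mask marking which classifier-predicted
                    labels a language model judges correct (hypothetical
                    stand-in for the paper's LM-derived correctness signal)
    """
    proba = clf.predict_proba(X_unlabeled)
    pseudo_labels = proba.argmax(axis=1)
    confidence = proba.max(axis=1)

    if lm_correct is None:
        # Conventional threshold-based selection: keep only unlabeled data
        # with high-confidence pseudo-labels in the expanded training set.
        keep = confidence >= CONF_THRESHOLD
    else:
        # LM-guided selection: keep unlabeled data whose predicted labels
        # the language model deems correct in context.
        keep = lm_correct

    # Retrain on calibration data plus the selected pseudo-labeled data.
    X_aug = np.vstack([X_labeled, X_unlabeled[keep]])
    y_aug = np.concatenate([y_labeled, pseudo_labels[keep]])
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```

The design point is only where the `keep` mask comes from: the conventional approach trusts the classifier's own confidence scores, whereas the proposed approach substitutes an external correctness signal derived from a language model's spelling error correction and disambiguation of the spelled text.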

DOI
10.1109/smc53654.2022.9945561
Year
2022