EEG-Based Brain-Computer Interfaces are Vulnerable to Backdoor Attacks.
- Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
- Publication Type: Journal Article
- Citation: IEEE Trans Neural Syst Rehabil Eng, 2023, 31, pp. 2224-2234
- Issue Date: 2023
This item is open access.
Full metadata record
Field | Value | Language |
---|---|---|
dc.contributor.author | Meng, L | |
dc.contributor.author | Jiang, X | |
dc.contributor.author | Huang, J | |
dc.contributor.author | Zeng, Z | |
dc.contributor.author | Yu, S | |
dc.contributor.author | Jung, T-P | |
dc.contributor.author | Lin, C-T | |
dc.contributor.author | Chavarriaga, R | |
dc.contributor.author | Wu, D | |
dc.date.accessioned | 2024-03-05T03:16:31Z | |
dc.date.available | 2024-03-05T03:16:31Z | |
dc.date.issued | 2023 | |
dc.identifier.citation | IEEE Trans Neural Syst Rehabil Eng, 2023, 31, pp. 2224-2234 | |
dc.identifier.issn | 1534-4320 | |
dc.identifier.issn | 1558-0210 | |
dc.identifier.uri | http://hdl.handle.net/10453/176116 | |
dc.description.abstract | Research and development of electroencephalogram (EEG) based brain-computer interfaces (BCIs) have advanced rapidly, partly due to a deeper understanding of the brain and the wide adoption of sophisticated machine learning approaches for decoding EEG signals. However, recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks. This paper proposes using narrow period pulses for poisoning attacks on EEG-based BCIs, which makes adversarial attacks much easier to implement. One can create dangerous backdoors in the machine learning model by injecting poisoned samples into the training set. Test samples with the backdoor key will then be classified into the target class specified by the attacker. What most distinguishes our approach from previous ones is that the backdoor key does not need to be synchronized with the EEG trials, making it very easy to implement. The effectiveness and robustness of the backdoor attack approach are demonstrated, highlighting a critical security concern for EEG-based BCIs and calling for urgent attention to address it. | |
dc.format | Print-Electronic | |
dc.language | eng | |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | |
dc.relation.ispartof | IEEE Trans Neural Syst Rehabil Eng | |
dc.relation.isbasedon | 10.1109/TNSRE.2023.3273214 | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.subject | 0903 Biomedical Engineering, 0906 Electrical and Electronic Engineering | |
dc.subject.classification | Biomedical Engineering | |
dc.subject.classification | 4003 Biomedical engineering | |
dc.subject.classification | 4007 Control engineering, mechatronics and robotics | |
dc.subject.mesh | Humans | |
dc.subject.mesh | Brain-Computer Interfaces | |
dc.subject.mesh | Electroencephalography | |
dc.subject.mesh | Algorithms | |
dc.subject.mesh | Machine Learning | |
dc.subject.mesh | Brain | |
dc.title | EEG-Based Brain-Computer Interfaces are Vulnerable to Backdoor Attacks. | |
dc.type | Journal Article | |
utslib.citation.volume | 31 | |
utslib.location.activity | United States | |
utslib.for | 0903 Biomedical Engineering | |
utslib.for | 0906 Electrical and Electronic Engineering | |
pubs.organisational-group | University of Technology Sydney | |
pubs.organisational-group | University of Technology Sydney/Faculty of Engineering and Information Technology | |
pubs.organisational-group | University of Technology Sydney/Strength - AAII - Australian Artificial Intelligence Institute | |
pubs.organisational-group | University of Technology Sydney/Faculty of Engineering and Information Technology/School of Computer Science | |
utslib.copyright.status | open_access | * |
dc.date.updated | 2024-03-05T03:16:25Z | |
pubs.publication-status | Published | |
pubs.volume | 31 |
Abstract:
Research and development of electroencephalogram (EEG) based brain-computer interfaces (BCIs) have advanced rapidly, partly due to a deeper understanding of the brain and the wide adoption of sophisticated machine learning approaches for decoding EEG signals. However, recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks. This paper proposes using narrow period pulses for poisoning attacks on EEG-based BCIs, which makes adversarial attacks much easier to implement. One can create dangerous backdoors in the machine learning model by injecting poisoned samples into the training set. Test samples with the backdoor key will then be classified into the target class specified by the attacker. What most distinguishes our approach from previous ones is that the backdoor key does not need to be synchronized with the EEG trials, making it very easy to implement. The effectiveness and robustness of the backdoor attack approach are demonstrated, highlighting a critical security concern for EEG-based BCIs and calling for urgent attention to address it.
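The attack the abstract describes can be sketched on synthetic data. The following is a toy illustration only, not the paper's actual datasets, trigger parameters, or models: the "EEG" is white noise with a class-dependent offset, the classifier is a nearest-centroid rule on mean signal power, and all dimensions and rates (sampling rate, poisoning rate, pulse period/width/amplitude) are assumed values. It shows the two ingredients the abstract names: a narrow period pulse added to a small fraction of training trials that are relabeled to the attacker's target class, and a backdoor key whose phase is random, so it need not be synchronized with trial onset.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 250                 # assumed sampling rate (Hz)
n_ch, n_t = 8, fs        # toy dimensions: 8 channels, 1-second trials

def npp_trigger(n_t, period=25, width=5, amp=3.0):
    """Narrow period pulse: short pulses of `width` samples repeated every
    `period` samples. The random phase models the abstract's key point:
    the backdoor key need not be synchronized with EEG trial onset."""
    x = np.zeros(n_t)
    phase = int(rng.integers(0, period))
    for s in range(phase, n_t, period):
        x[s:s + width] = amp
    return x

def make_trials(n, label):
    # toy 'EEG': white noise plus a small class-dependent offset
    return rng.normal(label * 0.5, 1.0, (n, n_ch, n_t))

# training set: 100 trials per class
X = np.vstack([make_trials(100, 0), make_trials(100, 1)])
y = np.array([0] * 100 + [1] * 100)

# poison 10% of training trials: add the NPP key, relabel to the target class
target = 1
for i in rng.choice(len(X), 20, replace=False):
    X[i] += npp_trigger(n_t)     # broadcast the pulse train over channels
    y[i] = target

# toy classifier: nearest centroid on mean signal power (phase-invariant,
# so it picks up the trigger regardless of where the pulses fall)
def power(X):
    return (X ** 2).mean(axis=(1, 2))

c0 = power(X[y == 0]).mean()
c1 = power(X[y == 1]).mean()

def predict(Xtest):
    p = power(Xtest)
    return (np.abs(p - c1) < np.abs(p - c0)).astype(int)

# evaluation: clean class-0 trials vs the same trials with the backdoor key
clean = make_trials(50, 0)
trig = clean + np.stack([npp_trigger(n_t) for _ in range(50)])[:, None, :]
print("clean accuracy     :", (predict(clean) == 0).mean())
print("attack success rate:", (predict(trig) == target).mean())
```

Under these assumptions the model still classifies clean trials correctly while triggered trials are pushed into the target class, which is why such backdoors are hard to notice from ordinary validation accuracy alone.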