White-box target attack for EEG-based BCI regression problems
- Publisher:
- Springer
- Publication Type:
- Conference Proceeding
- Citation:
- Neural Information Processing: 26th International Conference, ICONIP 2019, Sydney, NSW, Australia, December 12–15, 2019, Proceedings, Part I, 2019, pp. 476–490
- Issue Date:
- 2019
Closed Access
Filename | Description | Size
---|---|---
Meng2019_Chapter_White-BoxTargetAttackForEEG-Ba.pdf | Published version | 946.31 kB
This item is closed access and not available.
Machine learning has achieved great success in many applications, including electroencephalogram (EEG) based brain-computer interfaces (BCIs). Unfortunately, many machine learning models are vulnerable to adversarial examples, which are crafted by adding deliberately designed perturbations to the original inputs. Many adversarial attack approaches have been proposed for classification problems, but few have considered target adversarial attacks for regression problems. This paper proposes two such approaches. More specifically, we consider white-box target attacks for regression problems, where we know all information about the regression model to be attacked and want to design small perturbations that change the regression output by a pre-determined amount. Experiments on two BCI regression problems verified that both approaches are effective. Moreover, adversarial examples generated by both approaches are transferable: adversarial examples generated from one known regression model can be used to attack an unknown regression model, i.e., to perform black-box attacks. To our knowledge, this is the first study on adversarial attacks for EEG-based BCI regression problems, and it calls for more attention to the security of BCI systems.
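The full text is closed access, so the sketch below is only a generic illustration of the kind of attack the abstract describes, not the authors' method: an iterative, gradient-based white-box perturbation (here in PyTorch) that pushes a regression model's output toward a pre-determined target shift `delta_y` while keeping the perturbation inside a small L-infinity ball. The function name, the optimizer choice, and all parameter values are assumptions.

```python
# Generic sketch of a white-box target attack on a regression model.
# NOT the paper's algorithm (the paper is closed access); `model`, `x`,
# `delta_y`, and `epsilon` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def target_attack(model, x, delta_y, epsilon=0.05, steps=20, lr=0.01):
    """Perturb input x so model(x) shifts by roughly delta_y, with the
    perturbation constrained to an L-infinity ball of radius epsilon."""
    model.eval()
    y_target = model(x).detach() + delta_y           # desired shifted output
    perturbation = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([perturbation], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        y_adv = model(x + perturbation)
        loss = F.mse_loss(y_adv, y_target)           # drive output toward target
        loss.backward()
        optimizer.step()
        # Project back into the epsilon-ball so the perturbation stays small.
        with torch.no_grad():
            perturbation.clamp_(-epsilon, epsilon)
    return (x + perturbation).detach()
```

The abstract's transferability claim would correspond to feeding the returned adversarial examples to a different, unseen regression model (a black-box attack); that step is omitted here.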