A data-driven attack against support vectors of SVM
- Publication Type: Conference Proceeding
- Citation: ASIACCS 2018 - Proceedings of the 2018 ACM Asia Conference on Computer and Communications Security, 2018, pp. 723-734
- Issue Date: 2018-05-29
Closed Access
Filename | Description | Size
---|---|---
p723-liu.pdf | Published version | 11.65 MB
This item is closed access and not available.
© 2018 Association for Computing Machinery. Machine learning (ML) is widely used across disciplines and real-world applications such as information retrieval, financial systems, healthcare, biometrics and online social networks. However, the security of ML algorithms against deliberate attacks has seldom been considered. Sophisticated adversaries can exploit specific vulnerabilities in classical ML algorithms to deceive intelligent systems. It is therefore becoming essential to perform a thorough security evaluation of machine learning techniques, including potential attacks against them, before developing new methods, so as to guarantee that machine learning can be securely applied in adversarial settings. In this paper, we propose, with mathematical proof, an effective attack strategy that crafts foreign support vectors to attack a classic ML algorithm, the Support Vector Machine (SVM). The attack simultaneously minimizes the margin around the decision boundary and maximizes the hinge loss. We evaluate the attack on several real-world applications, including social spam detection, Internet traffic classification and image recognition. Experimental results highlight that the security of classifiers can be significantly degraded by poisoning only a small set of support vectors.
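To make the intuition concrete, below is a minimal, hypothetical sketch of this style of attack; it is not the paper's algorithm, only an illustration of how a few mislabeled points derived from a model's own support vectors can shrink the margin and degrade a linear SVM. The synthetic dataset, the perturbation step and the scikit-learn setup are all assumptions made for the example.

```python
# Illustrative sketch only: NOT the paper's method. It demonstrates the
# general idea that poisoning a handful of points near the margin
# ("foreign support vectors") can degrade an SVM classifier.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Clean, roughly separable two-class data (assumed for illustration).
X, y = make_blobs(n_samples=400, centers=2, cluster_std=1.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
print("clean accuracy:", clean.score(X_test, y_test))

# Craft poison points: take some of the clean model's support vectors
# (the points that define the boundary), push each one deeper into its
# own class's region, then flip its label. Each poison point now sits on
# the wrong side of the boundary for its label, which forces a retrained
# SVM to shrink its margin and incur a larger hinge loss.
sv = clean.support_vectors_[:10]
sv_labels = y_train[clean.support_][:10]
w = clean.coef_[0]                       # normal of the linear boundary
step = 1.5 * w / np.linalg.norm(w)       # perturbation size is an assumption
signs = np.where(sv_labels == 1, 1.0, -1.0)
poison_X = sv + signs[:, None] * step
poison_y = 1 - sv_labels                 # flipped labels

X_poisoned = np.vstack([X_train, poison_X])
y_poisoned = np.concatenate([y_train, poison_y])

poisoned = SVC(kernel="linear", C=1.0).fit(X_poisoned, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Because an SVM's decision boundary is determined entirely by its support vectors, corrupting even a few points near the margin typically has an outsized effect compared with poisoning the same number of random training points, which is the leverage the abstract's attack exploits.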