Fraud's Bargain Attacks to Textual Classifiers via Metropolis-Hasting Sampling

Publication Type:
Conference Proceeding
Citation:
Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023, 2023, 37, pp. 16290-16291
Issue Date:
2023-06-27
Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic heuristic rules that are agnostic to the optimal adversarial examples, a strategy that often results in attack failures. To this end, this research proposes Fraud's Bargain Attack (FBA), which utilizes a novel randomization mechanism to enlarge the search space and enables high-quality adversarial examples to be generated with high probability. FBA applies the Metropolis-Hastings algorithm to enhance the selection of adversarial examples from all candidates proposed by a customized Word Manipulation Process (WMP). WMP perturbs one word at a time via insertion, removal, or substitution in a context-aware manner. Extensive experiments demonstrate that FBA outperforms the baselines in terms of attack success rate and imperceptibility.
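
The sketch below illustrates the general idea of Metropolis-Hastings selection over candidates proposed by a word-level perturbation process, as described in the abstract. It is a minimal toy, assuming a hypothetical attack objective (classifier loss plus a similarity term), a simplified proposal drawn from a fixed vocabulary rather than a context-aware language model, and an approximately symmetric proposal so the q-ratio is dropped; none of these choices are taken from the paper itself.

```python
"""Minimal sketch: Metropolis-Hastings selection of adversarial text
candidates proposed by a toy word-manipulation proposal. The scoring
function, proposal, and symmetry assumption are illustrative
placeholders, not the authors' exact formulation."""

import math
import random

random.seed(0)


def attack_score(tokens, victim_loss, similarity):
    """Unnormalized target density pi(x): higher when the candidate both
    hurts the victim classifier and stays close to the original text.
    (Hypothetical combination of terms.)"""
    return math.exp(victim_loss(tokens) + similarity(tokens))


def wmp_proposal(tokens, vocab):
    """Toy word manipulation: perturb one word at a time by insertion,
    removal, or substitution. A context-aware proposal would draw
    candidates from a masked language model instead of `vocab`."""
    tokens = list(tokens)
    i = random.randrange(len(tokens))
    op = random.choice(["insert", "remove", "substitute"])
    if op == "insert":
        tokens.insert(i, random.choice(vocab))
    elif op == "remove" and len(tokens) > 1:
        tokens.pop(i)
    else:
        tokens[i] = random.choice(vocab)
    return tokens


def metropolis_hastings_attack(original, victim_loss, similarity, vocab,
                               steps=200):
    """Run MH over adversarial candidates; assumes a roughly symmetric
    proposal so the acceptance ratio reduces to pi(x') / pi(x)."""
    current = list(original)
    current_score = attack_score(current, victim_loss, similarity)
    best, best_score = current, current_score
    for _ in range(steps):
        candidate = wmp_proposal(current, vocab)
        cand_score = attack_score(candidate, victim_loss, similarity)
        alpha = min(1.0, cand_score / current_score)  # accept/reject step
        if random.random() < alpha:
            current, current_score = candidate, cand_score
            if current_score > best_score:
                best, best_score = current, current_score
    return best


if __name__ == "__main__":
    sentence = "the movie was great".split()
    vocab = ["terrible", "awful", "fine", "bad", "great", "film"]
    # Dummy stand-ins for a real classifier loss and semantic similarity.
    loss = lambda t: sum(w in {"terrible", "awful", "bad"} for w in t)
    sim = lambda t: -0.5 * abs(len(t) - len(sentence))
    print(" ".join(metropolis_hastings_attack(sentence, loss, sim, vocab)))
```

In this sketch the randomized accept/reject step is what enlarges the search space relative to purely greedy, deterministic edits: a lower-scoring candidate can still be accepted with probability proportional to its score ratio, which helps the chain escape locally suboptimal perturbations.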