| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Tian, H | |
| dc.contributor.author | Zhang, G | |
| dc.contributor.author | Liu, B (https://orcid.org/0000-0002-3603-6617) | |
| dc.contributor.author | Zhu, T | |
| dc.contributor.author | Ding, M | |
| dc.contributor.author | Zhou, W | |
| dc.date | 2023-08-19 | |
| dc.date.accessioned | 2024-08-21T03:10:31Z | |
| dc.date.available | 2024-08-21T03:10:31Z | |
| dc.date.issued | 2024-08-01 | |
| dc.identifier.citation | Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024, pp. 512-520 | |
| dc.identifier.uri | http://hdl.handle.net/10453/180429 | |
| dc.description.abstract | While in-processing fairness approaches show promise in mitigating biased predictions, their potential impact on privacy leakage remains under-explored. We aim to address this gap by assessing the privacy risks of fairness-enhanced binary classifiers with membership inference attacks (MIAs). Surprisingly, our results reveal that these fairness interventions exhibit increased resilience against existing attacks, indicating that enhancing fairness does not necessarily lead to privacy compromises. However, we find that current attack methods are ineffective, as they typically degrade into simple threshold models with limited attack power. Following this observation, we discover a novel threat dubbed Fairness Discrepancy Membership Inference Attacks (FD-MIA), which exploits prediction discrepancies between fair and biased models. This attack reveals more potent vulnerabilities and poses significant risks to model privacy. Extensive experiments across multiple datasets, attack methods, and representative fairness approaches confirm our findings and demonstrate the efficacy of the proposed attack. Our study exposes the overlooked privacy threats in fairness studies and advocates for thorough evaluations of potential security vulnerabilities before model deployment. (An illustrative sketch of the two attack styles contrasted here follows the table.) | |
| dc.language | en | |
| dc.publisher | International Joint Conferences on Artificial Intelligence | |
| dc.relation | http://purl.org/au-research/grants/arc/DP230100246 | |
| dc.relation | http://purl.org/au-research/grants/arc/LP220200808 | |
| dc.relation.ispartof | Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence | |
| dc.relation.isbasedon | 10.24963/ijcai.2024/57 | |
| dc.rights | info:eu-repo/semantics/openAccess | |
| dc.title | When Fairness Meets Privacy: Exploring Privacy Threats in Fair Binary Classifiers via Membership Inference Attacks | |
| dc.type | Conference Proceeding | |
| pubs.organisational-group | University of Technology Sydney | |
| pubs.organisational-group | University of Technology Sydney/Faculty of Engineering and Information Technology | |
| pubs.organisational-group | University of Technology Sydney/Strength - AAII - Australian Artificial Intelligence Institute | |
| pubs.organisational-group | University of Technology Sydney/Faculty of Engineering and Information Technology/School of Computer Science | |
| pubs.organisational-group | University of Technology Sydney/Strength - CCSP - Centre for Cyber Security and Privacy | |
| utslib.copyright.status | open_access | |
| dc.date.updated | 2024-08-21T03:10:29Z | |
| pubs.finish-date | 2023-08-25 | |
| pubs.publication-status | Published | |
| pubs.start-date | 2023-08-19 | |
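The abstract's central contrast, that threshold-style MIAs lose power on fairness-enhanced models while a discrepancy-based attack recovers it, can be made concrete with a small sketch. Everything below is synthetic and illustrative: the confidence distributions, the shift applied by the simulated "fairness intervention", and the TPR minus FPR advantage metric are assumptions standing in for the paper's setup, not the actual FD-MIA construction (see DOI 10.24963/ijcai.2024/57 above for that).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Confidences from a stand-in "biased" (unconstrained) model. Members tend
# to receive higher confidence -- the overfitting signal that classic
# threshold MIAs exploit.
member_biased = np.clip(rng.normal(0.92, 0.05, n), 0.0, 1.0)
nonmember_biased = np.clip(rng.normal(0.80, 0.10, n), 0.0, 1.0)

# Simulated fairness intervention: it shifts member predictions noticeably
# (their records shaped the correction) and non-member predictions barely.
# This per-sample pairing of "before" and "after" scores is the assumption
# behind the discrepancy attack illustrated below.
member_fair = np.clip(member_biased - rng.normal(0.08, 0.02, n), 0.0, 1.0)
nonmember_fair = np.clip(nonmember_biased - rng.normal(0.01, 0.02, n), 0.0, 1.0)

def best_threshold_advantage(member_scores, nonmember_scores):
    """Membership advantage (TPR - FPR) of the best single-threshold rule
    that predicts 'member' whenever the score exceeds the threshold."""
    taus = np.linspace(0.0, 1.0, 201)
    return max((member_scores > t).mean() - (nonmember_scores > t).mean()
               for t in taus)

# Classic threshold MIA on each model's confidence alone: the fairness
# intervention shrinks the member/non-member gap, so the attack weakens.
adv_biased = best_threshold_advantage(member_biased, nonmember_biased)
adv_fair = best_threshold_advantage(member_fair, nonmember_fair)

# FD-MIA-style score: the per-sample discrepancy between the biased and
# fair models' confidences, which in this setup is large mainly for members.
adv_disc = best_threshold_advantage(np.abs(member_biased - member_fair),
                                    np.abs(nonmember_biased - nonmember_fair))

print(f"threshold MIA, biased model:  advantage = {adv_biased:.2f}")
print(f"threshold MIA, fair model:    advantage = {adv_fair:.2f}")
print(f"discrepancy (FD-MIA-style):   advantage = {adv_disc:.2f}")
```

On this synthetic data the threshold attack's advantage drops once the fairness shift is applied, while the discrepancy score separates members from non-members almost perfectly, mirroring the qualitative claim in the abstract.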