Privacy-preserving mechanisms for machine learning on graph-structured data

Publication Type: Thesis
Issue Date: 2024
Graph neural networks (GNNs) have shown great capacity for handling graph-structured data across various disciplines. However, GNNs are vulnerable to privacy attacks that may lead to data breaches. This thesis rigorously explores privacy-preserving mechanisms for GNNs through three research works. The first addresses property inference attacks on graph data by applying the Information Bottleneck (IB) principle to modify graph structures. This reduces the leakage of sensitive property information in graph embeddings while retaining task-relevant information, so GNNs maintain their predictive accuracy. The second proposes the Subgraph-Out-of-Subgraph (SOS) approach in federated graph learning to prevent model inversion attacks. This method extracts task-relevant subgraphs using the IB principle, minimizing the sensitive information carried in GNN updates; it incorporates a novel neural network-based approach to mutual information estimation and a generation algorithm for optimized subgraphs. The third, turning to a more realistic scenario, focuses on federated graph learning for traffic forecasting in intelligent transportation systems and introduces a differential privacy (DP)-based framework that protects the topological information of data contributors. Together, these works aim to advance privacy-preserving mechanisms for GNNs, ensuring the secure and efficient utilization of graph-structured data.
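
The IB principle underpinning the first two works can be summarized by the standard objective below. This is the textbook formulation, shown only for orientation; the notation is illustrative and not taken from the thesis itself:

```latex
% Information Bottleneck objective for a graph encoder: learn a
% representation Z of the input graph G that stays predictive of the
% task label Y while compressing everything else -- including sensitive
% properties that ride along with the discarded structure.
\min_{p(Z \mid G)} \; \beta \, I(Z; G) - I(Z; Y)
```

Here I(·;·) denotes mutual information and β ≥ 0 trades off compression against task relevance: a larger β discards more of G, which is exactly what suppresses property leakage.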
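For the neural mutual information estimation mentioned in the second work, a common building block is a MINE-style critic trained to maximize the Donsker-Varadhan lower bound. The PyTorch sketch below illustrates that generic idea only, not the thesis's specific estimator; the names `MINECritic` and `mi_lower_bound` are hypothetical:

```python
import math
import torch
import torch.nn as nn

class MINECritic(nn.Module):
    """Critic network T(x, z) for a MINE-style lower bound on I(X; Z)."""
    def __init__(self, x_dim: int, z_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def mi_lower_bound(critic: MINECritic, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan bound: I(X;Z) >= E_p(x,z)[T] - log E_p(x)p(z)[exp T].
    Maximizing this over the critic's parameters tightens the estimate."""
    t_joint = critic(x, z)                     # scores on paired (joint) samples
    z_perm = z[torch.randperm(z.size(0))]      # shuffle z to mimic the product of marginals
    t_marg = critic(x, z_perm)
    return t_joint.mean() - (torch.logsumexp(t_marg, dim=0) - math.log(t_marg.size(0)))
```

An estimator of this kind can serve as a differentiable surrogate for the I(Z; G) term in the IB objective, which is otherwise intractable for high-dimensional graph embeddings.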
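For the DP-based framework in the third work, a minimal sketch of the standard Gaussian mechanism applied to client updates gives the flavor of how a contributor's local topology can be shielded during federated training. The function name and parameter values below are assumptions for illustration, not the thesis's exact construction:

```python
import torch

def privatize_update(update: torch.Tensor,
                     clip_norm: float = 1.0,
                     noise_multiplier: float = 1.1) -> torch.Tensor:
    """Clip a client's model update and add calibrated Gaussian noise before
    it leaves the device, so the server-side aggregate reveals less about any
    single contributor's local road-network structure."""
    # Bound the update's L2 sensitivity by clipping its norm to clip_norm.
    scale = torch.clamp(clip_norm / (update.norm(p=2) + 1e-12), max=1.0)
    clipped = update * scale
    # Gaussian mechanism: sigma = noise_multiplier * clip_norm; the resulting
    # (epsilon, delta)-DP guarantee follows from a privacy accountant applied
    # over the training rounds.
    noise = torch.randn_like(clipped) * (noise_multiplier * clip_norm)
    return clipped + noise
```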