Adversarial training-based robust lifetime prediction system for power transformers
- Publisher: Elsevier
- Publication Type: Journal Article
- Citation: Electric Power Systems Research, 2024, 231, pp. 110351
- Issue Date: 2024-06-01
Embargoed
Filename | Description | Size
---|---|---
Adversarial training-based robust lifetime prediction system for power transformers.pdf | Accepted version | 1.68 MB
This item is currently unavailable due to the publisher's embargo.
Predictive maintenance, facilitated by smart devices and cyber infrastructure, is applied to essential equipment such as power transformers, enhancing power grid stability and reducing operating costs. As part of predictive maintenance, machine learning (ML) methods are employed to predict the remaining useful life (RUL) of power transformers; these methods can be vulnerable to cyber-attacks, especially data contamination attacks. Hence, this work introduces false data injection (FDI) attacks into ML-based RUL prediction and investigates their impact on lifetime prediction. Three attack templates of FDI attacks are implemented to corrupt the input data of extreme gradient boosting (XGBoost), extra trees (ETs) and random forest (RF)-based lifetime predictor models, where the single attack templates are found to be more severe than a mixed attack template. Adversarial training is then presented as a countermeasure, with the adversarially trained XGBoost model outperforming the other two models under both normal conditions and cyber-attacks. Experimental results indicate that the lifetime prediction errors of the proposed model can be maintained at about 6 RMSE and 3 MAE across all scenarios.
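The attack-and-defend loop in the abstract can be sketched as follows. This is an illustrative toy on synthetic data, not the paper's setup: the random forest model, the bias-style FDI template `fdi_bias`, the synthetic RUL target, and all parameters are assumptions chosen only to show how corrupting input features degrades a clean-trained predictor and how augmenting the training set with attacked copies (adversarial training) mitigates that degradation, measured in RMSE and MAE.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in: 6 "sensor" features -> a remaining-useful-life-style target.
X = rng.normal(size=(1000, 6))
y = X @ np.array([3.0, -2.0, 1.5, 0.5, 0.0, 4.0]) + rng.normal(scale=0.5, size=1000)
X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]

def fdi_bias(X, offset=2.0, cols=(0, 5)):
    """Hypothetical single-template FDI attack: add a constant bias
    to a subset of sensor readings (columns)."""
    Xa = X.copy()
    Xa[:, list(cols)] += offset
    return Xa

# Baseline predictor trained on clean data only.
clean_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Adversarial training: augment the training set with attacked copies of the
# inputs, keeping the original labels, so the model learns attack-robust mappings.
X_adv = np.vstack([X_tr, fdi_bias(X_tr)])
y_adv = np.concatenate([y_tr, y_tr])
adv_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_adv, y_adv)

# Evaluate both models on FDI-corrupted test inputs.
X_te_att = fdi_bias(X_te)
results = {}
for name, model in [("clean", clean_model), ("adversarial", adv_model)]:
    pred = model.predict(X_te_att)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    results[name] = (rmse, mae)
    print(f"{name}: RMSE={rmse:.2f}  MAE={mae:.2f}")
```

Under this bias attack, the clean-trained model's predictions shift systematically, while the adversarially trained model, having seen attacked samples with their true labels, keeps noticeably lower RMSE and MAE on the corrupted test set.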