A Novel Weights-less Watermark Embedding Method for Neural Network Models

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Conference Proceeding
Citation:
22nd International Symposium on Communications and Information Technologies, ISCIT 2023, 2023, 00, pp. 25-30
Issue Date:
2023-01-01
File:
A_Novel_Weights-less_Watermark_Embedding_Method_for_Neural_Network_Models.pdf (Published version, 1.17 MB, Adobe PDF)
Abstract:
Deep learning-based Artificial Intelligence (AI) technology has recently seen extensive use, and theft of AI models has become a common occurrence. Consequently, many researchers have focused on protecting the Intellectual Property (IP) of trained Neural Network (NN) models. Most recent white-box watermark embedding methods rely on modifying model weights: during training, the weight updates must account for both the original task and the watermark embedding. As a result, the accuracy of the original task degrades and more training time is required. To address this issue, this paper proposes a novel weights-less watermark embedding method for deep neural networks. Rather than embedding the watermark within the NN model weights, it relies on a principle of code matching between the watermark and the weights. The proposed method requires less time than existing white-box watermark embedding methods, and the accuracy of the original task is barely diminished. Moreover, because the NN model weights are left untouched, their statistical distribution remains unchanged, making the watermark harder to detect. Experiments in this paper demonstrate the effectiveness, efficiency, and robustness of the method.
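The abstract does not give implementation details of the code-matching principle. As a purely speculative sketch of how matching (rather than embedding) could work, the example below derives a secret key that maps watermark bits to existing weight signs, leaving the weights themselves unmodified; all function names and the sign-based encoding are illustrative assumptions, not the paper's actual construction.

```python
import random


def generate_matching_key(weights, watermark_bits, seed=0):
    """Hypothetical code matching: for each watermark bit, record the
    index of a weight whose sign already encodes that bit
    (weight >= 0 -> 1, weight < 0 -> 0). The weights are never
    modified; the secret key alone links the watermark to the model."""
    rng = random.Random(seed)
    key = []
    for bit in watermark_bits:
        candidates = [i for i, w in enumerate(weights)
                      if (w >= 0) == (bit == 1)]
        key.append(rng.choice(candidates))
    return key


def extract_watermark(weights, key):
    # Recover the watermark by reading the signs at the keyed indices.
    return [1 if weights[i] >= 0 else 0 for i in key]


# Usage: the key, applied to the untouched weights, reproduces the watermark.
weights = [0.3, -0.7, 1.2, -0.1, 0.5, -2.0]
wm = [1, 0, 0, 1]
key = generate_matching_key(weights, wm, seed=42)
assert extract_watermark(weights, key) == wm
```

Because ownership is proven by the key rather than by altered weights, the statistical distribution of the weights is unchanged, which is consistent with the detection-resistance claim in the abstract.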