Copyright protection for deep learning models utilizing a black-box testing framework
网络安全与数据治理 (Cyber Security and Data Governance)
Qu Xiangyan1,2, Yu Jing1,2, Xiong Gang1,2, Gai Keke3
1. Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China; 2. School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China; 3. School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
Abstract: With the rapid development of generative artificial intelligence technologies, the copyright protection of deep learning models as key technical assets has become increasingly important. Existing copyright protection methods generally adopt deterministic test-sample generation algorithms, which suffer from low selection efficiency and vulnerability to adversarial attacks. To address these issues, this paper proposes a copyright protection method for deep learning models based on a black-box testing framework. The method first introduces a sample generation strategy based on randomized algorithms, which effectively improves testing efficiency and reduces the risk of adversarial attacks. In addition, new test metrics and algorithms are introduced for black-box scenarios, strengthening black-box defense capability while ensuring that each metric is sufficiently orthogonal. Experimental validation shows that the proposed method achieves accurate and reliable copyright judgment and effectively reduces the number of highly correlated metrics.
Key words: generative artificial intelligence; deep learning models; copyright protection; black-box defense
CLC number: TP181
Document code: A
DOI: 10.19358/j.issn.2097-1788.2023.12.001
Citation: Qu Xiangyan, Yu Jing, Xiong Gang, et al. Copyright protection for deep learning models utilizing a black-box testing framework[J]. Cyber Security and Data Governance, 2023, 42(12): 1-6, 13.
Introduction
Driven by the rapid development of generative artificial intelligence, the copyright protection of deep learning models has attracted growing attention. Deep learning models, especially large-scale, high-performance ones, are costly to train and therefore prone to unauthorized copying or reproduction, leading to copyright infringement and economic losses for model owners [1-2]. Traditional copyright protection methods mostly rely on watermarking techniques [3-4], which confirm ownership by embedding a specific watermark into the model. Although such methods can provide definitive ownership verification, they are invasive to the original model and may degrade its performance or introduce new security risks; moreover, they are insufficiently robust against adaptive attacks and emerging model extraction attacks [5-6].
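To make the black-box setting concrete, the sketch below illustrates the general idea behind testing-based ownership verification: a suspect model is queried only through its prediction interface on a set of probe samples, and its outputs are compared with those of the victim model. This is a minimal illustrative example under assumed conventions, not the algorithm proposed in this paper; the probe-set construction, the `agreement_rate` metric, and the 0.9 decision threshold are all hypothetical choices made for illustration.

```python
import numpy as np

def agreement_rate(victim_predict, suspect_predict, probes):
    """Fraction of probe inputs on which two models output the same label.

    Both models are accessed only through their prediction functions,
    matching the black-box setting: no weights or gradients are needed.
    """
    v_labels = np.asarray([victim_predict(x) for x in probes])
    s_labels = np.asarray([suspect_predict(x) for x in probes])
    return float(np.mean(v_labels == s_labels))

def looks_like_a_copy(victim_predict, suspect_predict, probes, threshold=0.9):
    """Flag the suspect model when its behavior on the probe set is
    suspiciously close to the victim's. The threshold here is illustrative;
    in practice it would be calibrated against independently trained models."""
    return agreement_rate(victim_predict, suspect_predict, probes) >= threshold

# Toy usage: three "models" that are simple decision rules over 1-D inputs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probes = rng.uniform(-1.0, 1.0, size=200)           # randomly drawn probe samples
    victim = lambda x: int(x > 0.0)                     # victim model
    copied = lambda x: int(x > 0.02)                    # near-identical copy
    independent = lambda x: int(np.sin(5 * x) > 0.0)    # independently built model
    print(looks_like_a_copy(victim, copied, probes))       # expected: True
    print(looks_like_a_copy(victim, independent, probes))  # expected: False
```

Because verification relies only on input-output behavior, it avoids the invasiveness of watermark embedding; the trade-off, as discussed above, is that the probe samples and metrics must be chosen carefully to resist adaptive and extraction attacks.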