Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Date: 2021-01-17
Tags: Poison, machine learning
Paper summary: In this work, we study a new type of attack called a clean-label attack, in which the training examples injected by the attacker are correctly labeled by a certifying authority rather than maliciously mislabeled by the attacker. Our strategy assumes the attacker has no knowledge of the training data but does know the model and its parameters. The attacker's goal is that, after the network is retrained on the augmented dataset containing the poison instances, the retrained network misclassifies a particular test instance from one class into another class of her choosing. Apart from the intended misprediction on the target, the victim classifier shows no noticeable drop in performance.
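For context on how such poisons can be constructed (the summary above states only the threat model and goal), the paper's attack builds each poison by a feature-collision optimization: starting from a correctly labeled base image, it perturbs the image so that its penultimate-layer features collide with those of the target instance while the image itself stays close to the base. Below is a minimal PyTorch-style sketch of that idea; the feature_extractor callable, the variable names, and the plain Adam loop (used here in place of the paper's forward-backward splitting procedure) are illustrative assumptions, not the authors' exact implementation.

import torch

def craft_poison(feature_extractor, base_img, target_img,
                 beta=0.1, lr=0.01, steps=1000):
    # Clean-label poison crafting by feature collision (illustrative sketch).
    # The poison starts from a base image of the attacker's chosen class and
    # keeps that clean label; it is optimized to sit close to the target
    # instance in feature space while staying close to the base in input space.
    poison = base_img.clone().detach().requires_grad_(True)
    with torch.no_grad():
        target_feat = feature_extractor(target_img)  # fixed target features

    optimizer = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        feat_loss = torch.norm(feature_extractor(poison) - target_feat) ** 2
        img_loss = beta * torch.norm(poison - base_img) ** 2
        (feat_loss + img_loss).backward()
        optimizer.step()
        poison.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range
    return poison.detach()

Under the paper's threat model, adding the resulting poison (still carrying its clean base-class label) to the training set and retraining the victim network is what then causes the chosen target instance to be misclassified into the base class.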
Related articles
1. [Translation] Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
2. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks: paper reading, reproduction, and reflections
3. Cascade-based attacks on complex networks
4. [Translation] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
5. [S&P 2019 translation] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
6. Reading notes 17: Adversarial Attacks on Neural Networks for Graph Data
7. [Adversarial example generation] Simple Black-Box Adversarial Attacks on Deep Neural Networks
8. Stronger Data Poisoning Attacks Break Data Sanitization Defenses
9. Paper Notes: A Comprehensive Survey on Graph Neural Networks
10. [娜璋 paper-reading series] (02) SP2019-Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks