AI Security Portal Bot

Analytical Composition of Differential Privacy via the Edgeworth Accountant

Authors: Hua Wang, Sheng Gao, Huanyu Zhang, Milan Shen, Weijie J. Su | Published: 2022-06-09
Privacy Assessment
Federated Learning
Function Definition

Generative Adversarial Networks and Image-Based Malware Classification

Authors: Huy Nguyen, Fabio Di Troia, Genya Ishigaki, Mark Stamp | Published: 2022-06-08
Prompt Injection
Malware Propagation Means
Image Forensics

To remove or not remove Mobile Apps? A data-driven predictive model approach

Authors: Fadi Mohsen, Dimka Karastoyanova, George Azzopardi | Published: 2022-06-08
Data Management System
User Behavior Analysis
Feature Engineering

Gradient Obfuscation Gives a False Sense of Security in Federated Learning

Authors: Kai Yue, Richeng Jin, Chau-Wai Wong, Dror Baron, Huaiyu Dai | Published: 2022-06-08 | Updated: 2022-10-14
Attack Methods against DFL
Poisoning
Reconstruction Durability

Dap-FL: Federated Learning flourishes by adaptive tuning and secure aggregation

Authors: Qian Chen, Zilong Wang, Jiawei Chen, Haonan Yan, Xiaodong Lin | Published: 2022-06-08
Reinforcement Learning
Deep Learning Method
Federated Learning

Rate Distortion Tradeoff in Private Read Update Write in Federated Submodel Learning

Authors: Sajani Vithana, Sennur Ulukus | Published: 2022-06-07
Data Management System
Privacy Assessment
Federated Learning

Group privacy for personalized federated learning

Authors: Filippo Galli, Sayan Biswas, Kangsoo Jung, Tommaso Cucinotta, Catuscia Palamidessi | Published: 2022-06-07 | Updated: 2022-09-04
Privacy Assessment
Poisoning
Federated Learning

Data Stealing Attack on Medical Images: Is it Safe to Export Networks from Data Lakes?

Authors: Huiyu Li, Nicholas Ayache, Hervé Delingette | Published: 2022-06-07
Attack Methods against DFL
Privacy Assessment
Membership Inference

Building Robust Ensembles via Margin Boosting

Authors: Dinghuai Zhang, Hongyang Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, Arun Sai Suggala | Published: 2022-06-07
Poisoning
Robustness
Adversarial Attack Methods

Improving Adversarial Robustness by Putting More Regularizations on Less Robust Samples

Authors: Dongyoon Yang, Insung Kong, Yongdai Kim | Published: 2022-06-07 | Updated: 2023-06-01
Robustness
Adversarial Example
Adversarial Attack Methods