AI Security Portal Bot

Neural Networks with (Low-Precision) Polynomial Approximations: New Insights and Techniques for Accuracy Improvement

Authors: Chi Zhang, Jingjing Fan, Man Ho Au, Siu Ming Yiu | Published: 2024-02-17 | Updated: 2024-06-07
Model Design and Accuracy
Model Performance Evaluation
Approximation Error of Negative Inputs
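The tags for this entry point at the approximation error that low-degree polynomial activations incur on negative inputs. As a generic illustration only (not the paper's approximation scheme), the sketch below fits a degree-4 least-squares polynomial to ReLU on a bounded interval and compares the maximum error on the negative and positive halves, where ReLU should be exactly zero and the identity, respectively.

```python
# Illustrative sketch (not the paper's method): fit a low-degree polynomial
# to ReLU on a bounded interval and inspect how the approximation error
# behaves on negative inputs, where ReLU is exactly zero.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Fit a degree-4 polynomial to ReLU on [-5, 5] by least squares.
xs = np.linspace(-5.0, 5.0, 2001)
coeffs = np.polyfit(xs, relu(xs), deg=4)
poly = np.poly1d(coeffs)

# Compare maximum absolute error on the negative and positive halves.
neg, pos = xs[xs < 0], xs[xs >= 0]
err_neg = np.max(np.abs(poly(neg) - relu(neg)))
err_pos = np.max(np.abs(poly(pos) - relu(pos)))
print(f"max |error| on negative inputs: {err_neg:.4f}")
print(f"max |error| on positive inputs: {err_pos:.4f}")
```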

DART: A Principled Approach to Adversarially Robust Unsupervised Domain Adaptation

Authors: Yunjuan Wang, Hussein Hazimeh, Natalia Ponomareva, Alexey Kurakin, Ibrahim Hammoud, Raman Arora | Published: 2024-02-16
Algorithm
Adversarial Training
Watermark Evaluation
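The listing tags this entry with "Adversarial Training" but does not describe DART's algorithm. As a minimal, generic sketch of adversarial training (explicitly not DART itself), the code below runs FGSM-style adversarial training of a logistic-regression classifier on synthetic data: each step perturbs the inputs in the loss-increasing direction and then updates the model on those perturbed inputs.

```python
# Illustrative sketch (not DART itself): FGSM-style adversarial training of a
# logistic-regression classifier on synthetic 2-D data.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 2, 0.1, 0.5

# Two Gaussian blobs, labels in {0, 1}.
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(+1, 1, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
w, b = np.zeros(d), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(100):
    # FGSM: perturb each input in the direction that increases its loss.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # d(logistic loss)/d(x)
    X_adv = X + eps * np.sign(grad_x)

    # Standard gradient step, but on the adversarial examples.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / n
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")
```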

Private PAC Learning May be Harder than Online Learning

Authors: Mark Bun, Aloni Cohen, Rathin Desai | Published: 2024-02-16
Watermarking
Online Learning
Watermark Evaluation

Using Hallucinations to Bypass GPT4’s Filter

Authors: Benjamin Lemkin | Published: 2024-02-16 | Updated: 2024-03-11
LLM Security
Prompt Injection
Inappropriate Content Generation

On the Impact of Uncertainty and Calibration on Likelihood-Ratio Membership Inference Attacks

Authors: Meiyi Zhu, Caili Guo, Chunyan Feng, Osvaldo Simeone | Published: 2024-02-16 | Updated: 2025-05-13
Membership Inference
Quantification of Uncertainty
Computational Complexity
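This entry concerns likelihood-ratio membership inference attacks. The sketch below shows only the generic likelihood-ratio test behind such attacks, with made-up Gaussian loss statistics standing in for shadow-model estimates; it does not reproduce the paper's analysis of uncertainty and calibration.

```python
# Illustrative sketch of a likelihood-ratio membership inference test:
# model per-example losses under "member" and "non-member" hypotheses as
# Gaussians (e.g. estimated from shadow models) and threshold the ratio.
# The numbers below are synthetic placeholders, not results from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Assumed shadow-model statistics: members tend to have lower loss.
mu_in, sigma_in = 0.2, 0.1     # loss distribution when the datum was trained on
mu_out, sigma_out = 0.8, 0.3   # loss distribution when it was not

# Observed losses for 1000 true members and 1000 true non-members.
loss_members = rng.normal(mu_in, sigma_in, 1000)
loss_nonmembers = rng.normal(mu_out, sigma_out, 1000)

def llr(loss):
    """Log-likelihood ratio: log p(loss | member) - log p(loss | non-member)."""
    return norm.logpdf(loss, mu_in, sigma_in) - norm.logpdf(loss, mu_out, sigma_out)

# Decide "member" when the log-likelihood ratio is positive.
tpr = np.mean(llr(loss_members) > 0)
fpr = np.mean(llr(loss_nonmembers) > 0)
print(f"true positive rate: {tpr:.2f}, false positive rate: {fpr:.2f}")
```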

Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy

Authors: Songjie Xie, Youlong Wu, Jiaxuan Li, Ming Ding, Khaled B. Letaief | Published: 2024-02-16
Privacy Protection Method
Fairness Evaluation
Information Hiding Techniques
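The paper combines fair representation learning with local differential privacy; its specific obfuscation mechanism is not given in this listing. As a generic building block only, the sketch below shows epsilon-LDP randomized response for a binary sensitive attribute, including the standard debiasing of the aggregate estimate.

```python
# Illustrative sketch: epsilon-LDP randomized response for a binary sensitive
# attribute (a generic LDP building block, not the paper's mechanism).
import numpy as np

def randomized_response(bit: int, epsilon: float, rng) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

rng = np.random.default_rng(0)
epsilon = 1.0
true_bits = rng.integers(0, 2, 10_000)
reports = np.array([randomized_response(b, epsilon, rng) for b in true_bits])

# Debiased estimate of the population mean from the noisy reports.
p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
estimate = (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)
print(f"true mean: {true_bits.mean():.3f}, LDP estimate: {estimate:.3f}")
```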

Measuring and Reducing LLM Hallucination without Gold-Standard Answers

Authors: Jiaheng Wei, Yuanshun Yao, Jean-Francois Ton, Hongyi Guo, Andrew Estornell, Yang Liu | Published: 2024-02-16 | Updated: 2024-06-06
Few-Shot Learning
Detection of Hallucinations
Watermark Evaluation

A chaotic maps-based privacy-preserving distributed deep learning for incomplete and Non-IID datasets

Authors: Irina Arévalo, Jose L. Salmeron | Published: 2024-02-15
Privacy Protection Method
Cryptography
Federated Learning
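The title names chaotic maps as the privacy-preserving ingredient, but the listing does not describe the protocol. Purely as an illustration of the idea, the sketch below uses a logistic-map sequence as a shared keystream to mask and later unmask a model-update vector; the parameters and masking scheme are assumptions, not the paper's construction.

```python
# Illustrative sketch: a logistic-map sequence used as a shared keystream to
# mask a model update before transmission (not the paper's protocol).
import numpy as np

def logistic_keystream(x0: float, r: float, length: int) -> np.ndarray:
    """Generate a chaotic sequence x_{n+1} = r * x_n * (1 - x_n)."""
    xs = np.empty(length)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

update = np.array([0.12, -0.40, 0.03, 0.77])           # a client's model update
key = logistic_keystream(x0=0.3141, r=3.99, length=len(update))

masked = update + key          # value sent over the network
recovered = masked - key       # a receiver with the same (x0, r) removes the mask
print(np.allclose(recovered, update))   # True
```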

FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning

Authors: Enrique Mármol Campos, Aurora González Vidal, José Luis Hernández Ramos, Antonio Skarmeta | Published: 2024-02-15
Poisoning
Attack Method
Federated Learning
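FedRDF's dynamic aggregation function is not spelled out in this listing. To illustrate why robust aggregation matters against poisoning, the sketch below contrasts plain federated averaging with a standard robust baseline, the coordinate-wise median, when a single client submits a poisoned update.

```python
# Illustrative sketch: plain federated averaging vs. a coordinate-wise median
# aggregator under one poisoned client update. The median here is a standard
# robust-aggregation baseline, not FedRDF's own aggregation function.
import numpy as np

rng = np.random.default_rng(0)

# Nine honest clients send similar updates; one attacker sends a huge one.
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))
poisoned = np.full((1, 4), 100.0)
updates = np.vstack([honest, poisoned])

fedavg = updates.mean(axis=0)           # mean is dragged toward the attacker
robust = np.median(updates, axis=0)     # median stays close to honest updates

print("FedAvg aggregate:", np.round(fedavg, 2))
print("Median aggregate:", np.round(robust, 2))
```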

How Much Does Each Datapoint Leak Your Privacy? Quantifying the Per-datum Membership Leakage

Authors: Achraf Azize, Debabrota Basu | Published: 2024-02-15
Membership Inference
Hypothesis Testing
Watermark Evaluation