Machine Learning Algorithms & Natural Language Processing
Mar 11, 2026 · Artificial Intelligence

Random Parameter Pruning Boosts Transferable Targeted Attacks Across Model Architectures

The RaPA method introduces random parameter pruning during adversarial generation, creating diverse model variants that markedly increase the success rate of targeted transfer attacks across CNN and Transformer architectures, even against defended models and with higher computational budgets, as demonstrated on ImageNet‑compatible benchmarks.
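As the summary describes it, RaPA's core move is to randomly prune model parameters while generating the adversarial example, so each gradient step sees a slightly different model variant. A minimal sketch of that pruning step, using numpy with an illustrative weight matrix and pruning rate (not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_params(weights, rate):
    """Return a copy of `weights` with a random fraction `rate` zeroed.

    Each call draws a fresh random mask, so repeated calls yield
    diverse model variants from the same base weights.
    """
    mask = rng.random(weights.shape) >= rate
    return weights * mask

# Hypothetical base weights standing in for one layer of a surrogate model.
W = rng.normal(size=(4, 4))

# Three pruned variants; gradients computed through different variants
# give more diverse attack directions, which is the intuition the
# summary attributes to RaPA.
variants = [prune_params(W, rate=0.3) for _ in range(3)]
```

The pruning rate here is a made-up value; in practice it would be a tuned hyper-parameter of the attack.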

CNN · Transformer · adversarial attacks
0 likes · 14 min read
Data Party THU
Feb 1, 2026 · Artificial Intelligence

How Tiny Perturbations Can Fool 95% Accurate Image Classifiers

Despite achieving over 95% accuracy on ImageNet, popular models like ResNet, VGG, and EfficientNet can be easily misled by carefully crafted adversarial examples using FGSM, revealing deep learning’s inherent vulnerability and prompting the need for robust defense strategies.
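FGSM, the attack this summary refers to, perturbs an input along the sign of the loss gradient: x_adv = x + ε·sign(∇ₓL). A minimal sketch on a hand-built logistic classifier (not a real ImageNet model; the weights and epsilon are illustrative) shows how a small perturbation flips the prediction:

```python
import numpy as np

# Hypothetical fixed weights for a tiny logistic classifier.
w = np.array([1.0, -2.0, 3.0])
b = 0.0

def predict_prob(x):
    """Probability of class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """One FGSM step: perturb x along the sign of the loss gradient.

    For logistic loss, the gradient w.r.t. the input is (p - y) * w.
    """
    p = predict_prob(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.5, 0.5])   # clean input, confidently class 1
y = 1.0                           # true label
x_adv = fgsm(x, y, eps=0.6)       # small signed perturbation
# predict_prob(x) is high; predict_prob(x_adv) drops below 0.5,
# so the model's decision flips despite the bounded perturbation.
```

With a real PyTorch model the gradient would come from autograd rather than a closed form, but the single signed step is the same.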

FGSM · PyTorch · adversarial examples
0 likes · 11 min read
Data Party THU
Nov 11, 2025 · Artificial Intelligence

Why Early Adversarial Attacks Still Beat Modern Ones: A Fair Transferability Study

This paper systematically evaluates 23 transferable adversarial attacks and 11 defenses on ImageNet, revealing that early methods like DI outperform many newer attacks once hyper-parameters are fairly matched, that diffusion-based defenses give a false sense of security, and that higher transferability often comes at the cost of reduced stealthiness.

ImageNet · adversarial attacks · deep learning security
0 likes · 8 min read