Why Early Adversarial Attacks Still Beat Modern Ones: A Fair Transferability Study

This paper systematically evaluates 23 transferable adversarial attacks and 11 defenses on ImageNet, revealing that early methods like the Diverse Input (DI) attack outperform many newer attacks when hyper-parameters are fairly matched, that diffusion-based defenses give a false sense of security, and that higher transferability often comes at the cost of reduced stealthiness.

Background

Adversarial examples that transfer across models threaten black‑box deep learning systems. Transferability means an adversarial perturbation crafted on one model also misleads other unseen models.
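
To make this concrete, here is a minimal sketch of the standard transfer setup, assuming PyTorch and torchvision: perturbations are crafted against a surrogate model and scored on a separate target model that is never used during crafting. The specific architectures below are illustrative choices, not a claim about the paper's exact pairing.

```python
import torch
from torchvision import models

# Surrogate: gradients are taken here. Target: unseen, used only for scoring.
surrogate = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
target = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1).eval()

def transfer_success_rate(x_adv, y, target_model):
    """Fraction of adversarial inputs (crafted on the surrogate) that also
    fool the unseen target model."""
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```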

Issues in Prior Evaluations

Unfair hyper-parameter settings: comparisons often use different ε, iteration budgets, or optimizer settings, making results incomparable.

Missing stealthiness metrics: most works report only success rate under an Lp norm, ignoring perceptual quality (PSNR, SSIM, LPIPS) and traceability of perturbations.

Methodology

The authors categorize transferable attacks into five stages of the ML lifecycle (data-level, model-level, training-level, inference-level, post-processing) and benchmark 23 representative attacks against 11 defenses (including diffusion-based denoising and real-world vision-API defenses) on ImageNet. All experiments use identical Lp constraints (e.g., ε = 16/255 for ℓ∞), the same optimization budget (number of attack iterations), and the same target models (ResNet-50, Inception-v3, etc.).
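
As a concrete illustration, a fair benchmark fixes the shared attack loop and swaps only the attack-specific component. The sketch below is a minimal ℓ∞ iterative baseline using the ε = 16/255 budget quoted above; the step count and step size are illustrative defaults, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def iterative_linf_attack(model, x, y, eps=16/255, steps=10, alpha=2/255):
    """Shared l_inf attack loop: every attack variant in a fair comparison
    runs under the same eps, iteration budget, and step size."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid image
    return x_adv.detach()
```

Attack variants such as DI then differ only in how they transform the input or the gradient inside this loop, which is what makes the comparison one-to-one.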

Stealthiness is measured with PSNR, SSIM, and LPIPS, together with a novel "attack traceback" analysis that visualizes where perturbations concentrate (high-frequency regions, structured patterns) and evaluates how detectable they are to common detectors.
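
A minimal sketch of computing the three perceptual metrics for one image pair, assuming scikit-image and the pip-installable lpips package:

```python
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net='alex')  # perceptual distance; lower = stealthier

def stealth_metrics(x, x_adv):
    """x, x_adv: HxWx3 float32 numpy arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(x, x_adv, data_range=1.0)
    ssim = structural_similarity(x, x_adv, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW torch tensors scaled to [-1, 1]
    to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None] * 2.0 - 1.0
    lp = lpips_fn(to_t(x), to_t(x_adv)).item()
    return {"psnr": psnr, "ssim": ssim, "lpips": lp}
```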

[Figure: Evaluation pipeline]

Key Findings

The early Diverse Input (DI) attack outperforms many later variants: when hyper-parameters are equalized, DI achieves higher transfer success than newer methods that previously seemed stronger (a minimal DI sketch follows this list).

Diffusion-based defenses give a false sense of security: they appear robust against white-box or adaptive attacks but are easily bypassed by strong transferable black-box attacks.

Transferability vs. stealthiness trade-off: attacks with higher transfer rates produce lower PSNR/SSIM and higher LPIPS, indicating more perceptible perturbations.
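
For reference, the core of DI is a simple random resize-and-pad transform applied to the input at every attack iteration, with gradients flowing through the transformed copy. A minimal sketch follows; the sizes and probability are illustrative, not the original paper's exact values.

```python
import torch
import torch.nn.functional as F

def diverse_input(x, low=224, high=256, p=0.5):
    """Random resize-and-pad in the spirit of the Diverse Input (DI) attack
    (Xie et al., 2019). x: NCHW image batch in [0, 1]."""
    if torch.rand(1).item() > p:
        return x  # with probability 1 - p, leave the input unchanged
    rnd = torch.randint(low, high, (1,)).item()      # random intermediate size
    resized = F.interpolate(x, size=(rnd, rnd), mode='nearest')
    pad = high - rnd
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    # pad back to a fixed canvas; order is (left, right, top, bottom)
    return F.pad(resized, (left, pad - left, top, pad - top), value=0)
```

Inside an attack loop, the loss would be computed on diverse_input(x_adv) instead of x_adv; everything else stays the same.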

[Figure: Result summary]

Recommendations for Future Evaluations

Adopt one‑to‑one, hyper‑parameter‑fair designs for any attack/defense comparison.

Report multiple perception and stealthiness metrics (PSNR, SSIM, LPIPS) together with transferability.

Include attack traceback analysis to understand perturbation patterns and detectability (a simple frequency-domain proxy is sketched after this list).

When testing defenses, incorporate strong transferable black‑box attacks, especially against diffusion/denoising methods.

Release code, hyper-parameters, and evaluation scripts for reproducibility. Repository: https://github.com/ZhengyuZhao/TransferAttackEval
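
One simple starting point for traceback-style analysis is to measure how much of the perturbation's energy sits in high spatial frequencies. The FFT-based proxy below is an illustrative sketch, not the paper's exact procedure.

```python
import numpy as np

def high_freq_energy_ratio(x, x_adv, cutoff=0.25):
    """Share of perturbation energy outside a centered low-frequency band.
    x, x_adv: HxWx3 arrays in [0, 1]; cutoff is a fraction of the spectrum."""
    delta = (x_adv - x).mean(axis=-1)                  # grayscale perturbation
    spec = np.abs(np.fft.fftshift(np.fft.fft2(delta))) ** 2
    h, w = spec.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = spec[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return 1.0 - low / spec.sum()
```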

Future Outlook

The community is urged to standardize fair evaluation practices, broaden stealthiness assessments, and continuously update benchmarks as new attacks and defenses emerge.

Tags: adversarial attacks, ImageNet, deep learning security, defense methods, fair evaluation, transferability
Written by

Data Party THU

Official platform of Tsinghua Big Data Research Center, sharing the team's latest research, teaching updates, and big data news.
