AI Explorer
Apr 24, 2026 · Artificial Intelligence
Google’s ‘Banana’ Model Redefines Visual Transformers with Dynamic Sparse Attention
Google’s newly unveiled “Banana” visual Transformer introduces dynamic sparse attention that cuts inference cost 3‑5×, reduces memory use by 70%, and improves ImageNet accuracy, while demonstrating real‑world gains in autonomous driving, medical imaging, and satellite analysis.
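Google has not published Banana’s implementation, but the core idea behind dynamic sparse attention can be illustrated generically: instead of every query attending to every key, each query attends only to its top‑k highest‑scoring keys, which is where the compute and memory savings come from. The sketch below is a toy NumPy illustration of that top‑k selection idea; the function name, the top‑k mechanism, and all parameters are illustrative assumptions, not Banana’s actual design.

```python
import numpy as np

def dynamic_sparse_attention(Q, K, V, k=4):
    """Toy dynamic sparse attention: each query attends only to its
    top-k highest-scoring keys rather than all keys.
    Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> output (n_q, d_v).
    (Illustrative sketch only; not Google's Banana implementation.)"""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # full score matrix (n_q, n_k)
    # Keep only the top-k keys per query; mask the rest to -inf.
    topk_idx = np.argpartition(scores, -k, axis=-1)[:, -k:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, topk_idx, 0.0, axis=-1)
    masked = scores + mask
    # Softmax over the surviving keys only (exp(-inf) = 0 drops the rest).
    masked -= masked.max(axis=-1, keepdims=True)
    w = np.exp(masked)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V
```

With `k` equal to the number of keys this reduces to dense attention; shrinking `k` trades a small amount of accuracy for roughly proportional savings in attention compute, which is the trade‑off the article’s 3‑5× inference figure alludes to.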
Dynamic Sparse Attention · Google · ImageNet
