Data Party THU
Apr 16, 2026 · Artificial Intelligence

Can Multimodal LLMs Truly Understand Emotions? Inside the MME-Emotion Benchmark

The MME-Emotion benchmark, introduced by researchers from CUHK and Alibaba Tongyi and accepted at ICLR 2026, provides a large-scale, multimodal evaluation of emotional intelligence in large language models. Its results reveal that current models have limited emotion recognition and reasoning abilities across diverse real-world scenarios.
