Why Do Papers with a '?' in the Title Achieve a 45% Acceptance Rate? A Five‑Year ICLR Keyword Analysis
Analyzing five years of ICLR submission metadata reveals that 2022 titles containing a question mark were accepted at 45.5%, well above the overall rate; that emerging keywords such as diffusion, sparse, and planning dominate the high-acceptance lists; and that older topics like federated learning, adversarial attacks, and security suffer low acceptance and high withdrawal rates.
The authors collected full ICLR submission metadata for 2020–2026 from the Paper Copilot dataset and built an open-source retrieval tool (GitHub: https://github.com/binyxu/iclr-accept-rate) that supports AND/OR logic and regular-expression queries across the title, abstract, and TL;DR fields.
Using this tool, they first examined overall acceptance rates: the official 2026 figure is 27.4%, while the 2022 overall rate was about 32%. A striking outlier appears when the title contains a question mark: the 88 papers with "?" in their 2022 titles enjoyed a 45.5% acceptance rate, roughly 13 percentage points above the baseline.
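As a rough illustration of this kind of query, here is a minimal sketch in pandas. The CSV export path, the column names (`year`, `title`, `status`), and the status labels are assumptions for illustration, not the actual schema of the Paper Copilot dataset or the authors' tool:

```python
# Minimal sketch: acceptance rate of question-mark titles vs. the baseline.
# Assumes a hypothetical CSV export with columns `year`, `title`, `status`,
# where `status` holds labels like "Accept (Poster)", "Reject", "Withdraw".
import pandas as pd

df = pd.read_csv("iclr_submissions.csv")  # hypothetical export path

is_2022 = df["year"] == 2022
accepted = df["status"].str.startswith("Accept").fillna(False)
has_qmark = df["title"].str.contains(r"\?", regex=True, na=False)

baseline = accepted[is_2022].mean()
qmark_rate = accepted[is_2022 & has_qmark].mean()

print(f"2022 baseline acceptance: {baseline:.1%}")
print(f"2022 acceptance for '?' titles: {qmark_rate:.1%} "
      f"over {(is_2022 & has_qmark).sum()} papers")
```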
Next, they ranked keywords by yearly acceptance rate. The leading terms for each year (with their acceptance rates) are listed below, followed by a sketch of how such a ranking can be computed:
2022: language (43.0%), fl (41.7%)
2023: diffusion (34.5%), 3d (34.1%), large (31.6%)
2024: sparse (43.0%), zero (38.8%), from (37.5%)
2025: planning (44.2%), how (41.7%), flow (40.0%)
2026: less (39.3%), geometry (36.9%), manipulation (36.2%)
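A ranking like this can be sketched under the same hypothetical schema as above. The sketch tokenizes titles only, whereas the authors' tool also searches the abstract and TL;DR fields; the minimum-count cutoff is a guess:

```python
# Minimal sketch: rank keywords by acceptance rate within each year.
# Same hypothetical schema as above; tokenization and threshold are guesses.
import pandas as pd

df = pd.read_csv("iclr_submissions.csv")
df["accepted"] = df["status"].str.startswith("Accept").fillna(False)

# Explode each title into lowercase tokens so we can group by (year, keyword).
tokens = (
    df.assign(keyword=df["title"].str.lower().str.findall(r"[a-z0-9]+"))
      .explode("keyword")
)

stats = (
    tokens.groupby(["year", "keyword"])["accepted"]
          .agg(rate="mean", n="size")
          .query("n >= 30")  # drop rare keywords; the cutoff is arbitrary
          .sort_values("rate", ascending=False)
)

# Top three keywords per year by acceptance rate.
print(stats.groupby(level="year").head(3))
```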
These high‑acceptance keywords reflect shifting research hotspots: diffusion in 2023, sparse and zero‑shot learning in 2024, planning and flow in 2025, and geometry‑related topics in 2026.
The analysis also highlights consistently low-acceptance terms. For example, federated fell from 18.4% in 2022 to 16.0% in 2026; adversarial stood at only 16.9% in 2023; and security and backdoor came in at 16.3% and 13.5% respectively in 2026. The three lowest-acceptance keywords in 2026 were poison (10.8%), quantum (11.5%), and tabular (13.3%).
Beyond acceptance percentages, the authors examined oral rates, withdrawal rates, and desk-reject statistics. For the keyword RLVR, 32 submissions yielded a 37.5% overall acceptance rate, but every accepted paper was a poster (0% oral or spotlight), the withdrawal rate was 25.0%, and there was one desk reject. In contrast, the diffusion-language query returned 53 submissions with a 41.51% overall acceptance rate, a 13.64% oral rate among accepted papers (3 orals), and a relatively low 15.1% withdrawal rate, indicating a healthy, breakthrough-driven research area.
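A decision breakdown like this can be sketched the same way. The regex patterns and the year are illustrative, and the status labels remain assumptions:

```python
# Minimal sketch: decision breakdown for a title keyword or phrase.
# Assumes `status` labels such as "Accept (Oral)", "Accept (Spotlight)",
# "Accept (Poster)", "Reject", "Withdraw", "Desk Reject".
import pandas as pd

df = pd.read_csv("iclr_submissions.csv")

def breakdown(df: pd.DataFrame, pattern: str, year: int) -> pd.Series:
    """Share of each decision among submissions whose title matches `pattern`."""
    hits = df[(df["year"] == year)
              & df["title"].str.contains(pattern, case=False, regex=True, na=False)]
    return hits["status"].value_counts(normalize=True)

# e.g. the two queries discussed above (year chosen for illustration):
print(breakdown(df, r"\brlvr\b", 2026))
print(breakdown(df, r"diffusion.*language", 2026))
```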
High withdrawal or desk-reject rates (up to one-third of submissions) appear in many "hot" topics, suggesting that chasing trends is risky when experimental results are weak or reviewer feedback is harsh. The authors warn that while some engineering-focused directions (e.g., LLM agents) can still achieve strong oral rates, incremental fine-tuning without systemic innovation leads to reviewer fatigue.
Finally, the authors encourage readers to use the tool to evaluate their own research directions, compare acceptance and withdrawal statistics, and avoid “pitfalls” by selecting topics with favorable acceptance‑to‑withdrawal ratios.
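The acceptance-to-withdrawal comparison the authors suggest could be computed as a simple ratio, again over the hypothetical schema used above; higher values indicate topics where accepted papers outnumber withdrawals:

```python
# Minimal sketch: acceptance-to-withdrawal ratio for a topic keyword.
import pandas as pd

df = pd.read_csv("iclr_submissions.csv")

def accept_withdraw_ratio(df: pd.DataFrame, pattern: str) -> float:
    """Accepted papers per withdrawal among title matches (higher is healthier)."""
    hits = df[df["title"].str.contains(pattern, case=False, regex=True, na=False)]
    accepted = hits["status"].str.startswith("Accept").fillna(False).sum()
    withdrawn = hits["status"].eq("Withdraw").sum()
    return accepted / max(withdrawn, 1)  # guard against zero withdrawals

print(accept_withdraw_ratio(df, "diffusion"))
print(accept_withdraw_ratio(df, "poison"))
```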