Databases · 7 min read

How Descartes Vector Database Crushed ANN‑Benchmarks with a 286% Performance Leap

The newly released Descartes vector database from 01.ai outperformed all competitors on six ANN‑Benchmarks datasets, achieving up to a 286% improvement over previous SOTA, thanks to innovations such as full‑navigation‑graph indexing, adaptive neighbor selection, and two‑level quantization, with open‑source code now available on GitHub.

ITPUB

Announcement and Benchmark Success

01.ai introduced a new vector database called Descartes, which immediately captured media attention by dominating the ANN‑Benchmarks leaderboard. In the latest evaluation, Descartes secured first place on six different datasets, surpassing the previous state‑of‑the‑art (SOTA) by up to 286% in queries per second (QPS) while maintaining high recall.

The benchmark suite, ANN‑Benchmarks, is widely regarded as an authoritative testing framework for vector databases and approximate nearest neighbor (ANN) algorithms. It uses standardized queries, real‑world datasets, and metrics such as recall, QPS, and memory consumption to provide a fair, reproducible comparison across solutions.
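To make those metrics concrete, here is a minimal sketch (not the actual ANN‑Benchmarks harness) of how recall@k and QPS are typically computed: recall compares an index's returned neighbor IDs against brute‑force ground truth, and QPS is simply queries divided by wall‑clock time. The data and the `recall_at_k` helper are illustrative assumptions.

```python
import time
import numpy as np

def recall_at_k(approx_ids, true_ids, k=10):
    """Fraction of the true k nearest neighbors the index actually returned."""
    hits = sum(len(set(a[:k]) & set(t[:k])) for a, t in zip(approx_ids, true_ids))
    return hits / (k * len(true_ids))

# Brute-force ground truth on toy data (a stand-in for a dataset's
# precomputed exact answers).
rng = np.random.default_rng(0)
base = rng.standard_normal((1000, 16)).astype(np.float32)
queries = rng.standard_normal((20, 16)).astype(np.float32)

start = time.perf_counter()
dists = np.linalg.norm(base[None, :, :] - queries[:, None, :], axis=2)
true_ids = np.argsort(dists, axis=1)[:, :10]
qps = len(queries) / (time.perf_counter() - start)

print(recall_at_k(true_ids, true_ids, k=10))  # 1.0: ground truth vs. itself
```

In a real run, `true_ids` would be compared against the IDs returned by the index under test, and QPS would be measured over the index's own query loop.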

Technical Innovations Behind the Gains

Descartes achieves its performance edge through several novel techniques:

Full‑Navigation‑Graph Indexing: A multi‑level global thumbnail graph enables precise navigation across the vector space, dramatically reducing the candidate set compared with traditional methods like hash tables, KD‑Tree, or VP‑Tree.

Adaptive Neighbor Selection: Instead of relying on a fixed top‑k or static edge count, each node dynamically chooses optimal neighbor edges based on its own and its neighbors’ distribution, accelerating convergence and improving recall by 15‑30%.

Two‑Level Quantization with Columnar Storage: The system applies a hierarchical quantization scheme to lower per‑vector computation cost, while column‑oriented storage leverages SIMD parallelism, delivering a 2‑3× speedup over classic product‑quantization lookup tables.
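Descartes' internals are only described at this level of detail, but the navigation idea common to graph‑based ANN indexes can be sketched as a best‑first search over a neighbor graph: start at an entry point and repeatedly expand the closest unvisited node until the frontier can no longer improve the result set. The function name, the `ef` beam‑width parameter, and the toy chain graph below are illustrative assumptions, not the Descartes API.

```python
import heapq
import numpy as np

def greedy_graph_search(query, vectors, neighbors, entry, ef=16):
    """Best-first search over a proximity graph.

    Keeps a min-heap frontier of nodes to expand and a max-heap of the
    current ef best results; stops when the closest frontier node is
    already worse than the worst retained result."""
    dist = lambda i: float(np.linalg.norm(vectors[i] - query))
    visited = {entry}
    candidates = [(dist(entry), entry)]   # min-heap: frontier
    best = [(-dist(entry), entry)]        # max-heap (negated): top-ef results
    while candidates:
        d, node = heapq.heappop(candidates)
        if d > -best[0][0] and len(best) >= ef:
            break                         # frontier can no longer improve results
        for nb in neighbors[node]:
            if nb in visited:
                continue
            visited.add(nb)
            dn = dist(nb)
            heapq.heappush(candidates, (dn, nb))
            heapq.heappush(best, (-dn, nb))
            if len(best) > ef:
                heapq.heappop(best)       # drop the current worst result
    return sorted((-d, i) for d, i in best)

# Toy graph: 1-D points 0..9 chained to their immediate neighbors.
vectors = np.arange(10, dtype=float).reshape(-1, 1)
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
print(greedy_graph_search(np.array([7.2]), vectors, neighbors, entry=0, ef=4))
# nearest node (id 7) comes first
```

A multi‑level "thumbnail" graph, as described above, would run this kind of search on progressively finer layers, using each layer's result as the next layer's entry point.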

Additional optimizations—including refined index structures, connectivity guarantees, and hardware‑aware tuning—push overall recall above 99% and keep latency in the millisecond range even on databases containing tens of millions of vectors.

Broader Context and Expert Opinions

While Descartes’ results dominate ANN‑Benchmarks, experts note that other evaluation platforms such as big‑ann (the official NeurIPS competition) and VectorDBBench provide complementary perspectives. Rankings on a single benchmark may not fully reflect real‑world robustness, and rapid advances in vector algorithms mean that today’s leader can be overtaken tomorrow.

Industry observers also highlighted the marketing dynamics: despite the technical achievements, the company’s official website did not showcase the results, and media hype has outpaced formal product announcements.

Open‑Source Release

On March 18, 01.ai announced that the core search kernel of Descartes would be released under an open‑source license for free commercial use. The repository is available at https://github.com/01-ai/Descartes, allowing practitioners to experiment with the full‑navigation‑graph and adaptive neighbor strategies in their own applications.

Performance Illustration

The following chart (log‑scale) shows recall on the horizontal axis and QPS on the vertical axis; points farther to the top‑right indicate superior performance.

[Figure: Benchmark results]
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Quantization, vector-database, adaptive-neighbor, ANN-Benchmarks, Descartes, full-navigation-graph
Written by ITPUB

Official ITPUB account sharing technical insights, community news, and exciting events.