Big Data Technology Tribe
Feb 26, 2026 · Databases

How optimize_indices Improves Query Performance in Lance

The article explains the purpose and inner workings of Lance's optimize_indices function, detailing how it incorporates newly appended data into existing indexes, merges delta indexes, and manages partition adjustments to maintain fast vector and scalar query performance without full re‑training.
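The core idea the summary describes — assigning newly appended rows to existing IVF partitions and merging the resulting delta posting lists, instead of re-training centroids — can be sketched in plain Python. This is a toy illustration (hypothetical `merge_delta` and `nearest_partition` helpers), not Lance's actual implementation:

```python
def nearest_partition(vec, centroids):
    """Assign a vector to its closest existing IVF centroid (no re-training)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: dist2(vec, centroids[i]))

def merge_delta(index, centroids, new_rows):
    """Fold newly appended rows into the existing partitions.

    `index` maps partition id -> list of row ids (a posting list);
    `new_rows` is a list of (row_id, vector) pairs from the delta.
    """
    for row_id, vec in new_rows:
        index.setdefault(nearest_partition(vec, centroids), []).append(row_id)
    return index
```

With centroids at `(0, 0)` and `(10, 10)`, a new row near `(9, 9)` lands in the second partition's posting list while the centroids themselves stay untouched — which is what lets the index absorb appends cheaply.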

IVF · Lance · optimize_indices
8 min read
Alibaba Cloud Developer
Jan 4, 2026 · Databases

Accelerating AliSQL Vector Search with Nodes Cache and SIMD

AliSQL 8.0 introduces a shared Nodes Cache and per‑transaction cache to speed up vector queries, implements RC‑level transaction isolation for read‑only and read‑write operations, and leverages SIMD‑based pre‑computation to dramatically improve high‑dimensional vector distance calculations and concurrency performance.
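One common form of pre-computation for distance calculation — which SIMD then accelerates further — is expanding the squared Euclidean distance as ‖q‖² − 2·q·v + ‖v‖² and caching the ‖v‖² terms once per stored vector. A minimal pure-Python sketch of that trick (the function names are illustrative, not AliSQL's):

```python
def precompute_norms(vectors):
    """Cache the squared norm of every stored vector once, up front."""
    return [sum(c * c for c in v) for v in vectors]

def dist2_batch(q, vectors, norms):
    """Squared distances from query q to all vectors, reusing cached norms.

    Uses the identity ||q - v||^2 = ||q||^2 - 2*(q . v) + ||v||^2,
    so only one dot product per vector is computed at query time.
    """
    qn = sum(c * c for c in q)
    return [qn - 2 * sum(a * b for a, b in zip(q, v)) + n
            for v, n in zip(vectors, norms)]
```

In a real engine the per-vector dot product is the part vectorized with SIMD instructions, and the cached norms avoid recomputing half of each expansion on every query.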

AliSQL · SIMD · cache optimization
9 min read
Alibaba Cloud Observability
Oct 20, 2025 · Artificial Intelligence

How We Boosted Embedding Throughput 16× and Cut Vector Index Costs in a Cloud‑Native Setup

This article examines the high cost and low throughput of embedding vectors in log‑processing scenarios, analyzes the performance bottlenecks of inference frameworks, and details a series of cloud‑native optimizations (switching to vLLM, deploying multiple model replicas with Triton, decoupling tokenization, and priority queuing) that together raise throughput by 16× and reduce per‑token pricing by two orders of magnitude.
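The priority-queuing piece of the summary can be illustrated with the standard-library `heapq` module: latency-sensitive requests get a lower priority number and are served before bulk backfill work, with FIFO order preserved inside each tier. This `TieredQueue` class is a hypothetical sketch, not the article's actual scheduler:

```python
import heapq
import itertools

class TieredQueue:
    """Two-tier request queue for an inference service.

    A lower priority number is served first (e.g. 0 = online query,
    1 = offline backfill); a monotonic counter breaks ties so that
    requests within the same tier come out in FIFO order.
    """
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def put(self, request, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def get(self):
        return heapq.heappop(self._heap)[2]
```

So if two backfill batches are queued and an online query arrives afterwards, the query still dequeues first, keeping interactive latency low while backfill work fills the remaining GPU capacity.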

Embedding · GPU inference · Performance optimization
9 min read