HyperAI Super Neural
Feb 11, 2026 · Artificial Intelligence

Reduce Memory by 75% Using D‑CHAG’s Cross‑Channel Hierarchical Aggregation

Researchers at Oak Ridge National Laboratory introduced D‑CHAG, a distributed cross‑channel hierarchical aggregation method that cuts memory consumption by up to 75% and more than doubles throughput when training massive multi‑channel foundation models on up to 1,024 AMD GPUs, as demonstrated on hyperspectral imaging and weather‑forecasting workloads.
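As a rough intuition for what "hierarchical aggregation" buys you, here is a toy two-level reduction of per-channel partial results (this is an illustrative sketch, not D-CHAG's actual implementation; the worker/group layout and all names are assumptions): partials are first summed within small groups, then the group sums are combined globally, so no single stage has to hold every worker's data at once, yet the result matches a flat reduction.

```python
import numpy as np

# Toy sketch of two-level hierarchical aggregation (NOT the paper's code).
# Each of 8 "workers" holds a partial result over 16 features; we reduce
# within groups of 4 first, then across the group sums.

rng = np.random.default_rng(1)
n_workers, feat, group_size = 8, 16, 4

partials = rng.standard_normal((n_workers, feat))

# Level 1: intra-group reduction (e.g. within a node)
groups = partials.reshape(n_workers // group_size, group_size, feat)
group_sums = groups.sum(axis=1)

# Level 2: inter-group reduction (e.g. across nodes)
hier_total = group_sums.sum(axis=0)

# Reference: flat all-at-once reduction gives the same answer
flat_total = partials.sum(axis=0)
print(np.allclose(hier_total, flat_total))
```

The hierarchical result is numerically equivalent to the flat sum; the gain is that each stage only materializes its own group's data, which is the general idea behind trading a single large aggregation for staged smaller ones.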

D-CHAG · Distributed Training · foundation models
14 min read
ITPUB
Apr 27, 2024 · Databases

How Vector Databases Enable High‑Dimensional Stock Quant Analysis

This interview‑style guide explores how vector databases handle massive, high‑dimensional time‑series data for quantitative stock trading, detailing data scaling challenges, selection criteria, and why the research team chose LanceDB over alternatives for efficient, scalable financial analysis.

AI infrastructure · LanceDB · Time Series Analysis
7 min read
Meituan Technology Team
Apr 11, 2024 · Artificial Intelligence

GPU-Accelerated Mixed Vector-Scalar Retrieval System for Meituan Takeaway Search

Meituan built a GPU‑accelerated mixed vector‑scalar retrieval system that pre‑filters scalar constraints on CPU, stores vectors in GPU memory, and uses IVF and FP16 techniques to achieve over 99% recall and sub‑20 ms 99th‑percentile latency for more than 100 million takeaway search candidates.
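The filter-then-rank split the summary describes can be sketched in a few lines (a minimal illustration, not Meituan's system: the dataset, the `prices` attribute, and brute-force scoring instead of IVF are all assumptions for brevity): scalar constraints cheaply mask the candidate set first, then only the survivors get FP16 vector scoring, which is the part that would run against GPU-resident vectors.

```python
import numpy as np

# Illustrative sketch of mixed vector-scalar retrieval (not Meituan's code):
# step 1 applies a scalar pre-filter CPU-side; step 2 scores the surviving
# candidates in FP16, standing in for a GPU-resident IVF index.

rng = np.random.default_rng(0)
N, D = 10_000, 64

vectors = rng.standard_normal((N, D)).astype(np.float16)  # FP16 halves memory
prices = rng.uniform(5, 100, N)                           # hypothetical scalar attribute
query = rng.standard_normal(D).astype(np.float16)

def search(query, max_price, k=5):
    # Step 1: scalar pre-filter (cheap boolean mask)
    candidates = np.flatnonzero(prices <= max_price)
    # Step 2: inner-product scoring in FP16 over the filtered subset only
    scores = vectors[candidates] @ query
    return candidates[np.argsort(scores)[::-1][:k]]

hits = search(query, max_price=30.0)
print(hits)
```

Filtering before scoring avoids wasting vector-compute on candidates that would be discarded anyway; a production system would replace the brute-force matmul with an IVF index so only a few clusters are probed per query.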

GPU · Performance optimization · approximate nearest neighbor
19 min read