AntTech
Author


Technology is the core driver of Ant's future.

703 Articles · 0 Likes · 1.9k Views · 0 Comments
Recent Articles

Latest from AntTech

AntTech
Feb 5, 2026 · Artificial Intelligence

How Triple Alignment and Rationale Generation Supercharge Knowledge‑Based VQA

This paper presents a lightweight, high‑efficiency framework called Triple Alignment with Rationale Generation (TAG) that transforms knowledge‑based visual question answering into a contrastive learning task, dramatically reducing trainable parameters while achieving state‑of‑the‑art performance on major KVQA benchmarks.

CLIP · Lightweight Model · VQA
0 likes · 7 min read
AntTech
Jan 30, 2026 · Databases

Award-Winning Papers Reveal Databases, AI Typography, and Financial Benchmarks

Three award‑winning papers are examined: OceanBase’s unitized database architecture for billion‑scale map services, a video‑diffusion‑based dynamic typography system that animates text semantically, and the LDBC FinBench financial graph benchmark. The article highlights their design, experimental results, and impact on industry applications.

AI · Graph Benchmark · Text Animation
0 likes · 6 min read
AntTech
Jan 16, 2026 · Databases

Can Multi‑Agent Collaboration Automatically Tune Database Parameters with High Efficiency?

The paper presents CMA+DB, a hierarchical multi‑agent framework that automatically tunes database parameters across diverse workloads by combining classification‑based collaboration, layered training, and joint action selection, achieving superior performance, faster convergence, and strong generalization compared with existing tuning methods.

CMA+DB · Database Tuning · Multi-Agent Reinforcement Learning
0 likes · 9 min read
AntTech
Jan 14, 2026 · Artificial Intelligence

Boosting Secure AI: HAWK Accelerator and FHEFusion Compiler Break New Ground

This article highlights two cutting‑edge works from Ant Group’s research team: HAWK, a fixed‑word key‑switching accelerator that overcomes hardware challenges for fully homomorphic encryption (FHE), and FHEFusion, a compiler framework that introduces operator fusion to dramatically speed up CKKS‑based DNN inference. It showcases their designs, optimizations, and experimental gains.

Compiler Optimization · DNN Inference · Fully Homomorphic Encryption
0 likes · 7 min read
AntTech
Dec 18, 2025 · Artificial Intelligence

How AEnvironment Powers Scalable Agentic RL with a Unified MCP Protocol

AEnvironment is an open‑source, unified environment platform for Agentic Reinforcement Learning that abstracts all resources as services via the MCP protocol, enabling trillion‑scale model training, rapid app generation, benchmark integration, and seamless deployment through a high‑performance ASandbox runtime.

AEnvironment · Agentic RL · Environment Platform
0 likes · 11 min read
AntTech
Dec 11, 2025 · Artificial Intelligence

Unlock Scalable RL: AReaL’s Decoupled Agentic Framework & Single‑Controller Design

This article explains how the open‑source AReaL framework boosts large‑scale reinforcement learning by separating agent execution from training logic, introducing a decoupled Agentic RL service and a Single‑Controller architecture that improves data flow, fault tolerance, and GPU utilization.

Agentic AI · Distributed Training · Open-Source
0 likes · 14 min read
AntTech
Dec 6, 2025 · Artificial Intelligence

FinEval‑KR: Diagnosing Knowledge vs. Reasoning Gaps in Financial Large Language Models

FinEval‑KR, a new EMNLP 2025 evaluation framework co‑authored by Shanghai University of Finance and Economics and Ant Group, separates knowledge coverage from logical reasoning to reveal why financial LLMs often hallucinate on calculation tasks. It introduces KS, RS, and CS metrics and ranks 18 state‑of‑the‑art models on a rigorously curated finance dataset.

Knowledge vs. Reasoning · LLM Evaluation · Finance AI
0 likes · 14 min read
AntTech
Dec 4, 2025 · Artificial Intelligence

How AState Reduces Trillion‑Parameter RL Weight Sync to 6 Seconds

AState is a general‑purpose state data management system for reinforcement‑learning tasks that tackles low I/O efficiency, slow weight synchronization, and state‑recovery challenges. It achieves sub‑10‑second weight sync for trillion‑parameter models through a three‑layer architecture, zero‑redundancy transfers, and hardware‑aware co‑design; the code is openly available on GitHub.

AState · Weight Synchronization · High-Performance Computing
0 likes · 23 min read
AntTech
Nov 27, 2025 · Artificial Intelligence

How AMem NCCL‑Plugin Cuts GPU Memory Overhead for Trillion‑Parameter RL Models

The article explains the design, implementation, and performance of the AMem NCCL plugin, a lightweight extension to NVIDIA's NCCL that enables transparent offloading and rapid recovery of GPU memory during reinforcement‑learning training of trillion‑parameter models. It details the plugin's architecture, APIs, benchmarks, installation steps, and integration guidelines.

ASystem · Distributed Training · GPU
0 likes · 18 min read
AntTech
Nov 21, 2025 · Artificial Intelligence

How Awex Enables Sub‑Second TB‑Scale Weight Sync for Trillion‑Parameter RL Models

Awex is a high‑performance Python framework that synchronizes training and inference weights for trillion‑parameter reinforcement‑learning models in seconds. Using unified weight conversion, metadata management, and NCCL/RDMA transfer plans, it dramatically reduces RL training latency and supports diverse parallelism strategies.

Distributed Training · Python · Weight Synchronization
0 likes · 17 min read