Tencent Technical Engineering
Apr 23, 2026 · Artificial Intelligence

Tencent Hunyuan Launches Hy3 Preview: Open‑Source Model Boosts Agent Performance

On April 23, Tencent released the open‑source Hy3 preview, a 295 B‑parameter mixture‑of‑experts model with 21 B active parameters and a 256K context window. It delivers substantial gains in complex reasoning, instruction following, coding, and agent tasks, achieves 40 % faster inference at lower cost, and posts strong benchmark results across Tencent's AI products.

Hy3-preview · Inference Efficiency · Large Language Model
9 min read
Machine Learning Algorithms & Natural Language Processing
Apr 7, 2026 · Artificial Intelligence

Can AI Self‑Evolve? New Meta Research Redefines Agent Rules

A recent Meta‑led study introduces HyperAgents, a framework that pairs task agents with meta‑agents to enable metacognitive self‑modification. It shows significant gains on coding benchmarks, paper review, robotics reward design, and Olympiad‑level math grading, while highlighting emerging safety risks as AI systems begin to rewrite their own improvement mechanisms.

Darwin Gödel Machine · HyperAgents · benchmark results
10 min read
AI Frontier Lectures
Nov 25, 2025 · Artificial Intelligence

How RoMa v2 Achieves Harder, Better, Faster, Denser Feature Matching

RoMa v2 introduces a two‑stage matching‑then‑refinement pipeline powered by DINOv3 features, custom CUDA kernels, and diverse training data, delivering state‑of‑the‑art accuracy, speed, and pixel‑level uncertainty estimation across a wide range of dense matching benchmarks.

DINOv3 · RoMa v2 · benchmark results
10 min read
Bighead's Algorithm Notes
Oct 17, 2025 · Artificial Intelligence

Exploring MLLM4TS: A Universal Multimodal Framework for Time‑Series Analysis

This article reviews MLLM4TS, a framework that fuses visual representations of multivariate time series with large language models to address complex temporal dependencies, cross‑channel interactions, and task generalization. It demonstrates superior performance on classification, anomaly detection, forecasting, and few‑shot scenarios across multiple benchmarks.

Ablation Study · Few‑Shot Learning · Time Series Analysis
11 min read
AI Frontier Lectures
May 25, 2025 · Artificial Intelligence

Can Alternating Generation‑Reduction Make LLMs Think Faster? Introducing PENCIL

The paper presents PENCIL, a reasoning paradigm that alternates generation with erasure to achieve optimal space‑time complexity for chain‑of‑thought tasks. It dramatically improves accuracy and efficiency on hard SAT, QBF, and Einstein‑puzzle benchmarks, and is provably Turing‑complete.

PENCIL · benchmark results · chain of thought
12 min read
AIWalker
Apr 6, 2025 · Artificial Intelligence

NOVA: Redefining Autoregressive Visual Modeling Without Vector Quantization

NOVA introduces an efficient autoregressive video‑generation framework that eliminates vector quantization by combining frame‑by‑frame causal prediction with set‑by‑set spatial attention. It achieves state‑of‑the‑art quality on VBench and GenEval while offering strong zero‑shot generalization across text‑to‑image and text‑to‑video tasks.

NOVA · autoregressive video generation · benchmark results
14 min read