Tag

retrieval augmentation

7 views collected around this technical thread.

DevOps
Sep 13, 2024 · Artificial Intelligence

15 Advanced Retrieval‑Augmented Generation (RAG) Techniques for Production‑Ready AI Solutions

The article outlines fifteen advanced Retrieval‑Augmented Generation (RAG) techniques — from hierarchical indexing and context caching to multimodal alignment and microservice orchestration — explaining how they help transform AI prototypes into scalable, reliable production systems, highlighting common pitfalls, and closing with a call to action.

AI production · LLM · RAG
0 likes · 8 min read
Baidu Tech Salon
Jun 24, 2024 · Artificial Intelligence

Paperpolisher: AI-Powered Academic Paper Translation and Polishing Assistant

Paperpolisher is an AI-powered tool that uses Baidu's ERNIE large model and Comate to translate and polish Chinese academic papers into high-quality English. By leveraging large paper datasets and retrieval augmentation, it streamlines code generation and improves acceptance chances for submissions to top conferences.

AI coding assistant · Baidu Comate · ERNIE large model
0 likes · 9 min read
Alimama Tech
Oct 18, 2023 · Artificial Intelligence

Technical Challenges and Directions for Large‑Model Applications in E‑commerce

Taobao Group’s ten large‑model challenges target e‑commerce AI by demanding domain‑specific pre‑training, multi‑step reasoning, extended context handling, factual reliability, intelligent tool orchestration, robust retrieval integration, fuzzy‑intent tool selection, scalable multi‑objective RLHF, improved query rewriting, and knowledge‑driven recommendation.

E-commerce · RLHF · knowledge hallucination
0 likes · 16 min read
DaTaobao Tech
Oct 18, 2023 · Artificial Intelligence

Large Model Application Challenges for E-commerce

Taobao Group’s ten large‑model e‑commerce challenges call for researchers to build domain‑specific data pipelines, mitigate forgetting, balance expertise with generality, enable multi‑step reasoning, handle long contexts, reduce hallucinations, integrate tool use, improve fuzzy intent detection, apply multi‑objective RLHF, and generate cognitively novel recommendations.

E-commerce · RLHF · knowledge hallucination
0 likes · 14 min read
DataFunSummit
Sep 23, 2023 · Artificial Intelligence

Personalized Large Models: Technical Practice, Challenges, and Future Directions

This article presents a comprehensive overview of personalized large models, covering their definition, four‑fold capabilities (knowledge, personality, emotion, memory), practical applications, challenges such as knowledge hallucination, retrieval‑augmented solutions, and detailed discussions on persona consistency and controllable language style.

AI dialogue systems · knowledge hallucination · language style control
0 likes · 13 min read
Baidu Geek Talk
May 8, 2023 · Artificial Intelligence

Augmented Language Models: Reasoning and External Tool Utilization

The survey shows that once language models exceed roughly ten billion parameters, they spontaneously acquire two complementary abilities: step‑by‑step reasoning, often elicited by chain‑of‑thought prompts or scratch‑pad training, and the capacity to invoke external tools such as search engines, calculators, or robots. Together these abilities let models retrieve up‑to‑date information, perform complex computations, and act in the world, advancing them toward general artificial intelligence.

AI · Prompt Engineering · large language models
0 likes · 20 min read
DataFunSummit
Apr 20, 2023 · Artificial Intelligence

Mengzi Lightweight Model Technology System and Advances in Small‑Scale and Retrieval‑Augmented Pretraining

This presentation introduces the Mengzi lightweight model technology stack, covering large‑scale pre‑training, the motivations for lightweight models, specific techniques such as knowledge and sequence‑relation enhancement, training optimization, model compression, retrieval‑augmented pre‑training, multimodal extensions, open‑source releases, and real‑world applications.

Multimodal · knowledge distillation · large language models
0 likes · 23 min read