How to Build a Retrieval‑Augmented Generation (RAG) System with OpenSearch LLM and Dify

Learn step‑by‑step how to integrate OpenSearch LLM Intelligent Q&A Edition with the Dify large‑model platform to create a robust Retrieval‑Augmented Generation (RAG) system, covering architecture, workflow setup, API authentication, result parsing, and practical code examples.

Alibaba Cloud Big Data AI Platform

Background

With the rapid evolution of AIGC technology, LLM applications are continuously iterating. Retrieval‑Augmented Generation (RAG) has become a core component in enterprise knowledge bases, intelligent customer service, and e‑commerce recommendation scenarios.

Large Model Application Platforms

Platforms such as Alibaba Cloud Bailian and Dify enable developers to quickly build business applications, and they usually embed RAG capabilities.

However, existing RAG solutions often suffer from poor usability, limited customizability, and weak enterprise‑grade features.

Why Use OpenSearch LLM Intelligent Q&A Edition

OpenSearch LLM provides built‑in data parsing, slicing, vectorization, text and vector search, and multimodal LLM capabilities, making it a suitable RAG engine that can be integrated with platforms such as Bailian and Dify.

Overall Architecture

Developers import a knowledge base into OpenSearch, then a workflow sends user queries to OpenSearch’s RAG system. OpenSearch returns answers, reference links, and images, which the workflow can further process before delivering the final response to the end user.

Building a RAG System in OpenSearch LLM

1. Deploy the RAG System

OpenSearch LLM Intelligent Q&A Edition is a one‑stop RAG product that can be set up within minutes. The console allows visual model selection, prompt customization, and performance tuning.

2. Create and Obtain API Key

Generate a public API domain and an API Key in the OpenSearch console and store them securely.
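One common way to keep such credentials out of source code is to read them from environment variables. The variable names below are illustrative choices for this sketch, not names required by OpenSearch:

```python
import os

def load_opensearch_credentials() -> dict:
    """Read the OpenSearch API domain and API key from environment variables.

    OPENSEARCH_API_DOMAIN and OPENSEARCH_API_KEY are arbitrary names chosen
    for this example; any secret store or variable naming scheme works.
    """
    domain = os.environ.get("OPENSEARCH_API_DOMAIN")
    api_key = os.environ.get("OPENSEARCH_API_KEY")
    if not domain or not api_key:
        raise RuntimeError("OpenSearch credentials are not configured")
    return {"domain": domain, "api_key": api_key}
```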

3. Build the Workflow in Dify

The basic RAG workflow consists of four stages:

Start – capture user input.

HTTP request to OpenSearch LLM – send the query and receive RAG results.

Parse output (code execution) – extract the answer from the JSON response.

Return answer – deliver the final response to the user.
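The four stages above can be sketched as a simple pipeline. The HTTP stage here is a stub that echoes a canned JSON response, standing in for the real OpenSearch request node in a Dify workflow:

```python
import json

def start(user_input: str) -> str:
    """Stage 1: capture and normalize the user's question."""
    return user_input.strip()

def call_opensearch_stub(query: str) -> str:
    """Stage 2 (stub): in the real workflow this is an HTTP request node
    that sends the query to OpenSearch LLM and returns its JSON response."""
    return json.dumps(
        {"result": {"data": [{"answer": f"Stub answer for: {query}"}]}}
    )

def parse_output(body: str) -> str:
    """Stage 3: extract the first answer from the JSON response."""
    data = json.loads(body)
    return data["result"]["data"][0]["answer"]

def answer(user_input: str) -> str:
    """Stage 4: run the full pipeline and return the final response."""
    return parse_output(call_opensearch_stub(start(user_input)))
```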

4. API Authentication Details

Auth type: API‑Key, passed as a Bearer token in the Authorization header, using the API key obtained from the OpenSearch LLM console.

Endpoint URL = public API domain + "/v3/openapi/apps/[app_name]/actions/knowledge-search". Use a JSON request body as defined in the SearchKnowledge documentation.
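Putting the endpoint and authentication together, a request could be assembled as below. The body field names here are a simplified assumption for illustration; the authoritative schema is the SearchKnowledge documentation:

```python
def build_knowledge_search_request(domain: str, app_name: str,
                                   api_key: str, query: str) -> dict:
    """Assemble the URL, headers, and body for a knowledge-search call.

    The Authorization header carries the API key as a Bearer token.
    The body shape is a placeholder, not the full SearchKnowledge schema.
    """
    return {
        "url": f"{domain}/v3/openapi/apps/{app_name}/actions/knowledge-search",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # Placeholder body; consult the SearchKnowledge docs for real fields.
        "body": {"question": {"text": query}},
    }
```

The resulting dict can then be handed to any HTTP client, e.g. `requests.post(req["url"], headers=req["headers"], json=req["body"])`.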

5. Parse the Output

The OpenSearch response is JSON containing the answer, reference links, and images. Developers can extract the answer with a short script:

def main(body: str) -> dict:
    """Extract the first answer from the OpenSearch RAG JSON response."""
    import json
    data = json.loads(body)
    # Answers are listed under result.data; return only the first one.
    return {
        'result': [item['answer'] for item in data['result']['data']][0]
    }
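As a quick check, the script can be exercised against a minimal mock response. The JSON below mirrors only the fields the script reads; real responses also carry reference links and images. The function is repeated here so the snippet runs standalone:

```python
import json

def main(body: str) -> dict:
    """Same parser as above, repeated for a self-contained example."""
    data = json.loads(body)
    return {'result': [item['answer'] for item in data['result']['data']][0]}

# Minimal mock of an OpenSearch response (illustrative shape only).
mock_body = json.dumps({
    "result": {
        "data": [
            {"answer": "RAG combines retrieval with generation."},
            {"answer": "A second candidate answer."},
        ]
    }
})

print(main(mock_body))  # returns only the first answer
```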

Result Preview

Beyond the basic RAG flow, you can create additional workflows to build richer business applications, such as an intelligent dialogue assistant that classifies user intent (post‑sale, product usage, or casual chat) and routes queries to the appropriate knowledge base or to the Qwen model for open‑ended conversation.
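One minimal way to sketch such routing is a keyword-based classifier. The intent labels and keywords below are illustrative; a production workflow would typically use an LLM classifier node in Dify instead:

```python
def classify_intent(query: str) -> str:
    """Naive keyword-based intent classifier (illustrative only)."""
    q = query.lower()
    if any(w in q for w in ("refund", "return", "warranty")):
        return "post_sale"
    if any(w in q for w in ("how do i", "how to", "install")):
        return "product_usage"
    return "casual_chat"

def route(query: str) -> str:
    """Route each intent to its knowledge base, or to the Qwen model
    for open-ended conversation."""
    intent = classify_intent(query)
    if intent == "post_sale":
        return "post-sale knowledge base"
    if intent == "product_usage":
        return "product-usage knowledge base"
    return "Qwen model (open-ended chat)"
```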

Alternatively, developers can use Alibaba Cloud AI Search Studio to access granular capabilities (document parsing, vectorization, search, re‑ranking) and customize Dify tools for fine‑tuned RAG pipelines.

For any RAG‑related questions, join the OpenSearch LLM Intelligent Q&A DingTalk support group (ID: 34895000837) for further assistance.
