Unlock Seamless Document Search with WeKnora: An Open‑Source LLM‑Powered Retrieval Framework

WeKnora is an open‑source, LLM‑driven document understanding and semantic search framework that extracts structured content from PDFs, Word files, and images, builds a unified knowledge graph, and enables natural‑language queries through a modular RAG architecture with flexible deployment options.

Overview

WeKnora is an open‑source document understanding and semantic search framework built on large language models (LLMs). It parses multi‑format documents such as PDFs, Word files, and images, constructs a unified semantic view, and enables natural‑language querying.

Key Features

Agent Mode: Implements ReACT‑style agents that can invoke built‑in tools, MCP utilities, and web search, iterating until they produce a comprehensive report.

Precise Document Understanding: Extracts and structures content from PDFs, Word files, and images.

Intelligent Reasoning: Uses LLMs to capture document context and user intent for accurate Q&A and multi‑turn dialogue.

Multi‑type Knowledge Bases: Supports FAQ‑style and full‑document knowledge bases, with folder and URL import, tag management, and direct online entry.

Extensible Pipeline: Decouples parsing, embedding, retrieval, and generation so each stage can be swapped or customized.

Hybrid Retrieval: Combines keyword, vector, and knowledge‑graph search, and supports queries across knowledge bases.

Web Search Integration: Ships with DuckDuckGo search behind an extensible search‑engine interface.

MCP Tool Integration: Runs MCP tools via built‑in uvx and npx launchers, with multiple transport methods.

Conversation Strategy: Configurable agent model, normal model, retrieval thresholds, and prompts for precise multi‑turn control.

User‑Friendly Interface: Web UI and a standard REST API for zero‑code onboarding.

Secure & Controllable Deployment: Supports on‑premise and private‑cloud setups; data stays under user control.
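Because the REST API is plain HTTP, a natural‑language query can be sent with curl. Note that the `/api/v1/query` route and the payload field names below are illustrative assumptions, not the documented WeKnora contract; check the project's API reference for the real endpoints:

```shell
# Hypothetical query against the WeKnora backend.
# API_BASE matches the Quick Start default port; the route and
# payload fields are assumptions for illustration only.
API_BASE="${WEKNORA_API:-http://localhost:8080}"
PAYLOAD='{"query":"Summarize the warranty terms","knowledge_base_id":"kb-demo"}'

# Only send the request if the backend is reachable, so the script
# degrades gracefully when the stack is not running.
if curl -sf -o /dev/null --max-time 2 "$API_BASE"; then
  curl -s -X POST "$API_BASE/api/v1/query" \
    -H 'Content-Type: application/json' \
    -d "$PAYLOAD"
else
  echo "WeKnora backend not reachable at $API_BASE"
fi
```

The same pattern works for upload and knowledge‑base management calls; only the route and payload change.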

Technical Architecture

Document Processing Layer: Parses and preprocesses PDFs, Word files, and images.

Knowledge Modeling Layer: Generates deep representations via vectorization, chunking, and knowledge‑graph techniques.

Retrieval Engine Layer: Fuses keyword, vector, and knowledge‑graph strategies for efficient and accurate recall.

Inference Generation Layer: Leverages LLMs for understanding and answer generation, with integrated agent reasoning.

Interaction Presentation Layer: Provides a web UI and standard API for client interaction.

The architecture allows flexible swapping of LLM backends (e.g., Ollama, Qwen, DeepSeek) and vector databases, while retaining full control in private deployments.
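In practice, swapping backends typically comes down to environment configuration. The variable names below are a sketch, not the real keys; the authoritative list is in `.env.example` in the repository:

```shell
# Illustrative .env fragment -- every variable name here is an
# assumption; consult .env.example for the keys WeKnora actually reads.
LLM_BACKEND=ollama                      # e.g. ollama, qwen, deepseek
OLLAMA_BASE_URL=http://localhost:11434  # default Ollama endpoint
EMBEDDING_MODEL=nomic-embed-text        # embedding model served by the backend
VECTOR_DB_URL=postgres://weknora:secret@localhost:5432/weknora
```

Keeping these choices in the environment file is what lets the same compose stack run against different model servers or vector stores without code changes.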

Quick Start

Prerequisites

Docker

Docker Compose

Git

Installation Steps

1. Clone the repository

git clone https://github.com/Tencent/WeKnora.git
cd WeKnora

2. Copy the example environment file and edit required variables

# Copy example config
cp .env.example .env
# Edit .env to set model endpoints, vector DB credentials, etc.

3. Start all services (includes Ollama as the LLM runtime)

./scripts/start_all.sh
# or using Make
make start-all

4. Stop the services

./scripts/start_all.sh --stop
# or
make stop-all

Service Endpoints

Web UI: http://localhost
Backend API: http://localhost:8080
Jaeger tracing UI: http://localhost:16686
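When scripting against these endpoints, it helps to wait for the backend to come up rather than racing the container startup. A minimal sketch; the health path in the usage example is an assumption, so substitute whatever route the backend actually exposes:

```shell
# Poll a URL until it responds or the retry budget runs out.
# Useful right after ./scripts/start_all.sh, since the containers
# take a moment to boot.
wait_for_weknora() {
  url="$1"
  tries="${2:-30}"   # default: roughly 30 seconds of polling
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f makes curl fail on HTTP errors; --max-time bounds each probe.
    if curl -sf -o /dev/null --max-time 2 "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Usage (the /health path is an assumption):
#   wait_for_weknora http://localhost:8080/health
```

The function returns 0 as soon as the endpoint answers and 1 after the retry budget is exhausted, so it composes cleanly with `&&` in setup scripts.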

Open‑Source Repository

GitHub: https://github.com/Tencent/WeKnora

Tags: AI, LLM, RAG, Search, WeKnora
Written by

Java Backend Technology

Focus on Java-related technologies: SSM, Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading. Occasionally cover DevOps tools like Jenkins, Nexus, Docker, and ELK. Also share technical insights from time to time, committed to Java full-stack development!
