Tag: Dify


Instant Consumer Technology Team
Jun 5, 2025 · Artificial Intelligence

How DeepWiki MCP Boosts AI Code Retrieval and Cuts Hallucinations

This article introduces DeepWiki MCP, explains its advantages, limitations, and integration steps with Cursor and Dify, and shows how its SSE and Streamable HTTP transports enable accurate access to GitHub repository documentation, improving AI‑driven code assistance while noting the challenge of keeping indexed documentation current.

AI · Cursor · DeepWiki
0 likes · 9 min read
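As a taste of the integration the article covers, registering DeepWiki's hosted MCP server in Cursor is a small JSON entry (sketch only; the endpoint URL reflects DeepWiki's public server at the time of writing, so verify it against the current docs):

```json
{
  "mcpServers": {
    "deepwiki": {
      "url": "https://mcp.deepwiki.com/sse"
    }
  }
}
```

This goes in Cursor's `mcp.json`; the same server also exposes a Streamable HTTP endpoint for clients that prefer it over SSE.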
Architect
Mar 26, 2025 · Artificial Intelligence

Agent Memory Mechanisms and Dify Knowledge Base Segmentation & Retrieval Details

This article explains the fundamentals of AI agent memory—including short‑term, long‑term, and working memory types and their storage designs—and then details Dify's knowledge‑base segmentation modes, indexing strategies, and retrieval configurations for effective RAG applications.

Dify · Knowledge Base · LLM
0 likes · 14 min read
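The short‑term memory the article describes is often just a bounded window over recent conversation turns. A minimal illustrative sketch (not Dify's internal implementation):

```python
from collections import deque

class ShortTermMemory:
    """Bounded conversation window: keeps only the most recent turns,
    mimicking a short-term memory layer for an agent."""

    def __init__(self, max_turns: int = 4):
        self.turns = deque(maxlen=max_turns)  # oldest turns fall off automatically

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def as_prompt_messages(self) -> list[dict]:
        return list(self.turns)

memory = ShortTermMemory(max_turns=2)
memory.add("user", "What is RAG?")
memory.add("assistant", "Retrieval-augmented generation.")
memory.add("user", "How does Dify segment documents?")
# Only the 2 most recent turns survive the window.
print(len(memory.as_prompt_messages()))  # → 2
```

Long‑term memory, by contrast, typically lives in external storage (a vector database or knowledge base) and is recalled by retrieval rather than kept in the prompt.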
37 Interactive Technology Team
Mar 12, 2025 · Operations

Implementing Nginx Reverse Proxy for Dify to Access Claude Model

To bypass policy restrictions that block direct AWS Bedrock access from China, the team implemented an Nginx stream‑mode reverse proxy with ssl_preread to route Claude model requests, updated Dify’s docker‑compose hosts, and restarted services, achieving low‑cost, minimal‑impact access without migrating data centers.

AWS · Claude · Dify
0 likes · 4 min read
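The stream‑mode proxy described above can be sketched roughly as follows (hostnames and resolver are illustrative; `ssl_preread` reads the SNI from the TLS ClientHello and forwards traffic without terminating TLS):

```nginx
# nginx.conf -- stream context, outside the http block
stream {
    resolver 8.8.8.8;   # required: proxy_pass below uses a variable

    # Route by SNI; only forward AWS endpoints, drop everything else
    map $ssl_preread_server_name $backend {
        ~\.amazonaws\.com$ $ssl_preread_server_name;
        default            127.0.0.1;
    }

    server {
        listen 443;
        ssl_preread on;            # exposes $ssl_preread_server_name
        proxy_pass $backend:443;   # TLS passthrough, no decryption
    }
}
```

On the Dify side, an `extra_hosts` entry in docker‑compose then points the Bedrock runtime hostname at the proxy's IP, so requests transparently take the proxied path.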
Cognitive Technology Team
Mar 11, 2025 · Artificial Intelligence

Deploying DeepSeek R1:7b Model Locally with Ollama and Building AI Applications Using Dify

This tutorial explains how to set up Ollama for CPU or GPU environments, run the DeepSeek R1:7b large language model, and use the open‑source Dify platform to create and deploy a custom AI application, providing step‑by‑step commands and configuration details.

AI · DeepSeek · Dify
0 likes · 8 min read
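Once Ollama is serving the model (`ollama pull deepseek-r1:7b`, then `ollama serve`), applications can call its local HTTP API. A minimal sketch against Ollama's default endpoint (the URL and field names follow Ollama's public API; adapt as needed):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local API

def build_generate_payload(prompt: str, model: str = "deepseek-r1:7b") -> dict:
    """Body for Ollama's /api/generate; stream=False asks for a single
    JSON object instead of a token stream."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send a prompt to the locally running model."""
    data = json.dumps(build_generate_payload(prompt)).encode()
    req = request.Request(OLLAMA_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("Why is the sky blue?")  # uncomment once Ollama is running locally
```

Dify can then be pointed at the same Ollama endpoint as a model provider instead of calling it directly.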
DevOps
Mar 6, 2025 · Artificial Intelligence

Building Multi-Model Chat Agents with Dify: Integrating DeepSeek‑R1 and Gemini

This article explains how to create a high‑performance multi‑model chat agent on the Dify platform by combining DeepSeek‑R1 for reasoning and Gemini for answer generation, covering the underlying principles, configuration steps, API integration, performance benchmarks, and practical deployment guidance.

API Integration · Chatbot · DeepSeek
0 likes · 12 min read
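The two‑stage pattern behind this design is simple: one model drafts the reasoning, a second model writes the user‑facing answer. A stubbed illustrative sketch (model names and the `call_llm` helper are placeholders; in Dify each stage would be an LLM node):

```python
def call_llm(model: str, prompt: str) -> str:
    """Stub for an LLM API call; swap in real DeepSeek / Gemini clients."""
    return f"[{model}] {prompt[:40]}"

def answer(question: str) -> str:
    # Stage 1: reasoning model produces the chain of thought
    reasoning = call_llm("deepseek-r1", f"Think step by step: {question}")
    # Stage 2: generation model turns the reasoning into a polished answer
    return call_llm(
        "gemini",
        f"Question: {question}\nReasoning: {reasoning}\nWrite the final answer.",
    )

print(answer("What is 17 * 24?"))
```

Splitting reasoning from generation lets each model play to its strength, at the cost of one extra round trip per query.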
Efficient Ops
Feb 25, 2025 · Artificial Intelligence

How to Deploy DeepSeek R1 Locally: A Step‑by‑Step Guide for AI Enthusiasts

This guide explains what DeepSeek R1 is, compares its full and distilled versions, details hardware requirements for Linux, Windows, and macOS, and provides step‑by‑step instructions for local deployment using Ollama, LM Studio, Docker, and visual interfaces like Open‑WebUI and Dify.

AI model · DeepSeek · Dify
0 likes · 9 min read
Alibaba Cloud Infrastructure
Feb 13, 2025 · Artificial Intelligence

Deploying DeepSeek‑R1 671B Distributed Inference Service on Alibaba Cloud ACK with vLLM and Dify

This article explains how to quickly deploy the full‑parameter DeepSeek‑R1 671B model in a multi‑node GPU‑enabled Kubernetes cluster on Alibaba Cloud ACK, covering prerequisites, model parallelism, vLLM‑Ray distributed deployment, service verification, and integration with Dify to build a private AI Q&A assistant.

DeepSeek · Dify · Distributed Deployment
0 likes · 12 min read
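A multi‑node vLLM launch of this kind combines tensor parallelism within a node and pipeline parallelism across nodes on top of a Ray cluster. An illustrative command only (parallel sizes depend on your GPU topology and are assumptions here, not the article's exact values):

```shell
# Example: 8-way tensor parallel per node x 2 pipeline stages = 16 GPUs,
# run on the Ray head node once workers have joined the cluster.
vllm serve deepseek-ai/DeepSeek-R1 \
    --tensor-parallel-size 8 \
    --pipeline-parallel-size 2 \
    --trust-remote-code
```

The resulting OpenAI‑compatible endpoint can then be registered in Dify as a model provider for the private Q&A assistant.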
37 Interactive Technology Team
Dec 9, 2024 · Artificial Intelligence

Optimizing Request Concurrency for LLM Workflows: Rationale, Implementation, and Results

By splitting iterable inputs into parallel LLM calls and batching 20 items per request across three languages within Dify's platform limits, the workflow cuts average runtime by 43–64% and markedly raises success rates, showing that request‑level concurrency dramatically improves throughput for large‑scale translation tasks.

Coze · Dify · LLM
0 likes · 6 min read
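The batching-plus-concurrency idea can be sketched with a thread pool, where each (language, 20‑item batch) pair becomes one parallel request. The `translate_batch` stub stands in for a single LLM call (illustrative only, not the workflow's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def translate_batch(items: list[str], lang: str) -> list[str]:
    """Stand-in for one LLM request that translates a batch of items."""
    return [f"{lang}:{item}" for item in items]

def translate_all(items: list[str], langs: list[str],
                  batch_size: int = 20, max_workers: int = 10) -> dict[str, list[str]]:
    # One task per (language, batch): the request-level concurrency
    # described above, bounded by max_workers to respect platform limits.
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    results: dict[str, list[str]] = {lang: [] for lang in langs}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(translate_batch, b, lang): (lang, i)
                   for lang in langs for i, b in enumerate(batches)}
        # Collect per language in batch order so output order is stable
        for fut, (lang, _) in sorted(futures.items(), key=lambda kv: kv[1]):
            results[lang].extend(fut.result())
    return results

out = translate_all([f"item{i}" for i in range(50)], ["en", "ja", "ko"])
print(len(out["ja"]))  # → 50
```

The same shape applies whether the calls go through Dify, Coze, or a raw API client; the speedup comes from overlapping request latency, not from the platform itself.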
Alibaba Cloud Infrastructure
Oct 17, 2024 · Cloud Native

Deploying Dify on Alibaba Cloud ACK for High Availability and Scalability

This guide explains how to deploy the Dify LLMOps platform on Alibaba Cloud Container Service for Kubernetes (ACK), configuring cloud databases, enabling high‑availability replicas, setting up elastic scaling, and exposing the service via Ingress to create a production‑grade, scalable AI application environment.

ACK · DevOps · Dify
0 likes · 12 min read
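Elastic scaling of this kind is typically expressed as a HorizontalPodAutoscaler over the Dify API deployment. A sketch only, with assumed resource names and namespace (adapt to your actual release):

```yaml
# Illustrative HPA for a Dify API deployment on ACK
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: dify-api
  namespace: dify
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dify-api
  minReplicas: 2        # HA baseline: never fewer than two replicas
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Pairing this with managed cloud databases (rather than in‑cluster Postgres/Redis) is what makes the replicas safely stateless.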
DaTaobao Tech
Aug 30, 2024 · Artificial Intelligence

Overview of Large Model Application Development Platforms: LangChain, Dify, Flowise, and Coze

The article reviews open‑source and commercial large‑model development platforms—LangChain, Dify, Flowise, and Coze—detailing their architectures, low‑code visual tools, model integrations, extensibility, and a step‑by‑step Dify example, and concludes they are essential infrastructure for rapid AI application deployment.

AI Application Development · Dify · Flowise
0 likes · 13 min read
37 Interactive Technology Team
Aug 5, 2024 · Artificial Intelligence

Case Study: Applying AIGC to Component Activity Business with Dify

This case study shows how AIGC on Dify's low‑code platform powers a natural‑language AI assistant that recommends and inserts the best‑fit components from a 200‑plus component library: the team streamlined component selection, built an embedding‑based knowledge base, exposed a RAG‑driven agent via API, and validated the AI business case far faster than a custom framework would have allowed.

AI Agent · AIGC · Dify
0 likes · 8 min read
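Exposing the agent via API means business systems call the Dify app like any HTTP service. A sketch of such a call using the shape of Dify's chat endpoint (the base URL, key, and field names follow Dify's public service API docs as I understand them; verify against your Dify version):

```python
import json
from urllib import request

API_BASE = "https://api.dify.ai/v1"   # or your self-hosted Dify URL
API_KEY = "app-..."                   # placeholder app API key

def build_chat_request(query: str, user: str) -> request.Request:
    """Build a request against Dify's chat-messages endpoint."""
    body = json.dumps({
        "inputs": {},
        "query": query,
        "response_mode": "blocking",  # one JSON response instead of a stream
        "user": user,                 # stable end-user identifier
    }).encode()
    return request.Request(
        f"{API_BASE}/chat-messages",
        data=body,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )

req = build_chat_request("Recommend a countdown component for a launch page",
                         "designer-01")
# with request.urlopen(req) as resp:          # requires a real app key
#     print(json.loads(resp.read())["answer"])
```

Because the RAG agent lives behind this one endpoint, the component knowledge base can be re‑indexed or the prompt retuned without any change on the caller's side.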