How to Install and Configure Ollama Locally for a CRM AI Engine

This guide walks through installing Ollama on Windows 10, downloading a Chinese-friendly LLM such as Qwen2, configuring the CRM's application-dev.yml to point at the local Ollama service, restarting the backend, and handling optional CORS settings. It also explains why a local deployment is worthwhile: zero cost, privacy, and stability.


Background

Because the client wants to avoid ongoing API fees for hosted large-model services, a local deployment is required. Ollama was chosen as the runtime, paired with a model such as Qwen2 that handles Chinese well.

Step 1 – Install Ollama

Download the Windows installer from the official Ollama website.

Run OllamaSetup.exe; after installation the Ollama icon appears in the system tray.
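
To confirm the installation succeeded, you can open a terminal and ask the CLI for its version (the installer normally adds ollama to the PATH):

ollama --version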

Screenshot of Ollama installation

Step 2 – Download a Local LLM

Qwen2 or Llama 3 are recommended for good Chinese support and moderate hardware requirements.

Open PowerShell or CMD.

Run the following command (example uses the 7B version):
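
ollama run qwen2

Here qwen2 is assumed to be the default tag, which maps to the 7B build; whatever tag you pull must match the model name configured in the CRM below. The first run downloads several gigabytes of weights, after which an interactive chat session opens.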

Wait until the >>> prompt appears, then test the model with a simple greeting such as “你好” (hello).

Screenshot of model download

Step 3 – Configure the CRM Backend

Edit the CRM’s application-dev.yml (or application.yml) and set the AI engine parameters to point to the local Ollama service.

crm:
  ai:
    # Ollama provides an OpenAI‑compatible interface on local port 11434
    base-url: http://localhost:11434/v1
    # Local models usually don’t need a key; any placeholder string passes validation
    api-key: ollama
    # Must match the model name used with 'ollama run'
    model: qwen2
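
Before touching the backend, it is worth confirming that the endpoint answers at all. From CMD (in PowerShell, use curl.exe, since plain curl is an alias for Invoke-WebRequest), a minimal check against the OpenAI-compatible route might look like this:

curl http://localhost:11434/v1/chat/completions -H "Content-Type: application/json" -d "{\"model\": \"qwen2\", \"messages\": [{\"role\": \"user\", \"content\": \"hello\"}]}"

A JSON response containing a chat completion means the service is reachable and the model name resolves.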

Step 4 – Restart the Service and Verify

Restart the Java backend so the new configuration takes effect.
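
How the restart looks depends on how the backend is launched; if it is a plain Spring Boot jar started from a console, it is simply a matter of stopping the process and starting it again (the jar name below is a placeholder):

java -jar crm-backend.jar --spring.profiles.active=dev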

Open the CRM system, navigate to any customer detail page, click the AI assistant in the lower‑right corner, and select “Sales Strategy Suggestion” to confirm the model responds.

Advanced Tip – Resolve Cross‑Origin Issues (If Needed)

If the frontend calls Ollama directly or the service runs on another machine, set the environment variable OLLAMA_ORIGINS to *:

Search “environment variables” in Windows.

Under “System variables” click “New” and add:
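
Variable name: OLLAMA_ORIGINS
Variable value: *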

Restart the Ollama application.
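
Alternatively, the same system-wide variable can be set from an elevated CMD prompt (a quick command-line equivalent of the steps above; Ollama still needs to be restarted afterwards):

setx OLLAMA_ORIGINS "*" /M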

Why Use a Local Model?

Zero cost – no API token fees.

Privacy – customer and financial data stay within the LAN.

Stability – unaffected by network fluctuations, lower latency.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: AI deployment, Windows, Ollama, Local LLM, CRM integration, Qwen2
Written by

Coder Trainee

Experienced in Java and Python, we share and learn together. For submissions or collaborations, DM us.
