Step-by-Step Guide to Building Your First AI Agent: Connecting Alibaba Cloud, OpenAI, Dashscope, DeepSeek, and Ollama

This article provides a detailed, hands‑on tutorial for creating an AI agent, covering registration and API key setup for Alibaba Cloud, OpenAI, Dashscope and DeepSeek, installing and using Ollama for local model deployment, configuring CherryStudio, and implementing function‑calling and MCP techniques with full code examples.


1. Connecting to AI Platforms

1.1 Alibaba Cloud (Dashscope) via OpenAI SDK

Register at https://bailian.console.aliyun.com, obtain the API key and store it in the environment variable Dashscope_API_Key. Install the SDKs:

pip3 install openai
pip3 install python-dotenv
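
For reference, a minimal .env file for this article might look like the following; the variable names match the os.getenv calls used throughout, and the values are placeholders:

Dashscope_API_Key=sk-xxxxxxxxxxxxxxxx
DeepSeek_API_Key=sk-xxxxxxxxxxxxxxxx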

Load the key and create an OpenAI client that points to the Dashscope compatible endpoint:

import os
from openai import OpenAI
from dotenv import load_dotenv
load_dotenv()
client = OpenAI(
    api_key=os.getenv("Dashscope_API_Key"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)
completion = client.chat.completions.create(
    model="qwen-plus",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "你是谁?"}
    ],
    stream=True,
    stream_options={"include_usage": True}
)
for chunk in completion:
    # with include_usage, the final chunk carries usage data and an empty choices list
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

1.2 Direct Dashscope SDK

pip3 install dashscope

The Dashscope SDK also reads the DASHSCOPE_API_KEY environment variable automatically; the example below instead passes the key explicitly, reusing the Dashscope_API_Key variable loaded via dotenv:

import os, dashscope
from dotenv import load_dotenv
load_dotenv()
messages = [
    {"role": "system", "content": "you are a helpful assistant"},
    {"role": "user", "content": "你是谁?"}  # "Who are you?"
]
responses = dashscope.Generation.call(
    api_key=os.getenv("Dashscope_API_Key"),
    model="qwen-plus",
    messages=messages,
    result_format="message",
    stream=True,
    incremental_output=True  # each chunk carries only the newly generated text
)
for response in responses:
    print(response.output.choices[0].message.content, end="")

1.3 DeepSeek

Using the OpenAI‑compatible SDK:

from openai import OpenAI
import os
from dotenv import load_dotenv
load_dotenv()
client = OpenAI(api_key=os.getenv("DeepSeek_API_Key"), base_url="https://api.deepseek.com")
completion = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "你好,你是谁?"}
    ],
    stream=True
)
for chunk in completion:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Both Dashscope and DeepSeek can also be called with raw requests: post a JSON payload to the respective /chat/completions endpoint with stream=True and parse the streamed SSE data: lines, as sketched below.
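
A minimal sketch of that approach against the DeepSeek endpoint (the same pattern works with Dashscope's compatible-mode URL; the parsing assumes the standard OpenAI-style SSE framing with a terminating data: [DONE] line):

import os, json, requests
from dotenv import load_dotenv
load_dotenv()

url = "https://api.deepseek.com/chat/completions"
headers = {
    "Authorization": f"Bearer {os.getenv('DeepSeek_API_Key')}",
    "Content-Type": "application/json"
}
payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "你好,你是谁?"}],  # "Hello, who are you?"
    "stream": True
}
with requests.post(url, headers=headers, json=payload, stream=True) as resp:
    for raw in resp.iter_lines():
        if not raw:
            continue  # skip SSE keep-alive blank lines
        line = raw.decode("utf-8")
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            print(delta, end="", flush=True)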

2. Local Large‑Model Deployment with Ollama

2.1 Install and Verify Ollama

Download Ollama from the official site, then verify the installation:

C:\Users\xiang>ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.13.5

Set environment variables so the service listens on all interfaces and avoids CORS errors when called from a browser:

OLLAMA_HOST=0.0.0.0
OLLAMA_ORIGINS=*
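
On Windows (matching the C:\Users prompt used in this article), these can be set persistently from a terminal; restart Ollama afterwards so they take effect:

setx OLLAMA_HOST 0.0.0.0
setx OLLAMA_ORIGINS "*"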

2.2 Pull the qwen3:8b Model

C:\Users\xiang>ollama run qwen3:8b
pulling manifest
pulling a3de86cd1c13: 100% ███████████████████████████████████████████████████████████▏ 5.2 GB
... (additional layers) ...

2.3 CherryStudio IDE (the original walkthrough is screenshots only)

2.4 Python Calls to the Ollama API

Install the client libraries:

pip install ollama
pip install requests

SDK (single‑turn) example:

from ollama import Client
client = Client(host="http://localhost:11434")
messages = [{"role": "user", "content": "你好,你是谁?"}]  # "Hello, who are you?"
response = client.chat(model="qwen3:8b", messages=messages, think=False)  # think=False disables qwen3's thinking mode
print(response.message.content)

Streaming call:

stream = client.chat(model="qwen3:8b", messages=messages, stream=True)
for chunk in stream:
    print(chunk.message.content, end="", flush=True)

Raw HTTP call:

import requests, json
url = "http://localhost:11434/api/generate"
data = {"model": "qwen3:8b", "prompt": "你好,你是谁?", "stream": True}  # prompt: "Hello, who are you?"
resp = requests.post(url=url, json=data, stream=True)
for line in resp.iter_lines():
    if not line:
        continue
    d = json.loads(line)
    print(d["response"], end="")
    if d.get("done"):  # the final chunk reports token counts
        print(f"\nTotal tokens: {d['eval_count']}")

Multi‑turn conversation uses the /api/chat endpoint with a persistent messages list that carries a system prompt and all prior turns, as sketched below.
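
A minimal multi‑turn loop using the same Ollama SDK client (the system prompt and loop structure here are illustrative, not from the original):

from ollama import Client

client = Client(host="http://localhost:11434")
# the persistent history: the system prompt plus every prior user/assistant turn
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in ("exit", "quit"):
        break
    messages.append({"role": "user", "content": user_input})
    response = client.chat(model="qwen3:8b", messages=messages, think=False)
    reply = response.message.content
    print("Assistant:", reply)
    # append the assistant turn so the next request sees the full context
    messages.append({"role": "assistant", "content": reply})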

3. AI Agent Development Techniques

3.1 Function Calling

Define a JSON schema for a weather‑query function:

functions = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        # description: "query a city's current weather, including temperature and conditions"
        "description": "查询某个城市的当前天气情况,包括温度、天气状况等信息",
        "parameters": {
            "type": "object",
            "properties": {
                # description: "a Chinese city name, e.g. Beijing, Shanghai, Guangzhou"
                "city": {"type": "string", "description": "中国的城市名称,如北京、上海、广州等"}
            },
            "required": ["city"]
        }
    }
}]

Two implementation styles are demonstrated:

Query a live weather API (Gaode/AMap) and return a JSON result.

Return a hard‑coded JSON structure directly from the function (a minimal sketch follows this list).
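
A sketch of the hard‑coded variant; the field names mirror the AMap response used later in section 3.2, and the values here are made up for illustration:

import json

def get_weather(city: str) -> str:
    # hard-coded result; a real implementation would query a weather API
    return json.dumps({
        "success": True,
        "city": city,
        "weather": "晴",  # "sunny"
        "temperature": "25°C",
        "humidity": "40%"
    }, ensure_ascii=False)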

Example of invoking the function after the model returns a tool_calls object:

# after receiving tool_calls
import json
func_name = tool_calls[0].function.name
func_args = json.loads(tool_calls[0].function.arguments)  # the arguments arrive as a JSON string
result = globals()[func_name](**func_args)
# feed result back to the model for the final answer
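
For completeness, a sketch of the standard round trip: the assistant message carrying tool_calls and a tool-role message carrying the result are appended to the history before the second request. This follows the OpenAI tool-calling convention; the original article instead re-prompts with the result inlined, as shown in section 3.2. It assumes the functions schema and get_weather implementation defined above:

import os, json
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()
client = OpenAI(api_key=os.getenv("DeepSeek_API_Key"), base_url="https://api.deepseek.com")
messages = [{"role": "user", "content": "北京今天天气怎么样?"}]  # "What's the weather in Beijing today?"

# first request: the model decides whether to call a tool
msg = client.chat.completions.create(
    model="deepseek-chat", messages=messages, tools=functions
).choices[0].message

if msg.tool_calls:
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = globals()[call.function.name](**args)
    # append the assistant turn (with its tool_calls) and the tool result
    messages.append(msg)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # second request: the model now answers using the tool result
    final = client.chat.completions.create(
        model="deepseek-chat", messages=messages, tools=functions
    ).choices[0].message
    print(final.content)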

3.2 Model Context Protocol (MCP)

FastMCP provides a lightweight server that registers Python functions as tools.

Internal MCP example – write a file:

from mcp.server.fastmcp import FastMCP
import os, json, requests  # requests is needed by get_weather below
mcp = FastMCP()

@mcp.tool()
def write_file(filename: str, content: str) -> str:
    """Write content to D:/Temp/<filename> and return the result as JSON."""
    os.makedirs("D:/Temp", exist_ok=True)
    path = f"D:/Temp/{filename}"
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return json.dumps({"success": True, "filepath": path, "content_length": len(content)})

@mcp.tool()
def get_weather(city: str) -> str:
    """Query a Chinese city's current weather via the AMap (Gaode) API."""
    api_key = "46644f43e3a56ce81f6f3633c5994c67"  # AMap web-service key from the original article
    url = f"https://restapi.amap.com/v3/weather/weatherInfo?city={city}&key={api_key}&extensions=base"
    resp = requests.get(url)
    data = resp.json()
    if data.get("status") == "1" and data.get("lives"):
        w = data["lives"][0]
        return json.dumps({
            "success": True,
            "city": w["city"],
            "weather": w["weather"],
            "temperature": w["temperature"] + "°C",
            "wind_direction": w["winddirection"],
            "wind_power": w["windpower"],
            "humidity": w["humidity"] + "%",
            "report_time": w["reporttime"]
        }, ensure_ascii=False)
    return json.dumps({"success": False, "error": f"无法获取{city}的天气信息"}, ensure_ascii=False)  # error: "unable to fetch weather for {city}"

if __name__ == "__main__":
    mcp.run(transport='stdio')  # local stdio mode
    # or mcp.run(transport='sse') for remote SSE mode

Both the stdio and SSE transports are supported; SSE mode can be started with:

mcp.settings.port = 8000
mcp.run(transport='sse')
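
A sketch of connecting to the SSE server from Python, assuming the official mcp client package and its default /sse endpoint (the tool name and arguments match the server above):

import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # connect to the FastMCP server started with transport='sse' on port 8000
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools:", [t.name for t in tools.tools])
            result = await session.call_tool("get_weather", {"city": "北京"})  # "Beijing"
            print(result.content)

asyncio.run(main())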

Additional tools such as list_files() can be registered in the same way, returning JSON with the directory contents, as sketched below.
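
A minimal sketch of such a tool, meant to be added to the same server file and assuming the same D:/Temp working directory as write_file:

@mcp.tool()
def list_files() -> str:
    """List the files under D:/Temp and return them as JSON."""
    os.makedirs("D:/Temp", exist_ok=True)
    files = os.listdir("D:/Temp")
    return json.dumps({"success": True, "files": files, "count": len(files)}, ensure_ascii=False)

Example client that uses function calling with DeepSeek and the above MCP tools: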

import os, requests, json
from openai import OpenAI
from dotenv import load_dotenv
load_dotenv()

# system prompt: "You are an AI assistant with function-calling ability. If the provided
# information is sufficient to answer the user's question, do not call a function."
system_prompt = """你是一名AI助手,具备函数调用的能力。如果提供的信息足以回答用户问题,则不进行函数调用。"""

functions = [...]  # same schema as above, plus write_file definition

def send_messages(messages, stream=False):
    client = OpenAI(api_key=os.getenv("DeepSeek_API_Key"), base_url="https://api.deepseek.com")
    return client.chat.completions.create(
        model="deepseek-chat",
        messages=messages,
        tools=functions,
        stream=stream
    ).choices[0].message

def invoke(user_input):
    msgs = [{"role": "system", "content": system_prompt}, {"role": "user", "content": user_input}]
    msg = send_messages(msgs)
    if msg.tool_calls:
        name = msg.tool_calls[0].function.name
        args = json.loads(msg.tool_calls[0].function.arguments)  # arguments arrive as a JSON string
        result = globals()[name](**args)
        # send the result back for the final answer
        final_msg = send_messages([
            {"role": "user", "content": f"请基于以下信息回答: {result}\n{user_input}"}  # "answer based on the following information"
        ])
        print(final_msg.content)
    else:
        print("模型回复:", msg.content)  # "model reply"

if __name__ == "__main__":
    invoke("我决定去北京明天出差,可以穿什么衣服")  # "I'm going to Beijing on a business trip tomorrow; what should I wear?"
    invoke("请在Test2.txt文件中写入:你好,顾翔!")  # "Please write to Test2.txt: Hello, Gu Xiang!"

Running the FastMCP server prints status messages, ensures the D:/Temp directory exists, and lists the available tools:

✅ Ensure directory exists: D:/Temp
Available tools:
1. get_weather(city) - query a city's weather
2. write_file(filename, content) - write a file
3. list_files() - list directory files
Starting SSE server...

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: MCP · DeepSeek · AI Agent · OpenAI · Function Calling · Alibaba Cloud · Ollama · Dashscope
Written by Woodpecker Software Testing

The Woodpecker Software Testing public account shares software testing knowledge and connects testing enthusiasts. It was founded by Gu Xiang (website: www.3testing.com), author of five books, including "Mastering JMeter Through Case Studies".
