How to Build a Multi‑Turn Chatbot with DeepSeek’s API in Python

This guide walks you through calling DeepSeek's large‑model API via the OpenAI‑compatible request format, covering the key parameters, Python setup, code examples for single‑ and multi‑turn conversations, and a full reference of request options.


OpenAI‑compatible request format

DeepSeek follows the OpenAI chat‑completion format. A typical request involves three steps:

Instantiate a client with base_url and api_key.

Build the messages list. Each entry contains a role (system, user, assistant, or tool) and content. For typical Q&A, only system, user, and assistant messages are needed.

Optionally include tool_calls (on assistant messages) and tool‑role messages when using function calling, as sketched below.
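Step 3 applies only to function calling. Here is a minimal sketch of that round trip using the OpenAI SDK's tool‑calling format; it assumes a client created as in step 1 (shown in the next section), and the get_weather tool and its canned result are hypothetical, not part of DeepSeek's API:

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Beijing?"}]
response = client.chat.completions.create(
    model="deepseek-chat", messages=messages, tools=tools,
)
# tool_calls may be None if the model chose not to call the tool
tool_call = response.choices[0].message.tool_calls[0]

# Append the assistant's tool call, then your tool's result as a tool message
messages.append(response.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": '{"temp_c": 21}',  # hypothetical result produced by your code
})
final = client.chat.completions.create(model="deepseek-chat", messages=messages)
print(final.choices[0].message.content)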

Python setup and a basic request

First, install the SDK:

pip install openai

Then create a client pointed at DeepSeek's endpoint and send a request:

from openai import OpenAI

# The OpenAI client works with DeepSeek by overriding base_url
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
    stream=False,  # return the complete reply in one response
)
print(response.choices[0].message.content)
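The call above returns the complete reply at once. Setting stream=True instead yields tokens incrementally through the OpenAI SDK's streaming interface; a minimal sketch:

stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content  # None for some chunks
    if delta:
        print(delta, end="", flush=True)
print()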

Obtaining an API key

Open the DeepSeek Open Platform page.

Register a developer account.

Generate an API key and ensure sufficient token quota.
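Rather than hard‑coding the key as in the snippet above, it is safer to read it from an environment variable. A small sketch (the variable name DEEPSEEK_API_KEY is an arbitrary choice, not an official convention):

import os
from openai import OpenAI

# Keep the key out of source control by reading it from the environment
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)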

Helper functions for multi‑turn dialogue

# Assumes the client created in the basic-request example above

def create_message(role, content):
    # Build a single message dict in the OpenAI chat format
    return {"role": role, "content": content}

def process_user_input(input_text):
    return create_message("user", input_text)

def chat_with_DeepSeek(client, messages):
    # Send the full history so the model sees the whole conversation
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=messages,
    )
    return response.choices[0].message.content

def multi_round_chat():
    messages = [create_message("system", "You are a helpful assistant.")]
    while True:
        user_input = input("User: ")
        # Check for the exit command before calling the API
        if user_input.lower() == "exit":
            print("Conversation ended.")
            break
        messages.append(process_user_input(user_input))
        assistant_reply = chat_with_DeepSeek(client, messages)
        print("Assistant:", assistant_reply)
        # Append the reply so the next turn has the full context
        messages.append(create_message("assistant", assistant_reply))

multi_round_chat()
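In long sessions the accumulated messages list can outgrow the model's context window. One simple remedy, added here as a sketch rather than part of the original helpers, is to keep the system message and only the most recent turns:

def trim_history(messages, max_turns=10):
    # Keep system messages plus the last `max_turns` user/assistant entries
    system = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"][-max_turns:]
    return system + recent

Calling trim_history(messages) before each chat_with_DeepSeek call keeps request sizes bounded, at the cost of forgetting older turns.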

Parameter reference

model (string, required) : Model ID, e.g., deepseek-chat or deepseek-reasoner.

store (boolean or null, optional, default false) : Whether to store the output of this completion for later use in model distillation or evaluation.

metadata (object or null, optional) : Developer‑defined tags for filtering results.

frequency_penalty (number or null, optional, default 0) : Range –2.0 to 2.0; positive values reduce repeated content.

logit_bias (map, optional, default null) : Adjust token probabilities, range –100 to 100.

logprobs (boolean or null, optional, default false) : Return log probabilities for each generated token.

top_logprobs (integer or null, optional) : Number of top tokens to return when logprobs is enabled.

max_completion_tokens (integer or null, optional) : Maximum number of tokens generated (including visible and reasoning tokens).

n (integer or null, optional, default 1) : Number of completion alternatives to generate.

presence_penalty (number or null, optional, default 0) : Range –2.0 to 2.0; positive values encourage new topics.

response_format (object, optional) : Set output format, e.g., json_schema or json_object.

seed (integer or null, optional) : Seed for deterministic generation.

service_tier (string or null, optional, default auto) : Service latency tier for paid subscriptions.

stop (string/array/null, optional) : Up to four stop sequences; generation halts when encountered.

stream (boolean or null, optional, default false) : Enable streaming response; tokens are returned incrementally.

stream_options (object or null, optional) : Options for streaming when stream is true.

temperature (number or null, optional, default 1) : Controls randomness; higher values produce more varied output. Adjust temperature or top_p, not both.

top_p (number or null, optional, default 1) : Nucleus sampling; selects tokens whose cumulative probability reaches top_p. Use instead of temperature.

tools (array, optional) : List of tools (currently only function calling) the model may invoke.

user (string, optional) : Unique identifier for the end user, used for monitoring and abuse detection.
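To make these options concrete, here is an illustrative request combining several of them. The exact subset DeepSeek honors may differ from OpenAI's, so treat this as a sketch rather than a guaranteed feature list:

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "Reply as a JSON object."},
        {"role": "user", "content": "Name three primary colors."},
    ],
    temperature=0.7,            # moderate randomness; leave top_p at its default
    max_completion_tokens=256,  # cap on generated tokens
    stop=["\n\n"],              # halt at the first blank line
    response_format={"type": "json_object"},  # request a JSON-object reply
)
print(response.choices[0].message.content)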

Summary

The article demonstrates how to call DeepSeek’s large language models from Python using the OpenAI‑compatible request format, including a minimal example, a reusable multi‑turn chat helper, and a comprehensive list of supported request parameters.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Python, API, DeepSeek, Chatbot
Written by Fun with Large Models

A Master's graduate of Beijing Institute of Technology with four top‑journal publications, formerly a developer at ByteDance and Alibaba, and currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, in the belief that large models will become as essential as the PC. Let's start experimenting now!
