Build a Multi‑Agent Movie Script Generator with LangGraph and Ollama

This article walks through creating a LangGraph‑based multi‑agent application that automatically generates a movie scene, selects characters, simulates dialogue, and writes the script to a Word document using local Ollama‑powered LLMs and a custom workflow.


Overview

LangGraph extends the LangChain framework by allowing developers to model complex LLM workflows as directed graphs. It is well‑suited for advanced Retrieval‑Augmented Generation (RAG), fine‑grained agent control, and Multi‑Agent Systems (MAS).

Multi‑Agent System Concept

A MAS consists of several AI agents, each with its own language model, prompts, tools, or custom code, collaborating to accomplish a task. In this example the agents simulate characters in a movie scene.

Application Goal

The application automatically creates a movie scene, extracts a list of characters, runs multi‑turn dialogue among them, and writes the complete screenplay to a Word document.

Graph Design

The workflow graph contains four nodes:

create_scene : generate a brief scene description and a list of characters.

select_speaker : choose the next character to speak based on the scene and dialogue history.

handle_dialogue : let the chosen character produce the next line of dialogue.

write : save the scene, characters, and dialogue history to movie.docx.
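
Before wiring up LangGraph, the control flow above can be sketched as a plain Python loop with the LLM calls stubbed out. This is only an illustration of the node ordering (it omits the early-exit path where the model outputs END), not the real implementation:

```python
# Pure-Python sketch of the graph's control flow, with LLM calls stubbed out.
def run_sketch(max_turns=3):
    state = {"scene": "stub scene", "roles": ["A", "B"], "history": "", "dialogues_left": max_turns}
    trace = ["create_scene"]                    # entry point runs once
    while state["dialogues_left"] > 0:
        trace.append("select_speaker")          # pick the next speaker
        trace.append("handle_dialogue")         # speaker emits a line
        state["history"] += f"\nline {max_turns - state['dialogues_left']}"
        state["dialogues_left"] -= 1
    trace.append("write")                       # persist the script
    return trace

print(run_sketch(2))
```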

[Figure: LangGraph workflow diagram]

Implementation Details

LLM Invocation Setup

from typing import Dict, TypedDict, Optional
from langgraph.graph import StateGraph, END
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain_community.llms import Ollama
from langchain_openai import ChatOpenAI
from docx import Document

# Model clients (the local Ollama model is the one actually used below)
llm_qwen = Ollama(model='qwen2')
llm_openai = ChatOpenAI(model='gemini-pro')  # optional alternative via an OpenAI-compatible endpoint; unused in this example

def llm(x):
    response = llm_qwen.invoke(x)
    return response

Prompt Definitions

prompt_scene = "Design a movie scene that requires dialogue, on the theme: {}. Briefly describe the story's background and the character names, but do not write any lines. No more than 100 words. No more than 4 characters."
prompt_roles = "Identify the distinct roles needed to perform this movie scene. Output only the simplest character names as a comma-separated list, with no titles, honorifics, or nicknames. Here is the movie scene: {}"
prompt_select_speaker = """
Based on the movie scene and the dialogue so far, choose from the following characters the one best suited to speak next.
If there is no dialogue yet, pick a character to start. If the story has ended, output END.
-----------
{}
-----------
Movie scene:
-----------
{}
-----------
Dialogue so far:
-----------
{}
-----------
"""
prompt_speak = """
You are now {}. Based on the dialogue and scene below, say your next line. Output format:
------
{}: line
------
Requirements:
1. The line fits the scene and the character.
2. The line moves the plot forward.
3. Do not repeat earlier lines.
Dialogue so far:
----------
{}
----------
Movie scene:
----------
{}
----------
"""

Shared State Definition

class GraphState(TypedDict, total=False):
    # TypedDict fields cannot take default values; total=False makes every key optional instead
    next_speaker: Optional[str]
    history: Optional[str]
    current_response: Optional[str]
    current_speaker: Optional[str]
    dialogues_left: Optional[int]
    scene: Optional[str]
    subject: Optional[str]
    roles: Optional[list]
    results: Optional[str]
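
Each node returns only a partial dict, and LangGraph merges it into the shared state. The merge is roughly equivalent to a plain dict update (the values here are stand-ins for illustration):

```python
# Rough illustration of how LangGraph folds a node's output into the shared state:
state = {"subject": "demo", "dialogues_left": 3}
node_output = {"scene": "a stub scene", "roles": ["A", "B"]}   # as returned by create_scene
state = {**state, **node_output}   # keys from the node output extend/overwrite the state
print(state["dialogues_left"], state["roles"])
```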

Node Implementations

create_scene

def create_scene(state):
    scene = llm(prompt_scene.format(state.get('subject')))
    actors = llm(prompt_roles.format(scene))
    output_parser = CommaSeparatedListOutputParser()
    roles = output_parser.parse(actors)
    print(f"Scene created: {scene}")
    print(f"Actors created: {roles}\n")
    return {"scene": scene, "roles": roles}

select_speaker

def select_speaker(state):
    scene = state.get('scene')
    summary = state.get('history', '').strip()
    roles = state.get('roles')
    next_speaker = llm(prompt_select_speaker.format(','.join(roles), scene, summary)).strip()
    if "END" in next_speaker:  # the model may pad END with whitespace or extra text
        return {"dialogues_left": 0}
    return {"next_speaker": next_speaker}

handle_dialogue

def handle_dialogue(state):
    summary = state.get('history', '').strip()
    count = state.get('dialogues_left')
    next_speaker = state.get('next_speaker', '').strip()
    roles = state.get('roles')
    scene = state.get('scene')
    # Find the role whose name appears in the LLM output (raises IndexError if none match)
    index = roles.index([x for x in roles if x in next_speaker][0])
    prompt = prompt_speak.format(roles[index], roles[index], summary, scene)
    argument = llm(prompt)
    print(f"{argument}\n")
    return {
        "history": summary + '\n' + argument,
        "current_speaker": roles[index],
        "current_response": argument,
        "dialogues_left": count - 1,
    }
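
The substring match on `roles` above raises an `IndexError` if the model's answer names no known role. A slightly more defensive lookup could be factored out like this (`match_role` is a hypothetical helper, not part of the original code):

```python
from typing import Optional

def match_role(roles: list, llm_output: str) -> Optional[str]:
    # Return the first known role mentioned in the model's answer, or None if nothing matches
    for role in roles:
        if role in llm_output:
            return role
    return None

print(match_role(["Cao Cao", "Liu Bei"], "Next speaker: Liu Bei"))
```

A caller can then retry or fall back to a default speaker when `None` comes back instead of crashing mid-run.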

write

def write(state):
    doc = Document()
    doc.add_heading('Scene', level=1)
    doc.add_paragraph(state['scene'])
    doc.add_heading('Roles', level=1)
    doc.add_paragraph(', '.join(state['roles']))
    doc.add_heading('Dialogue History', level=1)
    doc.add_paragraph(state['history'])
    doc.save('movie.docx')
    return {"results": "剧本已生成"}

Workflow Assembly

def check_end(state):
    return "end" if state.get('dialogues_left') == 0 else "continue"

workflow = StateGraph(GraphState)
workflow.add_node('create_scene', create_scene)
workflow.add_node('select_speaker', select_speaker)
workflow.add_node('handle_dialogue', handle_dialogue)
workflow.add_node('write', write)
workflow.set_entry_point('create_scene')

workflow.add_edge('create_scene', 'select_speaker')
workflow.add_conditional_edges(
    'select_speaker',
    check_end,
    {"continue": 'handle_dialogue', "end": 'write'}
)
workflow.add_conditional_edges(
    'handle_dialogue',
    check_end,
    {"continue": 'select_speaker', "end": 'write'}
)
workflow.add_edge('write', END)
app = workflow.compile()

Testing the Application

# Run the workflow
conversation = app.invoke({
    'dialogues_left': 20,
    'next_speaker': '',
    'history': '',
    'current_response': '',
    'subject': 'a funny short video about the Three Kingdoms period'
}, {'recursion_limit': 50})

After execution, movie.docx contains the generated scene description, character list, and the full dialogue history.

Conclusion

The prototype shows that LangGraph can orchestrate multiple LLM agents to produce a coherent screenplay. Model selection and prompt engineering have a strong impact on output quality, and more sophisticated tool‑using agents could further improve realism.

Tags: Python, LLM, Multi-agent, Ollama, LangGraph
Written by: AI Large Model Application Practice

Focused on deep research and development of large-model applications. Authors of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.
