Mastering LangChain Serialization: Save, Load, and Share Your AI Workflows

Learn how to serialize LangChain components—including prompts, chains, and agents—using JSON and YAML, enabling reproducibility, collaboration, persistence, and decoupling, with step‑by‑step code examples for dumping objects to files and loading them back into executable LLM pipelines.

BirdNest Tech Talk

What Is Serialization in Software Development?

Serialization converts a data structure or object state into a storable or transmittable format (e.g., a file or network payload) that can later be reconstructed into the original object.
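As a plain-Python illustration of that round trip (the `config` dict here is just an example value, not a LangChain object):

```python
import json

# In-memory object state
config = {"model": "gpt-4o-mini", "temperature": 0.7}

# Serialize: object -> storable/transmittable string
payload = json.dumps(config)

# Deserialize: string -> reconstructed object
restored = json.loads(payload)
assert restored == config
```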

Why Serialize LangChain Components?

Reproducibility: Saving a working prompt or chain guarantees identical behavior when reloaded.

Collaboration & Sharing: Exporting prompts or chains as JSON/YAML lets teams share complex logic without exchanging code; LangChain Hub is built on this capability.

Persistence: Serializing programmatically built chains avoids rebuilding them on every application start.

Decoupling: Storing configuration separately from Python code lets non‑technical users edit prompts without touching code.
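A minimal sketch of that decoupling using only the standard library (the file name and template text are illustrative, not LangChain APIs):

```python
import json
import os
import tempfile

# The prompt lives in a config file that a non-developer can edit
prompt_config = {"template": "Summarize the following text in {style} style:\n{text}"}
path = os.path.join(tempfile.gettempdir(), "prompt_config.json")
with open(path, "w") as f:
    json.dump(prompt_config, f)

# The application only reads the file; rewording the prompt needs no code change
with open(path) as f:
    template = json.load(f)["template"]
print(template.format(style="bullet-point", text="Serialization decouples config from code."))
```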

How to Serialize in LangChain

Dumping (Saving)

Most LangChain objects (prompts, chat models, LCEL chains) inherit from Serializable and can be converted into a JSON-compatible dictionary with the dumpd function from langchain_core.load, then written to a file using the standard json library.

import json
from langchain_core.load import dumpd
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell a joke about {topic}")
# Convert the prompt object to a JSON-serializable dict
prompt_dict = dumpd(prompt)
# Write the dict to a JSON file
with open("my_prompt.json", "w") as f:
    json.dump(prompt_dict, f)

The same approach works for LCEL chains.

Loading

To reconstruct an object, import the load function from langchain_core.load, read the JSON/YAML file into a dictionary, and pass it to load. LangChain automatically detects the object type and rebuilds it.

import json
from langchain_core.load import load

# Load the dictionary from file
with open("my_prompt.json", "r") as f:
    loaded_prompt_dict = json.load(f)
# Convert the dict back to a prompt object
loaded_prompt = load(loaded_prompt_dict)

Complete Example: Save and Load an LCEL Chain

The script below demonstrates the full lifecycle: building a chain, serializing it to JSON, inspecting the saved file, loading the chain back, and verifying that the loaded chain produces the expected output.

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.load import dumps, loads
import json

# Load environment variables
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY not found in environment variables. Please set it in a .env file.")

def main():
    # 1. Build an LCEL chain (Prompt → Model → Parser)
    prompt = ChatPromptTemplate.from_template("Write a poem about {topic}")
    model = ChatOpenAI(model="gpt-4o-mini")  # an OpenAI model, matching the OPENAI_API_KEY check above
    parser = StrOutputParser()
    chain = prompt | model | parser

    print("--- Original LCEL Chain ---")
    try:
        for i, step in enumerate(chain.get_graph().nodes.values()):
            print(f"Step {i+1}: {step.name}")
    except Exception:
        print("Cannot directly print graph structure, but the chain is created.")
    print("-" * 30)

    # 2. Serialize the chain and save to file
    file_path = "my_lcel_chain.json"
    chain_json = dumps(chain, pretty=True)
    with open(file_path, "w", encoding="utf-8") as f:
        f.write(chain_json)
    print(f"\nLCEL chain serialized and saved to: {file_path}")
    print("-" * 30)

    # 3. Inspect a portion of the saved JSON
    with open(file_path, "r", encoding="utf-8") as f:
        print("\n--- Saved JSON (excerpt) ---")
        print(f.read()[:500] + "\n...")
        print("-" * 30)

    # 4. Load the chain from the file
    with open(file_path, "r", encoding="utf-8") as f:
        chain_json = f.read()
        loaded_chain = loads(chain_json)
    print("\n--- Loaded LCEL Chain ---")
    try:
        for i, step in enumerate(loaded_chain.get_graph().nodes.values()):
            print(f"Step {i+1}: {step.name}")
    except Exception:
        print("Cannot directly print graph structure, but the chain was loaded.")
    print("-" * 30)

    # 5. Verify the loaded chain works
    print("\n--- Invoking Loaded Chain (topic='Moonlight') ---")
    response = loaded_chain.invoke({"topic": "Moonlight"})
    print("\n--- Output from Loaded Chain ---")
    print(response)
    print("-" * 30)
    print("\nConclusion: Serialization and deserialization enable easy saving and reuse of a fully executable chain.")

# Install dependencies: pip install langchain langchain-openai python-dotenv
if __name__ == "__main__":
    main()

References

How to: save and load LangChain objects – https://python.langchain.com/docs/how_to/save_load

Written by

BirdNest Tech Talk

Author of the rpcx microservice framework, original book author, and chair of Baidu's Go CMC committee.
