Getting Started with LangChain: Overview, Core Components, and Python Code Samples

This article introduces the LangChain framework for large language model integration, explains its key components and advantages, and provides step‑by‑step Python examples for setting up environment variables, creating prompts, chaining models, and using embeddings, completions, and chat models.

JD Tech

LangChain is an LLM integration framework that simplifies development by offering multi‑model support, easy integration, powerful tools, extensibility, and performance optimizations. It is officially maintained for Python and JavaScript (Node.js), with community ports for other languages such as Java.

The article outlines the main components of LangChain: Prompts (system roles and placeholders), Retrievers (used for Retrieval‑Augmented Generation), Models (embedding, completion, and chat models), and Parsers (such as StrOutputParser and JsonOutputParser) that convert raw model output into readable formats.
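As a toy illustration of the Parser role, the snippet below mimics what LangChain's string and JSON output parsers do in plain Python. These are simplified stand‑ins, not the framework's actual classes:

```python
import json

# Toy stand-ins for LangChain's output parsers: each takes raw model
# output text and converts it into a friendlier Python value.
def str_parse(model_output: str) -> str:
    # A string parser essentially returns the text content unchanged
    return model_output

def json_parse(model_output: str) -> dict:
    # A JSON parser decodes a JSON string emitted by the model
    return json.loads(model_output)

raw = '{"order_id": "2022ABCDE", "status": "shipped"}'
print(str_parse(raw))             # the raw string, unchanged
print(json_parse(raw)["status"])  # shipped
```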

It then presents a practical Python tutorial. First, environment variables for the OpenAI API are set:

import os
# Configure the GPT gateway
os.environ["OPENAI_API_KEY"] = "{your corporate API key}"
os.environ["OPENAI_API_BASE"] = "{your gateway URL}"

import openai
from dotenv import load_dotenv, find_dotenv
# Load any additional settings from a local .env file
_ = load_dotenv(find_dotenv())
openai.api_key = os.environ['OPENAI_API_KEY']

Next, a prompt template, model, and output parser are defined and chained together:

from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser

# Prompt template with a {topic} placeholder
prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI()
output_parser = StrOutputParser()

# Pipe the prompt into the model, then into the parser
chain = prompt | model | output_parser
print(chain.invoke({"topic": "bears"}))
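The `|` chaining above is LangChain's Expression Language (LCEL): each component is a "runnable" whose output feeds the next. The pure‑Python sketch below illustrates the idea with a hypothetical `Runnable` class; it is not LangChain's actual implementation:

```python
class Runnable:
    """Wraps a function and supports `|` composition, like LCEL runnables."""
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # (a | b) yields a runnable that applies a, then feeds the result to b
        return Runnable(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, x):
        return self.func(x)

# Toy stand-ins for prompt -> model -> parser
prompt = Runnable(lambda d: f"tell me a short joke about {d['topic']}")
model = Runnable(lambda p: f"[model reply to: {p}]")
parser = Runnable(lambda r: r.strip())

chain = prompt | model | parser
print(chain.invoke({"topic": "bears"}))
# [model reply to: tell me a short joke about bears]
```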

Additional examples show how to use embeddings, completions, and chat models:

# Embeddings
from langchain_community.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", openai_api_key=os.environ["OPENAI_API_KEY"], openai_api_base=os.environ["OPENAI_API_BASE"])
text = "text"
query_result = embeddings.embed_query(text)  # a list of floats

# Completion-style LLM
from langchain_community.llms import OpenAI

llm = OpenAI(model_name='gpt-35-turbo-instruct-0914', openai_api_key=os.environ["OPENAI_API_KEY"], base_url=os.environ["OPENAI_API_BASE"], temperature=0)
llm.invoke("I have an order with order number 2022ABCDE, but I haven't received it yet. Could you please help me check it?")

# Chat model
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model_name="gpt-35-turbo-1106")
model.invoke("Hello, are you Zhipu?")
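Once you have embedding vectors, a common next step is comparing them with cosine similarity, which is the basis of retrieval. The sketch below uses made‑up low‑dimensional vectors in place of real API output (actual text-embedding-ada-002 vectors have 1536 dimensions):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" for a query and two documents
query_vec = [0.1, 0.9, 0.2]
doc_vecs = {
    "shipping policy": [0.1, 0.8, 0.3],
    "refund policy": [0.9, 0.1, 0.1],
}

# Pick the document whose vector is most similar to the query
best = max(doc_vecs, key=lambda k: cosine_similarity(query_vec, doc_vecs[k]))
print(best)  # shipping policy
```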

The article concludes that LangChain provides an intuitive workflow for building LLM‑powered applications and encourages readers to explore its components further.

Tags: python, LLM, LangChain, Embedding, tutorial, ChatModel
Written by JD Tech

Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.