---
tags:
- ai
- python
- langchain/langgraph
title: How to use LangGraph
permalink: how-to-use-langgraph
date created: Tuesday, September 10th 2024, 9:32:17 pm
date modified: Wednesday, April 30th 2025, 8:42:43 am
---

Intro to LangGraph is a course on LangChain Academy. These are the course notes.

Clone the course repo and create a virtual environment.
```sh
git clone https://github.com/langchain-ai/langchain-academy.git
cd langchain-academy
python3 -m venv .venv
. .venv/bin/activate
```
Upgrade pip.
```sh
pip install --upgrade pip
```
Install the requirements.
```sh
pip install -r requirements.txt
```
Install JupyterLab with Voila, a tool that renders notebooks as standalone web apps.
```sh
pip install jupyterlab voila
```
These are legacy instructions. LangGraph Studio now runs in the browser.
I found the following settings helpful for VS Code. They minimize deviations from the original repo and make reading the notebook more comfortable.
```sh
mkdir -p .vscode && touch .vscode/settings.json
```
settings.json:

```json
{
  "editor.formatOnSave": false,
  "editor.trimAutoWhitespace": false,
  "files.insertFinalNewline": false,
  "files.trimFinalNewlines": false,
  "files.trimTrailingWhitespace": false,
  "notebook.output.textLineLimit": 500
}
```
Each module has a `studio` directory from the example project. Copy your `.env` file into each module's `studio` directory to use its secrets in LangGraph Studio.
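For example (the module path here is illustrative):

```sh
# Copy secrets into one module's studio directory
cp .env module-1/studio/.env
```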
langgraph.checkpoint saves checkpoints, e.g., memory to remember state.

Annotate a state key with a reducer function to control how updates to that key are combined.

```python
from operator import add
from typing import Annotated, TypedDict

class State(TypedDict):
    # "Add" to the list, i.e., concatenate lists on each update
    foo: Annotated[list[int], add]
```
The operator library provides standard operators (+, -, etc.) as functions.
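For example, operator.add on lists is concatenation, which is why it works as a list reducer:

```python
from operator import add

# List "addition" is concatenation
assert add([1, 2], [3]) == [1, 2, 3]
```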
The following two state definitions are equivalent. You can rebuild MessagesState, which is a convenience class for handling messages.
```python
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph import MessagesState
from langgraph.graph.message import add_messages

# Define a custom TypedDict that includes a list of messages with the add_messages reducer
class CustomMessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    added_key_1: str
    added_key_2: str
    # etc.

# Use MessagesState, which includes the messages key with the add_messages reducer
class ExtendedMessagesState(MessagesState):
    # Add any keys needed beyond messages, which is pre-built
    added_key_1: str
    added_key_2: str
    # etc.
```
Filter graph input and output with separate schemas.
```python
graph = StateGraph(OverallState, input=InputState, output=OutputState)
```
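A minimal sketch of what those schemas might look like (the keys and node are illustrative):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class InputState(TypedDict):
    question: str

class OutputState(TypedDict):
    answer: str

class OverallState(TypedDict):
    question: str
    answer: str
    notes: str  # internal-only key, filtered out of the output

def answer(state: InputState) -> OutputState:
    # Reads from the input schema, writes to the output schema
    return {"answer": f"You asked: {state['question']}"}

builder = StateGraph(OverallState, input=InputState, output=OutputState)
builder.add_node("answer", answer)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)
graph = builder.compile()
```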
Like a debugger, pause the graph and wait for a human.
- interrupt_before - Interrupt before a node executes, e.g., ask permission to use a tool
- interrupt_after - Interrupt after a node executes

These properties can be used in these ways:
```python
# At compile time
graph = builder.compile(interrupt_before=["tools"])

# Per run, via the LangGraph API client
client.runs.stream(thread["thread_id"], interrupt_before=["tools"])
```

None to carry on: pass None to stream() to pick up where the last checkpoint left off. This is how you resume executing a graph after an event like a human-in-the-loop approval.
```python
# Pass None as the first argument.
for event in graph.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()
```
The synchronous stream method takes a stream_mode argument:
- values: Full state of the graph after each node is called
- updates: Only updates to the state of the graph after each node is called
- messages: Convenience mode for dealing with chat messages

For example, streaming updates after each node:
```python
from langchain_core.messages import HumanMessage

# Create a thread
config = {"configurable": {"thread_id": "1"}}

# Start conversation
for chunk in graph.stream({"messages": [HumanMessage(content="hi! I'm Lance")]}, config, stream_mode="updates"):
    print(chunk)
```
The asynchronous astream_events method provides finer-grained events, e.g., individual chat model tokens:
```python
node_to_stream = "conversation"  # name of the node to stream
config = {"configurable": {"thread_id": "5"}}
input_message = HumanMessage(content="Tell me about the 49ers NFL team")

async for event in graph.astream_events({"messages": [input_message]}, config, version="v2"):
    # Get chat model tokens from a particular node
    if event["event"] == "on_chat_model_stream" and event["metadata"].get("langgraph_node", "") == node_to_stream:
        data = event["data"]
        # Override print's default newline and use the
        # empty string instead to mimic typing.
        print(data["chunk"].content, end="")
```
Streaming messages from the LangGraph API yields several event types:

- metadata: metadata about the run
- messages/complete: fully-formed message
- messages/partial: chat model tokens

Create a no-op "dummy" node.
```python
# no-op node that should be interrupted on
def human_feedback(state: MessagesState):
    pass
```
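To make the pause happen, compile with an interrupt before that node. A sketch, assuming a builder is already defined:

```python
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

# Pause before the dummy node so a human can inject feedback
graph = builder.compile(interrupt_before=["human_feedback"], checkpointer=memory)
```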
Interrupt the graph where you want, collect user_input, then update the state as if the update came from that node using as_node.
```python
# Get user input (from a Jupyter notebook, in this case)
user_input = input("Tell me how you want to update the state: ")

# We now update the state as if we are the human_feedback node
graph.update_state(thread, {"messages": user_input}, as_node="human_feedback")
```
NodeInterrupt lets the graph interrupt itself by raising the NodeInterrupt exception.
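A minimal sketch of a node that interrupts itself (the state schema and length check are illustrative):

```python
from typing import TypedDict
from langgraph.errors import NodeInterrupt

class State(TypedDict):
    input: str

def my_node(state: State) -> State:
    # Interrupt dynamically based on the current state
    if len(state["input"]) > 5:
        raise NodeInterrupt(f"Input is longer than 5 characters: {state['input']}")
    return state
```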
Get every checkpointed state for a thread:

```python
all_states = [s for s in graph.get_state_history(thread)]
```
Replaying states may lend itself to memoizing parts of a graph so they do not have to be executed repeatedly, just replayed.
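To replay, pick a prior state and stream from its checkpoint config. A sketch, assuming the all_states list from above:

```python
to_replay = all_states[-2]  # pick an earlier checkpoint

# Passing None with that checkpoint's config resumes from there
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    event["messages"][-1].pretty_print()
```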
Use Pydantic and with_structured_output to define schemas for model data.
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

class Subjects(BaseModel):
    subjects: list[str]

model = ChatOpenAI(model="gpt-4o", temperature=0)

subjects_prompt = """Generate a list of 3 sub-topics that are all related to this overall topic: {topic}."""
prompt = subjects_prompt.format(topic=state["topic"])  # state comes from the enclosing graph node
response = model.with_structured_output(Subjects).invoke(prompt)
```
Use sub-graphs to encapsulate graph state from other graphs' states, i.e., sandboxing.
builder.add_node("conduct_interview", interview_builder.compile())
interview_builder is its own graph with its own state, but it is part of the larger builder graph.
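A minimal sketch of that pattern (names and keys are illustrative; parent and sub-graph communicate through shared keys):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class InterviewState(TypedDict):
    question: str
    transcript: str

def ask(state: InterviewState):
    return {"transcript": f"Q: {state['question']}"}

interview_builder = StateGraph(InterviewState)
interview_builder.add_node("ask", ask)
interview_builder.add_edge(START, "ask")
interview_builder.add_edge("ask", END)

class ResearchState(TypedDict):
    question: str    # shared with the sub-graph
    transcript: str  # shared with the sub-graph
    report: str      # parent-only

builder = StateGraph(ResearchState)
builder.add_node("conduct_interview", interview_builder.compile())
builder.add_edge(START, "conduct_interview")
builder.add_edge("conduct_interview", END)
graph = builder.compile()
```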
Use pretty_print to render a message readably. To render the graph itself as a diagram:

```python
from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))
```

Use Pydantic to enforce data types.
```python
from typing import Literal

from pydantic import BaseModel, ValidationError, field_validator
from langgraph.graph import StateGraph

class PydanticState(BaseModel):
    name: str
    mood: Literal["happy", "sad"]

    @field_validator("mood")
    @classmethod
    def validate_mood(cls, value):
        # Ensure the mood is either "happy" or "sad"
        if value not in ["happy", "sad"]:
            raise ValueError("Each mood must be either 'happy' or 'sad'")
        return value

try:
    state = PydanticState(name="John Doe", mood="mad")
except ValidationError as e:
    print("Validation Error:", e)

# Build graph
builder = StateGraph(PydanticState)
```
Use tools_condition to determine whether to call a tool or not.
```python
from langgraph.prebuilt import tools_condition

builder.add_conditional_edges(
    "tool_calling_llm",
    # If the latest message (result) from the assistant is a tool call -> tools_condition routes to tools
    # If the latest message (result) from the assistant is not a tool call -> tools_condition routes to END
    tools_condition,
)
```
Use trim_messages to trim the message history by token count.
```python
from langchain_core.messages import trim_messages

# Node (llm is a chat model defined elsewhere)
def chat_model_node(state: MessagesState):
    messages = trim_messages(
        state["messages"],
        max_tokens=100,  # trim to 100 tokens
        strategy="last",  # start at the end
        token_counter=ChatOpenAI(model="gpt-4o"),
        # allow_partial=True would keep part of a message that
        # straddles the limit; False keeps only whole messages.
        allow_partial=False,
    )
    return {"messages": [llm.invoke(messages)]}
```