## TL;DR
CrewAI’s role/task model and LangGraph’s node/edge model both express the same idea — a multi-step agent — but as different abstractions. Migration is straightforward if you build the equivalence map up-front.
- Roles → nodes (each role becomes a function node).
- Tasks → edges + state updates (a task transitions state from “before” to “after”).
- Crew → graph (the orchestrator becomes the explicit StateGraph).
- Memory → state field (LangGraph state holds everything CrewAI’s shared memory did).
- Validate with the same eval suite.
## When this migration is worth doing
CrewAI is fast to start with. LangGraph is more verbose but gives you:
- Explicit state with a typed schema
- Conditional edges (CrewAI’s role-based delegation is harder to inspect)
- Checkpointing and replay for debugging long runs
- First-class support for human-in-the-loop pauses
- Better observability via LangSmith integration
If your CrewAI agent is becoming hard to debug or you need pause/resume, migrate.
## The equivalence map
| CrewAI | LangGraph |
|---|---|
| Agent (role) | Function node |
| Task | State transition (edge + node update) |
| Crew | StateGraph |
| Process (sequential) | Linear edges |
| Process (hierarchical) | Conditional edges with router node |
| Shared memory | TypedDict state field |
| Tools | Same — both pass tools to the LLM |
## Walkthrough: a 3-role crew → graph
CrewAI:
```python
from crewai import Agent, Crew

researcher = Agent(role="researcher", goal="...")
writer = Agent(role="writer", goal="...")
editor = Agent(role="editor", goal="...")

crew = Crew(agents=[researcher, writer, editor],
            tasks=[research_task, write_task, edit_task])
```
LangGraph equivalent:
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    research: str
    draft: str
    final: str

graph = StateGraph(State)
graph.add_node("research", researcher_fn)
graph.add_node("write", writer_fn)
graph.add_node("edit", editor_fn)

graph.add_edge(START, "research")
graph.add_edge("research", "write")
graph.add_edge("write", "edit")
graph.add_edge("edit", END)

app = graph.compile()
```
Each `*_fn` is a plain function that reads the current state and returns a dict containing only the fields it updates; LangGraph merges that dict back into the state.
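For instance, a minimal `writer_fn` might look like this (a sketch: the f-string stands in for a real LLM call, and `total=False` lets nodes receive a partially filled state):

```python
from typing import TypedDict

class State(TypedDict, total=False):
    research: str
    draft: str
    final: str

def writer_fn(state: State) -> dict:
    # Read what you need from state; a real node would prompt an LLM
    # with state["research"] here.
    draft = f"Draft based on: {state['research']}"
    # Return only the fields you changed; LangGraph merges them into state.
    return {"draft": draft}
```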
## Conditional routing (CrewAI hierarchical → LangGraph)
CrewAI’s hierarchical mode picks the next agent dynamically. In LangGraph, this becomes a router node:
```python
def router(state):
    if needs_more_research(state):
        return "research"
    if needs_editing(state):
        return "edit"
    return END

graph.add_conditional_edges("write", router)
```
Now the routing logic is explicit code you can test in isolation — much easier than reasoning about CrewAI’s manager prompts.
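Because the router is ordinary Python, you can unit-test it without running the graph or calling an LLM. A sketch, assuming hypothetical predicates (`needs_more_research` and `needs_editing` are whatever checks fit your domain; `END` is stubbed with LangGraph's sentinel value so the snippet runs standalone):

```python
END = "__end__"  # stand-in for langgraph.graph.END so this runs standalone

def needs_more_research(state: dict) -> bool:
    # Hypothetical check: the draft cites fewer sources than required.
    return state.get("source_count", 0) < 3

def needs_editing(state: dict) -> bool:
    return not state.get("edited", False)

def router(state: dict) -> str:
    if needs_more_research(state):
        return "research"
    if needs_editing(state):
        return "edit"
    return END

def test_router():
    assert router({"source_count": 1}) == "research"
    assert router({"source_count": 5, "edited": False}) == "edit"
    assert router({"source_count": 5, "edited": True}) == END
```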
## Memory mapping
CrewAI’s shared memory becomes a state field. If your crew used short-term memory plus long-term memory, model both:
```python
class State(TypedDict):
    messages: list[Message]  # short-term conversational context
    embeddings_id: str       # long-term memory lookup key
```
The long-term store stays where it is (vector DB); the LangGraph state just holds the lookup key.
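As an illustration, a recall node can use that key to pull long-term memories into short-term context. A sketch with a plain dict standing in for the vector-DB client (`recall_fn` and `store` are hypothetical names, not part of either framework):

```python
def recall_fn(state: dict, store: dict) -> dict:
    # `store` stands in for your vector-DB client; only the lookup key
    # (embeddings_id) lives in graph state, never the embeddings themselves.
    memories = store.get(state["embeddings_id"], [])
    # Append recalled items to the short-term message list.
    return {"messages": state.get("messages", []) + memories}
```

In a real graph you would bind the client before registering the node, e.g. `graph.add_node("recall", functools.partial(recall_fn, store=client))`, so the node keeps LangGraph's single-argument signature.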
## Validate with the same eval suite
Don’t switch traffic until you’ve run the same eval suite against both. Score on:
- Output correctness
- Completion rate (did the graph actually finish?)
- Cost per run
- Latency
If LangGraph is slower at first (very likely), profile per node and tighten prompts before declaring the migration complete.
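A minimal harness for scoring either implementation on the same cases might look like this (a sketch: `run_fn` is whatever wrapper invokes your crew or compiled graph, and each case's `check` is your correctness predicate; per-run cost tracking is left to your LLM client's token accounting):

```python
import time

def run_eval(run_fn, cases):
    """Score one implementation (CrewAI or LangGraph) on shared eval cases."""
    results = {"correct": 0, "finished": 0, "latency": []}
    for case in cases:
        start = time.perf_counter()
        try:
            output = run_fn(case["inputs"])
            results["finished"] += 1          # completion rate numerator
            if case["check"](output):
                results["correct"] += 1       # output correctness
        except Exception:
            pass                              # a crash counts as unfinished
        results["latency"].append(time.perf_counter() - start)
    return results
```

Run it once against the CrewAI wrapper and once against the LangGraph wrapper, then compare the two result dicts before cutting traffic over.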