What Is CrewAI?
CrewAI is a Python framework for orchestrating role-playing autonomous AI agents. Instead of a single agent trying to do everything, you define a crew of specialized agents — a researcher, a writer, a code reviewer — and let them collaborate on complex tasks.
Think of it like hiring a small team where each member has a defined role, a set of tools, and a goal. CrewAI handles the communication between agents, task delegation, and output passing.
Key features:
- Role-based agents — Each agent has a name, role, goal, and backstory
- Sequential or hierarchical execution — Agents work in order or a manager delegates
- Tool integration — Agents can search the web, read files, run code
- Memory — Agents remember context across tasks
Installation
pip install crewai crewai-tools python-dotenv
Set your API key:
# .env
OPENAI_API_KEY=sk-your-key-here
CrewAI uses OpenAI’s gpt-4o by default. You can configure Anthropic, Gemini, or any LiteLLM-compatible model.
Your First Crew: Research + Write
This example builds a two-agent crew that researches a topic and writes a summary article.
import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
load_dotenv()
# Tool: web search (requires SERPER_API_KEY in .env)
search_tool = SerperDevTool()
# Agent 1: Researcher
researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate, up-to-date information on the given topic",
    backstory="""You are an expert researcher with 10 years of experience
    analyzing technical topics. You always cite sources and verify facts.""",
    tools=[search_tool],
    verbose=True,
)
# Agent 2: Writer
writer = Agent(
    role="Technical Content Writer",
    goal="Write clear, engaging articles based on research findings",
    backstory="""You are a skilled technical writer who transforms complex
    research into accessible, well-structured articles for developers.""",
    verbose=True,
)
# Task 1: Research
research_task = Task(
    description="""Research the current state of LangChain in 2026.
    Focus on: latest version features, adoption rates, and common use cases.
    Provide at least 5 key findings with sources.""",
    expected_output="A bullet-point research report with 5+ key findings",
    agent=researcher,
)
# Task 2: Write (uses output from Task 1 automatically)
write_task = Task(
    description="""Using the research provided, write a 500-word article
    about LangChain's current state. Include an introduction, 3 main points,
    and a conclusion. Target audience: junior Python developers.""",
    expected_output="A 500-word article in Markdown format",
    agent=writer,
)
# Assemble the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,  # tasks run in order
    verbose=True,
)
# Run it
result = crew.kickoff()
print(result.raw)
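The topic is hardcoded in the task descriptions above. CrewAI also supports template variables: a `{placeholder}` in a task description is filled in from the `inputs` dict passed to `kickoff()`. A sketch of the research task rewritten that way, assuming the `researcher` agent from above:

```python
# Sketch: parameterize the topic instead of hardcoding it.
# {topic} in the description is interpolated from kickoff(inputs=...).
research_task = Task(
    description="""Research the current state of {topic}.
    Focus on: latest version features, adoption rates, and common use cases.
    Provide at least 5 key findings with sources.""",
    expected_output="A bullet-point research report with 5+ key findings",
    agent=researcher,
)

# Rebuild the crew with the parameterized task, then supply the topic:
result = crew.kickoff(inputs={"topic": "LangChain"})
```

This lets you reuse the same crew definition for different topics without editing task descriptions.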
Understanding Crew Processes
CrewAI supports two execution modes:
Sequential (Process.sequential) — Tasks run one after another. Each task’s output is available to subsequent tasks. Best for linear pipelines.
Hierarchical (Process.hierarchical) — A manager LLM delegates tasks to agents and reviews outputs. More autonomous but requires manager_llm parameter. Best for open-ended problems.
# Hierarchical example
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.hierarchical,
manager_llm=ChatOpenAI(model="gpt-4o"),
verbose=True,
)
Adding Memory
Enable long-term memory so agents remember previous runs:
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    memory=True,  # enables short-term + long-term memory
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)
With memory enabled, agents build up knowledge across multiple kickoff() calls — useful for ongoing research projects.
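For example (a sketch assuming the memory-enabled crew defined above):

```python
# Run 1: agents research and write, persisting findings to memory.
crew.kickoff()

# Run 2: agents can recall what they learned in run 1 instead of
# starting from scratch. Long-term memory is persisted to disk,
# so it survives process restarts.
crew.kickoff()
```

If you need a clean slate between projects, recent CrewAI versions ship a `crewai reset-memories` CLI command for clearing stored memory.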
Frequently Asked Questions
How is CrewAI different from LangChain agents?
LangChain’s agent system is a single agent with tools and a reasoning loop (ReAct, OpenAI functions, etc.). CrewAI is designed for multi-agent collaboration — multiple specialized agents working together. Early versions of CrewAI used LangChain under the hood; current versions route model calls through LiteLLM instead, so the two frameworks complement rather than compete.
Can I run CrewAI without an OpenAI key?
Yes. CrewAI supports any LiteLLM-compatible model. Set llm on each agent: Agent(role="...", llm="anthropic/claude-3-5-sonnet-20241022", ...). You can mix models — use a cheap model for the writer, a powerful one for the researcher.
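A minimal sketch of a mixed-model crew, assuming `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are set in your environment (the model names shown are examples and may need updating):

```python
from crewai import Agent

# Stronger (pricier) model for the research-heavy role.
researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate, up-to-date information on the given topic",
    backstory="An expert researcher who cites sources and verifies facts.",
    llm="anthropic/claude-3-5-sonnet-20241022",
)

# Cheaper model for drafting, where mistakes are easier to catch.
writer = Agent(
    role="Technical Content Writer",
    goal="Write clear, engaging articles based on research findings",
    backstory="A skilled technical writer for developer audiences.",
    llm="gpt-4o-mini",
)
```

Because the `llm` value is a LiteLLM-style model string, each agent resolves its own provider and key from the environment.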
How do I stop agents from going in circles?
Set max_iter on each agent (default: 25) and max_rpm to rate-limit API calls. You can also set max_execution_time (in seconds) on an agent to cap how long its work runs — useful for the hierarchical process, where delegation loops can otherwise run long.
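A sketch of an agent with all three guardrails in place (the values are illustrative, not recommendations):

```python
from crewai import Agent

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate, up-to-date information on the given topic",
    backstory="An expert researcher who cites sources and verifies facts.",
    max_iter=10,             # at most 10 reasoning/tool-use loops per task
    max_rpm=20,              # at most 20 LLM requests per minute
    max_execution_time=300,  # give up after 5 minutes of wall-clock time
)
```

Start generous and tighten the limits once you have seen how many iterations your tasks actually need.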
Next Steps
- Introduction to LangChain — Understand the foundation CrewAI builds on
- LangChain vs LlamaIndex — Compare the major frameworks your agents can use