CrewAI Local Setup Guide: Build Multi-Agent Systems

February 4, 2026
18 min read
Local AI Master Research Team
CrewAI Quick Start

Installation:
pip install 'crewai[tools]'
crewai create crew my_project
cd my_project && crewai run

Key Stats:
• 12M+ daily executions in production
• 57,000+ GitHub stars
• Python 3.10-3.13 supported
• No LangChain dependency

What is CrewAI?

CrewAI is a lean, lightning-fast Python framework for orchestrating role-playing, autonomous AI agents. Built entirely from scratch—completely independent of LangChain or other agent frameworks—it empowers developers with both high-level simplicity and precise low-level control.

CrewAI operates on a two-layer architecture: Crews for autonomous agent collaboration and Flows for enterprise-grade, event-driven orchestration. This combination enables starting with simple agent teams and layering in production control logic as needed.

Why CrewAI?

  • Production-Proven: 12M+ daily executions across industries from finance to federal to field operations
  • Zero Dependencies: No LangChain bloat—lean, focused, fast
  • Role-Based Design: Intuitive agent organization with roles, goals, and backstories
  • Flexible Control: From autonomous crews to deterministic flows
  • Local-First: Native support for Ollama and local LLMs

Core Architecture Components

Component | Description
--------- | -----------
Agents    | AI entities with specific roles, goals, backstories, and capabilities
Tasks     | Defined objectives agents accomplish with descriptions and expected outputs
Crews     | Teams of agents working together toward common goals
Flows     | Event-driven orchestration layer for production state management
Tools     | External capabilities (web search, file operations, APIs, custom functions)
Processes | Workflows defining how agents collaborate (sequential or hierarchical)

Installation and Project Setup

Prerequisites

  • Python >= 3.10 and < 3.14
  • pip package manager
  • Ollama (for local LLMs)

Installation Commands

# Basic installation
pip install crewai

# With tools package (recommended for most use cases)
pip install 'crewai[tools]'

# Verify installation
python -c "import crewai; print(crewai.__version__)"

Creating a New Project

# Create project structure using CLI
crewai create crew my_project

# Navigate and run
cd my_project
crewai run

This generates the following structure:

my_project/
├── .gitignore
├── pyproject.toml
├── README.md
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py          # Entry point
        ├── crew.py          # Agent and task definitions
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml   # Agent configurations
            └── tasks.yaml    # Task configurations

Environment Setup

Create a .env file for API keys:

# For cloud LLMs (optional)
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key

# For web search tools
SERPER_API_KEY=your_serper_api_key

# For local LLMs (Ollama)
OLLAMA_BASE_URL=http://localhost:11434
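If you load these variables in Python yourself (for example in main.py) rather than relying on the CLI, the standard library is enough; the python-dotenv package can additionally read the .env file into the environment. A minimal sketch, assuming the variable names from the .env example above and Ollama's standard local port as the fallback:

```python
import os

# Fall back to Ollama's default local endpoint if the variable is unset.
ollama_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")

# Cloud keys stay optional when running fully local.
if not os.getenv("OPENAI_API_KEY"):
    print("OPENAI_API_KEY not set; cloud LLMs disabled, using local models only")

print(f"Using Ollama at {ollama_url}")
```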

Understanding Agents

Agents are the core building blocks of CrewAI. Each agent is an autonomous unit with:

  • Role: What the agent does (e.g., "Senior Research Analyst")
  • Goal: What the agent aims to achieve
  • Backstory: Context that shapes the agent's behavior
  • Tools: Capabilities the agent can use
  • LLM: The language model powering the agent

Agent Definition

from crewai import Agent

researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover and analyze cutting-edge AI developments',
    backstory="""You are a seasoned research analyst with 10 years
    of experience in artificial intelligence and technology trends.
    You have a keen eye for identifying significant developments
    and separating hype from substance.""",
    verbose=True,
    allow_delegation=False,  # Disabled by default
    max_iter=15,             # Maximum reasoning iterations
    max_rpm=None,            # Rate limiting (requests per minute)
)

writer = Agent(
    role='Technical Content Writer',
    goal='Create engaging, accurate articles about AI technology',
    backstory="""You are an experienced tech writer known for
    making complex topics accessible to general audiences while
    maintaining technical accuracy.""",
    verbose=True,
    allow_delegation=False,
)

Agent Parameters Reference

Parameter        | Type | Default     | Description
---------------- | ---- | ----------- | -----------
role             | str  | Required    | Agent's job title/function
goal             | str  | Required    | What the agent aims to achieve
backstory        | str  | Required    | Context shaping agent behavior
llm              | LLM  | gpt-4o-mini | Language model to use
tools            | list | []          | Tools available to the agent
verbose          | bool | False       | Enable detailed logging
allow_delegation | bool | False       | Can delegate to other agents
max_iter         | int  | 15          | Maximum reasoning iterations
max_rpm          | int  | None        | Rate limit (requests per minute)
memory           | bool | False       | Agent-specific memory

Defining Tasks

Tasks are the objectives that agents need to accomplish. Each task has a description, expected output, and assigned agent.

Task Definition

from crewai import Task

research_task = Task(
    description="""Research the latest developments in {topic}.
    Focus on:
    - Major breakthroughs and announcements
    - Key players and their contributions
    - Technical innovations
    - Future implications and trends

    Provide comprehensive, factual information with sources.""",
    expected_output='A detailed research report with key findings, organized by category',
    agent=researcher,
)

writing_task = Task(
    description="""Using the research provided, write a comprehensive
    blog article about {topic}. The article should:
    - Be engaging and accessible to technical audiences
    - Include specific examples and data points
    - Maintain factual accuracy
    - Be approximately 1500 words""",
    expected_output='A well-written, publication-ready blog article',
    agent=writer,
    context=[research_task],  # Uses output from research_task
)

Task Parameters Reference

Parameter       | Type       | Description
--------------- | ---------- | -----------
description     | str        | What the task requires (supports {variables})
expected_output | str        | What the output should look like
agent           | Agent      | Agent assigned to the task
context         | list[Task] | Previous tasks whose output is used
tools           | list       | Task-specific tools (override agent tools)
async_execution | bool       | Run asynchronously
output_file     | str        | Save output to a file
human_input     | bool       | Request human input before completion
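Task descriptions are templates: the {topic} placeholders are filled from the inputs dict passed to crew.kickoff(). Conceptually this is ordinary Python string formatting, as this standalone sketch (plain Python, not CrewAI code) shows:

```python
# Hypothetical task description using the same {variable} syntax as CrewAI.
description = "Research the latest developments in {topic}."

# At kickoff, the inputs dict is substituted into every task description.
inputs = {"topic": "Local AI and Privacy"}
rendered = description.format(**inputs)

print(rendered)  # Research the latest developments in Local AI and Privacy.
```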

Creating and Running Crews

A Crew is a team of agents working together to accomplish tasks.

Basic Crew Setup

from crewai import Agent, Task, Crew, Process

# Define agents
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover cutting-edge AI developments',
    backstory='Expert analyst with deep technical knowledge',
    verbose=True
)

writer = Agent(
    role='Content Writer',
    goal='Create engaging articles about AI',
    backstory='Tech writer known for clarity and accuracy',
    verbose=True
)

# Define tasks
research_task = Task(
    description='Research {topic} thoroughly',
    expected_output='Detailed research notes with sources',
    agent=researcher
)

writing_task = Task(
    description='Write a comprehensive article about {topic}',
    expected_output='Publication-ready article (1500+ words)',
    agent=writer,
    context=[research_task]
)

# Create and run crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    verbose=True
)

# Execute with inputs
result = crew.kickoff(inputs={'topic': 'Local AI and Privacy'})
print(result)

Using CrewAI with Ollama (Local LLMs)

One of CrewAI's strengths is native support for local LLMs through Ollama.

Step 1: Set Up Ollama

# Install Ollama (macOS)
brew install ollama

# Start Ollama server
ollama serve

# Pull recommended models for agents
ollama pull llama3.1:8b
ollama pull mistral:7b
ollama pull openhermes
ollama pull nomic-embed-text  # For embeddings
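Before wiring Ollama into agents, it helps to confirm the server is up and the models are pulled. Ollama exposes a local HTTP API: GET /api/tags returns the installed models as JSON. A small helper sketch; the parsing is testable offline, while calling the live endpoint assumes `ollama serve` is running on the default port:

```python
import json
import urllib.request

def parse_model_names(payload):
    """Extract model names from Ollama's /api/tags response payload."""
    return [m["name"] for m in payload.get("models", [])]

def installed_models(base_url="http://localhost:11434"):
    """Return the names of models installed in a local Ollama server."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(json.load(resp))

if __name__ == "__main__":
    # Abbreviated example of the /api/tags payload shape.
    sample = {"models": [{"name": "llama3.1:8b"}, {"name": "nomic-embed-text:latest"}]}
    print(parse_model_names(sample))
```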

Step 2: Configure CrewAI with Ollama

Method 1: Using LLM Class (Recommended)

from crewai import Agent, LLM

# Configure Ollama LLM
ollama_llm = LLM(
    model="ollama/llama3.1:8b",
    base_url="http://localhost:11434",
    temperature=0.2,  # Lower for focused output
)

agent = Agent(
    role='Local AI Expert',
    goal='Process information using local models',
    backstory='An AI assistant running completely offline',
    llm=ollama_llm,
    verbose=True
)

Method 2: Using LangChain's Ollama Class (Legacy)

This older pattern requires installing langchain-community separately; CrewAI itself has no LangChain dependency, so Method 1 is preferred.

from crewai import Agent
from langchain_community.llms import Ollama  # requires: pip install langchain-community

ollama_llm = Ollama(
    model="llama3.1:8b",
    base_url="http://localhost:11434",
    temperature=0.2,
)

agent = Agent(
    role='Research Analyst',
    goal='Analyze data thoroughly',
    backstory='Expert analyst with attention to detail',
    llm=ollama_llm,
    verbose=True
)

Recommended Models for Agents

Model               | Size   | Best For
------------------- | ------ | --------
Llama 3.1 8B        | 8B     | General agent tasks, balanced performance
Mistral 7B          | 7B     | Fast inference, good reasoning
OpenHermes          | 7B     | Instruction following, versatile
Llama 3.1 70B       | 70B    | Complex multi-agent workflows
Qwen 2.5 32B        | 32B    | Excellent reasoning, multilingual
DeepSeek R1 Distill | 8B-70B | Advanced reasoning with visible thinking

Tips for Local LLMs

Setting        | Recommendation | Why
-------------- | -------------- | ---
Temperature    | 0.1-0.3        | More focused, consistent output
Model Size     | 8B+ parameters | Better reasoning for agent tasks
Quantization   | Q8_0 (8-bit)   | Better quality than Q4 for agents
Memory         | Enable         | Better context retention
Max Iterations | 15-25          | Allows complex reasoning

Complete Local Crew Example

from crewai import Agent, Task, Crew, Process, LLM

# Configure local LLM
local_llm = LLM(
    model="ollama/llama3.1:8b",
    base_url="http://localhost:11434",
    temperature=0.2,
)

# Create agents with local LLM
researcher = Agent(
    role='Research Analyst',
    goal='Find and analyze information',
    backstory='Expert researcher with analytical skills',
    llm=local_llm,
    verbose=True
)

writer = Agent(
    role='Technical Writer',
    goal='Create clear, accurate content',
    backstory='Writer skilled at explaining complex topics',
    llm=local_llm,
    verbose=True
)

# Define tasks
research = Task(
    description='Research {topic} and identify key points',
    expected_output='Structured research notes',
    agent=researcher
)

write = Task(
    description='Write an article based on the research',
    expected_output='Complete article',
    agent=writer,
    context=[research]
)

# Create and run crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research, write],
    process=Process.sequential,
    memory=True,  # Enable memory for better context
    verbose=True
)

result = crew.kickoff(inputs={'topic': 'Open Source AI Models'})

Process Types: Sequential vs Hierarchical

Sequential Process (Default)

Tasks execute in order, with each task's output available as context for subsequent tasks.

from crewai import Crew, Process

crew = Crew(
    agents=[researcher, analyst, writer, editor],
    tasks=[research_task, analysis_task, writing_task, editing_task],
    process=Process.sequential,  # Default
    verbose=True
)

Flow: Task 1 → Task 2 → Task 3 → Task 4
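Under the hood, a sequential process is essentially a fold over the task list: each task runs with the accumulated outputs of its predecessors available as context. This plain-Python sketch illustrates the data flow (the idea only, not CrewAI internals; the research/write stand-ins are hypothetical):

```python
def run_sequential(tasks):
    """Run task functions in order, passing prior outputs as context."""
    context = []
    for task in tasks:
        output = task(context)   # each task sees everything produced so far
        context.append(output)
    return context[-1]           # final task's output is the crew result

# Hypothetical stand-ins for a research -> write pipeline.
research = lambda ctx: "notes on local AI"
write = lambda ctx: f"article based on: {ctx[0]}"

print(run_sequential([research, write]))  # article based on: notes on local AI
```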

Hierarchical Process

A manager agent orchestrates task delegation based on agent capabilities.

from crewai import Crew, Process, Agent

# Optional: Define custom manager
manager = Agent(
    role='Project Manager',
    goal='Coordinate the team to deliver high-quality results',
    backstory='Experienced manager with excellent coordination skills',
    allow_delegation=True
)

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[task1, task2, task3],
    process=Process.hierarchical,
    manager_agent=manager,  # Optional: auto-created if not provided
    manager_llm="gpt-4",    # Required when no custom manager_agent is set
    verbose=True
)

Process Comparison

Aspect           | Sequential           | Hierarchical
---------------- | -------------------- | ------------
Task Order       | Fixed, predefined    | Dynamic, manager decides
Output Flow      | Linear chain         | Manager validates and routes
Setup Complexity | Simple               | More configuration
Use Case         | Pipelines, workflows | Adaptive problem-solving
Manager          | Not required         | Auto-created or custom

Tools and Custom Tool Creation

Built-in Tools

from crewai_tools import (
    SerperDevTool,      # Web search
    FileReadTool,       # Read files
    DirectoryReadTool,  # Read directories
    WebsiteSearchTool,  # Search websites
    PDFSearchTool,      # Search PDFs
    DOCXSearchTool,     # Search Word docs
    CSVSearchTool,      # Search CSV files
    JSONSearchTool,     # Search JSON files
    CodeDocsSearchTool, # Search code documentation
    YoutubeVideoSearchTool,  # Search YouTube
)

# Configure tools
search_tool = SerperDevTool()
file_tool = FileReadTool()
pdf_tool = PDFSearchTool()

# Assign to agent
researcher = Agent(
    role='Research Analyst',
    goal='Find comprehensive information',
    backstory='Expert researcher with access to various sources',
    tools=[search_tool, file_tool, pdf_tool],
    verbose=True
)

Creating Custom Tools

Method 1: Using @tool Decorator

import requests

from crewai.tools import tool

@tool("Database Search")
def search_database(query: str) -> str:
    """Search the internal database for relevant information.

    Args:
        query: The search query string

    Returns:
        Relevant results from the database
    """
    # Your implementation here
    results = perform_database_search(query)
    return f"Found {len(results)} results for: {query}"

@tool("API Fetcher")
def fetch_api_data(endpoint: str, params: str = "") -> str:
    """Fetch data from external API.

    Args:
        endpoint: API endpoint to call
        params: Optional query parameters

    Returns:
        API response data
    """
    response = requests.get(f"https://api.example.com/{endpoint}?{params}", timeout=10)
    return response.text  # return text so the declared str type holds

Method 2: Subclassing BaseTool

from crewai.tools import BaseTool
from typing import Type
from pydantic import BaseModel, Field

class DatabaseSearchInput(BaseModel):
    query: str = Field(description="Search query for database")
    table: str = Field(description="Table to search in")
    limit: int = Field(default=10, description="Max results")

class DatabaseSearchTool(BaseTool):
    name: str = "Database Search Tool"
    description: str = "Search internal database for records"
    args_schema: Type[BaseModel] = DatabaseSearchInput

    def _run(self, query: str, table: str, limit: int = 10) -> str:
        # Your implementation
        results = db.query(table, query, limit)
        return f"Found {len(results)} records in {table}"

# Use the tool
db_tool = DatabaseSearchTool()
agent = Agent(
    role='Data Analyst',
    goal='Analyze database records',
    backstory='Expert at querying and analyzing data',
    tools=[db_tool]
)

Memory System

CrewAI provides a sophisticated memory system for maintaining context across tasks and sessions.

Memory Types

Type       | Storage        | Purpose                        | Use Case
---------- | -------------- | ------------------------------ | --------
Short-Term | ChromaDB (RAG) | Current execution context      | Within a single run
Long-Term  | SQLite         | Task results across sessions   | Learn from past executions
Entity     | RAG            | Track people, places, concepts | Maintain entity knowledge
Contextual | Combined       | Unified awareness              | Combined context
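By default these memory files land in a platform-specific application-data directory. CrewAI documents a CREWAI_STORAGE_DIR environment variable for relocating them, which is convenient for keeping memory inside the project; verify the variable against your installed version. It must be set before the Crew is constructed:

```python
import os

# Keep memory databases inside the project instead of the platform default.
# Set this before creating any Crew so storage initializes in the right place.
os.environ["CREWAI_STORAGE_DIR"] = "./crew_memory"

print(os.environ["CREWAI_STORAGE_DIR"])
```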

Enabling Memory

from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    memory=True,  # Enable all memory types
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small"
        }
    }
)

Using Local Embeddings with Ollama

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    memory=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            "base_url": "http://localhost:11434"
        }
    }
)

Custom Memory Storage

from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/path/to/custom/memory.db"
        )
    )
)

CrewAI Flows for Production

Flows are CrewAI's enterprise-grade solution for deterministic, event-driven orchestration with structured state management.

Why Flows?

  • Deterministic Execution: Predictable control flow for production
  • Structured State: Type-safe state with Pydantic validation
  • Event-Driven: Methods trigger based on previous completions
  • Conditional Logic: Branches, loops, and parallel execution
  • Crew Integration: Embed Crews within Flow steps

Basic Flow Example

from crewai.flow.flow import Flow, start, listen
from pydantic import BaseModel

class ContentState(BaseModel):
    topic: str = ""
    research: str = ""
    article: str = ""

class ContentFlow(Flow[ContentState]):

    @start()
    def research_topic(self):
        """Entry point - research the topic"""
        # Could call a Crew here or do direct work
        self.state.research = f"Research about {self.state.topic}..."
        return self.state.research

    @listen(research_topic)
    def write_article(self):
        """Triggered after research completes"""
        self.state.article = f"Article based on: {self.state.research}"
        return self.state.article

    @listen(write_article)
    def publish(self):
        """Final step - publish the article"""
        return f"Published: {self.state.article}"

# Run the flow
flow = ContentFlow()
result = flow.kickoff(inputs={"topic": "AI Trends 2026"})
print(result)
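The @start/@listen wiring forms an event graph: a method fires once the method it listens to has finished. Stripped of CrewAI specifics, the dispatch idea looks like this plain-Python sketch (illustrative only, not CrewAI internals; step names are hypothetical):

```python
def run_flow(steps):
    """steps: list of (name, listens_to, fn). Fire each step once its trigger is done."""
    done, result = set(), None
    pending = list(steps)
    while pending:
        for step in list(pending):
            name, trigger, fn = step
            if trigger is None or trigger in done:  # a @start step has no trigger
                result = fn()
                done.add(name)
                pending.remove(step)
    return result

steps = [
    ("research", None, lambda: "researched"),
    ("write", "research", lambda: "wrote"),
    ("publish", "write", lambda: "published"),
]
print(run_flow(steps))  # published
```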

Flow with Crew Integration

from crewai import Agent, Crew, Task
from crewai.flow.flow import Flow, start, listen
from pydantic import BaseModel

class ResearchState(BaseModel):
    topic: str = ""
    research_results: str = ""
    final_report: str = ""

class ResearchFlow(Flow[ResearchState]):

    @start()
    def conduct_research(self):
        """Use a Crew for research"""
        researcher = Agent(
            role='Researcher',
            goal='Research thoroughly',
            backstory='Expert researcher'
        )

        research_task = Task(
            description=f'Research {self.state.topic}',
            expected_output='Detailed findings',
            agent=researcher
        )

        crew = Crew(
            agents=[researcher],
            tasks=[research_task]
        )

        result = crew.kickoff()
        self.state.research_results = str(result)
        return self.state.research_results

    @listen(conduct_research)
    def generate_report(self):
        """Generate final report from research"""
        self.state.final_report = f"Report: {self.state.research_results}"
        return self.state.final_report

flow = ResearchFlow()
result = flow.kickoff(inputs={"topic": "Quantum Computing"})

Crews vs Flows Comparison

Aspect          | Crews                       | Flows
--------------- | --------------------------- | -----
Execution       | Autonomous collaboration    | Event-driven control
Decision Making | Dynamic, agent-driven       | Deterministic, code-driven
State           | Implicit (context)          | Explicit (Pydantic models)
Best For        | Creative tasks, exploration | Production pipelines, validation
Flexibility     | High autonomy               | High control
Debugging       | Agent traces                | Clear execution path

YAML Configuration (Production Pattern)

agents.yaml

researcher:
  role: Senior Research Analyst
  goal: Discover and analyze cutting-edge developments in {topic}
  backstory: >
    You are a seasoned research analyst with 10 years of experience
    in technology trends. You have a keen eye for identifying
    significant developments and separating hype from substance.

analyst:
  role: Data Analyst
  goal: Transform research into actionable insights
  backstory: >
    You are an expert at pattern recognition and data analysis.
    You excel at finding connections and trends in complex data.

writer:
  role: Technical Content Writer
  goal: Create engaging, accurate content about {topic}
  backstory: >
    You are an experienced tech writer known for making complex
    topics accessible while maintaining technical accuracy.

tasks.yaml

research_task:
  description: >
    Research the latest developments in {topic}.
    Focus on major breakthroughs, key players, and future trends.
    Provide comprehensive, factual information.
  expected_output: Detailed research report with categorized findings
  agent: researcher

analysis_task:
  description: >
    Analyze the research findings and identify key patterns,
    trends, and actionable insights.
  expected_output: Analysis summary with top 5 insights
  agent: analyst
  context:
    - research_task

writing_task:
  description: >
    Write a comprehensive article about {topic} using the
    research and analysis provided. Make it engaging and informative.
  expected_output: Publication-ready article (1500+ words)
  agent: writer
  context:
    - research_task
    - analysis_task

Using YAML Configuration in Code

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

@CrewBase
class ContentCrew:
    """Content creation crew"""

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True
        )

    @agent
    def writer(self) -> Agent:
        return Agent(
            config=self.agents_config['writer'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task']
        )

    @task
    def writing_task(self) -> Task:
        return Task(
            config=self.tasks_config['writing_task']
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True
        )

Real-World Use Case Examples

Content Creation Pipeline

from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool

# Agents
researcher = Agent(
    role='Content Researcher',
    goal='Research topics thoroughly for accurate content',
    backstory='Expert researcher with journalism background',
    tools=[SerperDevTool()],
    verbose=True
)

writer = Agent(
    role='Content Writer',
    goal='Create engaging, SEO-optimized content',
    backstory='Experienced content writer for tech blogs',
    verbose=True
)

editor = Agent(
    role='Content Editor',
    goal='Ensure content quality, accuracy, and style',
    backstory='Senior editor with strict quality standards',
    verbose=True
)

# Tasks
research = Task(
    description='Research {topic} thoroughly. Find key facts, statistics, and expert opinions.',
    expected_output='Comprehensive research notes with sources',
    agent=researcher
)

write = Task(
    description='Write a 2000-word article about {topic}. Include introduction, main sections, and conclusion.',
    expected_output='Complete article draft',
    agent=writer,
    context=[research]
)

edit = Task(
    description='Edit the article for clarity, accuracy, and engagement. Fix any errors.',
    expected_output='Publication-ready article',
    agent=editor,
    context=[write]
)

# Crew
content_crew = Crew(
    agents=[researcher, writer, editor],
    tasks=[research, write, edit],
    process=Process.sequential,
    memory=True
)

result = content_crew.kickoff(inputs={'topic': 'AI in Healthcare 2026'})

Code Review System

from crewai import Agent, Task, Crew, Process
from crewai_tools import FileReadTool

code_reviewer = Agent(
    role='Senior Code Reviewer',
    goal='Identify bugs, code smells, and improvement opportunities',
    backstory='10+ years of software development experience',
    tools=[FileReadTool()],
    verbose=True
)

security_analyst = Agent(
    role='Security Analyst',
    goal='Find security vulnerabilities and potential exploits',
    backstory='Cybersecurity expert with OWASP expertise',
    verbose=True
)

# Tasks for the review pipeline
code_review_task = Task(
    description='Review the code in {file_path} for bugs, code smells, and improvements.',
    expected_output='Review report with prioritized findings',
    agent=code_reviewer
)

security_review_task = Task(
    description='Audit the reviewed code for security vulnerabilities (OWASP Top 10).',
    expected_output='Security findings with severity ratings',
    agent=security_analyst,
    context=[code_review_task]
)

# Create crew for code review
review_crew = Crew(
    agents=[code_reviewer, security_analyst],
    tasks=[code_review_task, security_review_task],
    process=Process.sequential
)

Lead Qualification System

from crewai import Agent, Task, Crew

data_collector = Agent(
    role='Data Collector',
    goal='Gather comprehensive lead information',
    backstory='Expert at finding company and contact data',
    verbose=True
)

analyst = Agent(
    role='Lead Analyst',
    goal='Score and qualify leads based on ICP criteria',
    backstory='Sales analyst with pattern recognition expertise',
    verbose=True
)

collect_task = Task(
    description='Gather company size, industry, and contact details for {lead_name}.',
    expected_output='Structured lead profile',
    agent=data_collector
)

analyze_task = Task(
    description='Score the lead against ICP criteria and recommend next steps.',
    expected_output='Lead score with qualification rationale',
    agent=analyst,
    context=[collect_task]
)

# Lead qualification crew
lead_crew = Crew(
    agents=[data_collector, analyst],
    tasks=[collect_task, analyze_task],
    process=Process.sequential
)

CrewAI vs AutoGen vs LangGraph

Feature              | CrewAI                                   | AutoGen                           | LangGraph
-------------------- | ---------------------------------------- | --------------------------------- | ---------
Design Philosophy    | Role-based organizational structure      | Conversational collaboration      | Graph-based workflows
Best For             | Structured tasks, rapid prototyping      | Enterprise, dynamic collaboration | Complex multi-step workflows
Learning Curve       | Low (intuitive structure)                | Medium                            | High (requires graph concepts)
State Management     | Built-in Flows (structured/unstructured) | Manual management                 | Native graph state
Agent Communication  | Built-in delegation                      | Multi-agent conversation          | Node-to-node edges
Independence         | Standalone (no LangChain)                | Microsoft ecosystem               | LangChain dependent
Production Readiness | 12M+ daily executions                    | Enterprise focus                  | LangChain ecosystem

When to Choose Each

Choose CrewAI when:

  • You need rapid prototyping and quick setup
  • Tasks benefit from role-based organization
  • Content creation, research, or analysis workflows
  • You want both simplicity and control
  • Local LLM support is important

Choose AutoGen when:

  • Enterprise environments with strict requirements
  • Dynamic multi-agent conversations are key
  • Advanced error handling and logging needed
  • Microsoft ecosystem integration

Choose LangGraph when:

  • Complex workflows with many conditional branches
  • RAG is central to your application
  • Maximum control over execution paths
  • Already invested in LangChain ecosystem

Troubleshooting Common Issues

Issue: Agent Not Using Local LLM

# Wrong: Default will use OpenAI
agent = Agent(role='Test', goal='Test', backstory='Test')

# Correct: Explicitly set Ollama
from crewai import LLM
ollama_llm = LLM(model="ollama/llama3.1:8b", base_url="http://localhost:11434")
agent = Agent(role='Test', goal='Test', backstory='Test', llm=ollama_llm)

Issue: Ollama Connection Refused

# Ensure Ollama is running
ollama serve

# Check if model is available
ollama list

# Pull model if needed
ollama pull llama3.1:8b

Issue: Agent Stuck in Loop

# Increase max iterations or lower temperature
agent = Agent(
    role='Analyst',
    goal='Analyze data',
    backstory='Expert analyst',
    llm=LLM(model="ollama/llama3.1:8b", temperature=0.1),
    max_iter=25,  # Increase from default 15
    verbose=True
)

Issue: Memory Not Persisting

# Ensure memory is enabled at Crew level
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,  # Required for memory features
    embedder={
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"}
    }
)

Best Practices

Agent Design

  1. Clear Roles: Give agents specific, non-overlapping roles
  2. Detailed Backstories: Backstories shape behavior—be specific
  3. Appropriate Tools: Only give agents tools they need
  4. Temperature Settings: Use 0.1-0.3 for focused output

Task Design

  1. Specific Descriptions: Clearly state what you want
  2. Clear Expected Outputs: Define success criteria
  3. Use Context: Chain tasks using context parameter
  4. Variables: Use {variables} for dynamic inputs

Crew Organization

  1. Sequential for Pipelines: Use when order matters
  2. Hierarchical for Flexibility: Use when tasks need dynamic routing
  3. Enable Memory: For better context across tasks
  4. Verbose in Development: Turn off in production

Key Takeaways

  1. CrewAI is the fastest path to production-ready multi-agent systems
  2. Agents have roles, goals, and backstories that shape behavior
  3. Tasks define objectives with expected outputs and context chains
  4. Crews orchestrate agent collaboration with sequential or hierarchical processes
  5. Flows provide enterprise-grade, event-driven orchestration
  6. Local LLMs work seamlessly with Ollama integration
  7. Memory enables context retention across tasks and sessions
  8. Tools extend agent capabilities with built-in and custom options

Next Steps

  1. Build AI agents for specific use cases
  2. Set up RAG for document-aware agents
  3. Choose models for your crew
  4. Compare vector databases for memory storage

CrewAI brings the power of collaborative AI agents to developers with an intuitive, production-ready framework. From simple two-agent crews to complex enterprise flows, CrewAI scales with your needs while keeping complexity manageable.

Written by Pattanaik Ramswarup, AI Engineer & Dataset Architect, creator of Local AI Master.