---
title: "CrewAI Tasks: Defining Work Units with Expected Outputs and Context"
description: "Master CrewAI Task design including task structure, expected_output specifications, context chaining between tasks, and async task execution for parallel agent workflows."
canonical: https://callsphere.ai/blog/crewai-tasks-defining-work-units-expected-outputs-context
category: "Learn Agentic AI"
tags: ["CrewAI", "Tasks", "Workflow Design", "Multi-Agent", "Python"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T01:02:44.043Z
---

# CrewAI Tasks: Defining Work Units with Expected Outputs and Context

> Master CrewAI Task design including task structure, expected_output specifications, context chaining between tasks, and async task execution for parallel agent workflows.

## Tasks Are the Real Work Units

While agents define who does the work, tasks define what work gets done. In CrewAI, a `Task` is the atomic unit of execution. Each task has a description of the work, a specification of the expected output, and an assigned agent. The quality of your task definitions directly determines the quality of your crew's output.

Poorly defined tasks produce ambiguous results that downstream agents cannot use. Well-defined tasks create a clear contract between what you need and what the agent delivers.

## Task Structure Fundamentals

Every task requires three core fields: a description of the work, an `expected_output` specification, and an assigned agent. The diagram below shows where tasks sit in a crew's overall workflow:

```mermaid
flowchart TD
    GOAL(["Crew goal"])
    MGR["Manager agent<br/>hierarchical process"]
    R1["Researcher agent<br/>role plus backstory"]
    R2["Analyst agent"]
    W1["Writer agent"]
    T1["Task A<br/>research"]
    T2["Task B<br/>analyze"]
    T3["Task C<br/>draft"]
    TOOLS[("Tools<br/>web search, files")]
    OUT(["Crew output"])
    GOAL --> MGR
    MGR --> T1 --> R1 --> TOOLS
    R1 --> T2 --> R2
    R2 --> T3 --> W1 --> OUT
    style MGR fill:#4f46e5,stroke:#4338ca,color:#fff
    style TOOLS fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
```

```python
from crewai import Agent, Task

analyst = Agent(
    role="Market Analyst",
    goal="Deliver accurate market intelligence",
    backstory="Senior analyst at a Fortune 500 consulting firm.",
)

task = Task(
    description="""Analyze the competitive landscape of the cloud
    infrastructure market. Compare the top 3 providers (AWS, Azure, GCP)
    across pricing, market share, and developer ecosystem strength.
    Use data from 2024-2026.""",
    expected_output="""A structured comparison table with rows for each
    provider and columns for: market share percentage, pricing model
    summary, key developer tools, and competitive advantage. Follow
    the table with a 200-word strategic summary.""",
    agent=analyst,
)
```

The `description` tells the agent what to do. The `expected_output` tells the agent what the result should look like. Together, they form a contract that the agent's reasoning process tries to fulfill.

## Crafting Effective Expected Outputs

The `expected_output` field is the most powerful lever for controlling task quality. Vague expected outputs produce vague results. Specific ones produce structured, usable data:

```python
# Vague — agent has too much freedom
vague_task = Task(
    description="Research AI trends.",
    expected_output="A summary of findings.",
    agent=analyst,
)

# Specific — agent knows exactly what to produce
specific_task = Task(
    description="Research the top 5 agentic AI frameworks released in 2025-2026.",
    expected_output="""A numbered list of 5 frameworks, each entry containing:
    1. Framework name and creator
    2. Primary use case (1 sentence)
    3. Key differentiator from competitors (1 sentence)
    4. GitHub stars count (approximate)
    5. Maturity assessment: Production-Ready, Beta, or Experimental""",
    agent=analyst,
)
```

The specific version tells the agent exactly how many items, what fields each item needs, and even the format of categorical values. This eliminates ambiguity and makes the output predictable.

## Context Chaining Between Tasks

In a sequential crew, each task automatically receives the output of the previous task. But sometimes you need more control. The `context` parameter lets you explicitly specify which prior tasks feed into the current one:

```python
from crewai import Agent, Task, Crew, Process

researcher = Agent(role="Researcher", goal="Find data", backstory="Expert researcher.")
analyst = Agent(role="Analyst", goal="Analyze data", backstory="Expert analyst.")
writer = Agent(role="Writer", goal="Write reports", backstory="Expert writer.")

research_task = Task(
    description="Research the current state of quantum computing.",
    expected_output="A list of 10 key facts about quantum computing in 2026.",
    agent=researcher,
)

analysis_task = Task(
    description="Analyze the business implications of quantum computing advances.",
    expected_output="A SWOT analysis for enterprises considering quantum adoption.",
    agent=analyst,
    context=[research_task],
)

report_task = Task(
    description="Write an executive briefing combining research and analysis.",
    expected_output="A 500-word executive briefing with recommendations.",
    agent=writer,
    context=[research_task, analysis_task],
)
```

The `report_task` explicitly receives output from both the research and analysis tasks. Without context chaining, it would only see the immediately preceding task's output. This is especially important in non-sequential workflows where task execution order is not linear.

## Async Task Execution

CrewAI supports running tasks asynchronously when they do not depend on each other. Mark tasks with `async_execution=True` to enable parallel processing:

```python
data_task_1 = Task(
    description="Gather pricing data for AWS services.",
    expected_output="A JSON-formatted pricing table for top 10 AWS services.",
    agent=researcher,
    async_execution=True,
)

data_task_2 = Task(
    description="Gather pricing data for Azure services.",
    expected_output="A JSON-formatted pricing table for top 10 Azure services.",
    agent=researcher,
    async_execution=True,
)

comparison_task = Task(
    description="Compare AWS and Azure pricing from the gathered data.",
    expected_output="A side-by-side comparison with cost-saving recommendations.",
    agent=analyst,
    context=[data_task_1, data_task_2],
)

crew = Crew(
    agents=[researcher, analyst],
    tasks=[data_task_1, data_task_2, comparison_task],
    process=Process.sequential,
)

result = crew.kickoff()
```

Tasks marked with `async_execution=True` run in parallel. The `comparison_task` waits for both async tasks to complete before starting because it lists them in its `context`. This pattern significantly reduces total execution time when gathering data from independent sources.
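
The time savings come from overlapping I/O waits. A plain `asyncio` sketch (a conceptual illustration only, not how CrewAI implements `async_execution` internally) shows why two independent gathering steps of one second each finish in roughly one second total when run concurrently:

```python
import asyncio
import time

async def gather_pricing(provider: str) -> str:
    # Stand-in for an agent task that spends its time waiting on I/O
    # (LLM calls, web searches).
    await asyncio.sleep(1.0)
    return f"{provider} pricing table"

async def main() -> float:
    start = time.perf_counter()
    # Run both independent gathering steps concurrently, mirroring the
    # two async_execution=True tasks above.
    aws, azure = await asyncio.gather(
        gather_pricing("AWS"),
        gather_pricing("Azure"),
    )
    elapsed = time.perf_counter() - start
    print(f"Got {aws!r} and {azure!r} in {elapsed:.1f}s")
    return elapsed

elapsed = asyncio.run(main())
```

Run sequentially, the same two steps would take about two seconds; the dependent comparison step still starts only after both results exist, just as `comparison_task` waits on its `context`.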

## Task Output Callbacks

You can attach a callback to any task to process its output as soon as it completes:

```python
def log_task_output(output):
    print(f"Task completed: {output.description[:50]}")
    print(f"Output length: {len(output.raw)} characters")

task = Task(
    description="Summarize the latest AI safety research papers.",
    expected_output="A bullet-point summary of 5 key papers.",
    agent=researcher,
    callback=log_task_output,
)
```

Callbacks are useful for logging, saving intermediate results to disk, or triggering downstream processes outside the crew.

## FAQ

### Can a single agent be assigned to multiple tasks?

Yes. An agent can handle as many tasks as you assign it. In sequential mode, the agent will execute each task in order. This is common for specialized agents — a "researcher" agent might handle three different research tasks before a "writer" agent synthesizes the results.

### What happens if a task's expected_output does not match what the agent produces?

CrewAI does not enforce strict schema validation on `expected_output`. The field is used as guidance in the agent's prompt, not as a runtime validator. If you need strictly formatted output, use a Pydantic model with the `output_pydantic` parameter, which parses and validates the agent's response against your schema.
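
A minimal sketch of that pattern, applied to the SWOT analysis task from earlier (the model fields here are illustrative, not a CrewAI-prescribed schema):

```python
from pydantic import BaseModel

class SwotAnalysis(BaseModel):
    """Illustrative schema for the SWOT analysis task's output."""
    strengths: list[str]
    weaknesses: list[str]
    opportunities: list[str]
    threats: list[str]

# Attached to the task, CrewAI parses and validates the agent's response
# into a SwotAnalysis instance:
#
# analysis_task = Task(
#     description="Analyze the business implications of quantum computing advances.",
#     expected_output="A SWOT analysis as structured JSON.",
#     agent=analyst,
#     output_pydantic=SwotAnalysis,
# )
```

Validation failures surface as errors instead of silently malformed text, which is what makes this safer than relying on `expected_output` alone.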

### How do I pass dynamic inputs to tasks at runtime?

Use curly-brace placeholders in your task description and pass values through `crew.kickoff(inputs={})`. For example, a description containing `{topic}` will be replaced when you call `crew.kickoff(inputs={"topic": "quantum computing"})`.
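
Conceptually, the interpolation behaves like Python's `str.format` applied to the description and expected_output strings (a simplified sketch of the behavior, not CrewAI's actual implementation):

```python
# Template as you would write it in Task(description=...).
description_template = (
    "Research the current state of {topic} and its impact on {industry}."
)

# Values as you would pass them to crew.kickoff(inputs=...).
inputs = {"topic": "quantum computing", "industry": "finance"}

# Roughly what the agent sees after interpolation:
rendered = description_template.format(**inputs)
print(rendered)
```

Every placeholder in the template must have a matching key in `inputs`, or interpolation fails.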

---

#CrewAI #Tasks #WorkflowDesign #MultiAgent #Python #AgenticAI #LearnAI #AIEngineering
