A multi-agent research system built on AgentEx that demonstrates orchestrator + subagent communication using Temporal workflows. An orchestrator agent dispatches specialized research subagents (GitHub, Docs, Slack) in parallel, collects their findings, and synthesizes a comprehensive answer.
```
              ┌─────────────────────┐
              │    Orchestrator     │
User ────────▶│      (GPT-5.1)      │
Query         │    Dispatches &     │
              │    Synthesizes      │
              └───┬─────┬─────┬─────┘
                  │     │     │
      ┌───────────┘     │     └─────────┐
      ▼                 ▼               ▼
┌────────────┐   ┌────────────┐   ┌────────────┐
│   GitHub   │   │    Docs    │   │   Slack    │
│ Researcher │   │ Researcher │   │ Researcher │
│  (GPT-4.1  │   │  (GPT-4.1  │   │  (GPT-4.1  │
│    mini)   │   │    mini)   │   │    mini)   │
│            │   │            │   │            │
│ GitHub MCP │   │ Web Search │   │ Slack MCP  │
│   Server   │   │ + Fetcher  │   │   Server   │
└────────────┘   └────────────┘   └────────────┘
```
The orchestrator creates child tasks on subagents using `adk.acp.create_task()`, sends queries via `EVENT_SEND`, and waits for `research_complete` callback events.
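The fan-out/fan-in flow can be sketched generically with asyncio. This is purely illustrative: the real implementation uses Temporal workflows and the ADK calls above, and `dispatch_subagent`/`orchestrate` are stand-in names, not AgentEx APIs:

```python
import asyncio

async def dispatch_subagent(name: str, query: str) -> str:
    # Stand-in for adk.acp.create_task() + EVENT_SEND + waiting on the
    # research_complete callback; here we just return a canned finding.
    await asyncio.sleep(0)
    return f"[{name}] findings for: {query}"

async def orchestrate(query: str) -> str:
    # Fan out to all three researchers in parallel, then fan in.
    findings = await asyncio.gather(
        dispatch_subagent("github", query),
        dispatch_subagent("docs", query),
        dispatch_subagent("slack", query),
    )
    # Stand-in for the synthesis step the orchestrator's LLM performs.
    return "\n".join(findings)

if __name__ == "__main__":
    print(asyncio.run(orchestrate("How does task routing work?")))
```

In the real system the "wait" step is a Temporal workflow signal/event wait rather than an in-process coroutine, which is what makes the dispatch durable across worker restarts.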
All subagents write messages to the orchestrator's task ID (passed as `source_task_id`), so the user sees all research progress in a single conversation thread.
Subagents use a batched `Runner.run()` pattern with conversation compaction between batches to stay within Temporal's ~2MB payload limit during long research sessions.
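The compaction idea can be illustrated with a simple size-bounded history. This is a generic sketch under stated assumptions — the actual agents summarize between `Runner.run()` batches rather than truncating, and the ~2MB figure is Temporal's default payload cap:

```python
import json

PAYLOAD_LIMIT = 2 * 1024 * 1024  # Temporal's default payload cap (~2MB)

def compact_history(messages: list[dict], limit: int = PAYLOAD_LIMIT) -> list[dict]:
    """Drop the oldest non-system messages until the serialized
    conversation fits under the payload limit.

    Illustrative only: real compaction would summarize the dropped
    turns instead of discarding them outright.
    """
    system, rest = messages[:1], messages[1:]
    while rest and len(json.dumps(system + rest).encode("utf-8")) > limit:
        rest = rest[1:]  # evict the oldest turn
    return system + rest
```

Running a batch, compacting, then running the next batch keeps each Temporal payload bounded no matter how long the research session runs.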
The GitHub and Slack subagents use MCP (Model Context Protocol) servers via `StatelessMCPServerProvider` for tool access.
| Agent | Port | Model | Tools |
|---|---|---|---|
| Orchestrator | 8010 | gpt-5.1 | `dispatch_github`, `dispatch_docs`, `dispatch_slack` |
| GitHub Researcher | 8011 | gpt-4.1-mini | GitHub MCP (`search_code`, etc.) |
| Docs Researcher | 8012 | gpt-4.1-mini | `web_search` (Tavily), `fetch_docs_page` |
| Slack Researcher | 8013 | gpt-4.1-mini | Slack MCP (`search_messages`, etc.) |
- AgentEx CLI installed
- OpenAI API key
- GitHub Personal Access Token (for GitHub researcher)
- Tavily API key (for Docs researcher; get one at https://tavily.com)
- Slack Bot Token (for Slack researcher)
Create a `.env` file in each agent directory with the required keys:

`orchestrator/.env`:

```
OPENAI_API_KEY=your-openai-key
```

`github_researcher/.env`:

```
OPENAI_API_KEY=your-openai-key
GITHUB_PERSONAL_ACCESS_TOKEN=your-github-token
```

`docs_researcher/.env`:

```
OPENAI_API_KEY=your-openai-key
TAVILY_API_KEY=your-tavily-key
```

`slack_researcher/.env`:

```
OPENAI_API_KEY=your-openai-key
SLACK_BOT_TOKEN=your-slack-bot-token
SLACK_TEAM_ID=your-slack-team-id
```
Start each agent in a separate terminal:

```shell
# Terminal 1 - Orchestrator
cd orchestrator
agentex agents run --manifest manifest.yaml

# Terminal 2 - GitHub Researcher
cd github_researcher
agentex agents run --manifest manifest.yaml

# Terminal 3 - Docs Researcher
cd docs_researcher
agentex agents run --manifest manifest.yaml

# Terminal 4 - Slack Researcher
cd slack_researcher
agentex agents run --manifest manifest.yaml
```

Open the AgentEx UI and send a research question to the orchestrator agent. You should see:
- The orchestrator dispatching queries to subagents
- Each subagent streaming its research progress to the same conversation
- The orchestrator synthesizing all findings into a final answer
You can adapt the subagents to search different sources:
- Replace the GitHub MCP server with any other MCP server
- Replace Tavily with your preferred search API
- Replace the Slack MCP with any communication platform's MCP
- Update the system prompts to match your target repositories, docs, and channels
To add a new research subagent:
- Copy one of the existing subagent directories
- Update `manifest.yaml` with a new agent name and port
- Modify the system prompt and tools in `workflow.py`
- Add a matching dispatch tool in the orchestrator's `workflow.py`
The key pattern that makes all agents write to the same conversation:

- The orchestrator passes its `task_id` as `source_task_id` when creating child tasks
- Subagents extract `parent_task_id = params.task.params.get("source_task_id")`
- Subagents use `message_task_id = parent_task_id or params.task.id` for all `adk.messages.create()` calls and `TemporalStreamingHooks`
- As a result, all messages and streamed LLM output appear in the orchestrator's task conversation
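The routing decision itself is small enough to show directly. This sketch mirrors the expressions above but wraps them in a standalone function; the `params` shape is simplified to a plain dict for illustration:

```python
def resolve_message_task_id(own_task_id: str, task_params: dict) -> str:
    """Return the task ID a subagent should write its messages to.

    If the orchestrator passed its own task ID as source_task_id,
    write there so everything lands in one conversation thread;
    otherwise fall back to this subagent's own task.
    """
    parent_task_id = task_params.get("source_task_id")
    return parent_task_id or own_task_id

# Dispatched by the orchestrator: messages go to the parent's thread.
assert resolve_message_task_id("sub-1", {"source_task_id": "orch-9"}) == "orch-9"
# Run standalone: messages stay on the subagent's own task.
assert resolve_message_task_id("sub-1", {}) == "sub-1"
```

Because the same resolved ID is used for both `adk.messages.create()` and the streaming hooks, intermediate tool output and final answers all appear in one thread.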