# Multi-Agent Orchestration
Three A2A agents communicating via the protocol: an LLM-powered orchestrator discovers remote agents and delegates tasks to the most appropriate one.
## What you'll learn

- Agent-to-agent communication via `A2A::Client`
- LLM-powered routing (Brute) to select the right agent for a request
- Agent card discovery at runtime
- Multi-service Docker Compose setup
## Architecture
| Service | Port | Role |
|---|---|---|
| greeter | 9292 | LLM-powered greeting generator |
| translator | 9293 | LLM-powered language translator |
| host | 9294 | Orchestrator -- discovers agents, routes requests via LLM |
The host agent discovers the greeter and translator agent cards on the first request, then uses an LLM to decide which agent should handle each incoming request. It delegates via `A2A::Client.send_message` and returns the remote agent's response.
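The delegated message travels as a JSON-RPC 2.0 `SendMessage` request, the same shape the curl commands in Steps 3 and 4 send by hand. A minimal sketch of that payload in plain Ruby (the helper name here is ours; the gem's `A2A::Client.send_message` presumably assembles something equivalent):

```ruby
require "json"
require "securerandom"

# Build a JSON-RPC 2.0 SendMessage request matching the wire format
# used throughout this example. `send_message_payload` is an
# illustrative helper, not part of the A2A gem's public API.
def send_message_payload(text, id: 1)
  {
    jsonrpc: "2.0",
    id: id,
    method: "SendMessage",
    params: {
      message: {
        messageId: SecureRandom.uuid,
        role: "ROLE_USER",
        parts: [{ text: text }]
      }
    }
  }
end

payload = send_message_payload("Greet Alice for her birthday")
puts JSON.pretty_generate(payload)
```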
## Prerequisites

Requires an LLM API key. Set one of:

- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GEMINI_API_KEY`
## Step 1: Start all three services

```sh
git clone https://github.com/general-intelligence-systems/a2a.git
cd a2a/examples/multi-agent
ANTHROPIC_API_KEY=sk-... docker compose up -d --build
```
Replace `sk-...` with your actual API key. If using OpenAI or Gemini, substitute the appropriate env var:

```sh
OPENAI_API_KEY=sk-... docker compose up -d --build
```
Expected output:

```
[+] Building 18.5s (27/27) FINISHED
[+] Running 3/3
 ✔ Container multi-agent-greeter-1     Started
 ✔ Container multi-agent-translator-1  Started
 ✔ Container multi-agent-host-1        Started
```
## Step 2: Check the logs

```sh
docker compose logs
```
Expected output:

```
greeter-1     | 0.0s info: main [pid=1] [2025-05-01 12:00:00 +0000]
greeter-1     |      | Greeter Agent starting on :9292...
translator-1  | 0.0s info: main [pid=1] [2025-05-01 12:00:00 +0000]
translator-1  |      | Translator Agent starting on :9293...
host-1        | 0.0s info: main [pid=1] [2025-05-01 12:00:00 +0000]
host-1        |      | Host Orchestrator starting on :9294...
host-1        | 0.0s info: main [pid=1] [2025-05-01 12:00:00 +0000]
host-1        |      | Remote agents: greeter=http://greeter:9292, translator=http://translator:9293
```
All three services should be running. The host knows where to find the remote agents but discovers their cards lazily on first request.
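Lazy card discovery amounts to fetch-once-then-memoize. A minimal sketch of that pattern, with the HTTP fetch replaced by an injected callable (the `AgentRegistry` class and fetcher are illustrative, not the gem's actual discovery API):

```ruby
# Sketch of lazy agent-card discovery: each remote card is fetched on
# first use and memoized thereafter, matching the "discovers their
# cards lazily on first request" behavior described above.
# The fetcher lambda stands in for a real HTTP GET of the agent card.
class AgentRegistry
  def initialize(urls, fetcher:)
    @urls = urls        # e.g. { "greeter" => "http://greeter:9292" }
    @fetcher = fetcher  # callable: url -> card hash
    @cards = {}
  end

  def card(name)
    @cards[name] ||= @fetcher.call(@urls.fetch(name))
  end
end

fetches = 0
fetcher = ->(url) { fetches += 1; { "name" => "Agent at #{url}" } }
registry = AgentRegistry.new({ "greeter" => "http://greeter:9292" }, fetcher: fetcher)

registry.card("greeter") # first call fetches the card
registry.card("greeter") # second call reuses the memoized card
puts fetches # => 1
```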
## Step 3: Send a greeting request (routed to greeter)

```sh
curl -s -X POST http://localhost:9294/a2a \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"SendMessage","params":{
    "message":{"messageId":"m1","role":"ROLE_USER","parts":[{"text":"Greet Alice for her birthday"}]}
  }}' | jq .
```
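The same request can be made from Ruby with the stdlib's `Net::HTTP` instead of curl. A sketch that builds the request (the send is commented out so the snippet works without the compose stack running; it assumes the host is on `localhost:9294`):

```ruby
require "json"
require "net/http"
require "uri"

# The same greeting request as the curl call above, built with
# Ruby's standard library.
uri = URI("http://localhost:9294/a2a")
req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
req.body = JSON.generate(
  jsonrpc: "2.0",
  id: 1,
  method: "SendMessage",
  params: {
    message: {
      messageId: "m1",
      role: "ROLE_USER",
      parts: [{ text: "Greet Alice for her birthday" }]
    }
  }
)

# Uncomment to send once the compose stack is up:
# res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
# puts JSON.parse(res.body).dig("result", "task", "status", "state")
puts req.body
```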
Expected output:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "task": {
      "id": "be85b851-1234-5678-9abc-def012345678",
      "contextId": "a1b2c3d4-5678-9abc-def0-123456789abc",
      "status": {
        "state": "TASK_STATE_COMPLETED",
        "timestamp": "2025-05-01T12:00:02.345Z"
      },
      "artifacts": [
        {
          "artifactId": "...",
          "name": "delegated-response",
          "description": "Response from greeter agent",
          "parts": [
            {
              "text": "Happy Birthday, Alice! May your special day be filled with joy, laughter, and all the wonderful things you deserve. Here's to another year of amazing adventures!"
            }
          ],
          "metadata": {
            "delegatedTo": "greeter",
            "remoteTaskId": "..."
          }
        }
      ],
      "history": [
        {
          "messageId": "...",
          "role": "ROLE_AGENT",
          "parts": [
            {
              "text": "[Delegated to greeter] Happy Birthday, Alice! May your special day be filled with joy, laughter, and all the wonderful things you deserve. Here's to another year of amazing adventures!"
            }
          ]
        }
      ]
    }
  }
}
```
The key things to notice:

- The host routed this to the greeter agent (see `metadata.delegatedTo`).
- The greeting is generated by the LLM, so the exact text will vary.
- The history shows the delegation with a `[Delegated to greeter]` prefix.
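Those fields are easy to pull out of a parsed response with `Hash#dig`. A self-contained example using a trimmed version of the JSON above (the elided values are shortened here for space):

```ruby
require "json"

# Extract the routing metadata and delegated text from a parsed
# response, using a trimmed version of the JSON shown above.
response = JSON.parse(<<~JSON)
  {
    "result": {
      "task": {
        "artifacts": [
          {
            "name": "delegated-response",
            "parts": [{ "text": "Happy Birthday, Alice!" }],
            "metadata": { "delegatedTo": "greeter" }
          }
        ]
      }
    }
  }
JSON

artifact = response.dig("result", "task", "artifacts", 0)
puts artifact.dig("metadata", "delegatedTo") # => greeter
puts artifact.dig("parts", 0, "text")        # => Happy Birthday, Alice!
```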
Check the host logs to see the routing decision:

```sh
docker compose logs host
```
You should see:

```
host-1  | 1.0s info: A2A::Agent [pid=1] [2025-05-01 12:00:01 +0000]
host-1  |      | Discovered agent: Greeter Agent at http://greeter:9292
host-1  | 1.0s info: A2A::Agent [pid=1] [2025-05-01 12:00:01 +0000]
host-1  |      | Discovered agent: Translator Agent at http://translator:9293
host-1  | 1.5s info: A2A::Agent [pid=1] [2025-05-01 12:00:01 +0000]
host-1  |      | Routing to 'greeter' for: Greet Alice for her birthday
```
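The routing step itself is conceptually small: describe the known agents to the model, ask for a single name back, and fall back to a default if the answer isn't recognized. A rough sketch of that logic, with the LLM call (which the real example makes via Brute) replaced by an injected callable so everything here is plain Ruby:

```ruby
# Sketch of an LLM routing decision. `llm` is a stand-in callable
# (prompt -> String); the real example uses Brute for this call.
# Unrecognized answers fall back to the first agent.
AGENTS = {
  "greeter"    => "Generates creative greetings",
  "translator" => "Translates text between languages"
}.freeze

def route(request_text, llm:)
  catalog = AGENTS.map { |name, desc| "- #{name}: #{desc}" }.join("\n")
  prompt = <<~PROMPT
    You are a router. Pick the best agent for the request.
    Agents:
    #{catalog}
    Request: #{request_text}
    Answer with the agent name only.
  PROMPT
  answer = llm.call(prompt).strip.downcase
  AGENTS.key?(answer) ? answer : AGENTS.keys.first
end

# A canned LLM for demonstration:
fake_llm = ->(_prompt) { "greeter" }
puts route("Greet Alice for her birthday", llm: fake_llm) # => greeter
```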
## Step 4: Send a translation request (routed to translator)

```sh
curl -s -X POST http://localhost:9294/a2a \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"SendMessage","params":{
    "message":{"messageId":"m2","role":"ROLE_USER","parts":[{"text":"Translate hello to Japanese"}]}
  }}' | jq .
```
Expected output:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "task": {
      "id": "f6a7b8c9-0123-4567-89ab-cdef01234567",
      "contextId": "d5e6f7a8-9012-3456-789a-bcdef0123456",
      "status": {
        "state": "TASK_STATE_COMPLETED",
        "timestamp": "2025-05-01T12:00:05.678Z"
      },
      "artifacts": [
        {
          "artifactId": "...",
          "name": "delegated-response",
          "description": "Response from translator agent",
          "parts": [
            {
              "text": "こんにちは (Konnichiwa)"
            }
          ],
          "metadata": {
            "delegatedTo": "translator",
            "remoteTaskId": "..."
          }
        }
      ],
      "history": [
        {
          "messageId": "...",
          "role": "ROLE_AGENT",
          "parts": [
            {
              "text": "[Delegated to translator] こんにちは (Konnichiwa)"
            }
          ]
        }
      ]
    }
  }
}
```
This time the host routed to the translator agent. The LLM generates the actual translation, so the exact output will vary.
## Step 5: Cleanup

```sh
docker compose down
```
## Files

| File | Purpose |
|---|---|
| `greeter/config.ru` | Greeter agent -- generates creative greetings via Brute |
| `translator/config.ru` | Translator agent -- translates text via Brute |
| `host/config.ru` | Host orchestrator -- discovers agents, routes via LLM |
| `*/falcon.rb` | Falcon server configs for each service |
| `*/Gemfile` | Per-service dependencies |
| `*/Dockerfile` | Per-service container build |
| `docker-compose.yml` | Three-service compose config |