Building multi-agent solutions: what are your options?

When we meet with customers, the topic of a “multi-agent solution” often comes up. This isn’t surprising. There’s a lot of excitement around their potential to transform business processes, strengthen customer relationships, and more.

The first question you should ask yourself, though, is this: “Do I really need a multi-agent solution?” Often, we find that a single agent with a range of tools, or a simple workflow, is sufficient. If that’s the case, always go for that option!

On the other hand, if you do need a multi-agent solution, there are several things to think about. Suppose you want to build something like this:

Generic multi-agent setup

Users interact with a main agent that maintains the conversation with the user. When the user asks about a project, a RAG agent retrieves project information. If the user also asks to research or explain the technologies used in the project, the web agent is used to retrieve information from the Internet.

⚠️ If I were to follow my own advice, this would be a single agent with tools. There is no need for multiple agents here. However, let’s use this as an example because it’s easy to reason about.

What are some of your options to build this? The list below is not exhaustive but contains common patterns:

  • Choose a framework (or use the lower-level SDKs) and run everything in the same process
  • Choose an Agent PaaS like Azure AI Foundry Agents: the agents can be defined in the platform; they run independently and can be linked together using the connected agents feature
  • Create the agents in your framework of choice, run them as independent processes and establish a method of communication between them; in this post, we will use Google’s A2A (Agent2Agent) protocol as an example. Other options are IBM’s ACP (Agent Communication Protocol) or “roll your own”

Let’s look at these three in a bit more detail.

In-Process Agents

Running multiple agents in the same process and having them work together is relatively easy. Let’s look at how to do this with the OpenAI Agents SDK. Other frameworks use similar approaches.

Multi-agent in-process using the OpenAI Agents SDK

Above, all agents are written using the OpenAI Agents SDK. In code, you first define the RAG Agent and the Web Agent, each with their own tools. In the OpenAI Agents SDK, both the RAG tool and the web search tool are hosted tools provided by OpenAI. See https://openai.github.io/openai-agents-python/tools/ for more information about the FileSearchTool and the WebSearchTool.
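
To make this concrete, here is a minimal sketch of the two specialized agents; the instructions, model, and vector store ID are illustrative assumptions:

from agents import Agent, FileSearchTool, WebSearchTool

# RAG Agent: answers project questions from an OpenAI vector store
rag_agent = Agent(
    name="RAG Agent",
    instructions="Answer questions about projects using file search.",
    model="gpt-4o-mini",
    tools=[FileSearchTool(vector_store_ids=["YOUR_VECTOR_STORE_ID"])],
)

# Web Agent: researches technologies with OpenAI's hosted web search
web_agent = Agent(
    name="Web Agent",
    instructions="Research and explain technologies using web search.",
    model="gpt-4o-mini",
    tools=[WebSearchTool()],
)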

Next, the Conversation Agent gets created using the same approach. This time, however, two tools are added: the RAG Agent Tool and the Web Agent Tool. These tools get called by the Conversation Agent based on their descriptions. This is simply tool calling in action, where each tool calls another agent and returns that agent’s result. The way these agents interact with each other is hidden from you; the SDK takes care of it.

You can find an example of this in my agent_config GitHub repo. The sample code below shows how this works:

# Create the two specialized agents from their JSON configs
rag_agent = create_agent_from_config("rag")
web_agent = create_agent_from_config("web")

# Each entry describes an agent as a tool; the description drives
# when the conversation agent decides to call it
agent_as_tools = {
    "rag": {
        "agent": rag_agent,
        "name": "rag",
        "description": "Provides information about projects"
    },
    "web": {
        "agent": web_agent,
        "name": "web",
        "description": "Gets information about technologies"
    }
}

# The conversation agent receives the other agents as tools
conversation_agent = create_agent_from_config("conversation", agent_as_tools)

# Run the conversation agent on the user's question
result = await Runner.run(conversation_agent, user_question)

Note that I am using a helper function here that creates an agent from a configuration file that contains the agent instructions, model and tools. Check my previous post for more information. The repo used in this post uses slightly different agents but the concept is the same.
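
Under the hood, the SDK exposes this pattern through an agent’s as_tool() method. A minimal sketch of what the helper does with each agent_as_tools entry (my reconstruction, not the exact repo code):

# Turn the RAG agent into a tool the conversation agent can call;
# the description tells the calling agent when to use it
rag_tool = rag_agent.as_tool(
    tool_name="rag",
    tool_description="Provides information about projects",
)
# rag_tool is then appended to the conversation agent's tools list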

Creating a multi-agent solution in a single process, using a framework that supports calling other agents as tools, is relatively straightforward. However, what if you want to use the RAG Agent in other agents or workflows? In other words, you want reusability! Let’s see how to do this with the other approaches.

Using an Agent PaaS: Azure AI Foundry Agents

Azure AI Foundry Agents is a PaaS solution to create and run agents with enterprise-level features such as isolated networking. After creating an Azure AI Foundry resource and project, you can define agents in the portal:

Agents defined in Azure AI Foundry

⚠️ You can also create these agents from code (e.g., Foundry SDK or Semantic Kernel) which gives you extra flexibility in agent design.

The Web and RAG agents have their own tools, including hosted tools provided by Foundry, and can run on their own. This is already an improvement compared to the previous approach: agents can be reused from other agents, workflows or any other application.

Azure AI Foundry allows you to connect agents to each other. This uses the same approach as in the OpenAI Agents SDK: agents as tools. Below, the Conversation Agent is connected to the other two agents:

Connected Agents for the Conversation Agent

The configuration of a connected agent is shown below and has a name and description:

It all fits together like in the diagram below:

Multi-agent with Azure AI Foundry

As discussed above, each agent is a standalone entity. You can interact with these agents using the AI Foundry Agents protocol, which is an evolution of the OpenAI Assistants protocol. You can read more about it here. In short, to talk to an agent you do the following:

  • Create the agent in code or reference an existing agent (e.g., our conversation agent)
  • Create a thread
  • Put a message on the thread (e.g., the user’s question or a question from another agent via the connected agents principle)
  • Run the thread on the agent and grab the response

Below is an example in Python:

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.ai.agents.models import ListSortOrder

# Connect to the Azure AI Foundry project
project = AIProjectClient(
    credential=DefaultAzureCredential(),
    endpoint="https://YOUR_FOUNDRY_ENDPOINT")

# Reference an existing agent (e.g., the conversation agent)
agent = project.agents.get_agent("YOUR_AGENT_ID")

# Create a thread to hold the conversation
thread = project.agents.threads.create()
print(f"Created thread, ID: {thread.id}")

# Put the user's question on the thread
message = project.agents.messages.create(
    thread_id=thread.id,
    role="user",
    content="What tech is used in some of Contoso's projects?"
)

# Run the thread on the agent and wait for completion
run = project.agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id=agent.id)

if run.status == "failed":
    print(f"Run failed: {run.last_error}")
else:
    # Print the full conversation, oldest message first
    messages = project.agents.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING)

    for message in messages:
        if message.text_messages:
            print(f"{message.role}: {message.text_messages[-1].text.value}")

The connected agents feature uses the same protocol under the hood. Like in the OpenAI Agents SDK, this is hidden from you.

When you mainly use Azure AI Foundry agents, there is no direct need for agent-to-agent protocols like A2A or ACP. In fact, even when you have an agent that is not created in Azure AI Foundry, you can simply create a tool in that agent. The tool can then use the thread/message/run approach to get a response from the agent hosted in Foundry. This can all run isolated in your own network if you wish.
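
For example, an agent built with the OpenAI Agents SDK could expose the Foundry-hosted RAG agent as a regular tool. A hedged sketch that reuses the thread/message/run calls from above (the endpoint and agent ID are placeholders):

from agents import function_tool
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.ai.agents.models import ListSortOrder

project = AIProjectClient(
    credential=DefaultAzureCredential(),
    endpoint="https://YOUR_FOUNDRY_ENDPOINT")

@function_tool
def ask_rag_agent(question: str) -> str:
    """Ask the Foundry-hosted RAG agent about projects."""
    agent = project.agents.get_agent("YOUR_RAG_AGENT_ID")
    thread = project.agents.threads.create()
    project.agents.messages.create(thread_id=thread.id, role="user", content=question)
    run = project.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
    if run.status == "failed":
        return f"Run failed: {run.last_error}"
    # Return the most recent assistant message from the thread
    messages = project.agents.messages.list(thread_id=thread.id, order=ListSortOrder.DESCENDING)
    for message in messages:
        if message.role == "assistant" and message.text_messages:
            return message.text_messages[-1].text.value
    return "No response"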

You could argue that the protocol used by Azure AI Foundry is not an industry standard: you cannot simply use it in combination with other frameworks, unless you use something like https://pypi.org/project/llamphouse/, a project written by colleagues of mine that is protocol-compatible with the OpenAI Assistants API.

Let’s take a look at the third approach which uses a protocol that aspires to be a standard and can be used together with any agent framework: Google’s A2A.

Using Google’s A2A in a multi-agent solution

The basic idea of Google’s A2A is the creation of a standard protocol for agent-to-agent communication. Without going into the details of A2A (that’s for another post), the solution looks like this:

A multi-agent solution with A2A

A2A allows you to wrap any agent, written in any framework, in a standard JSON-RPC API. With an A2A client, you can send messages to that API, which uses an Agent Executor to invoke your actual agent. Your agent produces the response, and a message is sent back to the client.

Above, there are two A2A-based agents:

  • The RAG Agent uses Azure AI Foundry and its built-in vector store tool
  • The Web Agent uses OpenAI Agent SDK and its hosted web search tool

The conversation agent can be written in any framework, as long as you define tools for that agent that use the A2A protocol (via an A2A client) to send messages to the other agents. This is again the agents-as-tools pattern in action.
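
To illustrate, such a tool could post a JSON-RPC message/send request directly to the RAG Agent’s A2A endpoint. A minimal sketch (the endpoint URL is a placeholder; the request shape matches the inspector example shown below):

import uuid

import httpx
from agents import function_tool

RAG_AGENT_URL = "http://localhost:9999/"  # placeholder A2A endpoint

@function_tool
def ask_rag_agent_a2a(question: str) -> str:
    """Send a question to the RAG Agent over A2A (JSON-RPC message/send)."""
    payload = {
        "id": str(uuid.uuid4()),
        "jsonrpc": "2.0",
        "method": "message/send",
        "params": {
            "message": {
                "kind": "message",
                "messageId": str(uuid.uuid4()),
                "role": "user",
                "parts": [{"kind": "text", "text": question}],
            }
        },
    }
    response = httpx.post(RAG_AGENT_URL, json=payload, timeout=60)
    response.raise_for_status()
    # The JSON-RPC result is a task like the one shown later in this post
    task = response.json()["result"]
    # Concatenate the text parts of the returned artifacts
    return " ".join(
        part["text"]
        for artifact in task.get("artifacts", [])
        for part in artifact.get("parts", [])
        if part.get("kind") == "text"
    )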

To illustrate this standards-based approach, let’s use the A2A Inspector to send a message to the RAG Agent. As long as your agent has an A2A wrapper, this inspector will be able to talk to it. First, we connect to the agent to get its agent card:

Connecting to the RAG Agent with A2A

The agent card is defined in code and contains information about what the agent can do via its skills. A minimal sketch of such a card, assuming the Python a2a-sdk package (names, URL and version are illustrative), could look like this:
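
from a2a.types import AgentCapabilities, AgentCard, AgentSkill

# A skill describes one capability of the agent
rag_skill = AgentSkill(
    id="project_info",
    name="Project information",
    description="Provides information about Contoso projects",
    tags=["rag", "projects"],
)

# The card is what A2A clients (like the inspector) retrieve on connect
rag_agent_card = AgentCard(
    name="RAG Agent",
    description="Answers questions about projects",
    url="http://localhost:9999/",
    version="1.0.0",
    defaultInputModes=["text"],
    defaultOutputModes=["text"],
    capabilities=AgentCapabilities(streaming=True),
    skills=[rag_skill],
)

Once connected, I can send a message to the agent using the A2A protocol: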

Sending a message which results in a task

The message that got sent was the following (JSON-RPC):

{
  "id": "msg-1752245905034-georiakp8",
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "configuration": {
      "acceptedOutputModes": [
        "text/plain",
        "video/mp4"
      ]
    },
    "message": {
      "contextId": "27effaaa-98af-44c4-b15f-10d682fd6496",
      "kind": "message",
      "messageId": "60f95a30-535a-454f-8a8d-31f52d7957b5",
      "parts": [
        {
          "kind": "text",
          "text": "What is project Astro (I might have the name wrong though)"
        }
      ],
      "role": "user"
    }
  }
}

This was the response:

{
  "artifacts": [
    {
      "artifactId": "d912666b-f9ff-4fa6-8899-b656adf9f09c",
      "parts": [
        {
          "kind": "text",
          "text": "Project \"Astro\" appears to refer to \"Astro Events,\" which is a web platform designed for users to discover, share, and RSVP to astronomy-related events worldwide. The platform includes features such as interactive sky maps, event notifications, and a community forum for both amateur and professional astronomers. If you were thinking about astronomy or space-related projects, this may be the correct project you had in mind【4:0†astro_events.md】. If you're thinking of something else, let me know!"
        }
      ]
    }
  ],
  "contextId": "27effaaa-98af-44c4-b15f-10d682fd6496",
  "history": [
    HISTORY HERE
  ],
  "id": "d5af08b3-93a0-40ec-8236-4269c1ed866d",
  "kind": "task",
  "status": {
    "state": "completed",
    "timestamp": "2025-07-11T14:58:38.029960+00:00"
  },
  "validation_errors": []
}

If you are building complex multi-agent solutions, where multiple teams write their agents in different frameworks and development languages, establishing communication standards pays off in the long run.

However, this approach is considerably more complex than the other two. We have only scratched the surface of A2A here and have not touched on aspects such as:

  • How to handle authentication?
  • How to handle long running tasks?
  • How to scale your agents to multiple instances and how to preserve state?
  • How to handle logging and tracing across agent boundaries?

⚠️ Most of the above is simply software engineering and has little to do with LLM-based agents!

Conclusion

In this article, we discussed three approaches to building a multi-agent solution:

Approach       Complexity   Reusability   Standardization        Best For
In-process     Low          Limited       No                     Simple, single-team use cases
Agent PaaS     Medium       Good          No (vendor-specific)   Org-wide, moderate complexity
A2A Protocol   High         Excellent     Yes                    Cross-team, cross-platform needs

When you really need a multi-agent solution, I strongly believe that the first two approaches should cover 90% of use cases.

In complex cases, the last option can be considered, although its complexity should not be underestimated. To make this option clearer, a follow-up article will discuss how to create and connect agents with A2A in more detail.

Building Configurable AI Agents with the OpenAI Agents SDK

In this post, I will demonstrate how to build an AI agent system where agents can collaborate in different ways. The goal is to create agents that can either work independently with their own tools or collaborate with other agents through two distinct patterns: using agents as tools or handing off control to other agents.

In this post, I work directly with OpenAI models. You can also use Azure OpenAI if you want, but there are some caveats. Check this guide on using Azure OpenAI and potentially APIM with the OpenAI Agents SDK for more details.

All code can be found in this repo: https://github.com/gbaeke/agent_config. Not all code is shown in this post so be sure to check the repo.

Agent Factory and Configuration System

The core of this system is an agent factory that creates agents from JSON configurations stored either on the filesystem or in Redis. The factory reads configuration files that define:

  • Agent name and instructions
  • Which AI model to use (e.g., gpt-4o-mini)
  • Available tools from a centralized tool registry
  • Validation rules (each configuration is checked against a JSON schema)

For example, a weather agent configuration looks like:

{
    "name": "Weather Agent",
    "instructions": "You are a helpful assistant for weather questions...",
    "model": "gpt-4o-mini",
    "tools": ["get_current_weather", "get_current_temperature", "get_seven_day_forecast"]
}

The agent factory validates each configuration against a schema and can load configurations from either JSON files in the configs/ directory or from Redis when USE_REDIS=True (env var). This flexibility allows for dynamic configuration management in a potential production setting. Besides the configuration above, other configuration options could be useful, such as MCP servers, model settings like temperature, guardrails, and much more.
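
A stripped-down sketch of such a factory, with the Redis branch omitted and a simplified schema (my reconstruction, not the exact repo code):

import json
from pathlib import Path

from jsonschema import validate
from agents import Agent

from tools import all_tools  # centralized tool registry (see next section)

# Simplified schema; the real schema is more elaborate
CONFIG_SCHEMA = {
    "type": "object",
    "required": ["name", "instructions", "model", "tools"],
}

def create_agent_from_config(config_name: str) -> Agent:
    # Load the JSON configuration from the configs/ directory
    config = json.loads(Path(f"configs/{config_name}.json").read_text())

    # Validate the configuration against the JSON schema
    validate(instance=config, schema=CONFIG_SCHEMA)

    # Map tool names in the config to tool objects from the registry
    tools = [all_tools[tool_name] for tool_name in config["tools"]]

    return Agent(
        name=config["name"],
        instructions=config["instructions"],
        model=config["model"],
        tools=tools,
    )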

⚠️ Note that this is example code to explore ideas around agent configuration, agent factories, agents-as-tools versus handoffs, etc…

Tool Registry

All available tools are maintained in a centralized tools.py file that exports an all_tools dictionary (a sketch follows this list). This registry includes:

  • Function-based tools decorated with @function_tool
  • External API integrations (like web search): the built-in web search tool from OpenAI is used as an example here
  • Remote service calls: an example tool that calls a calculator agent exposed via an API (FastAPI); this is the same as the agent-as-tool pattern discussed below, except the agent is remote and served as an API
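
Here is what a stripped-down tools.py could look like; the weather stub and the "web_search" key are illustrative:

from agents import WebSearchTool, function_tool

@function_tool
def get_current_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"It is sunny in {city}."  # stub for illustration

# Maps tool names (as referenced in agent configs) to tool objects
all_tools = {
    "get_current_weather": get_current_weather,
    "web_search": WebSearchTool(),
}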

In a production environment, tool management would likely be handled differently – for example, through a dedicated tool registry service implementing the Model Context Protocol (MCP). This would allow tools to be dynamically registered, versioned, and accessed across multiple services while maintaining consistent interfaces and behaviors. The registry service could handle authentication, rate limiting, and monitoring of tool usage across all agents in the system.

Agent-as-Tool vs Handoff Patterns

The system supports two distinct collaboration patterns:

Agent-as-Tool

With this pattern, one agent uses another agent as if it were a regular tool. The main agent remains in control of the conversation flow. For example:

agent_as_tools = {
    "weather": {
        "agent": weather_agent,
        "name": "weather", 
        "description": "Get weather information based on the user's question"
    }
}
conversation_agent = create_agent_from_config("conversation", agent_as_tools)

When the conversation agent needs weather information, it calls the weather agent as a tool, gets the result, and continues processing the conversation. The main agent simply passes what it deems necessary to the agent used as a tool and uses that agent’s response to form its output.

The way you describe the tool is important here. It influences what the conversation agent sends to the weather agent as a parameter (the user’s question).

Handoff Pattern

With handoffs, control is transferred to another agent entirely. The receiving agent takes over the conversation until it’s complete or hands control back. This is implemented by passing agents to the handoffs parameter:

agent_handoffs = [simulator_agent]
conversation_agent = create_agent_from_config("conversation", {}, agent_handoffs)

The key difference is control: agent-as-tool keeps the original agent in charge, while handoffs transfer complete control to the receiving agent.

To implement the handoff pattern and allow transfer back to the original agent, support from the UI is needed. In the code, which uses a simple text-based UI, this is done with a current_agent variable that refers to the agent currently in charge, falling back to the base conversation agent when the user types ‘exit’. Note that this pattern is quite tricky to implement correctly. Often, the main agent thinks it can do the simulation on its own. And when the user does not type ‘exit’ but asks to go back to the conversation agent, the simulator agent might seem to comply while, in reality, you are still in the simulator. This can be mitigated by prompting both agents properly, but do not expect it to work automatically.

A look at the code

If you look at agent_from_config.py (the main script), you will notice that it is very simple. Most of the agent creation logic is in agent_factory.py which creates the agent from a config file or a config stored in Redis.

# Create specialized agents
weather_agent = create_agent_from_config("weather")
news_agent = create_agent_from_config("news")
simulator_agent = create_agent_from_config("simulator")

# Configure agents as tools
agent_as_tools = {
    "weather": {
        "agent": weather_agent,
        "name": "weather",
        "description": "Get weather information based on the user's full question"
    },
    "news": {
        "agent": news_agent,
        "name": "news", 
        "description": "Get news information based on the user's full question"
    }
}

# Configure handoff agents
agent_handoffs = [simulator_agent]

# Create main agent with both patterns
conversation_agent = create_agent_from_config("conversation", agent_as_tools, agent_handoffs)

Above, we create three agents: weather, news (with OpenAI’s built-in web search) and simulator. These agents are used by the conversation agent created at the end. To provide the conversation agent with two agents as tools and one handoff agent, the create_agent_from_config function, which returns a value of type Agent, has two optional parameters:

  • a dictionary with references to agents and their tool descriptions (used by the main agent to know when to call each agent)
  • a list with agents to handoff to

Here, these structures are built in code. This could also be driven by the configuration system, but that was not implemented; a hypothetical config extension is sketched below.
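
Hypothetically, the conversation agent’s configuration could be extended with fields for agent tools and handoffs; note that these fields are not implemented in the repo:

{
    "name": "Conversation Agent",
    "instructions": "You are a helpful assistant...",
    "model": "gpt-4o-mini",
    "tools": [],
    "agent_tools": [
        {
            "agent": "weather",
            "name": "weather",
            "description": "Get weather information based on the user's full question"
        }
    ],
    "handoffs": ["simulator"]
}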

To simulate a chat session, the following code is used:

from agents import Runner, TResponseInputItem

async def chat():
    current_agent = conversation_agent
    convo: list[TResponseInputItem] = []
    
    while True:
        user_input = input("You: ")
        
        if user_input == "exit":
            if current_agent != conversation_agent:
                current_agent = conversation_agent  # Return to main agent
                continue  # Don't send "exit" to the agent itself
            else:
                break
        
        convo.append({"content": user_input, "role": "user"})
        result = await Runner.run(current_agent, convo)
        
        convo = result.to_input_list()
        current_agent = result.last_agent  # Track agent changes

We always start with the conversation agent. When the conversation agent decides to do a handoff, the last_agent property of the result of the last run will be the simulator agent. The current agent is then set to that agent, so the conversation stays with the simulator agent. Note that the code also implements callbacks to tell you which agent is answering and what tools are called. Those callbacks are defined in agent_factory.py.

Built-in Tracing

The OpenAI Agents SDK includes tracing capabilities that are enabled by default. Every agent interaction, tool call, and handoff is automatically traced and can be viewed in the OpenAI dashboard. This provides visibility into:

  • Which agent handled each part of a conversation
  • What tools were called and when
  • Performance metrics for each interaction
  • The full conversation flow across multiple agents

Tracing can be customized or disabled if needed, but the default implementation provides comprehensive observability out of the box.
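
For example, the SDK lets you disable tracing globally or group multiple runs under a single named trace (a minimal sketch):

from agents import Runner, set_tracing_disabled, trace

# Disable tracing entirely (e.g., for local testing)
set_tracing_disabled(True)

# Alternatively, group several runs into one named trace in the dashboard
async def handle_question(question: str) -> str:
    with trace("Multi-agent conversation"):
        result = await Runner.run(conversation_agent, question)
        return result.final_output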

This is what the traces look like:

These traces provide detailed insights into a conversation’s flow. Track down issues and adjust agent configs, especially instructions, when things go awry.

Conclusion

In this post, we looked at a simple approach to build multi-agent systems using the OpenAI Agents SDK. The combination of configurable agents, centralized tool management, and flexible collaboration patterns creates a foundation for more complex AI workflows. The agent factory pattern allows for easy deployment and management of different agent configurations, while the built-in tracing provides the observability needed for production systems.

However, much more effort is required to implement this in production with more complex agents. As always, keep things as simple as possible and implement the minimum number of agents. You should also ask yourself whether you even need multiple agents, because state management, chat history, tracing, testing and so on become increasingly complex in a multi-agent world.