
Single vs Multi Agents - When One Brain Isn’t Enough

Thinking out loud about Agentic AI system design; what to keep simple, when to split, and why complexity isn’t always better. Still early in this space, but experimenting as I go.


For the past week, I've been on a quest to explore one of the most talked-about topics right now: Agentic AI. While playing with tools like PydanticAI, CrewAI, and smolagents, I found myself circling back to a basic yet integral question:

When do we actually need to chart out a multi-agent workflow or a graph architecture?

tl;dr: I don't have a full answer yet, but here's where I'm at so far: what I've tried, why a single agent is a good place to start, when multiple agents might make sense, and why graphs, powerful as they are, are worth considering only for highly complex systems.

🧠 So… What's so cool about an "Agent" anyway?

In simple terms, an agentic system leverages an AI model to interact with its environment to achieve a user-defined objective. It combines reasoning, planning, and the execution of actions (often via external tools) to fulfill tasks.

Unlike a traditional script that runs line by line, agents are role-driven and often decision-aware. Instead of passively responding to prompts, an agent is like, “Cool, I’ve got a goal. I know my purpose. I have the right set of tools handy. Let me figure out how to get there reasonably.”

Let’s say we have a task like: “Check the current weather in XYZ state and recommend an outfit.”

In an agent-based setup, the agent might:

  • Call a weather API, i.e., use tools (APIs, calculators, databases)
  • Interpret the temperature, i.e., make decisions based on intermediate outputs
  • Respond based on logic: “It’s 10°C. Wear a warm jacket.”, i.e., follow instructions with some level of reasoning
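To make that concrete, here's a rough sketch of the weather task as a single agent, written in the PydanticAI style (`Agent`, `tool_plain`, `run_sync`). The model name and the stubbed weather lookup are placeholders rather than a real API, and older pydantic-ai releases expose the result as `result.data` instead of `result.output`.

```python
from pydantic_ai import Agent

# Minimal single-agent sketch (PydanticAI-style; model name is a placeholder).
agent = Agent(
    'openai:gpt-4o',
    system_prompt='Check the current weather, then recommend an outfit.',
)

@agent.tool_plain
def get_weather(city: str) -> str:
    """Return the current temperature for a city (stubbed for illustration)."""
    return f'It is 10°C in {city}.'  # swap in a real weather API call here

result = agent.run_sync('What should I wear in XYZ today?')
print(result.output)  # e.g. "It's 10°C. Wear a warm jacket."
# note: older pydantic-ai releases expose this as result.data
```

One model, one tool, one call: the agent decides on its own when to invoke `get_weather` and how to phrase the recommendation.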

👤 Single Agent, Straightforward Sequence

A single agent is a one-shot, sequential "single processor": one continuous thought process that carries the defined actions from start to finish without losing much context between steps.

This approach feels clean while you're still figuring out how the system behaves. It works well when the task is focused and doesn't involve many layers, e.g., pulling logs, interpreting output, or generating a config from known inputs.
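As an example of that last case, a single agent with a typed output can turn known inputs straight into a validated config. This is only a sketch: the schema and model name are made up, and depending on your pydantic-ai version the keyword is `output_type` or `result_type` (with the result on `.output` or `.data`).

```python
from pydantic import BaseModel
from pydantic_ai import Agent

# Hypothetical config schema, purely for illustration.
class ServiceConfig(BaseModel):
    replicas: int
    memory_mb: int
    log_level: str

config_agent = Agent(
    'openai:gpt-4o',
    output_type=ServiceConfig,  # result_type= in older pydantic-ai releases
    system_prompt='Produce a sensible service config for the described workload.',
)

config = config_agent.run_sync('Small internal API, roughly 50 requests/minute.').output
print(config.replicas, config.memory_mb, config.log_level)
```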

👥 Where Multi-Agent Might Make Its Case

But sometimes, the task itself starts to feel like more than one person's job. You need one part that can research, another that decides what to do with that research, and maybe a third that neatly executes that decision.

At first, it feels like one well-designed agent should be able to handle that entire flow, especially if it has access to multiple tools and the prompt is clear enough. But then I wonder: would it be more effective to split these roles? One agent that focuses just on gathering, another that interprets, and a third that acts? No doubt that would add parallelization and specialization, but at the same time, hand-offs between agents can drop context, coordinating dependent agents can lead to conflicting decisions, the extra token usage turns into overhead, and so on.
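For the sake of illustration, here's what that three-way split might look like with plain PydanticAI agents chained by hand. All the role names and prompts are hypothetical, and as above, older releases expose `.data` instead of `.output`. Notice that every hand-off means re-sending context as text, which is exactly where the token overhead creeps in.

```python
from pydantic_ai import Agent

# Three illustrative roles; prompts and model name are placeholders.
researcher = Agent('openai:gpt-4o', system_prompt='Gather raw facts about the topic.')
analyst = Agent('openai:gpt-4o', system_prompt='Turn the facts into one clear recommendation.')
executor = Agent('openai:gpt-4o', system_prompt='Turn the recommendation into concrete steps.')

def run_pipeline(topic: str) -> str:
    facts = researcher.run_sync(f'Research: {topic}').output
    recommendation = analyst.run_sync(facts).output   # hand-off 1: context travels as text
    plan = executor.run_sync(recommendation).output   # hand-off 2: more tokens, more drift risk
    return plan
```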

I'm not sure yet if that separation actually improves performance or just adds structure for the sake of it. That's something I'm still exploring: where the line is between one well-instructed agent and a crew of focused ones.

🕸 Graphs? Only If the Task Demands It

An excerpt from the PydanticAI documentation:

Don’t use a nail gun unless you need a nail gun

If PydanticAI agents are a hammer, and multi-agent workflows are a sledgehammer, then graphs are a nail gun:

  • sure, nail guns look cooler than hammers
  • but nail guns take a lot more setup than hammers
  • and nail guns don’t make you a better builder, they make you a builder with a nail gun
  • Lastly, (and at the risk of torturing this metaphor), if you’re a fan of medieval tools like mallets and untyped Python, you probably won’t like nail guns or our approach to graphs. (But then again, if you’re not a fan of type hints in Python, you’ve probably already bounced off PydanticAI to use one of the toy agent frameworks — good luck, and feel free to borrow my sledgehammer when you realize you need it)

That metaphor lands well. And for me, it sums up exactly where I stand with graphs right now: curious, but not fully convinced I need them yet.

They're powerful, sure. But they also come with extra complexity and structure that I haven't personally run into a need for. So, unless the workflow truly demands that kind of orchestration, they feel like the kind of nail gun to reach for only after hitting bottlenecks with simpler setups.
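Just to show how much more scaffolding is involved, here's a toy graph in the pydantic-graph style: two nodes handing control back and forth until a condition ends the run. The node names and logic are made up, and the exact API surface (the generics, how `run_sync` returns its result) varies between versions, so treat this as a shape rather than a recipe.

```python
from __future__ import annotations
from dataclasses import dataclass
from pydantic_graph import BaseNode, End, Graph, GraphRunContext

# Toy example: keep incrementing until the value is divisible by 5.
@dataclass
class CheckDone(BaseNode[None, None, int]):
    value: int
    async def run(self, ctx: GraphRunContext) -> Increment | End[int]:
        return End(self.value) if self.value % 5 == 0 else Increment(self.value)

@dataclass
class Increment(BaseNode):
    value: int
    async def run(self, ctx: GraphRunContext) -> CheckDone:
        return CheckDone(self.value + 1)

graph = Graph(nodes=(CheckDone, Increment))
result = graph.run_sync(CheckDone(7))
print(result.output)  # prints 10 in recent versions; older ones return the result differently
```

Compared to the one-line `agent.run_sync(...)` earlier, that's a lot of ceremony for what is essentially a loop, which is exactly the nail-gun point.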

👩🏻‍💻 What I am Still Diving Into

This isn’t a deep dive or a framework comparison. It’s just me thinking out loud.

Right now, I’m still experimenting with Agentic AI.

Still deciding when tooling improves a task, and when it just adds layers.

Still figuring out which frameworks, agents, and LLMs offer real flexibility instead of unnecessary constraints.

But that’s also the fun part: staying curious without needing all the answers yet. Stay tuned!

This post is licensed under CC BY 4.0 by the author.