Every week a new agent framework drops.
LangGraph. AutoGen. CrewAI. OpenAI Agents SDK. Claude Agent SDK. Haystack. Mastra.
Each promises to help you “build agents.” Tutorials show you how to wire up tool calls, memory, and chains. But most of them stop at the same question:
How do you run an agent?
Before installing any framework, there is a more fundamental question most guides skip:
What is an agent, structurally?
Not “an AI system that autonomously completes tasks.” That definition tells you nothing about how to design one.
What does the internal structure actually look like?
How do you reason about it?
How do you explain it to another engineer?
A useful way to answer these questions starts with a simple concept from mathematics: graphs.
Start with graph theory
At its core, almost any complex system can be represented as a graph.
Graph theory is one of the simplest ideas in mathematics:
nodes + edges = graph
Nodes represent things.
Edges represent relationships between them.
A city map is a graph.
A social network is a graph.
An organizational chart is a graph.
Graph theory does not care what sits inside each node. It only cares about how they connect.
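This indifference to node contents is easy to see in code. A minimal sketch (the place names are illustrative): an adjacency list where keys are nodes and values are the nodes they connect to.

```python
# A graph is nodes plus edges, nothing more. An adjacency list
# captures both: keys are nodes, values are outgoing connections.
city_map = {
    "Home": ["Office", "Gym"],
    "Office": ["Cafe"],
    "Gym": ["Cafe"],
    "Cafe": [],
}

nodes = list(city_map)
edges = [(a, b) for a, neighbors in city_map.items() for b in neighbors]
print(len(nodes), len(edges))  # 4 nodes, 4 edges
```

Swap the place names for users and you have a social network; for employees, an org chart. The structure is identical.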
Add rules and you get a finite state machine
Graph theory becomes more practical when you add rules.
A finite state machine (FSM) is a graph with two constraints:
- States
Nodes become states.
The number of states is finite and must be defined ahead of time.
- Transitions
Edges become transitions.
A transition occurs when a specific condition is met.
A traffic light is a simple FSM:
Red → Green → Yellow → Red
Three states, three transitions, repeating forever.
The value of a finite state machine is predictability. You can draw every possible path before the system ever runs.
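The traffic light fits in a few lines. Note that the entire behavior lives in one transition table, which is exactly why every path can be drawn in advance:

```python
# Traffic light as a finite state machine: three states,
# exactly one transition out of each.
TRANSITIONS = {"Red": "Green", "Green": "Yellow", "Yellow": "Red"}

def step(state: str) -> str:
    return TRANSITIONS[state]

state = "Red"
path = [state]
for _ in range(3):
    state = step(state)
    path.append(state)
print(path)  # ['Red', 'Green', 'Yellow', 'Red']
```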
Where agents fit in
Now we can connect the ideas.
Graph theory → structure
Finite state machine → structure + rules
Agent → FSM where an LLM decides the transitions
An AI agent is a finite state machine where the transition logic is determined by the LLM at runtime.
In traditional software:
State A → State B → State C
Transitions are controlled by code.
In an agent:
State A → State B → State C
The LLM's output decides which transition to take.
You still design the structure.
The model decides how the system moves through it.
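That division of labor can be sketched directly. In this toy version the graph is fixed in code, and a stub stands in for the LLM call (a real agent would prompt the model with the current state and its options, then parse the choice):

```python
# The graph constrains which transitions exist;
# the model picks among them at runtime.
GRAPH = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],  # terminal state: no outgoing edges
}

def choose_next(state: str, options: list) -> str:
    # Stub for an LLM call. The key point: the model may only
    # return something from `options` -- the graph limits it.
    return options[0]

state = "A"
visited = [state]
while GRAPH[state]:
    state = choose_next(state, GRAPH[state])
    visited.append(state)
print(visited)
```

You designed `GRAPH`; the model (here, the stub) decided the path through it.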
Prompts are the core of each state
Once you understand agents as state machines, the next question becomes:
What lives inside a state?
In most agent systems, the answer is simple:
a prompt.
Each state corresponds to a prompt that defines what the LLM should do in that step.
The prompt typically specifies:
- the goal of the state
- the tools available
- the format of the output
- the context or memory available
From a design perspective, prompts become the first-class component of the system.
Designing an agent becomes designing a graph of prompts.
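Concretely, that design artifact is often just a mapping from state names to prompt text. A sketch with illustrative prompt contents (the state names and wording are assumptions, not a prescribed schema):

```python
# Each state is, in practice, a prompt. Designing the agent
# means designing this mapping.
STATE_PROMPTS = {
    "plan": "Break the user request into concrete steps. Output JSON.",
    "search": "Find files relevant to the current step. Tools: grep.",
    "write": "Produce the code change. Output a unified diff.",
}

def prompt_for(state: str) -> str:
    return STATE_PROMPTS[state]

print(prompt_for("plan"))
```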
Two fundamental agent patterns
Once the graph exists, most agents fall into one of two patterns.
Pipeline
Prompt A → Prompt B → Prompt C → Done
The sequence is fixed.
Each prompt passes its output to the next step.
This works well when the workflow is deterministic and does not require branching.
Examples:
- writing then formatting text
- translation pipelines
- document processing
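A pipeline is just function composition in a fixed order. In this sketch, plain string functions stand in for the prompt calls; a real pipeline would replace each step with an LLM invocation:

```python
# A pipeline: a fixed sequence of steps, each consuming
# the previous step's output.
def write(text: str) -> str:
    return text.strip()

def format_text(text: str) -> str:
    return text.capitalize() + "."

def pipeline(text: str) -> str:
    for step in (write, format_text):
        text = step(text)
    return text

print(pipeline("  hello world "))  # Hello world.
```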
Orchestrator–Workers
Orchestrator
/ | \
Search Write Review
The orchestrator prompt decides what to do next.
Worker prompts perform specialized tasks.
Examples:
- search information
- analyze data
- generate output
- validate results
This pattern is used when the workflow is dynamic and the sequence of steps cannot be fixed in advance.
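The shape of the pattern can be sketched as a routing table plus workers. Here the worker functions and the fixed plan are placeholders; in a real system the orchestrator would be a prompt that asks the LLM which worker to dispatch next:

```python
# Orchestrator-workers: one routing step decides which
# specialized worker runs next.
def search(task: str) -> str:
    return f"results for {task}"

def write(task: str) -> str:
    return f"draft for {task}"

def review(task: str) -> str:
    return f"review of {task}"

WORKERS = {"search": search, "write": write, "review": review}

def orchestrate(task: str, plan: list) -> list:
    # A real orchestrator would ask the LLM to choose each worker;
    # the plan is hard-coded here for illustration.
    return [WORKERS[name](task) for name in plan]

outputs = orchestrate("the report", ["search", "write", "review"])
print(outputs)
```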
A real-world example: coding agents
Coding agents illustrate this architecture clearly.
A simplified state graph might look like this:
User request
↓
Plan task
↓
Search codebase
↓
Read files
↓
Write code
↓
Run tests
↓
Done
A single request like “fix this bug” may involve many transitions:
search → read → edit → test → edit again → test again
The graph stays the same, but the path through it changes depending on what happens at each step.
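That variability is easy to demonstrate with a toy trace. The test outcomes are hard-coded here so the revision loop fires exactly once; in a real run, actual test results would drive the branching:

```python
# Same graph, different path: a scripted run where the tests
# fail once, forcing an extra edit/test cycle.
test_outcomes = iter(["fail", "pass"])

state = "search"
path = [state]
while state != "done":
    if state == "search":
        state = "read"
    elif state == "read":
        state = "edit"
    elif state == "edit":
        state = "test"
    elif state == "test":
        state = "done" if next(test_outcomes) == "pass" else "edit"
    path.append(state)

print(" -> ".join(path))  # search -> read -> edit -> test -> edit -> test -> done
```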
How to actually design an agent
Once you view agents as state graphs, the design process becomes straightforward.
- Define the states
Identify the distinct capabilities required to complete the task.
Examples:
- plan the task
- search information
- analyze context
- generate output
- validate results
Each state should represent one responsibility.
- Define the transitions
Determine how states connect.
For example:
Plan → Search
Search → Read
Read → Write
Write → Test
Test → Done or Write
The LLM chooses the transition, but the graph limits where it can go.
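Limiting where the model can go is mechanical: validate every proposed transition against the graph. This sketch raises on an illegal move; a production system might instead re-prompt the model (that recovery strategy is an assumption, not shown here):

```python
# The model proposes a transition; the graph validates it.
ALLOWED = {
    "plan": {"search"},
    "search": {"read"},
    "read": {"write"},
    "write": {"test"},
    "test": {"done", "write"},  # tests can send you back to writing
}

def transition(state: str, proposed: str) -> str:
    if proposed not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {proposed}")
    return proposed

print(transition("test", "write"))  # the revision loop is allowed
```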
- Design the prompts
A good state prompt answers three questions:
What is the goal of this state?
What tools are available?
What format should the output follow?
Clear prompts make the agent’s decisions far more reliable.
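One way to enforce those three questions is to bake them into a template, so no state prompt can ship without answering them. The field values below are illustrative:

```python
# A state-prompt template covering the three questions:
# goal, tools, output format.
TEMPLATE = """\
Goal: {goal}
Tools available: {tools}
Output format: {fmt}
"""

prompt = TEMPLATE.format(
    goal="Locate the function that raises the reported error.",
    tools="grep, read_file",
    fmt="JSON with keys 'file' and 'line'",
)
print(prompt)
```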
- Add guardrails
Finally, define constraints that prevent the system from drifting.
Examples:
- maximum loop counts
- structured outputs
- error recovery states
- tool usage limits
These guardrails keep the system stable even when the model makes imperfect decisions.
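The first guardrail on that list, a maximum loop count, is the simplest to make concrete. Here a deliberately misbehaving transition policy bounces between two states forever, and the cap stops it:

```python
# Guardrail: a hard cap on iterations. The loop halts after
# MAX_STEPS even if the policy never reaches a terminal state.
MAX_STEPS = 10

def run(start: str, next_state) -> list:
    state, history = start, [start]
    for _ in range(MAX_STEPS):
        state = next_state(state)
        history.append(state)
        if state == "done":
            break
    return history

# A broken policy that loops forever between write and test:
flip = {"write": "test", "test": "write"}
history = run("write", lambda s: flip[s])
print(len(history))  # capped at MAX_STEPS + 1 states visited
```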
The key takeaway
Agents are not magic.
They are structured systems built from graphs, states, and transitions.
Graph theory provides the structure.
Finite state machines add the rules.
LLMs introduce probabilistic decision-making.
Once you understand that structure, designing agents becomes far easier to reason about.


