I want to finish this series with an observation that might sound philosophical, but I think it's the most practically important point of the three.
In the first and second posts, I argued that LLM agents need deterministic workflow structure (the harness), and that an affordance engine is the right way to provide that structure without the rigidity of hand-maintained state machines. I talked about guard rails, composability, auditability, lifecycle governance — the engineering concerns.
All of that is true, and it matters. But I was describing the architecture from the outside — from the perspective of the system designer looking down at the board. And that perspective, while useful, misses something fundamental about what an affordance engine actually does to the relationship between an agent and the system it operates within.
Key Takeaways
- Traditional agent platforms operate from a God's-eye view: the orchestrator sees every tool, every state, every possible path, and decides from above. At enterprise scale, that perspective collapses under its own weight
- An affordance engine inverts the perspective. The agent doesn't see the full graph — it sees only what's available right now, given current context, role, and state. The vast structure of business rules and lifecycle logic shapes what's possible but is never directly visible
- This solves capability discovery — the single biggest practical problem in deploying LLM agents at scale. The agent's context window is freed from possibility and focused on actuality
- The agent experiences something that looks like agency. The system knows better — but the harness isn't visible to the thing being harnessed. That's the right relationship between probabilistic intelligence and deterministic structure
- This isn't hypothetical. We've been building an affordance engine in Graphshare for some time. The infrastructure is working. The architecture described across this series is where we're heading — and the foundation is already in place
The Chess Board View
Traditional workflow architecture is a God's-eye-view discipline. You stand above the chess board. You see all the pieces. You define all the legal moves. You map every state, every transition, every guard. You orchestrate from the top down.
This is how most enterprise AI agent platforms work today. The orchestrator sees the full graph. It knows every tool, every function, every possible pathway. It looks down at the board and decides where to move the pieces.
It works — until the board gets complex. Until there are hundreds of entity types, thousands of possible states, tens of thousands of contextual combinations of role, permission, lifecycle phase, and business rule. At that point, the God's-eye view becomes a God's-eye burden. The orchestrator drowns in possibility. The context window fills up. The agent hallucinates moves that don't exist, or misses moves that do, because it's trying to hold the entire board in its head at once.
The Inversion
An affordance engine inverts the perspective entirely.
Instead of looking down at the board and computing all possible moves, the agent — or the entity being acted upon — raises its head above water and looks around. Where am I? What's available from here? What can I do next?
The agent doesn't see the full graph. It doesn't need to. It sees only the affordances that are available right now, given the current state of the entity, the role it's operating in, the business rules that apply, the lifecycle phase, the permissions in play. Everything else — the vast structure of the data model, the totality of business rules, the complete map of every possible state and transition — is invisible. It's the fabric of reality. It shapes everything. But the agent never interacts with it directly.
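To make the inversion concrete, here's a minimal sketch in Python. Every name in it is illustrative, my own shorthand rather than our production API; the point is only the shape of the interaction: the agent calls one function and gets back the doors that are currently open, never the registry behind them.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Context:
    """What the engine needs to answer: what is possible right now?"""
    entity_type: str
    state: str   # lifecycle phase, e.g. "draft"
    role: str    # role the agent is operating under

@dataclass
class Affordance:
    """An action the system may offer, gated by a deterministic guard."""
    name: str
    description: str
    guard: Callable[[Context], bool]

# The full registry can be arbitrarily large; the agent never sees it.
REGISTRY = [
    Affordance("submit_for_review", "Send the draft to a reviewer",
               lambda c: c.entity_type == "document" and c.state == "draft"
                         and c.role in {"author", "editor"}),
    Affordance("publish", "Publish the approved document",
               lambda c: c.entity_type == "document" and c.state == "approved"
                         and c.role == "editor"),
    Affordance("archive", "Move the document to the archive",
               lambda c: c.state in {"published", "rejected"}),
]

def available_affordances(ctx: Context) -> list[Affordance]:
    """The only question the agent ever asks: what can I do from here?"""
    return [a for a in REGISTRY if a.guard(ctx)]

# An author working on a draft sees exactly one open door:
ctx = Context(entity_type="document", state="draft", role="author")
print([a.name for a in available_affordances(ctx)])  # ['submit_for_review']
```

The registry can grow without the agent's view growing with it; only what the guards let through ever reaches the context window.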
Why This Matters Practically, Not Just Philosophically
This isn't just a nice metaphor. It solves the single biggest practical problem in deploying LLM agents at scale: capability discovery.
The current industry approach to capability discovery is the chess board view: hand the agent a catalogue of every tool, function, and API it might ever need, described in natural language with parameter schemas, and hope it picks the right ones in the right order. This is the tool-description paradigm that every major agent framework uses today.
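For concreteness, here's what a catalogue entry in that paradigm looks like. The field names echo the common function-calling schemas, but this is a schematic example, not any one framework's exact format:

```python
# A schematic catalogue entry in the function-calling style. Every entry is
# sent to the model on every turn, whether or not the tool is currently
# valid; its preconditions live only as prose inside the description.
TOOL_CATALOGUE = [
    {
        "name": "submit_for_review",
        "description": (
            "Submit a draft document for review. Only valid when the "
            "document is in the 'draft' state and the caller holds the "
            "'author' or 'editor' role."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "document_id": {"type": "string"},
                "reviewer": {"type": "string"},
            },
            "required": ["document_id"],
        },
    },
    # ...hundreds more, each with its own preconditions buried in prose.
]
```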
It's fine for demos. Ten tools, clear task, short workflow. The agent reads the descriptions, picks the right tool, calls it correctly. Impressive.
Now scale it. A hundred tools. Five hundred. Tools with overlapping descriptions. Tools with subtle precondition dependencies. Tools that are only valid in certain lifecycle phases, for certain roles, on certain entity types, in certain states. The agent can't hold all of that in context. It starts guessing. It hallucinates tool names. It calls functions out of order. It ignores preconditions it was told about three thousand tokens ago.
The affordance engine doesn't give the agent the catalogue. It gives the agent the current menu. Not "here's everything this restaurant has ever served" but "here's what's available right now, at this table, for this customer, given what you've already ordered." The agent's context window isn't consumed by possibility. It's focused on actuality.
The reasoning the agent does — the probabilistic, creative, intelligent part — is applied to a curated, contextually appropriate set of options. The agent chooses well because it's choosing from a set that has already been shaped by the deterministic structure of the domain. It doesn't need to understand why certain options aren't available. It just needs to work with the ones that are.
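Sticking with the earlier sketch, the difference shows up directly in what reaches the prompt. Instead of the full catalogue, the agent's context carries a few lines of menu; again, illustrative code, not production API:

```python
def render_menu(ctx: Context) -> str:
    """Render the current menu: the only action text the agent's prompt carries."""
    lines = [f"- {a.name}: {a.description}" for a in available_affordances(ctx)]
    return "Available actions:\n" + "\n".join(lines)

print(render_menu(Context(entity_type="document", state="approved", role="editor")))
# Available actions:
# - publish: Publish the approved document
```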
Free Will Within Bounds
There's something worth sitting with here. The agent, operating within an affordance engine, experiences something that looks like autonomy. It assesses a situation. It weighs options. It makes a judgement call. It acts. From the agent's perspective — if we can anthropomorphise for a moment — it has agency.
But the system knows better. The agent's "choices" were pre-filtered by the data model, the business rules, the lifecycle state, the role-based access controls, and the domain logic. The agent can't make a move that violates the workflow. Not because it's been told not to — but because that move was never presented as an option. The harness isn't visible to the thing being harnessed.
This is, I'd argue, the right relationship between probabilistic intelligence and deterministic structure. Not a cage where the agent rattles the bars. Not a straitjacket that prevents movement. A reality — coherent, consistent, governed — within which genuine intelligence operates freely.
The agent doesn't need the map. It needs to know where it's standing and which doors are open. The affordance engine provides exactly that — nothing more, nothing less.
The Oracle Was an Affordance Engine
Since we're in the Matrix — and I've stretched the analogy this far, so I might as well take it home — consider the Oracle.
The Oracle doesn't tell Neo what to do. She doesn't show him the full architecture of the Matrix. She doesn't hand him a state transition diagram of every possible future. She gives him just enough context — a fragment of insight, a carefully chosen provocation — for him to make his own choice. And the choice feels entirely free. Neo believes he's exercising agency. He is. But the Oracle shaped what he could see, and in doing so, shaped what he would choose.
She's not an orchestrator. She's an affordance engine.
That's the architecture. Not control versus autonomy. Not determinism versus intelligence. A well-modelled reality — and agents free to operate within it, never needing to see the walls they'll never walk into.
That's where my thinking has landed across these three posts. The harness is deterministic structure. The intelligence is probabilistic reasoning. The affordance engine is the mechanism that connects them — invisibly, dynamically, and without requiring either the agent or the system designer to hold the full complexity in their head at once.
And at this point, I should probably tell you: this isn't hypothetical.
With our Graphshare platform, we've been building an affordance engine — for a while now. Building a matrix like this doesn't happen overnight. Today, the engine operates as a system of information: a hypergraph-based semantic data model where relationships are first-class, identity-bearing entities, with a UI-independent affordance model that LLM agents can already interrogate — reading the data, interpreting the information, reasoning over the semantics. The infrastructure and mechanics are working. An agent can discover what's there, understand what it means, and navigate the structure.
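To ground the phrase "relationships are first-class, identity-bearing entities", here's the structural idea in a toy Python sketch. This is deliberately simplified, not the production data model; it shows only the minimal shape of a relationship that has its own identity, carries its own attributes, and can span more than two nodes:

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str
    label: str

@dataclass
class Relationship:
    """A relationship with its own identity. Because it is an entity in its
    own right, it can carry attributes, be referenced by other
    relationships, and span more than two nodes (the 'hyper' in
    hypergraph)."""
    kind: str
    members: tuple[Node, ...]
    attrs: dict = field(default_factory=dict)
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

alice = Node(id="n1", label="Alice")
contract = Node(id="n2", label="Contract #42")
legal = Node(id="n3", label="Legal Dept")

# One identity-bearing edge spanning three participants, queryable by its id.
approval = Relationship(kind="approved_by",
                        members=(alice, contract, legal),
                        attrs={"on": "2024-06-01"})
print(approval.id, [n.label for n in approval.members])
```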
The architecture described in these posts — the deterministic harness, the fuzzy guards, the action affordances — that's where we're heading. And the foundation is already in place.
If anything in this series resonated — if you've hit the precondition problem, if you're struggling with capability discovery at scale, if you're looking at your current platform and realising it wasn't built for the world we're heading into — I'd welcome the conversation. Whether that's as a client, a collaborator, an investor, or just someone who wants to argue about affordance engines over coffee.