Beyond the “Model” Trap
Why Agentic AI Needs a Constitutional Framework
The financial services sector is drifting into a quiet but material governance gap.
Over the past year, many institutions have moved quickly to adopt baseline standards such as the FINOS AI Governance Framework (AIGF). These frameworks are well-engineered for their intended purpose: treating AI as a software asset—tracking data lineage, validating outputs, and enforcing model risk controls.
But that framing is starting to break.
We are crossing a structural threshold—from Static AI, where models inform human decisions, to Agentic AI, where systems act with delegated authority. When an AI is no longer merely recommending a credit limit but is actively adjusting it, executing trades, or engaging customers in real time, the traditional Model Risk Management (MRM) paradigm begins to show its strain.
This is where the divergence emerges: traditional frameworks govern the tool. The Enterprise Decision Systems (EDS) Governance Framework MANDATE (Machine Authority Non-Delegable Autonomous Technology Enforcement) is designed to govern the actor.
From Asset to Actor: A Constitutional Shift
Most existing frameworks—including AIGF—implicitly treat AI as code.
The EDS Framework takes a different stance. It treats an AI agent as a Digital Actor operating under formally delegated authority. This is not just a semantic shift; it’s a governance redesign.
In human organisations, we don’t govern a loan officer by auditing their internal logic—we govern them through a Delegation of Authority (DoA), defined mandates, and enforceable limits.
EDS extends that logic to machines through constructs like the Machine Decision Authority Matrix (MDAM). Instead of forcing Boards into technical oversight, it aligns governance with something they already understand: authority, accountability, and fiduciary boundaries.
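To make the DoA analogy concrete, a Machine Decision Authority Matrix can be sketched as a simple lookup from decision type to delegation tier. The schema, tier names, and thresholds below are illustrative assumptions, not the framework's actual structure:

```python
# Illustrative MDAM sketch: rows are decision types, values are delegation
# thresholds. Structure and tier names are assumptions for exposition only.
MDAM = {
    "credit_limit_increase": {"autonomous_up_to": 1_000, "human_approval_up_to": 25_000},
    "payment_execution":     {"autonomous_up_to": 10_000, "human_approval_up_to": 250_000},
}

def delegation_tier(decision_type: str, amount: float) -> str:
    """Resolve a proposed action against the matrix, much as one would
    check a human employee's Delegation of Authority."""
    row = MDAM.get(decision_type)
    if row is None or amount > row["human_approval_up_to"]:
        return "escalate_to_board"   # outside any delegated mandate
    if amount <= row["autonomous_up_to"]:
        return "autonomous"
    return "human_approval"

print(delegation_tier("payment_execution", 5_000))  # within the autonomous mandate
```

The point of the structure is that a Board reviews the thresholds, not the model internals.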
Making Risk Measurable: The ADAE Model
Traditional frameworks tend to describe AI risk in qualitative bands—low, medium, high. Useful for classification, but insufficient for systems operating at machine speed.
EDS introduces a more quantitative lens: the Autonomous Decision Authority Exposure (ADAE) score.
Rather than asking “How risky is this model?”, ADAE asks “How much authority have we delegated?”
It evaluates exposure across dimensions such as:
Financial impact
Customer reach
Decision reversibility
This enables a concept largely absent from current governance discourse: an Autonomy Budget.
Instead of vague risk appetite statements, a Board can define a hard ceiling—for example, a capped aggregate exposure across all autonomous agents. Governance becomes not just descriptive, but financially enforceable.
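The Autonomy Budget idea can be sketched in a few lines. The source does not specify the ADAE formula, so the scoring function and weights below are hypothetical; only the three dimensions and the hard-ceiling concept come from the text:

```python
from dataclasses import dataclass

@dataclass
class AgentExposure:
    """ADAE inputs per the three dimensions above; all field semantics are assumed."""
    name: str
    financial_impact: float  # e.g. max funds the agent can move per day
    customer_reach: int      # customers the agent can affect
    irreversibility: float   # 0.0 = fully reversible, 1.0 = irreversible

def adae_score(agent: AgentExposure) -> float:
    """Hypothetical ADAE: exposure scales with impact and reach,
    amplified when decisions are hard to unwind."""
    return (agent.financial_impact
            * (1 + agent.customer_reach / 10_000)
            * (1 + agent.irreversibility))

def within_autonomy_budget(agents: list[AgentExposure], ceiling: float) -> bool:
    """The Board-set hard ceiling on aggregate autonomous exposure."""
    return sum(adae_score(a) for a in agents) <= ceiling

agents = [
    AgentExposure("credit-limit-adjuster", 250_000, 5_000, 0.3),
    AgentExposure("fx-hedging-agent", 1_000_000, 0, 0.9),
]
print(within_autonomy_budget(agents, ceiling=5_000_000))  # → True
```

Whatever the real scoring function looks like, the enforceable part is the aggregate ceiling: a new agent cannot be deployed if it would push total delegated exposure past the budget.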
From Human Bottlenecks to System Constraints
The default industry control for AI risk remains Human-in-the-Loop (HITL).
In low-frequency environments, that works. In high-frequency agentic systems, it doesn't scale. Humans become bottlenecks—or worse, passive approvers susceptible to automation bias.
EDS reframes this with the Safety Runtime Environment (SRE)—a technical enforcement layer that operates independently of the AI system itself.
Instead of relying on humans to intercept bad decisions, the system is architected so that invalid actions are non-executable by design. When an agent exceeds its ADAE-defined authority, a circuit breaker is triggered before any real-world impact occurs.
This shifts the human role from Human-in-the-Loop to Human-on-the-Loop (HOTL)—oversight without friction, supervision without latency.
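A minimal sketch of the circuit-breaker pattern described above, assuming a per-agent authority limit derived from ADAE. The class and exception names are illustrative; the key property is that an out-of-mandate action raises before any side effect can occur:

```python
class CircuitBreakerTripped(Exception):
    """Raised by the enforcement layer; the action never reaches execution."""

class SafetyRuntime:
    """Sketch of an SRE-style gate sitting between the agent and the world,
    independent of the AI system itself. Limits here are assumed inputs."""

    def __init__(self, authority_limits: dict[str, float]):
        self.authority_limits = authority_limits
        self.tripped: set[str] = set()  # agents halted pending human review

    def execute(self, agent_id: str, action_exposure: float, action):
        if agent_id in self.tripped:
            raise CircuitBreakerTripped(f"{agent_id} is halted pending review")
        if action_exposure > self.authority_limits.get(agent_id, 0.0):
            self.tripped.add(agent_id)  # trip before any real-world impact
            raise CircuitBreakerTripped(
                f"{agent_id}: exposure {action_exposure} exceeds delegated authority")
        return action()  # only in-mandate actions are executable

sre = SafetyRuntime({"payments-agent": 10_000})
print(sre.execute("payments-agent", 2_500, lambda: "transfer ok"))  # → transfer ok
```

The human on the loop reviews tripped agents after the fact; they are not asked to intercept each decision in real time.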
The Identity Problem: Non-Human Actors as a Security Perimeter
Most governance discussions still centre on data privacy or prompt injection.
EDS highlights a different, often underdeveloped risk surface: Non-Human Identity (NHI).
In an agentic environment, identity is the control plane.
If an AI agent holds credentials capable of moving funds or initiating transactions, that identity must be governed with the same rigour as a human employee. This includes lifecycle management through established patterns such as Joiner–Mover–Leaver (JML).
EDS formalises this through its NHI security policies—treating agents not as ephemeral processes, but as credentialed entities whose access must be provisioned, monitored, rotated, and revoked.
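The Joiner–Mover–Leaver pattern applied to a non-human identity might look like the following sketch. Field names, the rotation window, and the scope vocabulary are all illustrative assumptions:

```python
from datetime import datetime, timedelta

class NonHumanIdentity:
    """Hypothetical JML lifecycle for an agent credential."""

    def __init__(self, agent_id: str, scopes: set[str], rotation_days: int = 30):
        # Joiner: provision with least-privilege scopes and a rotation deadline
        self.agent_id = agent_id
        self.scopes = set(scopes)
        self.rotation_days = rotation_days
        self.issued_at = datetime.now()
        self.revoked = False

    def move(self, new_scopes: set[str]) -> None:
        # Mover: re-scope on role change rather than accumulating entitlements
        self.scopes = set(new_scopes)

    def needs_rotation(self, now: datetime) -> bool:
        return now - self.issued_at > timedelta(days=self.rotation_days)

    def leave(self) -> None:
        # Leaver: revoke the credential and strip all access at decommissioning
        self.revoked = True
        self.scopes.clear()

nhi = NonHumanIdentity("fx-agent", {"quotes:read", "trades:execute"})
nhi.move({"quotes:read"})  # mandate narrowed: trading authority withdrawn
nhi.leave()
print(nhi.scopes, nhi.revoked)  # → set() True
```

The design choice worth noting is `move` replacing scopes wholesale rather than adding to them, which prevents the entitlement creep that JML reviews exist to catch in human employees.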
Board-Level Accountability: Governing Authority, Not Code
Many frameworks stop at management reporting.
EDS is designed to extend further—into Director-level defensibility.
Through instruments such as the Board Accountability Playbook, Directors receive structured prompts that enable them to exercise their duty of care and diligence without interpreting technical details.
Because ultimately, in a failure scenario, the question is not:
“Did the Board understand the model?”
It is:
“Did the Board appropriately govern the authority it delegated?”
That is a question every Board is already equipped to answer—if the framework is designed correctly.
Conclusion: Guardrails That Enable Speed
Effective governance is not about slowing innovation—it’s about enabling it safely.
The AIGF and similar frameworks remain essential for managing model risk. But they are anchored in a worldview where AI is advisory.
Agentic AI changes that premise.
By shifting the governance lens from Model Risk to Decision Authority, the EDS Framework introduces a more scalable paradigm—one that allows organisations to move faster precisely because the boundaries are explicit, enforceable, and measurable.
This is the essence of what might be called guarded innovation: not restraint, but controlled acceleration.