ARE
Abstract control architecture with policy signals, execution boundary, and governed action paths

Agent Responsibility Engineering

Govern autonomous systems at the moment they act.

ARE is the discipline of controlling autonomous systems at the execution boundary, where authority must be proven before an action is allowed to proceed.

Intelligence doesn't grant authority. Authority is earned through verifiable, governed policy.

Not prompts. Not filters. Not human checkpoint theatre. Governance as an engineering discipline.

Why ARE

The failure is not the sentence the model produces. It is the handoff from output to action.

Most AI governance is written as policy, review, monitoring, or documentation. Those are necessary, but they are often adjacent to the moment of failure.

ARE exists to govern the execution boundary: the place where a model result becomes a tool call, a workflow step, a record mutation, a notification, a payment, or a security decision.

Control premise 01

Capability is not authority.

Control premise 02

Approval is not execution control.

Control premise 03

Policy must be interposed, not merely documented.

Control premise 04

A system without a controller is not a governed system.

The Tenets

The foundational elements of Agent Responsibility Engineering.

The tenets define the discipline before any implementation pattern does. They describe how ARE is built, operated, verified, and defended when autonomous systems are permitted to act.

The architecture must make authority visible before the system can make action permissible.

Foundational

How you build it

  1. I. Governance is architectural, not operational.
  2. II. The spawn chain is the authority chain.
  3. III. The ledger is ground truth.

Operational

How you run it

  1. IV. Intelligence never grants authority.
  2. V. Scope is a contract, not a suggestion.
  3. VI. Trust has a half-life.

Epistemological

How you know it

  1. VII. Every agent must be provably legible.
  2. VIII. Proof requires falsification.

Posture

How you hold the line

  1. IX. An unknown agent is an ungovernable agent.
  2. X. The system must explain itself.

Core Primitives

ARE gives governance a small set of enforceable engineering objects.

These primitives make authority inspectable, enforceable, and reviewable without pretending that intent alone can control an autonomous system.

01

Passport

A verifiable authority token binding identity, scope, and delegated right to act.
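A passport can be pictured as a small signed token that is re-checked at the moment of execution. The field names below (`subject`, `scope`, `issuer`, `expires_at`) and the HMAC signing scheme are illustrative assumptions, not an ARE specification; a minimal sketch:

```python
import hashlib
import hmac
import json
import time
from dataclasses import dataclass

# Assumption: a real system would use managed keys, not an inline secret.
SECRET = b"demo-signing-key"

@dataclass(frozen=True)
class Passport:
    subject: str        # agent identity
    scope: tuple        # actions this passport authorizes
    issuer: str         # who delegated the right to act
    expires_at: float   # authority has a half-life

    def payload(self) -> bytes:
        # Canonical serialization so the signature is stable.
        return json.dumps({
            "subject": self.subject, "scope": list(self.scope),
            "issuer": self.issuer, "expires_at": self.expires_at,
        }, sort_keys=True).encode()

    def sign(self) -> str:
        return hmac.new(SECRET, self.payload(), hashlib.sha256).hexdigest()

def verify(passport: Passport, signature: str, action: str, now: float) -> bool:
    # Checked at execution time: signature, scope, and expiry must all hold.
    return (
        hmac.compare_digest(passport.sign(), signature)
        and action in passport.scope
        and now < passport.expires_at
    )
```

Note that `verify` is called with the current time, not the approval time: an expired or out-of-scope passport fails even if it was valid when the plan was made.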

02

Execution Re-validation

Authority must still hold at execution, not only at approval or planning time.

03

Deny-Path

A governed system must be able to refuse action, not merely observe it.
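A deny-path means refusal is a first-class outcome of the policy check, alongside permitting or reshaping the action. The verdict names and the payment cap below are hypothetical policy, used only to show the shape:

```python
from enum import Enum

class Verdict(Enum):
    PERMIT = "permit"
    MODIFY = "modify"
    DENY = "deny"

def decide(action: str, params: dict, allowed: set) -> tuple:
    """Interposed policy check with a real deny path."""
    if action not in allowed:
        return Verdict.DENY, None            # refuse, do not merely observe
    if params.get("amount", 0) > 1000:       # hypothetical policy bound
        capped = {**params, "amount": 1000}
        return Verdict.MODIFY, capped        # shape the action before it runs
    return Verdict.PERMIT, params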

04

Evidence Bundles

Every consequential decision should produce verifiable evidence, not just logs.

05

Operations Ledger

Immutable operational memory for decisions, actions, outcomes, and feedback.
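Immutability can be approximated with a hash chain: each entry commits to its predecessor, so edits to history are detectable on verification. This structure is an illustrative sketch, not a prescribed ledger design:

```python
import hashlib
import json

class OperationsLedger:
    """Append-only operational memory; each entry hashes its predecessor."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"prev": prev, "record": record, "hash": digest})
        return digest

    def verify_chain(self) -> bool:
        # Recompute every link; any tampered record breaks the chain.
        prev = "genesis"
        for e in self.entries:
            body = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

The design choice is that the ledger is checked, not trusted: `verify_chain` makes tampering evident rather than assuming the store is honest.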

06

Governance Coprocessor

Interposed control between intent and execution, independent of agent preference.

07

SDAVL Loop

Signals to Decisions to Actions to Verification to Learning, closed as a governing cycle.
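One pass of that cycle can be sketched as plain function composition. All five callables here are assumptions standing in for real subsystems; only the ordering of the stages comes from the loop itself:

```python
def sdavl_cycle(signals, policy, execute, verify_outcome, update_policy):
    """Signals -> Decisions -> Actions -> Verification -> Learning, one pass."""
    decisions = [policy(s) for s in signals]                    # Signals -> Decisions
    actions = [execute(d) for d in decisions if d is not None]  # Decisions -> Actions
    outcomes = [verify_outcome(a) for a in actions]             # Actions -> Verification
    return update_policy(policy, outcomes)                      # Verification -> Learning
```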

The ARE Difference

Common approaches can reduce risk without proving authority.

ARE does not discard prompts, filters, reviews, or observability. It places them inside a stricter question: can the system prove that this action is authorized now, under governed policy, with evidence?

Prompts

They may shape intent, but they do not create enforceable authority at execution.

Guardrails

They often surround generation while the action boundary remains under-controlled.

Output filtering

It can remove unsafe text without proving the agent is allowed to perform the act.

Human checkpoints

Review theatre collapses when action is fast, delegated, repeated, or operationally hidden.

Dashboards alone

Observability describes what happened; governance must be able to interpose before it happens.

Responsible AI language

Principles matter, but principles without a controller do not govern production systems.

Execution Boundary

The control point is the moment intent attempts to become action.

ARE is not a diagram of internal reasoning. It is a governing structure for the transition from proposed action to permitted execution.

A system proves authority before it acts, then carries the result back into the next decision.

Foreground control path

Public model / no implementation detail

Intent

01

Agent proposes

A request to act enters the governed boundary.

Control

02

Authority is tested

Policy is interposed before execution, not reviewed afterward.

Verdict

03

Permit, modify, or deny

The system must be able to refuse consequential action.

Action

04

Only governed execution proceeds

Capability moves only through proven authority.

Evidence

05

Outcome becomes memory

The action path leaves an attributable operational record.

Closed-loop governance

Feedback is shown as a public principle, not an implementation recipe: execution changes the process model, and the next action meets an updated controller.
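The five stages above can be wired as a single governed boundary. The function and argument names here are illustrative assumptions; the point is only the ordering: authority is tested before the action, and evidence is recorded on both the permit and the deny path:

```python
def governed_execute(proposal, has_authority, perform, ledger):
    """Intent -> Control -> Verdict -> Action -> Evidence, in order.

    `proposal` is a dict like {"agent": ..., "action": ..., "params": ...};
    `ledger` is any append-able store of evidence records.
    """
    # 01 Intent: the request to act enters the governed boundary.
    # 02 Control: authority is tested before execution, not reviewed afterward.
    permitted = has_authority(proposal["agent"], proposal["action"])

    # 03 Verdict: the system must be able to refuse consequential action.
    if not permitted:
        ledger.append({"proposal": proposal, "verdict": "deny"})
        return None

    # 04 Action: only governed execution proceeds.
    result = perform(proposal["action"], proposal["params"])

    # 05 Evidence: the outcome becomes attributable operational memory.
    ledger.append({"proposal": proposal, "verdict": "permit", "result": result})
    return result
```

Even a denied proposal leaves a record: refusal is itself evidence that the boundary worked.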

Accountability

ARE does not let responsibility disappear into the agent.

The question is not whether an autonomous system can be blamed. The question is whether authority, control, and evidence were engineered before the system was allowed to act.

ARE separates attribution from accountability. The agent can be identified. The action can be governed. The organization remains responsible for the control structure it put into production.

01

Attribution

The agent is attributable.

ARE gives every consequential action an identity path. That does not make the model morally responsible; it makes the action traceable.

02

Control

The system owner is responsible for control.

Policy, scope, denial, and evidence are architectural duties. If the execution boundary can be bypassed, the control design is accountable.

03

Governance

The organization remains accountable.

Leaders own the authority they delegate, the scopes they approve, and the evidence they can produce when autonomous systems act.

“A governed agent is accountable at the point it acts.”

Governance that cannot stop, shape, or evidence execution is not a control system. It is commentary after the fact.

STAMP Alignment

Safety is a control problem. ARE applies that logic to autonomous agents.

STAMP treats safety as an emergent property of a controlled system, not as a checklist of component failures. ARE carries that structure into agentic execution.

This is not metaphorical alignment. It is structural alignment: controller, process model, feedback, control actions, and a closed loop that can govern behavior under changing conditions.

Read the STAMP-ARE paper

Closed-loop governing structure

01 Controller
02 Process model
03 Feedback
04 Control actions
05 Closed loop
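The STAMP structure can be shown as a minimal controller that decides from its own process model and updates that model from feedback. The rate-limit policy and state below are illustrative assumptions, chosen only to make the loop concrete:

```python
class Controller:
    """STAMP-style closed loop: the controller keeps a process model of the
    governed system, issues control actions from that model, and updates the
    model from feedback so the next decision meets current conditions."""

    def __init__(self, limit: int):
        self.process_model = {"actions_this_window": 0}  # controller's belief
        self.limit = limit

    def control_action(self) -> str:
        # Decided from the process model, independent of agent preference.
        if self.process_model["actions_this_window"] >= self.limit:
            return "deny"
        return "permit"

    def feedback(self, executed: bool) -> None:
        # Feedback closes the loop: execution changes the process model.
        if executed:
            self.process_model["actions_this_window"] += 1
```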

Production Reality

The issue is already operational.

Autonomous agents are moving from experiments into systems where actions have regulatory, financial, security, and reputational consequence.

Agents already operate near regulated workflows, enterprise systems, customer records, financial controls, and security boundaries.

The risk is not that agents speak incorrectly. The risk is that an apparently valid output becomes an unauthorized act.

ARE was shaped from production control problems: defensibility, auditability, enforcement, and the operational need to stop an action cleanly.