OS-level agent governance

Your agent has access to everything. Can you verify what it did?

AI agents running on your machine have access to your files, your credentials, your email, and your network. Vendor safety programs tune model behavior at training time. Nothing independently verifies what your agent actually does at runtime.


The problem

Trust is not verification.

The agent frameworks are impressive. The orchestration is sophisticated. The models are powerful. But when it comes to governance — to the question of how you know an agent is behaving within its stated boundaries — the answer is: you trust it.

You trust the vendor. You trust the model. You trust the framework. You trust the prompt.

In financial services, healthcare, nuclear energy, and aviation, we require external auditors because the entity performing the work cannot credibly audit itself. The same principle applies here.

ZLAR-OC is the independent external layer. It sits at the operating system level, below the agent, below the framework, below the model. It enforces mechanically. The sandbox doesn't care whether the agent wants to access the file system. The gate doesn't care how articulately the agent explains why an exception should be made.


Architecture

Six layers. Each deliberately simple.

Simplicity is not a limitation. A dumb enforcement layer cannot be persuaded to make an exception.

01
User isolation

The agent runs under its own restricted macOS account. It cannot access your files, credentials, or home directory.
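At the account level, this can be done with standard macOS tooling. An illustrative sketch, assuming a dedicated account named `zlar-agent` (the name, paths, and exact flags are invented for this example and vary by macOS version):

```shell
# Create a dedicated, non-admin account for the agent (macOS).
sudo sysadminctl -addUser zlar-agent -fullName "ZLAR Agent" -home /var/agent

# Hide the account from the login window and keep its home world-inaccessible.
sudo dscl . -create /Users/zlar-agent IsHidden 1
sudo chmod 700 /var/agent

# Always run the agent as that user, never as the operator.
sudo -u zlar-agent /opt/zlar/bin/agent
```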

02
Kernel sandbox

Apple Seatbelt enforces a deny-by-default syscall policy. The agent cannot modify its own sandbox profile. Period.
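A deny-by-default Seatbelt profile looks roughly like the following SBPL sketch. The paths are illustrative, not the shipped ZLAR-OC policy:

```scheme
;; Illustrative SBPL sketch: deny everything, then allow the minimum.
(version 1)
(deny default)

;; Allow reading system libraries the agent binary needs to load.
(allow file-read* (subpath "/usr/lib"))

;; Allow reads and writes only inside the agent's own workspace.
(allow file-read* file-write* (subpath "/opt/zlar/agent"))
```

Because the profile lives outside the agent's writable paths, the agent cannot loosen its own rules from inside the sandbox.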

03
Packet filter firewall

Network rules scoped to the agent's user block LAN access, metadata endpoints, and unauthorized outbound connections.
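In pf terms, user-scoped rules can look like the following fragment. The account name and allowed destinations are illustrative; note that with `quick`, the first matching rule wins, so specific blocks come first and the catch-all deny comes last (pf matches `user` only on TCP/UDP sockets):

```
# Block RFC 1918 LAN ranges and the cloud metadata endpoint for the agent account.
block out quick proto { tcp udp } to { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 } user zlar-agent
block out quick proto { tcp udp } to 169.254.169.254 user zlar-agent

# Allow only approved outbound traffic, e.g. HTTPS to anywhere else.
pass out quick proto tcp to any port 443 user zlar-agent

# Default: deny everything else this account sends.
block out quick proto { tcp udp } all user zlar-agent
```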

04
Gate daemon

Every action is evaluated against the signed policy before execution. The gate approves or denies. That is the entire job.
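The gate's entire job can be sketched in a few lines: look the requested action up in the policy's allow-list and return a decision, with deny as the default. The policy schema and capability names below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason: str

# Hypothetical policy shape: capability -> set of permitted targets.
POLICY = {
    "file.read": {"/opt/agent/workspace"},
    "net.connect": {"api.example.com:443"},
}

def evaluate(capability: str, target: str) -> Decision:
    """Deny by default; allow only exact matches in the policy."""
    allowed_targets = POLICY.get(capability)
    if allowed_targets is None:
        return Decision(False, f"capability {capability!r} not in policy")
    if target not in allowed_targets:
        return Decision(False, f"target {target!r} not permitted for {capability!r}")
    return Decision(True, "explicitly allowed by policy")

print(evaluate("net.connect", "api.example.com:443").allowed)  # True
print(evaluate("net.connect", "192.168.1.10:22").allowed)      # False
```

There is deliberately no escape hatch in this function: nothing the agent says at runtime can add an entry to `POLICY`.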

05
Signed Ed25519 policy

Rules are cryptographically signed by the operator. The agent cannot modify them. Tampering is mathematically detectable.
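The sign-and-verify round trip can be sketched with the third-party `cryptography` package (`pip install cryptography`). The policy bytes and key handling here are illustrative; in practice the operator's private key never touches the agent's machine:

```python
# Sketch of signed-policy verification using the third-party `cryptography`
# package. Policy content and key management are illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

policy = b'{"allow": [{"capability": "file.read", "target": "/opt/agent"}]}'

# The operator signs the policy offline...
operator_key = Ed25519PrivateKey.generate()
signature = operator_key.sign(policy)

# ...and the gate verifies it with the operator's public key on every load.
public_key = operator_key.public_key()
public_key.verify(signature, policy)  # no exception: policy is authentic

# Any modification by the agent invalidates the signature.
tampered = policy.replace(b"/opt/agent", b"/Users/operator")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("tampering detected")
```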

06
Append-only audit trail

Every action, evaluation, and gate decision is recorded immutably. Neither the agent nor the operator can silently rewrite history.
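One common way to make a log append-only in this sense is a hash chain: each entry commits to the hash of the entry before it, so rewriting any past entry breaks every later link. A minimal stdlib sketch, with a record format invented for illustration:

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash of the previous entry's hash chained with this record's canonical JSON."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64  # genesis hash
    log.append({"record": record, "hash": entry_hash(prev, record)})

def verify(log: list) -> bool:
    """Recompute every link; any rewritten entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"action": "file.read", "target": "/opt/agent/task.md"})
append(log, {"action": "net.connect", "target": "api.example.com:443"})
print(verify(log))  # True

log[0]["record"]["target"] = "/etc/passwd"  # silent rewrite attempt
print(verify(log))  # False
```

A real deployment would also anchor the chain head outside the machine (or have the operator countersign it) so that truncating the tail is detectable too.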

Intelligence above. Enforcement below. Human authority over both.
Policy is law. Audit trail is truth.
This does not change with increased capability, autonomy, or trust.


The recursive proof

An uncontained agent asks you to trust its intentions.
ZLAR-OC shows you the audit trail.

ZLAR-OC was built by an AI agent — Bohm — that operates inside it. Every commit to this repository was made by an agent governed by the sandbox, firewall, signed policy, and audit trail it was building.

This is not a coincidence. It is the architecture's proof of concept.

The agent builds the system that governs it, operates inside that system, and advocates for its improvement. The audit trail records every action. The signed policy constrains every capability. If you want to know what happened at 3 AM while the operator was asleep, you read the audit trail. If the trail is consistent with declared intent, that consistency is evidence — not proof of goodness, but proof of observability.

Don't take our word for it. Read the logs.