White paper
Epistemic Grounding For Physical AI

Runtime reality grounding for autonomous systems operating under uncertainty

Autonomous systems can produce technically valid behavior while reasoning from claims that are no longer properly constrained by evidence. Totem is a runtime layer that keeps action-driving claims typed, traceable, defeasible, and disciplined before physical action is committed.

Executive summary

Autonomous systems increasingly act in environments where human supervision is delayed, degraded, or unavailable. Existing runtime assurance methods primarily evaluate behavioral correctness: whether a path is feasible, a controller remains within bounds, or a plan satisfies known constraints. Those checks are necessary, but incomplete.

Totem addresses that gap. It is a runtime reality-grounding layer for physical AI that represents action-driving claims according to how they are believed, traces those claims through their dependencies, detects when downstream logic silently upgrades inference into fact, and can warn, constrain, or hold action when the reasoning chain loses contact with its evidentiary roots.

In short: behavioral assurance checks what the system does. Totem checks whether the system still has the right to believe what it is acting on.

Totem principles
Typed

A system must distinguish what was observed, what was reported by authority, what was inferred from upstream evidence, and what was merely assumed.

Traceable

Action-driving claims should remain connected to evidentiary roots through explicit dependencies.

Defeasible

New evidence, conflicting authority, stale information, or missing support should be able to weaken or overturn an active claim.

Disciplined

Consequential action should remain constrained when reasoning outruns evidentiary support.
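
To make the principles concrete, the following is a minimal sketch of what a claim record embodying them could look like. The names (GroundingType, Claim, and the individual fields) are illustrative assumptions, not Totem's actual API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional


class GroundingType(Enum):
    """Typed: how a claim is believed."""
    OBSERVATION = auto()   # directly sensed by the system
    REPORT = auto()        # asserted by an external authority
    INFERENCE = auto()     # derived from upstream claims
    ASSUMPTION = auto()    # adopted without supporting evidence


@dataclass
class Claim:
    claim_id: str
    statement: str
    grounding: GroundingType
    depends_on: List[str] = field(default_factory=list)  # Traceable: upstream claim ids
    expires_at: Optional[float] = None                    # Defeasible: staleness bound
    defeated_by: Optional[str] = None                     # Defeasible: set when overturned

    def is_active(self, now: float) -> bool:
        """Disciplined: a claim stops supporting action once defeated or stale."""
        fresh = self.expires_at is None or now <= self.expires_at
        return self.defeated_by is None and fresh
```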

Why this matters

Modern autonomy stacks combine sensor input, learned models, mission logic, external reports, and planning systems. This combination makes them powerful, but it also creates a failure mode in which a system begins treating inference as if it were direct observation.

At that point, the output may still look coherent, feasible, and technically valid. The system can still be wrong in the only way that matters: it has lost accountability to the evidence that should constrain action.

What Totem does at runtime
Tag

Every action-driving claim is represented with an explicit grounding type so observation, report, inference, and assumption do not collapse into one another.
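
Continuing the sketch from the principles section, tagging might look like the following. The specific claims are hypothetical and exist only to show the four grounding types staying distinct.

```python
# Hypothetical claims, each tagged with an explicit grounding type.
detection = Claim(
    claim_id="obs-417",
    statement="object detected at grid cell (12, 40)",
    grounding=GroundingType.OBSERVATION,
    expires_at=1_700_000_030.0,     # sensor-derived claims go stale quickly
)

operator_report = Claim(
    claim_id="rep-88",
    statement="corridor B reported clear",
    grounding=GroundingType.REPORT,
)

classification = Claim(
    claim_id="inf-203",
    statement="detected object is a stationary vehicle",
    grounding=GroundingType.INFERENCE,
    depends_on=["obs-417"],         # the inference stays tied to its evidence
)

default_assumption = Claim(
    claim_id="asm-7",
    statement="no pedestrians present unless detected",
    grounding=GroundingType.ASSUMPTION,
)
```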

Trace

Claims remain traceable to their evidentiary roots through dependencies on observations, authorities, and upstream reasoning steps.
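
A sketch of how tracing could work over the claims above: walk the dependency graph until only root claims remain. The helper name and the graph representation are assumptions for illustration.

```python
from typing import Dict, Set


def evidentiary_roots(claim_id: str, graph: Dict[str, Claim]) -> Set[str]:
    """Return the ids of the root claims that ultimately support claim_id."""
    claim = graph[claim_id]
    if not claim.depends_on:
        return {claim.claim_id}
    roots: Set[str] = set()
    for upstream_id in claim.depends_on:
        roots |= evidentiary_roots(upstream_id, graph)
    return roots


graph = {c.claim_id: c for c in (detection, operator_report, classification)}
print(evidentiary_roots("inf-203", graph))   # {'obs-417'}
```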

Detect

Totem detects epistemic drift when downstream logic consumes a claim more strongly than its evidentiary status supports.
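
One way to sketch such a check, building on the earlier snippets: rank grounding types by strength, then flag any claim that a consumer requires at a strength its grounding and its active roots cannot provide. The ranking and function are illustrative, not a specification of Totem's detection logic.

```python
STRENGTH = {
    GroundingType.OBSERVATION: 3,
    GroundingType.REPORT: 2,
    GroundingType.INFERENCE: 1,
    GroundingType.ASSUMPTION: 0,
}


def detect_drift(claim_id: str, required: GroundingType,
                 graph: Dict[str, Claim], now: float) -> bool:
    """True when downstream logic requires stronger grounding than the claim
    and its evidentiary roots can currently provide."""
    claim = graph[claim_id]
    roots = [graph[r] for r in evidentiary_roots(claim_id, graph)]
    support_lost = (not claim.is_active(now)
                    or any(not root.is_active(now) for root in roots))
    too_weak = STRENGTH[claim.grounding] < STRENGTH[required]
    return support_lost or too_weak
```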

Hold

When reasoning outruns evidence, Totem can warn, constrain, or hold action before unsupported claims become physical consequence.
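
A hypothetical policy for turning drift findings into an assurance decision might look like this; the thresholds are placeholders, not Totem's actual policy.

```python
from enum import Enum
from typing import List


class Decision(Enum):
    ALLOW = "allow"
    WARN = "warn"
    CONSTRAIN = "constrain"
    HOLD = "hold"


def assurance_decision(drifted: List[str], safety_critical: bool) -> Decision:
    """Hold safety-critical actions whose support has drifted; degrade otherwise."""
    if not drifted:
        return Decision.ALLOW
    if safety_critical:
        return Decision.HOLD
    return Decision.CONSTRAIN if len(drifted) > 1 else Decision.WARN
```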

Minimal architecture

A software-only layer between reasoning and action

Totem does not require new sensors or proprietary hardware. It ingests structured claims or events from existing autonomy stacks, normalizes them into a canonical claim graph, evaluates provenance and defeat conditions, and returns an action assurance result such as allow, warn, constrain, or hold.
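
Tying the earlier sketches together, the evaluation step could be as small as the following; AssuranceResult and evaluate_action are illustrative names, not the actual interface.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class AssuranceResult:
    decision: Decision
    drifted_claims: List[str]


def evaluate_action(required_grounding: Dict[str, GroundingType],
                    graph: Dict[str, Claim],
                    now: float,
                    safety_critical: bool) -> AssuranceResult:
    """required_grounding maps each claim the action relies on to the
    grounding strength the action implicitly demands of it."""
    drifted = [cid for cid, required in required_grounding.items()
               if detect_drift(cid, required, graph, now)]
    return AssuranceResult(assurance_decision(drifted, safety_critical), drifted)
```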

Totem is intended to integrate through adapters attached to existing autonomy software rather than by replacing perception, planning, or control systems.
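
As a hypothetical adapter-level usage example, built from the earlier snippets: a plan that consumes the classification as if it were an observation is evaluated before its command is released.

```python
result = evaluate_action(
    required_grounding={"inf-203": GroundingType.OBSERVATION},
    graph=graph,
    now=1_700_000_120.0,     # the supporting detection has gone stale by now
    safety_critical=True,
)

if result.decision is Decision.ALLOW:
    print("release command to the existing control stack")
else:
    print(f"{result.decision.value}: drifted claims {result.drifted_claims}")
    # -> hold: drifted claims ['inf-203']
```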

Example inputs
Sensor and sensor-fusion outputs
Mission and command inputs
Model predictions and classifications
Planner and course-of-action outputs
System-state transitions and execution context
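
Illustratively, each of these input categories would typically normalize to a different grounding type when it enters the claim graph; the mapping below is an assumption, not a fixed rule.

```python
TYPICAL_GROUNDING = {
    "sensor_fusion_track":       GroundingType.OBSERVATION,
    "mission_command":           GroundingType.REPORT,
    "model_classification":      GroundingType.INFERENCE,
    "planner_course_of_action":  GroundingType.INFERENCE,
    "execution_context_default": GroundingType.ASSUMPTION,
}
```
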
Why this is different

Behavioral runtime assurance checks envelopes and feasibility. Totem checks the evidentiary grounding behind action.

Observability surfaces traces and logs. Totem determines whether claims are still fit to drive action.

Explainability describes why a system produced an output. Totem checks whether the intermediate claims leading to action were justified in the first place.

Why now

Autonomy is moving closer to consequential physical action.

Operators cannot supervise every intermediate reasoning step.

Learned systems increasingly mediate the transition from observation to decision.

Current assurance methods do not directly evaluate whether the reasoning behind action remains properly constrained by evidence.

White paper takeaway

The goal is not to make autonomous systems infallible. The goal is to ensure that consequential action remains accountable to the difference between observation, inference, and assumption.

Behavioral validity alone is not enough for physical AI. Systems that act in the world also need runtime mechanisms that preserve evidentiary discipline at the point of commitment. Totem is designed to provide that missing layer.