Autonomous systems can produce technically valid behavior while reasoning from claims that are no longer properly constrained by evidence. Totem is a runtime layer that keeps action-driving claims typed, traceable, defeasible, and disciplined before physical action is committed.
Autonomous systems increasingly act in environments where human supervision is delayed, degraded, or unavailable. Existing runtime assurance methods primarily evaluate behavioral correctness: whether a path is feasible, a controller remains within bounds, or a plan satisfies known constraints. Those checks are necessary, but incomplete.
Totem addresses that gap. It is a runtime reality-grounding layer for physical AI that represents action-driving claims according to how they are believed, traces those claims through their dependencies, detects when downstream logic silently upgrades inference into fact, and can warn, constrain, or hold action when the reasoning chain loses contact with its evidentiary roots.
In short: behavioral assurance checks what the system does. Totem checks whether the system still has the right to believe what it is acting on.
A system must distinguish what was observed, what was reported by authority, what was inferred from upstream evidence, and what was merely assumed.
Action-driving claims should remain connected to evidentiary roots through explicit dependencies.
New evidence, conflicting authority, stale information, or missing support should be able to weaken or overturn an active claim.
Consequential action should remain constrained when reasoning outruns evidentiary support.
Modern autonomy stacks combine sensor input, learned models, mission logic, external reports, and planning systems. This makes them powerful, but also creates a failure mode in which a system begins treating inference as if it were direct observation.
At that point, the output may still look coherent, feasible, and technically valid. The system can still be wrong in the only way that matters: it has lost accountability to the evidence that should constrain action.
Every action-driving claim is represented with an explicit grounding type so observation, report, inference, and assumption do not collapse into one another.
Claims remain traceable to their evidentiary roots through dependencies on observations, authorities, and upstream reasoning steps.
Totem detects epistemic drift: cases where downstream logic consumes a claim more strongly than its evidentiary status supports.
When reasoning outruns evidence, Totem can warn, constrain, or hold action before unsupported claims become physical consequence.
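One way to picture the drift check and the graduated responses above is to compare how strongly an action consumes a claim against how strongly the claim is actually grounded. The ordering of grounding strengths, the numeric gap thresholds, and the verdict names below are assumptions for illustration, not Totem's real decision logic.

```python
from enum import IntEnum

class Strength(IntEnum):
    # Assumed ordering: observation is the strongest grounding,
    # bare assumption the weakest.
    ASSUMPTION = 0
    INFERENCE = 1
    REPORT = 2
    OBSERVATION = 3

def assurance_verdict(required: Strength, actual: Strength) -> str:
    """Return an action assurance result based on the gap between the
    grounding strength an action demands and the strength the claim has."""
    gap = required - actual
    if gap <= 0:
        return "allow"      # evidence meets or exceeds what the action needs
    if gap == 1:
        return "warn"       # mild drift: flag it, let the action proceed
    if gap == 2:
        return "constrain"  # significant drift: limit the action envelope
    return "hold"           # reasoning has outrun its evidence entirely
```

Under these assumptions, an action that requires direct observation but rests on a bare assumption yields `"hold"`, while one resting on an authoritative report yields only `"warn"`.

```python
assurance_verdict(Strength.OBSERVATION, Strength.ASSUMPTION)  # "hold"
assurance_verdict(Strength.OBSERVATION, Strength.REPORT)      # "warn"
```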
Totem does not require new sensors or proprietary hardware. It ingests structured claims or events from existing autonomy stacks, normalizes them into a canonical claim graph, evaluates provenance and defeat conditions, and returns an action assurance result such as allow, warn, constrain, or hold.
Totem is intended to integrate through adapters attached to existing autonomy software rather than by replacing perception, planning, or control systems.
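The ingest-normalize-evaluate-return flow described above can be sketched as a minimal adapter. Every name here (`normalize`, `evaluate`, `assure`, the event fields, the five-second freshness bound) is a hypothetical placeholder, not Totem's real interface; the point is only the shape of the integration: the adapter sits between the autonomy stack and actuation and commits physical action only on an `"allow"` verdict.

```python
from typing import Callable

def normalize(event: dict) -> dict:
    """Map a stack-specific event into a canonical claim record."""
    return {
        "statement": event["msg"],
        "grounding": event.get("source", "assumption"),
        "stale": event.get("age_s", 0.0) > 5.0,  # assumed freshness bound
    }

def evaluate(claim: dict) -> str:
    """Toy provenance/defeat check: stale or merely assumed claims
    cannot drive action freely."""
    if claim["stale"]:
        return "hold"
    if claim["grounding"] == "assumption":
        return "constrain"
    return "allow"

def assure(event: dict, act: Callable[[], None]) -> str:
    """Adapter entry point, placed between planning output and actuation."""
    verdict = evaluate(normalize(event))
    if verdict == "allow":
        act()  # commit physical action only when the claim is supported
    return verdict
```

In this sketch, a fresh observation-backed event passes through and triggers the action, while a stale event is held with no side effect on the actuator.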
Behavioral runtime assurance checks envelope compliance and feasibility. Totem checks the evidentiary grounding behind action.
Observability surfaces traces and logs. Totem determines whether claims are still fit to drive action.
Explainability describes why a system produced an output. Totem checks whether the intermediate claims leading to action were justified in the first place.
Autonomy is moving closer to consequential physical action.
Operators cannot supervise every intermediate reasoning step.
Learned systems increasingly mediate the transition from observation to decision.
Current assurance methods do not directly evaluate whether the reasoning behind action remains properly constrained by evidence.
The goal is not to make autonomous systems infallible. The goal is to ensure that consequential action remains accountable to the difference between observation, inference, and assumption.
Behavioral validity alone is not enough for physical AI. Systems that act in the world also need runtime mechanisms that preserve evidentiary discipline at the point of commitment. Totem is designed to provide that missing layer.