Totem is a software runtime layer for physical AI. Before an autonomous system commits to action in the real world, it checks whether each claim driving that action is observed, inferred, reported, or assumed.
Behavioral assurance asks whether the system stayed within bounds. Totem asks whether the system still had the right to believe what it was acting on.
Autonomous systems can produce valid-looking actions from reasoning that is no longer grounded in observed reality. Totem is the runtime check for that gap.
Totem checks whether the reasons for action remain grounded in evidence. That matters when a planner, classifier, or objective function silently promotes an inference into a fact.
A path can be flyable and a course of action (COA) can be feasible while the reasoning chain behind them is already wrong.
Most stacks do not preserve whether an action-driving claim was observed, inferred, reported, or assumed.
A system can sound certain while drifting away from the observations that originally grounded the decision.
A platform sees something real: sensor data, an operator command, a mission report, or authenticated telemetry.
Models and planners derive conclusions from those observations, often under uncertainty and time pressure.
A downstream component silently consumes an inferred or assumed claim as if it were directly observed.
The path is flyable and the plan is feasible, but the reasoning driving action has lost contact with reality.
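A minimal sketch of that silent promotion, in plain Python. Every name here (WorldModel, estimate_target, plan_path) is a hypothetical placeholder invented for illustration, not Totem's API or any particular autonomy stack:

```python
# Hypothetical sketch of the failure mode above: the world model is a
# bare dict, so values carry no record of how they were produced.

from dataclasses import dataclass, field

@dataclass
class WorldModel:
    facts: dict = field(default_factory=dict)

def estimate_target(sensor_track):
    # A model INFERS a position from a stale track, under uncertainty.
    return {"position": sensor_track["last_seen"], "confidence": 0.62}

def plan_path(world):
    # The planner reads target_position as if it were directly observed.
    # Nothing here can tell an observation from a 0.62-confidence guess.
    return ["waypoint ->", world.facts["target_position"]]

world = WorldModel()
estimate = estimate_target({"last_seen": (41.2, -71.5)})

# The silent promotion: only the value survives this assignment; the
# fact that it was inferred, and its confidence, are dropped.
world.facts["target_position"] = estimate["position"]

print(plan_path(world))  # a feasible plan built on an unmarked inference
```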
Totem represents action-driving claims as observed, reported, inferred, or assumed so the system stays honest about what it knows.
Every action-driving claim can be traced through its dependencies to the sensor, authority, or model outputs that produced it.
When downstream reasoning consumes an inferred or assumed claim as if it were observed, Totem flags the mismatch immediately.
Totem can warn, constrain, or hold action until the chain is re-grounded or a human review resolves the conflict.
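To make the mechanism concrete, here is one way such a layer could be structured. This is a sketch under stated assumptions: Provenance, Claim, check_grounding, and gate_action are illustrative names chosen for this example, not Totem's actual API.

```python
# Illustrative sketch of provenance-tagged claims, dependency tracing,
# mismatch detection, and a graduated warn/constrain/hold response.

from dataclasses import dataclass, field
from enum import Enum

class Provenance(Enum):
    OBSERVED = "observed"   # came from a sensor, authority, or telemetry
    REPORTED = "reported"   # relayed by an external party
    INFERRED = "inferred"   # derived by a model or planner
    ASSUMED = "assumed"     # taken as given, with no evidence attached

@dataclass
class Claim:
    name: str
    value: object
    provenance: Provenance
    sources: list = field(default_factory=list)  # upstream Claims

    def trace(self, depth=0):
        # Walk the dependency chain back to the sensor, authority, or
        # model outputs that produced this claim.
        yield depth, self
        for src in self.sources:
            yield from src.trace(depth + 1)

def check_grounding(claim):
    # Flag the mismatch: a consumer that needs observed grounding is
    # about to act on a chain containing inference or assumption.
    return [c for _, c in claim.trace()
            if c.provenance in (Provenance.INFERRED, Provenance.ASSUMED)]

def gate_action(action, claim):
    # Graduated response: execute, constrain, or hold for human review.
    weak = check_grounding(claim)
    if not weak:
        return f"EXECUTE {action}"
    if all(c.provenance is Provenance.INFERRED for c in weak):
        return f"CONSTRAIN {action}: inferred links {[c.name for c in weak]}"
    return f"HOLD {action}: assumed links pending human review"

# Usage: an observed track feeds an inferred position, which a planner
# then tries to consume as if it were observed.
track = Claim("radar_track", (41.2, -71.5), Provenance.OBSERVED)
target = Claim("target_position", (41.2, -71.5), Provenance.INFERRED,
               sources=[track])
print(gate_action("engage_waypoint", target))
```

The design point the sketch is meant to show: provenance travels with the value itself, so the gate needs no knowledge of the model that produced a claim, only of how the claim entered the chain.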
Behavioral assurance: checks whether autonomy stays within safe envelopes and feasible operating bounds.
Observability: explains traces, spans, logs, and system behavior after the fact or during debugging.
Governance: captures policy, auditability, and compliance posture across models and workflows.
Totem: checks whether the reasons driving action remain grounded in observed reality before the system commits to action.
The autonomous drone scenario shows the exact transition Totem is built to catch: a claim enters the chain as inference or assumption, then gets operationalized as if it were observed reality.
Physical AI is moving toward more autonomous operation in environments where human supervision is delayed, degraded, or overloaded.
Totem is simulation-first, software-only, and designed to integrate with existing autonomy stacks before hardware-in-the-loop validation.
The initial focus is defense-first runtime assurance for mission autonomy, with a grant-backed path into deeper technical validation.
Industrial robotics and OT systems, autonomous vehicles and mobile robots, and high-consequence medical decision support.
A public white paper now explains the runtime grounding model, principles, and software architecture in one place.