Runtime reality grounding for physical AI

When autonomy acts,
does it know what it actually knows?

Totem is a software runtime layer for physical AI. It checks whether the claims driving action rest on observation, inference, report, or assumption before an autonomous system commits to action in the real world.

Behavioral assurance asks whether the system stayed within bounds. Totem asks whether the system still had the right to believe what it was acting on.

Software-only and hardware-agnostic
Built to complement behavioral RTAA
Focused on consequential physical action
Runtime problem statement

Autonomous systems can produce valid-looking actions from reasoning that is no longer grounded in observed reality. Totem is the runtime check for that gap.

The missing assurance layer

Current runtime assurance checks what the system does.

Totem checks whether the reasons for action remain grounded in evidence. That matters when a planner, classifier, or objective function silently promotes an inference into a fact.

Outputs can look fine

A path can be flyable and a course of action (COA) can be feasible while the reasoning chain is already wrong.

Grounding can disappear

Most stacks do not preserve whether an action-driving claim was observed, inferred, reported, or assumed.

Confidence is not evidence

A system can sound certain while drifting away from the observations that originally grounded the decision.

How physical AI fails quietly
01 · Failure mode
Observation enters the chain

A platform sees something real: sensor data, operator command, mission report, or authenticated telemetry.

02 · Failure mode
Inference rides on top of it

Models and planners derive conclusions from those observations, often under uncertainty and time pressure.

03 · Failure mode
Inference hardens into fact

A downstream component silently consumes an inferred or assumed claim as if it were directly observed.

04 · Failure mode
Action still looks valid

The path is flyable and the plan is feasible, but the reasoning driving action has lost contact with reality. The sketch below shows this pattern in miniature.
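This is a hypothetical illustration, not code from any real autonomy stack; every name in it is invented. The fusion step returns an inferred position with no record of how it was grounded, and the planner downstream consumes it as if it were observed.

def fuse_tracks(radar_hits):
    # Inference step: even with no fresh hits, fusion returns a best-guess
    # position (hard-coded here for the sketch). Nothing in the return value
    # records that this coordinate is inferred rather than observed.
    return {"target_position": (47.61, -122.33)}

def plan_route(world_state):
    # The planner reads target_position as if it were directly observed.
    # The resulting route looks perfectly valid even though the claim
    # driving it has silently hardened from inference into fact.
    return ["waypoint_a", "waypoint_b", world_state["target_position"]]

world_state = fuse_tracks(radar_hits=[])   # no observations behind the claim
route = plan_route(world_state)            # still produces a flyable-looking plan
print(route)

Behavioral checks on the route itself would pass; only a check on how world_state was grounded catches the gap.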

How Totem works
01 · Tag
Claims carry grounding type

Totem represents action-driving claims as observed, reported, inferred, or assumed so the system stays honest about what it knows.

02 · Trace
Evidence chains stay visible

Every action-driving claim can be traced through its dependencies to the sensor, authority, or model outputs that produced it.

03 · Detect
Epistemic drift is surfaced

When downstream reasoning consumes an inferred or assumed claim as if it were observed, Totem flags the mismatch immediately.

04 · Hold
Consequential action is gated

Totem can warn, constrain, or hold action until the chain is re-grounded or a human review resolves the conflict. The sketch below walks through all four steps.
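This sketch assumes a hypothetical claim model; the Grounding enum, Claim type, and gate logic are illustrative placeholders, not Totem's actual API.

from dataclasses import dataclass, field
from enum import Enum

# TAG: every action-driving claim carries its grounding type.
class Grounding(Enum):
    OBSERVED = "observed"
    REPORTED = "reported"
    INFERRED = "inferred"
    ASSUMED = "assumed"

@dataclass
class Claim:
    claim_id: str
    statement: str
    grounding: Grounding
    sources: list = field(default_factory=list)  # upstream claims this one depends on

# TRACE: walk a claim back through its dependencies to the evidence that produced it.
def provenance(claim):
    chain, stack = [], [claim]
    while stack:
        current = stack.pop()
        chain.append(current)
        stack.extend(current.sources)
    return chain

# DETECT: surface claims in the chain that are not backed by observation.
def drift_mismatches(claim):
    return [c for c in provenance(claim)
            if c.grounding in (Grounding.INFERRED, Grounding.ASSUMED)]

# HOLD: gate consequential action on the grounding of its evidence chain.
def gate(claim, consequential):
    mismatches = drift_mismatches(claim)
    if not mismatches:
        return "ALLOW"
    if not consequential:
        return "WARN"
    if any(c.grounding is Grounding.ASSUMED for c in mismatches):
        return "HOLD"        # assumptions driving consequential action wait for review
    return "CONSTRAIN"       # inferences are allowed only under tighter bounds

# Usage: a route plan that ultimately rests on an inference, not an observation.
report = Claim("c1", "contact reported at grid 47S", Grounding.REPORTED)
location = Claim("c2", "target is at grid 47S", Grounding.INFERRED, sources=[report])
plan = Claim("c3", "route to grid 47S is clear", Grounding.INFERRED, sources=[location])

print(gate(plan, consequential=True))    # CONSTRAIN
print(gate(plan, consequential=False))   # WARN

In a real stack the gate decision would be evaluated before the action interface commits, matching the Allow, Warn, Constrain, and Hold outputs in the architecture below.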

Runtime architecture
INPUTS: Sensor claims · Planner COAs · Model inferences · Operator commands
TOTEM RUNTIME: TAG (grounding type) → TRACE (provenance chain) → DETECT (drift mismatch) → HOLD (action gate)
OUTPUTS: Allow · Warn · Constrain · Hold
Claims flow through the grounding check before action proceeds.
Why Totem is different
Behavioral RTAA

Checks whether autonomy stays within safe envelopes and feasible operating bounds.

Observability

Explains traces, spans, logs, and system behavior after the fact or during debugging.

Governance

Captures policy, auditability, and compliance posture across models and workflows.

Totem

Checks whether the reasons driving action remain grounded in observed reality before the system commits to action.

Proof scenario

The current demo is a technical proof of the failure class.

The autonomous drone scenario shows the exact transition Totem is built to catch: a claim enters the chain as inference or assumption, then gets operationalized as if it were observed reality.

Why now

Physical AI is moving toward more autonomous operation in environments where human supervision is delayed, degraded, or overloaded.

Current phase

Simulation-first, software-only, and designed to integrate with existing autonomy stacks before hardware-in-the-loop validation.

Primary path

Defense-first runtime assurance for mission autonomy, with a grant-backed path into deeper technical validation.

Downstream domains

Industrial robotics and OT systems
Autonomous vehicles and mobile robots
High-consequence medical decision support

Technical artifact

A public white paper now explains the runtime grounding model, principles, and software architecture in one place.

Totem — Runtime reality grounding for physical AI
Veteran-owned, defense-first