Epistemic Grounding for Autonomous AI

Does your AI know it's dreaming?

Every autonomous AI system makes decisions based on a chain of reasoning. But no system today can distinguish what it observed from what it inferred — or detect when an assumption silently became a fact. Totem is the grounding check.

The Problem

No Provenance

AI systems produce answers without tracking whether each claim was observed, inferred, or assumed. The epistemic basis is invisible.

Silent Drift

Inferences come to be treated as observations. Assumptions harden into facts. Nobody notices until the decision is already made.

No Grounding Check

There is no layer in the current AI stack whose job is to ask: is this reasoning chain still in contact with reality?

What Totem Does
Track Grounding

Every claim carries its epistemic type: observed, reported, inferred, or assumed.

Trace Provenance

Follow any claim back to the observations it ultimately rests on.

Detect Drift

Flag when an inference is silently treated as an observation in a decision chain.

Hold Action

Block ungrounded claims from driving consequential actions. Require human review.
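The four capabilities above can be sketched as a small claim model. This is an illustrative sketch only, not Totem's actual API: the `Claim` class, `EpistemicType` enum, and `check_action` gate are hypothetical names invented for this example.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical epistemic types, mirroring the four listed above.
class EpistemicType(Enum):
    OBSERVED = "observed"
    REPORTED = "reported"
    INFERRED = "inferred"
    ASSUMED = "assumed"

@dataclass
class Claim:
    statement: str
    etype: EpistemicType
    sources: list["Claim"] = field(default_factory=list)

    def roots(self) -> list["Claim"]:
        """Trace provenance: the leaf claims this one ultimately rests on."""
        if not self.sources:
            return [self]
        leaves: list[Claim] = []
        for src in self.sources:
            leaves.extend(src.roots())
        return leaves

    def is_grounded(self) -> bool:
        """Detect drift: grounded only if every root is a direct observation."""
        return all(r.etype is EpistemicType.OBSERVED for r in self.roots())

def check_action(claim: Claim) -> bool:
    """Hold action: block ungrounded claims from driving consequential actions."""
    if not claim.is_grounded():
        raise PermissionError(
            f"ungrounded claim requires human review: {claim.statement!r}"
        )
    return True
```

For example, an inference built only on sensor observations passes the gate, while a decision chain that quietly includes an assumed root is flagged and held for review.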

Domains
Defense
Autonomous Weapons

Prevent autonomous systems from acting on ungrounded inferences. Catch the moment a drone mistakes its operator for a threat.

Industrial
OT / Control Systems

Know where your plant data disagrees with itself. Trace every asset relationship back to what was actually observed.

Medical
Clinical AI

Ensure diagnostic AI distinguishes what was measured from what was inferred. Flag when treatment decisions rest on assumptions.

See what ungrounded reasoning looks like.

Walk through a drone targeting scenario and watch the totem fall.

Totem — Epistemic grounding for autonomous AI
Veteran-owned