Every autonomous AI system makes decisions based on a chain of reasoning. But no system today can distinguish what it observed from what it inferred — or detect when an assumption silently became a fact. Totem is the grounding check.
AI systems produce answers without tracking whether each claim was observed, reported, inferred, or assumed. The epistemic basis is invisible.
Inferences are silently treated as observations. Assumptions harden into facts. Nobody notices until the decision has already been made.
There is no layer in the current AI stack whose job is to ask: is this reasoning chain still in contact with reality?
Every claim carries its epistemic type: observed, reported, inferred, or assumed.
Follow any claim back to the observations it ultimately rests on.
Flag when an inference is silently treated as an observation in a decision chain.
Block ungrounded claims from driving consequential actions, and require human review instead. The sketch below shows one way these four checks compose.
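A minimal sketch of how these checks might compose, assuming a recursive claim graph. Every name here (EpistemicType, Claim, require_grounding) is hypothetical, chosen for illustration, and not Totem's actual API:

```python
from dataclasses import dataclass, field
from enum import Enum


class EpistemicType(Enum):
    """The four epistemic types a claim can carry (hypothetical names)."""
    OBSERVED = "observed"   # direct sensor reading or measurement
    REPORTED = "reported"   # relayed by another party, not verified here
    INFERRED = "inferred"   # derived by reasoning from other claims
    ASSUMED = "assumed"     # taken as given, with no supporting evidence


@dataclass
class Claim:
    """A claim plus the provenance chain it rests on."""
    text: str
    etype: EpistemicType
    basis: list["Claim"] = field(default_factory=list)  # claims this one was derived from

    def ground(self) -> list["Claim"]:
        """Follow the chain back to the observations it ultimately rests on."""
        if self.etype is EpistemicType.OBSERVED:
            return [self]
        return [obs for parent in self.basis for obs in parent.ground()]

    def is_grounded(self) -> bool:
        """A claim is grounded only if every leaf of its basis is an observation."""
        if self.etype is EpistemicType.OBSERVED:
            return True
        return bool(self.basis) and all(c.is_grounded() for c in self.basis)


def require_grounding(claim: Claim) -> None:
    """Gate a consequential action: block it if the claim is ungrounded."""
    if not claim.is_grounded():
        raise PermissionError(
            f"Ungrounded claim {claim.text!r} ({claim.etype.value}): human review required."
        )


# Example: an inference resting partly on an assumption does not pass the gate.
heat = Claim("thermal signature at grid ref", EpistemicType.OBSERVED)
hostile = Claim("signature belongs to a hostile actor", EpistemicType.ASSUMED)
target = Claim("engage target", EpistemicType.INFERRED, basis=[heat, hostile])

try:
    require_grounding(target)
except PermissionError as err:
    print(err)  # the chain bottoms out in an assumption, so the action is blocked
```

The gate fails closed: any chain that bottoms out in an assumption rather than an observation cannot drive the action, which is exactly the failure the drone scenario below is meant to catch.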
Prevent autonomous systems from acting on ungrounded inferences. Catch the moment a drone mistakes its operator for a threat.
Know where your plant data disagrees with itself. Trace every asset relationship back to what was actually observed.
Ensure diagnostic AI distinguishes what was measured from what was inferred. Flag when treatment decisions rest on assumptions.
Walk through a drone targeting scenario and watch the totem fall.