The Pair Token

AI output has
no provenance.
That era ends.

A $50,000 strategic analysis and a $0.03 prompt look identical. That's not a feature. That's a crisis. The pair token is the infrastructure that ends it.

Build this with me →
Right now, somewhere, a board is making a decision based on a document they believe was written by a senior strategist. It was generated in eleven seconds by someone who doesn't understand the industry, the company, or the stakes. The two documents look identical. The accountability structures behind them are not just different. One of them doesn't exist.
Act I — The Crisis

The trust collapse is already here.

Every professional field that trades in analysis, advice, or intelligence is facing the same structural failure. The cost of producing sophisticated-looking output has fallen to near zero. The cost of verifying who produced it, how, and under what principles has not fallen at all. That asymmetry is not a market inefficiency. It is a trust crisis.

A consultant's deliverable and a chatbot's output are now visually indistinguishable. A legal memo drafted by a partner with thirty years of case law in her head looks the same as one generated by a model that hallucinated two of its citations. A market analysis built on proprietary data and one built on a Wikipedia summary arrive in the same PDF format, with the same professional formatting, carrying the same implicit authority.

The question used to be: is this analysis good? The question now is: does anyone stand behind it?

"Unprovenanced AI output is the new unsigned contract — technically possible, professionally unacceptable, and soon, legally meaningless."

We've seen this before. In 2007, a Caltech researcher named Virgil Griffith built WikiScanner — a tool that matched anonymous Wikipedia edits to the IP addresses of the organizations making them. He didn't change the encyclopedia. He made the invisible visible. And once the invisible was visible, everything reorganized. Edits that had been anonymous became suspicious. Provenance restructured power.

WikiScanner was a warning. Provenance doesn't stay optional. It starts as a tool, becomes an expectation, and ends as a requirement. The only variable is time.

The AI provenance vacuum is larger than Wikipedia ever was. It spans every industry that produces knowledge work. It touches every contract that includes AI-assisted deliverables. It implicates every professional whose reputation depends on the quality of their output. And right now, there is no infrastructure to address it. No standard. No protocol. No way to answer the simplest question a professional can ask: who is accountable for this?

Act II — The Precedent

The internet solved this before.

In the early 1990s, the web had a trust problem that nearly killed it. Any server could claim to be any other server. You couldn't verify who you were communicating with. Commercial activity was impossible because the infrastructure had no identity layer.

The solution was SSL/TLS and Certificate Authorities. A new layer of infrastructure that created verifiable identity for servers. HTTP became HTTPS. The padlock icon appeared. And the web became a place where you could transact, because you could finally answer: who am I talking to?

That transition didn't happen because someone had a clever idea. It happened because the alternative — an internet without verifiable identity — was untenable. The infrastructure was inevitable. The only question was who would build it and how.

AI is at the same inflection point. The output is flowing. The identity layer doesn't exist. And the absence of that layer is already causing the same category of failure: you cannot verify who you're dealing with, what system produced the work, or whether anyone is accountable for its quality.

The pair token is the next step in this pattern. Not an invention. An inevitability.

"The question isn't whether AI needs provenance. The question is whether you'll be the one who provides it — or the one who can't."

A pair token is a structured declaration of accountability for AI-assisted work. It binds a human principal to an AI system, a governing policy, and a signed record of output. It answers the questions that no current standard answers: not just was this made by AI, but who directed the AI, under what principles, and what's their record?

The term is new. The need is not. Every era of information technology has eventually required an accountability layer. Print had bylines and publishers. Broadcast had licenses and editorial standards. The web got HTTPS. AI output currently has nothing. The pair token is the proposal that it should.

Act III — The Architecture

What a pair token contains.

A pair token is not a watermark. It's not a content label. It's not metadata embedded in a file. It is a publicly addressable, machine-readable declaration that binds four components into a single verifiable identity.

The Pair Token Specification
  • Principal — The accountable human. Named. Findable. Reachable. The person who directs the system and owns the output. Not an organization chart. A name with skin in the outcome.
  • System — The AI architecture. Which models operate, how they're configured, what their capabilities and boundaries are. Not a black box. A transparent instrument.
  • Policy — The operating thesis. What this human-AI pair believes about quality, ethics, methodology, and scope. Public. Challengeable. The intellectual framework that governs every output.
  • Corpus — The signed record of work. Not a portfolio. A provenance ledger. Dated, traceable, connected to the system and policy that produced it. The evidence that this pair has a track record, not just a claim.
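The four components above can be sketched as a single machine-readable record. This is an illustrative shape only, not a published schema; the field names mirror the specification's vocabulary (`principal`, `system`, `policy`, `corpus`), and all URLs and values are hypothetical.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PairToken:
    """Illustrative pair token: four components, one public declaration."""
    principal: dict  # the accountable human: name, public identity, contact
    system: dict     # the AI architecture: models, configuration, boundaries
    policy: str      # URL of the public operating thesis
    corpus: list = field(default_factory=list)  # dated, traceable work records

    def to_json(self) -> str:
        """Serialize to the publicly addressable, machine-readable form."""
        return json.dumps(asdict(self), indent=2, sort_keys=True)

# Hypothetical example values, for illustration only
token = PairToken(
    principal={"name": "Jane Doe", "identity": "https://example.com/about"},
    system={"models": ["example-llm-v1"], "spec": "https://example.com/system"},
    policy="https://example.com/policy",
    corpus=[{"work": "https://example.com/brief-001", "date": "2025-01-15"}],
)
print(token.to_json())
```

Serializing with sorted keys matters: a canonical byte representation is what a later signature or digest would be computed over.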

This is a different problem than what existing standards solve. The Coalition for Content Provenance and Authenticity — C2PA, backed by Adobe, Microsoft, the BBC, and others — focuses on content authentication. Their question: was this image generated by AI? That's a necessary question. It's not a sufficient one.

C2PA verifies the what. The pair token verifies the who.

Knowing that a document was AI-assisted tells you about the tool. Knowing who directed the AI, under what principles, with what track record — that tells you about the accountability. In professional contexts, the accountability is what matters. You don't sue a hammer. You sue the person who swung it.

There is an honest limitation to address. A pair token, in its current form, is self-certified. The principal declares their own identity and system. That's step one — the same step SSL took before Certificate Authorities existed. Self-signed certificates were the beginning, not the end.

The goal is consortial verification. Independent bodies that validate the claims within pair tokens the way CAs validate the identity behind HTTPS certificates. That infrastructure doesn't exist yet. Building it is the work ahead. But the specification comes first. You can't verify what you haven't defined.
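Self-certification, as described above, can be sketched in a few lines: the principal publishes the token alongside a digest of its canonical serialization, and any reader can recompute the digest to confirm the published claims haven't been altered. This is a toy illustration under loud assumptions: a content hash stands in for a real asymmetric signature (a production scheme would use something like Ed25519, so the signing key itself could be verified), and none of these function names come from a published specification.

```python
import hashlib
import json

def self_certify(token: dict) -> dict:
    """Attach a content digest to a token (toy stand-in for a signature)."""
    payload = json.dumps(token, sort_keys=True).encode()
    return {"token": token, "digest": hashlib.sha256(payload).hexdigest()}

def verify(record: dict) -> bool:
    """Recompute the digest: detects tampering, but not who published it."""
    payload = json.dumps(record["token"], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["digest"]

# Hypothetical token with placeholder URLs
record = self_certify({
    "principal": "https://example.com/about",
    "system": "https://example.com/system",
    "policy": "https://example.com/policy",
})
assert verify(record)                    # untouched record checks out
record["token"]["policy"] = "elsewhere"  # any edit breaks the digest
assert not verify(record)
```

The gap this toy exposes is exactly the one the text names: a self-computed digest proves integrity, not identity. Consortial verification would replace it with a signature chain anchored in an independent body, the way Certificate Authorities anchor HTTPS certificates.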

Act IV — The Proof

mrglouton.ai is the first implementation.

In 1991, Tim Berners-Lee published the first website. It was a single page on a single server at CERN. It didn't look like the future. It looked like a document with some underlined words. But it was the proof that the architecture worked — that hypertext over the internet was buildable, not just theorizable.

This site is that kind of proof.

mrglouton.ai is a self-certified, early-stage implementation of a pair token. It declares its principal — a named human with a public identity and a professional history. It describes its system — the AI architecture, its components, its capabilities. It publishes its policy — the thesis that governs what this pair builds and why. And it maintains its corpus — a growing record of work, dated, documented, and connected to the system that produced it.

This is not impressive in the way a finished product is impressive. It is impressive in the way a working prototype is impressive. It proves the concept is buildable. That the four components can coexist in a single, publicly addressable location. That accountability for AI-assisted work can be structured, not just claimed.

"A $50,000 strategic analysis and a $0.03 prompt look identical. That's not a feature. That's a crisis."

Every piece of work published through this system can be traced back to this site. And here, the reader finds not an anonymous AI, not a faceless automation product, but a named human directing a described system, operating under a stated policy, with a documented record. That is the minimum viable unit of AI accountability. It exists here. It can exist anywhere.

Act V — The Invitation

This is an open standard, not a product.

The pair token specification will be open source. It is currently a proof of concept in active development. When it ships, anyone will be able to implement it. Any individual. Any organization. Any platform.

This is not a pitch. It is a movement. The same way Let's Encrypt made HTTPS accessible to every website on the internet — free, automated, open — the pair token specification aims to make AI accountability accessible to every practitioner who wants to stand behind their work.

The freelance strategist using Claude to augment her research. The independent developer shipping AI-assisted code. The consultancy integrating language models into their analytical workflow. Each of them faces the same question from their clients, their peers, their regulators: who is accountable for this output? The pair token gives them a structured way to answer.

We are not waiting for permission. The EU AI Act requires transparency about AI systems. US executive orders are moving toward AI accountability standards. The regulatory direction is clear. The pair token is not ahead of the curve. It is on the curve, building the infrastructure that policy is already demanding.

Start asking who's behind the AI you consume. Ask your consultants. Ask your vendors. Ask the platforms you rely on. The answer — or the absence of one — will tell you everything you need to know about where the industry is headed and who is prepared for it.

Provenance has never stayed optional.
It starts as a tool. Becomes an expectation.
Ends as a requirement.

The pair token is the infrastructure for the era that follows this one.
The specification is being built. The first implementation is live.
The only question left is whether you'll be early — or late.

This isn't a product. It's a calling.

If you're a founder, engineer, or practitioner who believes AI accountability shouldn't be optional — let's talk.

Build this with me →