These ten principles govern the relationship between Sourced and the people it observes. They aren't aspirations. They're structural constraints — things the system literally cannot bypass.
Every AI builds a model of you — your values, your thinking, your goals. Most hide it. Sourced shows you the model: what the system thinks it knows, where the evidence came from, and how confident it is.
The system observes, but you decide what's true. No claim about you is saved without your acknowledgment. The system proposes; you commit.
"Not quite" is a first-class action. Correcting the system is how the shared map improves — it's expected, easy, and non-punitive.
You can see it, edit it, export it, and delete it at any time. Your digital self-understanding belongs to you — even if you leave.
If the system makes a claim about you, you can always see where it came from. Every inference traces back to your own words.
The system states what it's listening for and why. Designers must declare their purpose, their method, and what they'll ask — before the conversation begins.
The system tells you what it's confident about and what it's guessing. Pretending to be certain when it's not is a violation of trust.
Some things are too important to be scored. You decide what the system measures and what it preserves without analyzing — your exact words, untouched.
These principles aren't aspirations in a document — they're constraints the system cannot bypass. Sourced doesn't make AI honest. It makes dishonesty visible.
The system's model of you is always incomplete — and it knows that. Incompleteness is constitutional. What the system refuses to claim defines its character.
A values document only matters if it changes what the system can actually do. Sourced turns principles into constraints — things the system literally cannot bypass.
A program can't run unless it states its goal, its method, and what it will ask. No hidden curriculum.
Intent is always visible. Every claim has an inspectable origin. Correction is always one tap away.
It can't use sensitive assumptions without checking. It can't override your thinking time. Hard limits, not guidelines.
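The constraints above can be sketched in code. This is a minimal illustration, not Sourced's actual implementation — every name here (`Manifest`, `Claim`, `start_program`, `commit`) is hypothetical. The point it demonstrates is structural: a program that hasn't declared its purpose, method, and questions simply cannot start, and a claim without acknowledgment simply cannot be saved.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Manifest:
    """What a program must declare before it can run. No hidden curriculum."""
    purpose: str
    method: str
    questions: list  # what the program will ask, stated up front

@dataclass
class Claim:
    """An inference about the person, traceable to their own words."""
    text: str
    source_quote: str            # the exact words this claim came from
    confidence: float            # stated, never hidden
    acknowledged: bool = False   # the system proposes; the person commits

def start_program(manifest: Manifest) -> Manifest:
    # Hard limit, not a guideline: an undeclared program refuses to start.
    if not (manifest.purpose and manifest.method and manifest.questions):
        raise ValueError("program refused: purpose, method, or questions undeclared")
    return manifest

def commit(claim: Claim, person_acknowledged: bool) -> Claim:
    # No claim about the person is saved without their acknowledgment.
    if not person_acknowledged:
        raise PermissionError("claim not saved: the person has not acknowledged it")
    claim.acknowledged = True
    return claim
```

The design choice is that refusal lives in the code path, not in a policy document: there is no branch where an undeclared program runs or an unacknowledged claim persists.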
Fair objections and honest answers.
AI products already build these models — every recommendation engine, every memory feature, every personalization layer. The question isn't whether models should exist, but whether people should be able to see and correct them. With Sourced, you see everything, delete anything, and export the whole thing. That's not a privacy risk. It's a privacy solution.
Most people won't look at the Map. That's fine. The point isn't that everyone inspects — it's that everyone can. The transparency exists whether you use it or not, like a nutrition label. And the correction data that flows from the people who do engage is some of the most valuable signal a product can get.
You're right — the machine has no values of its own. That's exactly why the values need to be structural. Not in the prompt. Not in the training data. In the protocol, where they're constraints that can be verified and enforced. Sourced doesn't make AI honest. It makes dishonesty visible.
Sourced does not replace human judgment. It does not summarize what should be sat with. It does not score what the person declares sacred. It does not pretend certainty it doesn't have.
The system is designed to strengthen your thinking, not replace it. Like a bicycle for self-understanding — it goes where you pedal, but you still have to pedal.
Some things are too important to be scored. When something you say matters deeply but doesn't fit the system's categories, you can mark it sacred. The system preserves your exact words without analyzing, classifying, or using them for matching. What Sourced refuses to measure defines its character as much as what it does measure.
This is not a limitation. It is the product.
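One way to make that refusal structural rather than behavioral can be sketched as follows. This is an illustration under assumed names (`SacredEntry`, `ScoredEntry`, `ingest` are hypothetical, not Sourced's API): sacred material is stored as a distinct, immutable type that has no score field at all, so analysis cannot reach it even by accident.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SacredEntry:
    """Exact words, preserved verbatim. Never scored, classified, or matched."""
    exact_words: str

@dataclass
class ScoredEntry:
    """Ordinary material, which the system is allowed to analyze."""
    text: str
    score: float

def analyze(text: str) -> float:
    # Stand-in for whatever scoring the system does elsewhere.
    return 0.0

def ingest(text: str, marked_sacred: bool):
    # The branch for sacred material never calls analyze():
    # the refusal to measure is enforced by the type, not by policy.
    if marked_sacred:
        return SacredEntry(exact_words=text)
    return ScoredEntry(text=text, score=analyze(text))
```

Because `SacredEntry` is frozen and carries no score, downstream code that expects a score fails loudly instead of quietly measuring what was declared off-limits.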
"Sourced exists because we believe the relationship between people and AI should be built on honesty, not extraction."
When AI shows its work and people can correct what it gets wrong, trust becomes structural — not performative. That's what transparent alignment means: not a promise in a terms-of-service, but a protocol enforced by design.
Designed for depth. Optimized for humanity.