For builders

Make your AI's intent visible

If you're building with LLMs, your product already has a point of view. Sourced makes that point of view explicit and contestable.

Step 1

Create Credentials

Sign in to create API credentials and start integrating.

Sign In
Step 2

Quickstart

Builder Baseline (CLI)

Before you build a custom UI, use the CLI to verify your stack behavior. This ensures your listening criteria actually produce the signals you expect.

# 1. Start your local environment
uvicorn server.main:app --reload --port 8787

# 2. Run the automated verify -> observe -> insights loop
python3 scripts/quickstart_no_chat.py \
  --base http://127.0.0.1:8787 \
  --admin-secret "$SOURCED_ADMIN_SECRET"

This script validates your stack_id, observes a test payload, and checks insights.

The 3-Step Integration Flow

1. Validate: Call /v1/stacks/{id}/validate to ensure your input meets the contract (min length, non-empty targets).

2. Observe: Send text to /observe. The system extracts claims based on your stack's listening criteria.

3. Insights: Use /v1/stacks/{id}/insights to see the aggregate signal across all participants.

Default workflow

POST /v1/stacks/validate → POST /v1/stacks/observe → GET /v1/stacks/insights
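The default workflow can be sketched as plain request descriptors (no network calls), a minimal sketch only: the paths follow the workflow line above, and the validate payload fields are illustrative assumptions, not a documented contract. Where your deployment uses stack-scoped paths (/v1/stacks/{id}/...), substitute accordingly.

```python
# Sketch of the default workflow as request descriptors (no network calls).
# Paths mirror the flow above; the validate body fields are assumptions.
def build_workflow(base: str, subject_id: str, text: str):
    """Return the three requests of the default workflow, in order."""
    return [
        # 1. Validate: check the input against the stack contract.
        ("POST", f"{base}/v1/stacks/validate", {"text": text}),
        # 2. Observe: extract claims using the stack's listening criteria.
        ("POST", f"{base}/v1/stacks/observe",
         {"subject_id": subject_id, "text": text, "source_id": "turn_1"}),
        # 3. Insights: read the aggregate signal back.
        ("GET", f"{base}/v1/stacks/insights", {"subject_ids": subject_id}),
    ]

steps = build_workflow("http://127.0.0.1:8787", "demo-user",
                       "I feel energized when I build products that help people.")
for method, url, payload in steps:
    print(method, url)
```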

Storage + API model (important)

  • Use this API with your app credentials from Step 1 (`X-Sourced-App` + `Authorization: Bearer ...`).
  • Supabase is internal storage for Sourced. External API users do not need Supabase keys.
  • Studio uses Supabase login for workspace UX; public integrations use app credentials only.
  • Local library mode still works without Modal: run Sourced in-process with Memory/SQLite stores.

Step 3

Full Example

A complete Python script that observes text and fetches insights. Copy it, fill in your credentials from Step 1, and run.

sourced_quickstart.py

#!/usr/bin/env python3
"""Sourced quickstart — observe text and fetch insights."""
import requests

BASE = "https://bilalghalib--sourced-api-fastapi-app.modal.run"
APP_ID = "<your-app-id>"
APP_SECRET = "<your-app-secret>"

headers = {
    "X-Sourced-App": APP_ID,
    "Authorization": f"Bearer {APP_SECRET}",
    "Content-Type": "application/json",
}

# 1. Observe some text
resp = requests.post(f"{BASE}/v1/observe", headers=headers, json={
    "subject_id": "demo-user",
    "text": "I feel energized when I build products that help people.",
    "source_id": "turn_1",
})
print("Observe:", resp.status_code, resp.json())

# 2. Fetch insights
resp = requests.get(
    f"{BASE}/v1/stacks/<your-stack-id>/insights",
    headers=headers,
    params={"subject_ids": "demo-user"},
)
print("Insights:", resp.status_code, resp.json())

Architecture

Stacks define what your AI listens for

Grant Application

Replaces grant forms with conversation. Looks for the problem, the impact, and your specific role — listens for evidence of accountability.

Values in Action

Replaces the 121-question VIA Character Strengths survey with 7 conversations. Surfaces all 24 strengths — from Creativity to Spirituality — through stories instead of scales.

Coaching Onboarding

Replaces intake forms with guided dialogue. Looks for grounding, vision, obstacles, and commitment — asks where you are and what you're ready to change.

Aristotle's Golden Mean Demo

A playful philosophical demo. Looks for virtues, their excess, and their deficiency — find the mean between too much and too little.

Developer model

Keep objects simple, keep behavior declarative

Core primitives

Span → Trace → Thread. Data stays stable; behavior is configured in YAML.
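The primitives can be pictured as plain data containers. A minimal sketch for illustration only: the field names here are assumptions, not the real schema, which is defined by your YAML and the API.

```python
from dataclasses import dataclass, field

# Illustrative shapes only; field names are assumptions, not the real schema.
@dataclass
class Span:
    text: str        # raw evidence quoted from the participant
    source_id: str   # e.g. a conversation turn

@dataclass
class Trace:
    claim: str       # interpreted claim built from one or more spans
    signal_kind: str # e.g. "cares_about", "stuck_on"
    spans: list = field(default_factory=list)

@dataclass
class Thread:
    subject_id: str
    traces: list = field(default_factory=list)

thread = Thread("demo-user", [Trace("values honesty", "cares_about",
                                    [Span("I care deeply about honesty", "turn_1")])])
print(len(thread.traces))  # the data stays stable; behavior comes from YAML
```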

Declarative YAML

Schema, Attunement, and Abilities define ontology, retrieval, and outcomes — without hardcoding app semantics.

Capability-driven outcomes

When confirmed signals meet an ability's requirements, Sourced resolves parameters and hands them to your app. You define what happens next.
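As a sketch of the idea (the function name and requirement format are assumptions): an ability fires only once confirmed signals cover its requirements, and only then does your app receive resolved parameters.

```python
# Hypothetical ability gate: fire when confirmed signals meet requirements.
def ability_ready(confirmed_signals, required_kinds):
    """True when every required signal_kind has at least one confirmed trace."""
    have = {s["signal_kind"] for s in confirmed_signals if s.get("confirmed")}
    return set(required_kinds) <= have

signals = [
    {"signal_kind": "cares_about", "claim": "values honesty", "confirmed": True},
    {"signal_kind": "working_on", "claim": "writes every morning", "confirmed": True},
    {"signal_kind": "reaching_for", "claim": "wants impact", "confirmed": False},
]
print(ability_ready(signals, ["cares_about", "working_on"]))  # True
print(ability_ready(signals, ["reaching_for"]))               # False (not confirmed)
```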

Compose Targets

Targets with type: "compose" trigger synthesis instead of extraction. When enough evidence accumulates, the Fractal Weave engine composes an artifact — a portrait, match summary, or cohort theme — from the participant's own words. The same atomic operation powers all three scopes: self (N=1), dyad (N=2), and cohort (N=All).

Signal taxonomy

Four questions + one meta-signal. MICE, not MECE.

Every trace gets a signal_kind classifying what type of human signal it carries. The taxonomy uses verb predicates — database queries read like sentences.

Held = settled; Seeking = in motion.

Values (axiology)
  • cares_about (held): Clear values, deep commitments. “I care deeply about honesty”
  • torn_between (seeking): Two goods colliding. “I want freedom but need stability”

Knowledge (epistemology)
  • knows (held): Stable skills, firm beliefs. “I’m good at systems thinking”
  • wondering (seeking): Testing ideas, uncertain. “Maybe I’d thrive in a smaller team”

Direction (teleology)
  • working_on (held): Active practices, doing. “I started writing every morning”
  • reaching_for (seeking): Aspirations, not yet started. “I want to build something meaningful”

Energy (phenomenology)
  • alive_in (held): Flow, vitality, joy. “Time disappears when I’m making things”
  • stuck_on (seeking): Blocked, depleted, stuck. “I can’t take that risk right now”

Sensemaking (hermeneutics, META)
  • means (held): Settled stories, made peace. “That failure taught me resilience”
  • remaking (seeking): Story is shifting. “I’m starting to see it differently”
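The taxonomy fits in a small lookup table, and because the predicates are verbs, filters read like sentences. A sketch (the helper is illustrative, not part of the API):

```python
# signal_kind -> (dimension, mode); taken directly from the taxonomy above.
TAXONOMY = {
    "cares_about":  ("values", "held"),
    "torn_between": ("values", "seeking"),
    "knows":        ("knowledge", "held"),
    "wondering":    ("knowledge", "seeking"),
    "working_on":   ("direction", "held"),
    "reaching_for": ("direction", "seeking"),
    "alive_in":     ("energy", "held"),
    "stuck_on":     ("energy", "seeking"),
    "means":        ("sensemaking", "held"),
    "remaking":     ("sensemaking", "seeking"),
}

def traces_where(traces, kind):
    """A filter that reads like a sentence: traces where subject cares_about."""
    return [t for t in traces if t["signal_kind"] == kind]

traces = [{"subject": "demo-user", "signal_kind": "cares_about"},
          {"subject": "demo-user", "signal_kind": "stuck_on"}]
print(traces_where(traces, "cares_about"))
```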

Three ToM Layers

tom.author
The Facilitator’s Philosophy

Rules and intent from worldview config, loaded before conversation starts.

tom.person
The State Vector

System’s evolving belief about participant, multiple signals per turn (MICE).

tom.system
System Confidence

How honest the system is about its own uncertainty.

Platform Gate

              Confident   Uncertain
Held          Reflect     Wait
Seeking       Offer       Be honest
Constraint    Unblock     Be honest

Tension is relational (two held signals colliding), not a row.
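The gate can be written as a two-key lookup; a minimal sketch of the mapping as described (names lowercased for illustration):

```python
# (signal state, system confidence) -> response move, per the Platform Gate.
GATE = {
    ("held", "confident"):       "reflect",
    ("held", "uncertain"):       "wait",
    ("seeking", "confident"):    "offer",
    ("seeking", "uncertain"):    "be honest",
    ("constraint", "confident"): "unblock",
    ("constraint", "uncertain"): "be honest",
}

def gate(state: str, confidence: str) -> str:
    return GATE[(state, confidence)]

print(gate("held", "uncertain"))     # wait
print(gate("seeking", "uncertain"))  # be honest
```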

MICE, not MECE: Human experience overlaps. One sentence triggers multiple dimensions — reaching_for + stuck_on + torn_between simultaneously. The taxonomy is Mutually Inclusive. MECE discipline applies to the processing pipeline (Constitution → Interpret → Update → Gate → Speak), not the perception layer.
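For example, a single sentence can legitimately carry several signal kinds at once (the labels below are illustrative, not real model output):

```python
# One sentence, multiple overlapping signal kinds: MICE, not MECE.
sentence = "I want to build something meaningful, but I can't take that risk right now."
labels = ["reaching_for", "stuck_on", "torn_between"]  # illustrative labels
assert len(labels) > 1  # overlap is expected, not an error
print(sentence, "->", labels)
```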

How it observes

Observation is a dial, not a switch

Observation is controlled by two runtime knobs: retrieval (match_mode) and interpretation (method_mode).

Both

Keyword/semantic retrieval plus an LLM interpretation pass. Richest candidates.

Standard cost

Smart

Always run retrieval; invoke the LLM only when confidence clears the gate.

Best balance

Keyword

No runtime LLM call. Produce trace candidates directly from matched evidence.

Lowest cost
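The three modes differ only in when the LLM pass runs. A sketch of the decision (the function name and the gate threshold are assumptions):

```python
# Hypothetical dial: retrieval always runs; this decides the LLM pass.
def plan_interpretation(method_mode: str, retrieval_confidence: float,
                        gate: float = 0.6) -> bool:
    """Return True when an LLM interpretation pass should run."""
    if method_mode == "both":
        return True                           # richest candidates, standard cost
    if method_mode == "keyword":
        return False                          # lowest cost, no runtime LLM call
    if method_mode == "smart":
        return retrieval_confidence >= gate   # best balance
    raise ValueError(f"unknown method_mode: {method_mode}")

print(plan_interpretation("smart", 0.8))    # True
print(plan_interpretation("smart", 0.3))    # False
print(plan_interpretation("keyword", 0.9))  # False
```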

Consent loop

Reliability lives in the loop

A cheap keyword-derived candidate that the user confirms is just as reliable as a rich AI-synthesized one. The user's confirmation is what makes a claim true.

1. Observe: System notices signal
2. Confirm: User says "Save"
3. Grow: The map updates

Standard Pages

  • /studio: design programs, send invites, review signals.
  • Conversational Sessions (/chat): participant-facing guided response.

Common Friction

  • Returns 200 but no traces: run validate and inspect issues for source collisions.
  • Need stack-level rollups: use the /insights endpoint for a cross-subject view.
  • Preset is too heavy: start with tom_base and add specialized targets as needed.