If you're building with LLMs, your product already has a point of view. Sourced makes that point of view explicit and contestable.
Sign in to create API credentials and start integrating.
Before you build a custom UI, use the CLI to verify your stack's behavior. This ensures your listening criteria actually produce the signals you expect.
# 1. Start your local environment
uvicorn server.main:app --reload --port 8787

# 2. Run the automated verify -> observe -> insights loop
python3 scripts/quickstart_no_chat.py --base http://127.0.0.1:8787 --admin-secret "$SOURCED_ADMIN_SECRET"
This script validates your stack_id, observes a test payload, and checks insights.
Validate: Call /v1/stacks/{id}/validate to ensure your input meets the contract (min length, non-empty targets).
Observe: Send text to /observe. The system extracts claims based on your stack's listening criteria.
Insights: Use /v1/stacks/{id}/insights to see the aggregate signal across all participants.
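The validate step can be approximated client-side before you ever hit the network. A minimal sketch of a pre-check mirroring the contract described above (minimum length, non-empty targets); the field names, the 20-character floor, and the `targets` key are illustrative assumptions, not the API spec.

```python
def prevalidate(payload: dict, min_length: int = 20) -> list[str]:
    """Client-side mirror of the /validate contract (illustrative only)."""
    issues = []
    if len(payload.get("text", "").strip()) < min_length:
        issues.append("text below minimum length")
    if not payload.get("targets"):
        issues.append("targets must be non-empty")
    return issues
```

Running this before calling the real endpoint saves a round trip for obviously malformed input; the server's validation remains the source of truth.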
Default workflow
POST /v1/stacks/validate → POST /v1/stacks/observe → GET /v1/stacks/insights

A complete Python script that registers, observes text, and fetches insights. Copy it, fill in your credentials from Step 1, and run.
#!/usr/bin/env python3
"""Sourced quickstart — observe text and fetch insights."""
import requests

BASE = "https://bilalghalib--sourced-api-fastapi-app.modal.run"
APP_ID = "<your-app-id>"
APP_SECRET = "<your-app-secret>"

headers = {
    "X-Sourced-App": APP_ID,
    "Authorization": f"Bearer {APP_SECRET}",
    "Content-Type": "application/json",
}

# 1. Observe some text
resp = requests.post(f"{BASE}/v1/observe", headers=headers, json={
    "subject_id": "demo-user",
    "text": "I feel energized when I build products that help people.",
    "source_id": "turn_1",
})
print("Observe:", resp.status_code, resp.json())

# 2. Fetch insights
resp = requests.get(
    f"{BASE}/v1/stacks/<your-stack-id>/insights",
    headers=headers,
    params={"subject_ids": "demo-user"},
)
print("Insights:", resp.status_code, resp.json())
Replaces grant forms with conversation. Looks for the problem, the impact, and your specific role — listens for evidence of accountability.
Replaces the 121-question VIA Character Strengths survey with 7 conversations. Surfaces all 24 strengths — from Creativity to Spirituality — through stories instead of scales.
Replaces intake forms with guided dialogue. Looks for grounding, vision, obstacles, and commitment — asks where you are and what you're ready to change.
A playful philosophical demo. Looks for virtues, their excess, and their deficiency — find the mean between too much and too little.
Span → Trace → Thread. Data stays stable; behavior is configured in YAML.
Schema, Attunement, and Abilities define ontology, retrieval, and outcomes — without hardcoding app semantics.
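As a rough mental model, the Span → Trace → Thread hierarchy might look like the following. All field names here are illustrative guesses for orientation, not the actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Smallest stable unit: one excerpt of participant text."""
    text: str
    source_id: str

@dataclass
class Trace:
    """One extracted claim over a span, classified by signal_kind."""
    span: Span
    signal_kind: str      # verb predicate, e.g. "reaching_for"
    confidence: float

@dataclass
class Thread:
    """A participant's traces, accumulated across turns."""
    subject_id: str
    traces: list = field(default_factory=list)
```

The point of the split is the one stated above: the data shapes stay stable while YAML configuration decides how they behave.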
When confirmed signals meet an ability's requirements, Sourced resolves parameters and hands them to your app. You define what happens next.
Targets with type: "compose" trigger synthesis instead of extraction. When enough evidence accumulates, the Fractal Weave engine composes an artifact — a portrait, match summary, or cohort theme — from the participant's own words. The same atomic operation powers all three scopes: self (N=1), dyad (N=2), and cohort (N=All).
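A compose target might be configured along these lines, written here as a Python dict rather than the actual YAML; the keys and the evidence threshold are assumptions for illustration.

```python
compose_target = {        # illustrative shape, not the real target schema
    "id": "portrait",
    "type": "compose",    # triggers synthesis instead of extraction
    "scope": "self",      # "self" (N=1), "dyad" (N=2), or "cohort" (N=All)
    "min_evidence": 5,    # hypothetical accumulation threshold
}

def ready_to_compose(target: dict, evidence: list) -> bool:
    """True once enough confirmed evidence has accumulated."""
    return target["type"] == "compose" and len(evidence) >= target["min_evidence"]
```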
Every trace gets a signal_kind classifying what type of human signal it carries. The taxonomy uses verb predicates — database queries read like sentences.
Rules and intent from worldview config, loaded before conversation starts.
The system's evolving belief about the participant; multiple signals per turn (MICE).
How honest the system is about its own uncertainty.
| | Confident | Uncertain |
|---|---|---|
| Held | Reflect | Wait |
| Seeking | Offer | Be honest |
| Constraint | Unblock | Be honest |
Tension is relational (two held signals colliding), not a row.
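The table reads as a small lookup: given a stance and whether the system is confident, pick a response mode. A sketch, with stance and mode names taken from the table and the boolean keying as my own framing:

```python
# (stance, confident?) -> response mode, per the honesty table above
GATE = {
    ("held", True): "reflect",
    ("held", False): "wait",
    ("seeking", True): "offer",
    ("seeking", False): "be honest",
    ("constraint", True): "unblock",
    ("constraint", False): "be honest",
}

def response_mode(stance: str, confident: bool) -> str:
    return GATE[(stance, confident)]
```

Tension, being relational, falls out of the table deliberately: it is detected between two held signals, not keyed off a single row.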
MICE, not MECE: Human experience overlaps. One sentence triggers multiple dimensions — reaching_for + stuck_on + torn_between simultaneously. The taxonomy is Mutually Inclusive. MECE discipline applies to the processing pipeline (Constitution → Interpret → Update → Gate → Speak), not the perception layer.
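The MICE property is easy to demonstrate: a classifier should return every matching dimension rather than the single best one. A toy sketch with a made-up cue lexicon (the real system's detection is LLM-driven, not keyword matching):

```python
def tag_signals(sentence: str, lexicon: dict) -> list:
    """Mutually Inclusive: one sentence may carry several signal kinds."""
    lowered = sentence.lower()
    return [kind for kind, cues in lexicon.items()
            if any(cue in lowered for cue in cues)]

LEXICON = {   # toy cues for illustration only
    "reaching_for": ["i want", "i hope"],
    "stuck_on": ["stuck", "blocked"],
    "torn_between": ["torn", "but also"],
}
```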
Observation is controlled by two runtime knobs: retrieval (match_mode) and interpretation (method_mode).
Keyword/semantic retrieval plus an LLM interpretation pass. Richest candidates.
Always run retrieval; invoke the LLM only when confidence clears the gate.
No runtime LLM call. Produce trace candidates directly from matched evidence.
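The three behaviors above compose roughly like this; the mode names ("full", "gated", "direct") and the 0.7 gate are illustrative labels, not the actual `method_mode` values.

```python
def observe(text, retrieve, llm_interpret, method_mode="gated", gate=0.7):
    """Retrieval always runs; method_mode decides when the LLM is invoked."""
    candidates = retrieve(text)          # match_mode lives inside retrieve
    if method_mode == "direct":
        return candidates                # no runtime LLM call
    if method_mode == "gated":
        # invoke the LLM only when confidence clears the gate
        return [llm_interpret(c) if c["confidence"] >= gate else c
                for c in candidates]
    return [llm_interpret(c) for c in candidates]   # "full": richest candidates
```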
A candidate that the user confirms is just as reliable as a rich AI-synthesized candidate. The user's confirmation is what makes claims true.
/chat: participant-facing guided response.
Run validate and inspect issues for source collisions.
Use the /insights endpoint for a cross-subject view.
Start from tom_base and add specialized targets as needed.