a context-first operating system for go-to-market
Every function. Every metric. Every surface.
From first principles.
Substrate makes context the first-class object. Goals you can score. Skills that refuse to ship on stale or missing context. A reconciliation loop that catches drift before it ships. The same operator, with Substrate, produces materially better work, and can prove it.
The pain it fixes
Product, marketing, sales, and support each know different parts of the business: what we offer, why it matters, who it's for, and how we deliver it. None of it is first-class data. Understanding of the market, the customer, our positioning, and how we serve our customers' real-world jobs stays siloed. Briefs live in docs. Positioning stays theoretical. Messaging changes on a whim. Competitor research is fragmented. Battle cards rot in folders nobody reads. And the data lives somewhere else altogether. AI-assisted work makes it worse: outputs get faster; the floor stays put.
And underneath all of that, measurement, attribution, and calculated prioritization keep losing to internal politics. Whoever's most territorial about a decision usually wins it, even when the evidence says otherwise.
The thesis
The same person, with the same model, with shared context, produces materially better work than the same person without it. Faster, sharper, with citations that hold up. Most teams treat context as documentation. Substrate treats it as the load-bearing layer, alongside goals you can score and skills that refuse bad input. The AI is not the multiplier. The context the AI reads is. An AI-native system that raises the floor, so humans own the ceiling.
Context
Ten layers per project. Positioning, ICP, voice-of-customer, competitive, product knowledge, conversion narrative, brand voice, market context, roadmap, strategy. Canonical. Cited. Freshness-windowed.
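"Freshness-windowed" can be made concrete with a small check. A sketch assuming each layer lives as a markdown file under a client directory; the 14-day window and the `stale_layers` name are illustrative, not Substrate's actual implementation:

```shell
# Hypothetical freshness check: list context layers whose file has not
# been modified within the window. A gate can refuse to run any skill
# while this prints anything.
stale_layers() {
  # $1 = client context dir, $2 = freshness window in days
  find "$1" -name '*.md' -mtime +"$2"
}

# usage: stale_layers clients/acme 14
```

A skill wrapper can run this first and exit non-zero on any output, which is all "refusing stale context" needs to mean mechanically.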
Goals
Calibrated predictions with measurement contracts. Open with confidence; close with a Brier score. Authority follows accuracy, not the org chart.
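Closing a goal with a Brier score is just the squared gap between stated confidence and the 0/1 outcome, averaged across predictions. A toy calculation with two made-up ledger rows:

```shell
# Columns: goal id, stated confidence, outcome (1 = claim held, 0 = missed).
# Brier score = mean of (confidence - outcome)^2; lower is better calibrated.
printf '%s\n' \
  'lp-hero-test 0.8 1' \
  'q2-launch    0.6 0' |
awk '{ s += ($2 - $3) ^ 2; n++ } END { printf "brier=%.3f\n", s / n }'
# prints: brier=0.200
```

A confident hit (0.8, 1) costs 0.04; a confident-ish miss (0.6, 0) costs 0.36. Averaged over a ledger, that history is what "authority follows accuracy" is scored against.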
Skills
Gated CLI runtimes that refuse bad input before it ships. pre-publish-check, lp-ship, voice-enforce, audience-test, competitive-scout, claim-verify, +10 more.
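The refusal mechanic is ordinary exit-code gating. A minimal sketch in which `check_context` is a stand-in for a real skill such as pre-publish-check, and the `gate` helper is illustrative:

```shell
# Run a skill; halt the pipeline when it exits non-zero.
gate() {
  if "$@"; then
    echo "gate passed: $1"
  else
    echo "blocked by: $1" >&2
    return 1
  fi
}

# Stand-in skill that signals stale context by failing.
check_context() { return 1; }

gate check_context && echo "shipping" || echo "not shipping"
# prints "blocked by: check_context" on stderr, then "not shipping"
```

Because the gate is just an exit code, it composes with CI, cron, and git hooks without any Substrate-specific plumbing.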
Who Substrate is for
| function | what Substrate gives you |
|---|---|
| product | A read on what the buyer panel actually says, not what marketing wishes it said. Roadmap claims under the same gate. |
| PMM | Canonical positioning, ICP cut on real evidence, claim-verified narrative, calibrated launches. |
| content | Voice gate on every draft (kill-list, em-dash, throat-clearing), citation-aware claim checks, narrative coherence across pieces. |
| growth | LP gate ladder (lp-ship), CRO test queue with measurement contracts, baseline + lift targets cited from analytics. |
| SEO/AEO/GEO | Weekly aeo-tune watcher, schema.org delta, per-vertical AEO pass, off-domain citation audit. Built for AI-mediated buyers. |
| sales | Battle cards that update themselves, displacement framing, gated voice on outbound, claim-verify on every promise. |
| success | A single source of truth for "what we actually deliver", tied to the same positioning sales sold. No drift. |
| support | Help-doc voice + claim coherence with the rest of the surface. The same kill-list. The same gate. |
| leadership | Calibrated bets with measurement contracts. Brier-scored predictions. Authority follows accuracy. |
The eight layers
| # | layer | where | what |
|---|---|---|---|
| 01 | context | clients/<client>/ | Ten layers per project. Canonical. Cited. |
| 02 | skills | skills/ | Gated CLI runtimes. Refuse bad input. |
| 03 | goals | goals/ledger.md | Falsifiable predictions. Brier-scored. |
| 04 | routines | routines/ | Recurring workflows on a schedule. |
| 05 | ux | bin/substrate-status | Operator queue + dashboard. |
| 06 | calibration | metrics/ | Per-(operator, taste-type) Brier history. |
| 07 | principles | PRINCIPLES.md | Operating rules. Slowest layer to change. |
| 08 | reconciliation | weekly | Always-on freshness + link integrity. |
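A goals/ledger.md entry might look like the sketch below; the field names and values are illustrative, not the actual schema:

```markdown
## 2026-05-12 · lp-hero-test (open)
- claim: new hero copy lifts demo-request conversion by >= 15% within 30 days
- confidence: 0.70
- measurement contract: analytics event `demo_request`, baseline 2.1% (cited)
- close: record the 0/1 outcome and score (0.70 - outcome)^2
```

The measurement contract is agreed before the bet opens, so closing it is bookkeeping rather than negotiation.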
Quick start
```shell
$ git clone https://github.com/k3sava/substrate.git
$ cd substrate
$ export PATH="$PWD/bin:$PATH"
$ substrate --list
$ substrate pre-publish-check <draft.md> --client <your-product>
```
Self-evolution loop
Substrate is fed by a daily research-and-digest pipeline. The pipeline scans operator-relevant sources and produces dated digests of insights and sandbox-tested ideas. Each digest includes a tagged "apply-to-substrate" section: which skills, principles, or knowledge layers should change.
Substrate's daily ingest reads those tagged sections and either auto-merges low-risk knowledge updates or files a proposal for the human-in-the-loop gate. The framework keeps pace with what's actually working in the field, without anyone having to remember to update it.
```shell
$ substrate-ingest-digests --since yesterday
[ingest] used ask-flash for 2026-05-07
[ingest] extracted: knowledge/learnings/2026-05-07.md
[ingest] processed 1 digest through 2026-05-07
```
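The auto-merge-or-propose step can be sketched as a small router, assuming each digest is a markdown file whose "apply-to-substrate" section lists one change per line, marked [low-risk] when it can merge without review (the markers and the parsing are illustrative):

```shell
# Extract the "## apply-to-substrate" section from a digest and route
# each proposed change: low-risk items auto-merge, everything else
# becomes a proposal for the human-in-the-loop gate.
route_digest() {
  awk '/^## apply-to-substrate/ { on = 1; next }
       /^## /                   { on = 0 }
       on && NF' "$1" |
  while IFS= read -r change; do
    case "$change" in
      *"[low-risk]"*) echo "auto-merge: $change" ;;
      *)              echo "proposal: $change" ;;
    esac
  done
}
```

Anything not explicitly marked low-risk falls through to a proposal, which keeps human review the default rather than the exception.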
Why open source
So anyone can learn to build and implement AI-native systems for any business, and so we can all teach each other how to make them work better. Humans get to focus on creating real value.
Consulting
I'm available for consulting engagements if you'd like me to help your team set Substrate up for your context.