I sat across from a well-known entrepreneur at dinner last week. We got into
the product. He asked good questions. Then he asked the one I wasn't fully
ready for.
"Why won't one of the big AI labs just build this?"
I gave an answer. It wasn't a bad one. But it wasn't the real one — the one
I've been working through since I walked out of that restaurant.
This is that answer.
---
## The wrong frame
The question assumes we're building without an edge. We're not.
Any frontier model can explain anything. Ask it how to process a refund and
you'll get a clean, accurate answer. In seconds. For free.
The problem is not generating the explanation. The problem is verifying the employee understood it — and ensuring that understanding persists.
Those are different problems. Radically different. One is a retrieval problem.
The other is a systems problem. Confusing them is why most corporate training
is theater.
---
## What the LMS industry got wrong
An LMS measures completion. Did you finish the module? Did you pass the quiz?
These are proxies for learning. Bad ones. You can click through a fire
extinguisher safety module in four minutes and still not know how to use one.
Competence is not completion.
The real question is: can you do the thing? Under pressure, in a novel
situation, three months from now? That's what actual training is supposed to
produce — and almost nothing currently measures it.
Foundation models don't fix this. They make it worse. Now employees can ask
the AI to pass their compliance quiz for them. The checkbox still gets
checked. Nobody learns anything.
---
## The other side
Your new account manager starts Monday. Two years at a competitor, sharp on
the phone, no idea how your product handles multi-currency invoicing.
Her onboarding is a shared Drive folder. Forty-seven documents — some current,
some not. She can't tell which. A quiz asks her to identify the company
values.
Three weeks later she's on a call. The client asks about multi-currency. She
tabs to the folder. Ctrl+F. Nothing useful. She improvises. The client
notices.
Nobody failed her on purpose. The system failed her by not existing.
---
## The knowledge problem
When a new hire joins a company, the relevant knowledge is scattered. PDFs.
Slack threads. The institutional memory of a senior employee who's leaving
next quarter. Process docs that haven't been updated since the product changed.
The big labs have none of this. They have general knowledge — which is
genuinely remarkable. But they don't know your refund policy changed from 45
days to 30 last March. They don't know your best sales reps handle objections
differently than the training materials say.
The hard part is not the reasoning. It's the context.
We're building the brain that holds that context. Not as static content — as a
living knowledge graph. It updates when your docs update. It
connects knowledge to the people who need it, at the moment they need it.
A language model reasons over context. We're building the context layer for
every company we work with. That's not something any of the big labs ships.
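To make that concrete, here is a minimal sketch of what one node in that
graph might hold. Every name and field below is illustrative, not our actual
schema:

```python
# Illustrative sketch only: the structure and field names are hypothetical.
from dataclasses import dataclass, field
import hashlib

@dataclass
class KnowledgeNode:
    """One unit of company knowledge, tied to the source it came from."""
    topic: str                   # e.g. "refund policy"
    content: str                 # the current distilled answer
    source_doc_id: str           # which document this was ingested from
    source_hash: str             # fingerprint of the source at ingest time
    audiences: list[str] = field(default_factory=list)  # roles that need it

    def is_stale(self, current_source_text: str) -> bool:
        """True if the underlying document changed since ingestion."""
        current = hashlib.sha256(current_source_text.encode()).hexdigest()
        return current != self.source_hash
```

When a document changes upstream, stale nodes get re-ingested, and everyone
whose comprehension was verified against the old version gets flagged for
reinforcement.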
---
## The comprehension loop
The LLM is the engine. We're building the system.
What makes Duolingo work isn't vocabulary lists. It's the system around the
vocabulary: spaced repetition, forced recall, immediate feedback, adaptive
difficulty, streaks that make quitting feel costly. The pedagogy is the
product.
We're doing the same for enterprise knowledge. Socratic dialogue that probes
gaps. Feynman tests that force you to explain it simply — and reveal when you
can't. Roleplay that puts you in the scenario before it happens for real.
Spaced reinforcement so the thing you learned in onboarding is still there six
months later.
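The spaced-reinforcement piece, at least, runs on a mechanic that is decades
old. Here is a toy SM-2-style scheduler; the constants are purely
illustrative, not what a tuned system would ship:

```python
# Toy SM-2-style spaced repetition scheduler. Constants are illustrative;
# a real system would tune them per learner and per topic.
from dataclasses import dataclass

@dataclass
class ReviewState:
    interval_days: float = 1.0   # days until the next check-in
    ease: float = 2.5            # how quickly intervals grow

def next_review(state: ReviewState, recall_quality: int) -> ReviewState:
    """recall_quality: 0 (blank stare) to 5 (instant, correct answer)."""
    if recall_quality < 3:
        # Failed recall: reset to a short interval, review again soon.
        return ReviewState(interval_days=1.0, ease=max(1.3, state.ease - 0.2))
    # Successful recall: stretch the interval, nudge the ease factor.
    ease = max(1.3, state.ease + 0.1 - (5 - recall_quality) * 0.08)
    return ReviewState(interval_days=state.interval_days * ease, ease=ease)
```

Fail a recall check and the interval resets; pass and it stretches. None of
this is novel. The point is wiring it into live company knowledge.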
Any major model can power those interactions.
None of them can close the loop.
Closing it means connecting comprehension back to the workflow. Correlating it
with error rates in your CRM. Flagging when retraining is needed because
something changed upstream.
That loop — from knowledge, to comprehension, to performance, back to
knowledge — is infrastructure. It compounds with every company we touch, every
document ingested, every gap found and filled.
The generation pipeline runs five specialized agents with typed contracts and
validated outputs. But the agents are interchangeable. The loop is not.
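"Typed contracts" means, concretely, that each agent's output is validated
before the next agent consumes it. A hedged sketch of one hand-off, with
hypothetical stage, type, and field names:

```python
# Sketch of one validated hand-off between pipeline stages. The stage,
# type, and field names are hypothetical; the pattern is the point.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExtractedFact:
    claim: str            # what the extraction agent asserts
    source_doc_id: str    # every claim must trace back to a source
    confidence: float     # 0.0 to 1.0

def validate_facts(facts: list[ExtractedFact]) -> list[ExtractedFact]:
    """Gate between the extraction agent and the lesson-builder agent."""
    for f in facts:
        if not f.source_doc_id:
            raise ValueError(f"Unsourced claim rejected: {f.claim!r}")
        if not 0.0 <= f.confidence <= 1.0:
            raise ValueError(f"Invalid confidence on: {f.claim!r}")
    return facts
```

Swap any model in as the producer; the contract it has to satisfy stays
fixed. That is what interchangeable agents around a non-interchangeable loop
looks like.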
---
## The integration moat
"The big labs will just build this." Maybe. Eventually.
But they won't build the CRM hook that fires a micro-learning session before
your sales rep opens a new account type for the first time. They won't build
the Slack integration that surfaces the right context at the right moment.
They won't build the HR system routing that delivers the right training to the
right person based on role, location, and what they've already completed.
Not because they can't. Because that's not their business. They build
foundations. We build vertically, on top of them, in one domain, with
obsessive depth.
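To make the first of those concrete, here is a hedged sketch of the CRM hook.
The event shape, topic lookup, and delivery helper are all invented for
illustration; no real CRM API is shown:

```python
# Hypothetical webhook handler. The event shape, helper names, and
# lookup logic are invented for illustration.
def on_crm_event(event: dict) -> None:
    """Fires a micro-lesson when a rep opens an unfamiliar account type."""
    if event.get("type") != "account.created":
        return
    rep_id = event["rep_id"]
    account_type = event["account_type"]
    if account_type not in completed_topics(rep_id):
        send_micro_lesson(rep_id, topic=account_type)

def completed_topics(rep_id: str) -> set[str]:
    return set()  # stub: would query verified-comprehension records

def send_micro_lesson(rep_id: str, topic: str) -> None:
    print(f"Queued micro-lesson on {topic!r} for {rep_id}")  # stub delivery
```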
The moat is not the model. The moat is everything the model needs to actually work.
The data on what actually produces competence — what works, what doesn't, what
sticks — accumulates with every deployment. It doesn't exist anywhere else.
---
## What we're building
Not a chatbot. Not a video player. Not a better PowerPoint generator.
A system that turns a company's raw knowledge into something that actually
teaches. It ingests what you have — documents, procedures, institutional
memory scattered across tools — and converts it into adaptive learning. It
verifies comprehension, not completion. It updates when the source changes. It
meets employees where they already work.
The goal: no one on your team should ever wonder whether they know what they need to know.
You'll see it soon.
---
The model is not the product. The system built around it is. And systems take
time to build, integrate, and earn trust.
We're building the system.