Recurse
Recurse is a sense-making substrate for AI work — a user-owned knowledge layer that treats memory as navigable structure, not similar chunks to retrieve.
For years I’ve been chasing a specific kind of interface: workspaces that organize themselves through use, surfaces that materialize around intent rather than waiting behind app icons, knowledge systems that mirror how we actually think instead of forcing everything into hierarchies. Every project I built — Metasphere (knowledge as navigable terrain), Tweetscape (graph-based exploration), Trails (path-based discovery), Hunch (AI as thinking partner) — circled the same gap: where does the structured understanding actually accumulate?
The substrate was missing. You can’t build an application layer that surfaces structure if there’s no durable structure underneath. You can’t have context that persists across tools if there’s nowhere for that context to live. When I wrote about The Stack Behind Ephemeral Interfaces, I was trying to articulate what such a substrate would need: persistent memory, relationship-first retrieval, multi-scale structure, provenance tracking, cross-tool portability. That piece became less vision and more spec. So I started building.
Why retrieval isn’t enough
Most AI memory systems optimize for similarity: retrieve what looks most like your query. This works if you already know what you’re looking for. But these systems have two failure modes that compound over time.
Context collapse. When relevance becomes the only objective, systems drift toward reinforcing priors, smoothing out contradiction, surfacing the same clusters repeatedly. Inquiry narrows to “things like what you already asked.” The architecture has no mechanism for productive difference — no way to surface the counterexample, the adjacent frame, the thing that doesn’t quite fit but might matter. This is the retrieval version of model collapse: just as language models trained on their own output converge toward blander text, similarity-only retrieval converges toward a flattened version of your knowledge. The edges get smoothed. The tensions disappear.
Vendor lock-in. The accumulated understanding of your work lives inside Claude Projects, ChatGPT memory, or proprietary workspaces. You build understanding with one tool, switch tools, and the work evaporates — not because you forgot, but because the system has no durable representation of what you learned, what supports it, what remains unresolved. Every new session starts from scratch. Even when you paste context forward, you paste text, not structure. Provenance, contradictions, scale relationships — gone.
What Recurse does
Recurse ingests chat history, documents, videos, notes, URLs — anything that can be represented as text. The system extracts semantic structure, discovers patterns in the material, and builds a graph that humans or AI agents can query and walk. The animation below shows what this looks like: a source becomes sections, sections become typed frames (claims, evidence, insights, decisions), and frames reference other frames until cross-document connections emerge.
Beyond the graph itself, Recurse provides infrastructure for keeping knowledge current and making it accessible across tools:
| Mechanism | Function |
|---|---|
| Frames | Typed semantic units with named slots that reference each other. Navigate by relationship, not keyword. |
| Adaptive schemas | Structure emerges from your content. No upfront ontology required. |
| Temporal versioning | Updates create new versions with timestamps and diffs — not overwrites. |
| Source subscriptions | Subscribe to RSS feeds, documentation, newsletters. New content flows in automatically. |
| Context streams | Subscribe to expert-curated knowledge bundles, queryable alongside your own material. |
| Proxy injection | Route AI requests through the proxy. Context gets assembled and injected automatically. |
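
To make the last row concrete, here is a minimal sketch of what routing through the proxy could look like from the client side, assuming an OpenAI-compatible endpoint. The URL, model, and key are placeholders, not a documented Recurse API:

```python
from openai import OpenAI

# Hypothetical proxy endpoint: the proxy assembles relevant context from
# your graph and injects it into the request before forwarding it upstream.
client = OpenAI(
    base_url="https://proxy.recurse.cc/v1",  # placeholder URL, not a documented endpoint
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="gpt-4o",  # any model the upstream provider supports
    messages=[{"role": "user", "content": "What did we decide about caching last month?"}],
)
print(response.choices[0].message.content)
```

The appeal of this pattern is that it requires no tool-specific integration: anything that speaks the OpenAI wire format picks up persistent context for free.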
The architecture: RAGE
Recurse runs on RAGE (Recursive, Agentic Graph Embeddings) — an architectural approach that treats retrieval as navigation rather than matching.
The core move: apply the same processing recursively at multiple levels of hierarchy. For each level (document → section → paragraph), RAGE embeds the unit, summarizes and re-embeds it at a different scale, connects it into broader ↔ narrower relationships, and cross-links across documents. The same idea can be traced as a sentence, a section, or a whole document — each scale offering a different lens. Retrieval becomes a feedback loop: retrieve initial candidates, inspect which paths were activated, and decide whether to go deeper, broader, or sideways.
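
A minimal sketch of that recursive pass, with stubbed-out embed() and summarize() standing in for real models and cross-document linking omitted. This illustrates the shape of the idea, not Recurse’s internals:

```python
def embed(text: str) -> list[float]:
    return [float(len(text))]  # stub: a real system calls an embedding model

def summarize(text: str) -> str:
    return text[:120]  # stub: a real system calls an LLM

def process(unit: dict, graph: list[tuple[str, str, str]], parent: dict | None = None) -> None:
    """Embed a unit at its own scale, re-embed its summary at a coarser
    scale, wire broader/narrower edges, then recurse into children."""
    unit["embedding"] = embed(unit["text"])
    unit["summary_embedding"] = embed(summarize(unit["text"]))
    if parent is not None:
        graph.append(("narrower", parent["id"], unit["id"]))
        graph.append(("broader", unit["id"], parent["id"]))
    for child in unit.get("children", []):  # document -> section -> paragraph
        process(child, graph, parent=unit)

doc = {"id": "doc-1", "text": "full document text", "children": [
    {"id": "sec-1", "text": "section text", "children": []},
]}
graph: list[tuple[str, str, str]] = []
process(doc, graph)  # graph now holds the broader/narrower edges
```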
This enables capabilities that flat chunk retrieval can’t support:
| Capability | What changes |
|---|---|
| Provenance | Trace a claim to its origin: what it came from, what supports it, what it contradicts. |
| Multi-scale navigation | Move between overview and detail without losing the path — and without re-chunking the world each time. |
| Emergent structure | Structure grows over time. No perfect taxonomy required up front. |
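
To illustrate the provenance row: once relationships live as typed edges, tracing a claim is a graph walk rather than another similarity query. A toy example with hypothetical edge names:

```python
# Hypothetical edge types over a (relation, source, target) edge list.
graph = [
    ("supported_by", "claim-7", "evidence-3"),
    ("derived_from", "evidence-3", "paper.pdf#sec-4"),
]

def neighbors(node: str, relation: str) -> list[str]:
    """All nodes reachable from `node` along one typed relation."""
    return [dst for rel, src, dst in graph if rel == relation and src == node]

for ev in neighbors("claim-7", "supported_by"):
    print(ev, "came from", neighbors(ev, "derived_from"))
# evidence-3 came from ['paper.pdf#sec-4']
```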
Frames: the semantic unit
Recurse doesn’t store text chunks — it extracts frames: structured patterns where specific roles consistently appear together. A troubleshooting frame has slots for problem, trigger, diagnosis, cause, solution. A research frame has slots for claim, evidence, method, limitations. The schema emerges from the patterns in your material — no upfront ontology required. Process research papers and the system develops claim/evidence frames; process meeting transcripts and it discovers decision/action patterns.
What makes frames powerful: slots can reference other frames. A decision references a discussion, which references prior decisions, which reference technical constraints. This creates nested structure that mirrors how concepts actually relate — typed relationships you can navigate, not keywords to match.
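
A minimal sketch of that nesting, with hypothetical frame types and slot names. The point is that navigation follows typed references, not keyword matches:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    id: str
    frame_type: str                                          # e.g. "decision", "discussion"
    slots: dict[str, object] = field(default_factory=dict)   # values: text or Frame references

def walk(frame: Frame, depth: int = 0) -> None:
    """Print a frame, then recursively follow any slot that references another frame."""
    print("  " * depth + f"{frame.frame_type} ({frame.id})")
    for value in frame.slots.values():
        if isinstance(value, Frame):
            walk(value, depth + 1)

constraint = Frame("f-3", "constraint", {"text": "p95 latency budget is 200ms"})
discussion = Frame("f-2", "discussion", {"summary": "cache vs. precompute", "context": constraint})
decision = Frame("f-1", "decision", {"outcome": "adopt caching", "basis": discussion})

walk(decision)  # decision -> discussion -> constraint, purely by reference
```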
The term “frame” comes with baggage — mention symbolic AI and people start running. Fair enough: Fillmore’s frame semantics, Minsky’s knowledge frames, Lisp’s homoiconicity — the GOFAI systems of the 1970s and 80s had the right question (how do you represent a situation?) but the wrong tools. Hand-coded schemas, brittle rules that shattered outside narrow domains. When neural networks won in the 1990s, we gained flexibility but lost explicit structure.
RAGE is neurosymbolic: symbolic structure with learned, adaptive frames. The key difference from GOFAI: frames here are emergent, not hand-coded. Fluid, not static. Contextual, not universal. Write-back cascades let them learn from reasoning. What we’re attempting is what GOFAI couldn’t — with tools GOFAI didn’t have.
Living knowledge
Standard knowledge systems present an unfortunate choice: keep outdated information and risk propagating errors, or delete it and lose historical context.
Recurse implements a third approach: knowledge evolves while preserving its evolution. When new content arrives that updates previous understanding, the frame gets rewritten with current knowledge but maintains links to previous versions — complete with timestamps, diffs, and explanations of what changed. A 2020 claim about “state-of-the-art performance” doesn’t get deleted when 2024 papers show superior methods. It gets updated to reflect both the historical claim and what superseded it. When you query, you get the current understanding plus how it developed.
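
A sketch of what that could look like mechanically, reusing the Frame dataclass from the earlier sketch. Slot and field names are illustrative:

```python
import difflib
from datetime import datetime, timezone

def update_frame(current: Frame, slot: str, new_value: str, reason: str) -> Frame:
    """Create a successor version that links back to what it replaced,
    carrying a timestamp, a diff, and an explanation of the change."""
    old_value = str(current.slots.get(slot, ""))
    diff = "\n".join(difflib.unified_diff([old_value], [new_value], lineterm=""))
    successor = Frame(
        id=f"{current.id}@{datetime.now(timezone.utc).isoformat(timespec='seconds')}",
        frame_type=current.frame_type,
        slots={**current.slots, slot: new_value},
    )
    successor.slots["supersedes"] = current              # prior version stays reachable
    successor.slots["change_log"] = {"reason": reason, "diff": diff}
    return successor

# The old claim is not deleted; it survives as a linked prior version.
v2 = update_frame(decision, "outcome", "precompute instead of cache", "new benchmarks")
```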
The trajectory
Recurse is infrastructure for something else: auto-associative workspaces that organize themselves through use, that surface relevant context without you having to remember where you put things, that adapt to how you think rather than demanding you adapt to their taxonomy. These need something to associate with — durable, portable, structured understanding that persists across sessions, tools, and time.
This is the hypothesis behind Ephemeral Interfaces: transient surfaces over persistent substrates. What computation now makes possible are interfaces that materialize around intent, shaped by where you are in the loop, then dissolve when the work is done. What persists is not the UI, but understanding — structured traces, relationships, provenance. Without a durable, navigable context layer that keeps provenance intact, ephemeral interfaces become ephemeral in the wrong way: they dissolve and leave nothing behind.
Recurse is in active development. If you’re building systems that need to hold context over time — or if you’re tired of watching your AI work evaporate every time you switch tools — the product is live.
recurse.cc — Upload sources, route AI requests through the proxy, and start building context that actually persists.