Divergence Engines

25 October 2025
AI · RAG · GraphRAG · Context Engineering · Retrieval · Memory Systems · Knowledge Graphs · Complexity

Most AI “memory” and context-engineering stacks optimize for one thing: similarity. It works — until it doesn’t. When relevance becomes the only objective, exploration decays, novelty dies, and systems converge towards semantic heat death.

Context collapse is not a training-data problem. It’s an inference-time dynamic produced by retrieval architectures that fear surprise. The fix isn’t “better similarity search,” but adding a second force: divergence primitives that preserve variance, surface contradictions, and build bridges across attractor basins.
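To make that "second force" concrete, here is a minimal sketch, assuming candidate chunks arrive as embedding vectors: a maximal-marginal-relevance (MMR) re-ranker that scores each candidate by query similarity minus redundancy with what has already been selected. MMR is a standard diversity heuristic, used here only as a stand-in for the divergence primitives discussed in this series, not the full mechanism; `mmr_select`, `lam`, and the cosine helper are illustrative names, not an existing API.

```python
import numpy as np

def mmr_select(query_vec, cand_vecs, k=5, lam=0.5):
    """Greedy MMR: pick k candidates balancing relevance to the
    query against similarity to items already picked."""
    def cos(a, b):
        # Cosine similarity with a small epsilon to avoid division by zero.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    selected, remaining = [], list(range(len(cand_vecs)))
    while remaining and len(selected) < k:
        best_i, best_score = None, -np.inf
        for i in remaining:
            relevance = cos(query_vec, cand_vecs[i])
            # Redundancy: worst-case overlap with anything already selected.
            redundancy = max(
                (cos(cand_vecs[i], cand_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lam * relevance - (1 - lam) * redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
        remaining.remove(best_i)
    return selected
```

At `lam = 1.0` this degenerates into pure similarity ranking, exactly the failure mode described above; lowering it reintroduces variance into the retrieved set at the cost of raw relevance.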

Cover image: https://kqdcjvdzirlg4kan.public.blob.vercel-storage.com/content/articles/2025-divergence-engines/published/website/images/cover.png

Overview

This series is a compact argument for a design shift in “AI memory” and context engineering: away from stacks that only optimize relevance, and toward systems that can deliberately expand the inquiry when the moment calls for it.


This series is part of ongoing research tied to my work on Recurse, a sense‑making substrate for AI work focused on context injection, relationship-based retrieval, and portability across providers. It should be worthwhile reading for anyone building context graphs, retrieval or agent memory systems, or designing "AI assistants" that help inquiry expand rather than just answer faster.