How Trail Works

How a Brain Actually Remembers: Why Trail Compiles Knowledge Instead of Searching For It

RAG treats knowledge like a filing cabinet. Brains do something fundamentally different — and so does Trail. The architecture matters more than the model.

Why Trail compiles knowledge instead of searching for it

In July 1945, Vannevar Bush published an essay in The Atlantic describing a machine he called the memex. The machine was hypothetical — built around microfilm reels, optical projectors, and electromechanical levers — but the idea behind it was radical. Bush argued that human memory does not work the way library catalogs work. The mind, he wrote, "operates by association." It follows trails of meaning from one thought to the next, building chains of connection that no hierarchical index can capture.

Eighty-one years later, the dominant architecture for AI knowledge systems — retrieval-augmented generation, or RAG — has quietly inherited the very paradigm Bush was pushing back against. RAG is, fundamentally, a library catalog with a language model bolted on top. It searches. It retrieves. It generates. And then it forgets.

Trail is built on a different premise. It is closer to how a brain actually remembers — and that difference is not metaphorical. It is architectural, and it has measurable consequences for how knowledge accumulates over time.

This is the story of what your brain actually does when you remember something, why RAG only superficially resembles that process, and why Trail's compile-time architecture is the right answer for the same reasons it has been the right answer for two billion years of evolution.

What your brain is actually doing right now

The first thing to understand about human memory is that it is not storage. It is reconstruction.

When you recall the smell of coffee from a café you visited last spring, your brain is not playing back a recording. It is re-assembling the experience in real time from fragments scattered across different cortical regions. The visual elements are reconstructed from the visual cortex. The smell is rebuilt by the olfactory bulb. The emotional tone comes from the amygdala. The verbal narrative is constructed by the language areas. The hippocampus fires a coordinated signal that activates these fragments simultaneously, and consciousness assembles them into what feels like a single coherent memory.

This is why every time you recall something, you change it slightly. Memories are not static files. They are active reconstructions that get updated each time they are accessed. Neuroscientists call this memory reconsolidation, and it has been studied extensively for the past two decades.

The second thing to understand is that knowledge is not stored in neurons. It is stored in the connections between them. The human brain has roughly 86 billion neurons and an estimated 100 to 1,000 trillion synapses — the connections through which neurons signal each other. Learning something new strengthens specific synapses. Forgetting weakens them. The principle, summarized by Donald Hebb in 1949, is that "neurons that fire together, wire together."

Meaning, then, is a pattern of connections. A concept like "stress" does not exist in any single neuron. It exists as a distributed pattern across thousands of neurons firing together — pulling in associations with cortisol, sleep disruption, treatment options, your aunt's nervous breakdown, the smell of a hospital waiting room, and forty-seven other related fragments.

The third thing — and this is where the architecture starts to matter for AI — is that most of your brain's work happens when you are not asking questions. It happens during consolidation, primarily during sleep, when the hippocampus replays the day's experiences in compressed form and transfers them to the neocortex for long-term integration. New experiences are not just filed alongside old ones. They are woven in. Contradictions get flagged. Cross-references get drawn. Patterns get extracted. By the time you wake up, what you learned yesterday is no longer a separate item in your memory; it has become part of an integrated structure.

When you are then asked a question, your brain does not search a database of memory fragments. It activates an already-integrated structure. The work is already done. The answer comes quickly and coherently because the compilation happened during the night.

This distinction — compile-time versus query-time — is the single most important difference between how brains work and how RAG works.

What RAG actually does

A RAG system works like this. The user asks a question. The question is converted to a vector embedding. The system searches a database of document chunks for the N most similar embeddings. The retrieved chunks are stuffed into a language model's context window. The language model generates an answer based on what it just read.
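That loop can be sketched in a few lines. This is a toy, not any real RAG stack: the bag-of-words `embed` function stands in for a learned embedding model, and `answer_query` stands in for the retrieval step that would normally feed a language model's context window. What matters is the shape of the loop — and what is missing from it.

```python
# Minimal, stateless RAG retrieval loop. embed() and answer_query()
# are illustrative stand-ins, not a real system's API.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for a vector embedding: word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer_query(question: str, chunks: list[str], top_n: int = 2) -> list[str]:
    # 1. Embed the question.  2. Rank stored chunks by similarity.
    # 3. Return the top N chunks (a real system would now stuff them
    #    into a language model's context window and generate prose).
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_n]
    # Note what is absent: nothing persists between calls. The chunks
    # are never integrated, cross-referenced, or updated.

chunks = [
    "Cortisol rises under chronic stress.",
    "The hippocampus replays experiences during sleep.",
    "Vector databases index document chunks by embedding.",
]
print(answer_query("what does stress do to cortisol", chunks, top_n=1))
```

Every call starts from zero: the corpus is read-only, and the system carries no state from one question to the next.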

Then the system forgets everything and waits for the next question.

There is no persistent structure being built up over time. The documents are never integrated with one another. Contradictions are not detected. Cross-references are not created. The system never says "we have seen this before, let us update our understanding." Every query is an amnesiac moment. The system wakes up, searches, answers, forgets.

This is not a knowledge system. It is a search engine with a language model glued on as a presentation layer.

The reason RAG became the dominant pattern is not that it is the best architecture for accumulating knowledge. It became dominant because it is easy to build. A vector database, an embedding model, a language model, and a few hundred lines of code, and you have a working RAG pipeline. The architecture skips the hard problem — how new information should be integrated into existing understanding — and replaces it with a simpler problem: how to find relevant fragments fast.

For some use cases, that trade-off is fine. If you just need to look something up in a static corpus, RAG works. If you need a chatbot that can answer questions about your product documentation, RAG works.

But for knowledge that should accumulate — clinical experience built up over decades, organizational knowledge that compounds across projects, scientific understanding that grows with each new paper read — RAG fails the same way a library card catalog fails to be a brain. It can find the books. It cannot read them, integrate them, and become wiser.

Where the analogies actually map

If we line up the biological process against the two AI paradigms, the picture becomes clear:

| Biological process | Trail | RAG |
| --- | --- | --- |
| Hippocampus encodes new experience | Source ingestion | Document ingestion |
| Consolidation transfers knowledge to neocortex | Compile step updates the wiki | (does not exist) |
| Neocortex stores integrated long-term knowledge | Wiki pages | (does not exist) |
| Recall activates an existing pattern | Read the wiki page | (must reconstruct from scratch every query) |
| Spreading activation finds related knowledge | Follow wiki links | Vector similarity search (weak parallel) |
| Sleep consolidates the day's learning | Background lint pass | (does not exist) |
| Forgetting prunes the irrelevant | Stale page detection | (does not exist) |

RAG has, essentially, only one row of useful parallel — vector similarity search as a thin imitation of spreading activation. But it is search without integration, retrieval without consolidation, recall without long-term memory. It is the part of cognition that happens in the first 200 milliseconds of seeing a word on a page. The rest of cognition — the part that turns information into understanding — has no equivalent in RAG.

Trail has the full picture. Sources are ingested. The compile step integrates them with the existing wiki, the way consolidation integrates new experiences with neocortical memory. Wiki pages are the long-term store, the way neocortex is. Queries activate already-compiled structures, the way recall activates patterns rather than searching a database. A background lint pass spots stale pages, contradictions, and gaps — the way sleep does maintenance on the brain's storage. And like a healthy mind, Trail forgets selectively, marking pages that have not been touched in months as candidates for review or retirement.
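The shape of that architecture can be sketched as code. Everything here is an assumption for illustration — the class and method names (`Wiki`, `compile_source`, `lint`) are not Trail's actual API, and the "conflict" check is a crude placeholder that surfaces any new claim on an existing topic for review. The point is where the work happens: at ingestion time, not at query time.

```python
# Compile-time knowledge store, sketched under stated assumptions.
# Integration happens when a source is ingested; reading a page is
# just activation of an already-compiled structure.
from dataclasses import dataclass, field
import time

@dataclass
class Page:
    body: str
    sources: list[str] = field(default_factory=list)  # provenance
    updated_at: float = field(default_factory=time.time)

class Wiki:
    def __init__(self):
        self.pages: dict[str, Page] = {}  # persistent long-term store

    def compile_source(self, source_id: str, facts: dict[str, str]) -> list[str]:
        # Integrate a new source into existing pages; return anything
        # that should be surfaced for human review.
        flagged = []
        for topic, claim in facts.items():
            page = self.pages.get(topic)
            if page and claim not in page.body:
                # Crude placeholder: any new claim on a known topic
                # is flagged as a candidate conflict.
                flagged.append(f"{topic}: '{page.body}' vs '{claim}'")
            body = claim if page is None else page.body + " " + claim
            sources = ([] if page is None else page.sources) + [source_id]
            self.pages[topic] = Page(body, sources)
        return flagged

    def read(self, topic: str) -> str:
        # Query time: no search pipeline, no reconstruction.
        return self.pages[topic].body

    def lint(self, stale_after_s: float) -> list[str]:
        # Background maintenance: flag pages untouched too long.
        now = time.time()
        return [t for t, p in self.pages.items()
                if now - p.updated_at > stale_after_s]

wiki = Wiki()
wiki.compile_source("paper-1", {"stress": "Chronic stress elevates cortisol."})
flagged = wiki.compile_source("paper-2", {"stress": "Cortisol response habituates over time."})
print(wiki.read("stress"))
```

The contrast with the RAG loop is the persistence: the tenth call to `compile_source` runs against the state left by the first nine.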

Why Karpathy was right

In October 2025, Andrej Karpathy described this same architecture as an "LLM Wiki." His framing was that the next frontier for language models was not to generate more code or write more text, but to manage knowledge as a continuous compiler — turning raw sources into a structured, cross-referenced wiki that the model maintains over time.

The argument landed because it named what RAG had quietly been getting wrong. RAG had treated language models as oracles that need their context refilled with each query. The LLM Wiki framing flipped it: the model is not the oracle, the wiki is. The model's job is to keep the wiki coherent, integrated, and up to date. The user reads the wiki — sometimes through the model, sometimes directly — and the work of building understanding happens in the background, before any query is ever asked.

This is exactly what Bush proposed in 1945. He could not have built it then; the technology to compile knowledge automatically did not exist. But he saw clearly that the human mind compiles, it does not search. He named the mechanism: associative trails. He sketched the user interface: a desk with screens, a way to mark and follow connections, a permanent record that grew with use.

The microfilm reels in the base of Bush's memex desk — the same spools we chose as the visual mark for Trail — were not chosen for their technical sophistication. They were chosen because they could hold a trail: a sequence of frames linked in a meaningful order, with annotations along the way. The trail is the unit of compiled knowledge. It is what the memex was for.

Trail, the engine, builds and maintains those trails for you. The SaaS product built on Trail is where you can see them, walk them, and add to them. The architecture is the architecture Bush described. The compiler is the language model he could not have anticipated.

What this means in practice

For someone building a knowledge base on top of Trail, the practical consequences of compile-time architecture show up in four places:

Knowledge accumulates instead of resetting. The tenth source you ingest changes the wiki in light of the previous nine. Patterns get extracted across sources. Contradictions get flagged when they appear. The wiki becomes more useful with each addition, the way a researcher becomes more knowledgeable with each paper read. RAG, by contrast, treats the tenth source the same way it treated the first — as just another chunk to potentially retrieve.

Provenance is structural, not bolted on. Because every wiki page in Trail is compiled from specific sources, and every claim is linked to its source revision, you can always answer the question "where did this come from?" Trail's wiki pages know their parents. When a source is updated or retracted, the affected pages are flagged for re-review automatically. RAG retrieves citations, but the citations are correlations between vectors, not structural lineage.
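Structural lineage makes that re-review mechanical rather than heuristic. A minimal sketch, assuming nothing about Trail's internals (the `build_source_index` and `retract` names are hypothetical): because each page records the source revisions it was compiled from, inverting that mapping answers "which pages does this retraction touch?" without any similarity search.

```python
# Provenance as structure: invert page -> sources, then a retraction
# names its affected pages directly. Names are illustrative.
from collections import defaultdict

pages = {
    "stress": {"body": "Chronic stress elevates cortisol.",
               "sources": ["paper-1@rev2"]},
    "sleep":  {"body": "Sleep consolidates the day's learning.",
               "sources": ["paper-1@rev2", "paper-3@rev1"]},
}

def build_source_index(pages: dict) -> dict[str, list[str]]:
    # Invert the page -> sources mapping into source -> pages.
    index = defaultdict(list)
    for topic, page in pages.items():
        for src in page["sources"]:
            index[src].append(topic)
    return index

def retract(source: str, pages: dict) -> list[str]:
    # Pages to flag for re-review: read straight off the lineage.
    return build_source_index(pages).get(source, [])

print(retract("paper-1@rev2", pages))
```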

Curation is first-class. Bush's memex assumed a human "trailblazer" who built trails of meaning from raw material. Trail assumes the same. The Curation Queue is where new candidate knowledge — auto-summaries, contradiction alerts, gap suggestions, chat-derived insights — flows to a human who decides what becomes part of the wiki. The LLM proposes; the curator disposes. This matches how scientific knowledge actually accumulates: through review, integration, and validation, not through search.
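"The LLM proposes; the curator disposes" is itself a simple data flow. The sketch below is an assumption-laden illustration — the `Candidate` fields and the `curate` function are invented for this example, not Trail's queue schema — but it captures the invariant: nothing enters the wiki without an explicit human decision.

```python
# Curation queue sketch: candidates flow past a human decision
# function; only accepted items become wiki content. Illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    kind: str      # e.g. "auto-summary", "contradiction", "gap"
    topic: str
    text: str

def curate(queue: list[Candidate], decide) -> tuple[list[Candidate], list[Candidate]]:
    # Split candidates into accepted (become wiki content) and
    # rejected (discarded), per the curator's decision function.
    accepted = [c for c in queue if decide(c)]
    rejected = [c for c in queue if not decide(c)]
    return accepted, rejected

queue = [
    Candidate("auto-summary", "stress", "Summary of three new papers."),
    Candidate("contradiction", "sleep", "Paper 4 disputes the replay claim."),
]
# A curator who takes contradiction alerts and defers everything else:
accepted, rejected = curate(queue, lambda c: c.kind == "contradiction")
print([c.topic for c in accepted])
```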

Better models compound your existing knowledge. Because Trail's wiki is the persistent artifact, not the language model, replacing the underlying model with a better one means your existing wiki gets recompiled with sharper reasoning. The knowledge you built up over months gets smarter without being rebuilt. RAG systems, by contrast, are largely indifferent to model upgrades — the search results are the same, only the prose answer changes.

The point Bush was actually making

Bush's 1945 essay is often read as a technological prediction. It is more usefully read as a critique of how knowledge work was being done.

He looked at the explosion of scientific literature after the war and saw that the bottleneck was no longer information storage. Microfilm could already fit the Encyclopædia Britannica into a matchbox. The bottleneck was integration — the work of connecting one piece of information to the next, of building trails of meaning that future researchers could follow. He believed that mechanizing this work was the great task of post-war science.

Eighty-one years later, that task is finally tractable. Language models are good enough to do the integration work that human researchers cannot keep up with. But only if we use them for that purpose — to compile, to consolidate, to build long-term structures of meaning — instead of using them as more sophisticated search engines.

RAG is the search engine version of the future Bush did not want. Trail is the version he did.

The brain does not search. It compiles. So does Trail.
