Three Filters on the Gate: How Trail's Curation Policy Borrows from the Structure of Your Mind
A knowledge system without filters isn't a brain; it's a garbage heap. Trail's trusted-pipeline, confidence-threshold, and no-contradictions trio isn't an arbitrary engineering choice. It mirrors a pattern that hundreds of millions of years of nervous-system evolution converged on.
The gate is the architecture
Every persistent knowledge system eventually has to answer the same question: what gets written down, and on whose authority?
For a brain, the answer is not conscious. You do not sit down each evening and decide which of the day's experiences to encode into long-term memory. Something in you decides, below the level of attention, and what gets through is the substrate of everything you will later think of as your understanding of the world. What gets filtered out is, by definition, unrecoverable — you will never know what you nearly remembered.
For a knowledge infrastructure engine, the same question becomes an explicit design problem. Trail compiles sources into a cross-referenced wiki maintained by a language model. If every candidate that the compiler proposes were admitted without scrutiny, the wiki would drift into incoherence within weeks. Contradictions would accumulate. Low-confidence speculation would be indistinguishable from verified claims. The system would look like it was learning, but what it was actually doing was accreting noise.
So Trail has a gate. Three filters on that gate, to be precise. They sit between the candidate queue (F17) and the wiki itself, and they decide — per candidate, per event — whether a piece of proposed knowledge gets auto-approved, gets raised to a human curator, or gets rejected outright. Today the first filter is live in packages/core/src/queue/policy.ts. The second ships in F19 iteration 2. The third depends on F32, the background lint pass that surfaces contradictions across sources.
What is interesting — what this essay is about — is that these three filters are not a set of engineering heuristics chosen for convenience. They are, almost exactly, the three filters that Bandler and Grinder named in 1975 when they wrote The Structure of Magic and founded what became Neuro-Linguistic Programming. They are the filters that thalamic gating, the reticular activating system, and the prefrontal cortex implement biologically. They are, in a different vocabulary, the filters Karl Friston's free-energy principle formalises as the way a predictive brain keeps itself coherent.
The correspondence is not metaphor. It is structural, and it gives us a way to reason about why Trail's architecture has the shape it does.
What NLP actually said
Neuro-Linguistic Programming — the original 1975 work, not the self-help cottage industry it later spawned — was an attempt to model how Fritz Perls, Virginia Satir, and Milton Erickson achieved therapeutic results that seemed almost magical to observers. Bandler and Grinder's claim, reduced to its core, was that every one of us builds an internal model of the world, and that this model is not the world itself. It is a map, and the territory is always larger.
The map gets built from experience. But experience does not enter the map unfiltered. Three universal processes mediate between what happens to a person and what they come to believe about the world:
- Generalisations. Patterns extracted across many experiences, then applied as defaults. A child who burned their hand on one hot stove generalises to "hot things can hurt." The pattern becomes a rule, and the rule runs automatically on future candidates.
- Deletions. Selective filtering of what gets stored based on attention, relevance, and perceived trust. You were not aware of the pressure of your chair against your back until this sentence pointed at it. Most of what strikes your senses is deleted before it ever becomes a memory.
- Distortions. Reshaping of incoming information to fit the existing model. New facts that fit slide into place quietly. New facts that contradict get reshaped, questioned, or held in a provisional state until the inconsistency is resolved.
Bandler and Grinder argued that these filters are not flaws. They are what makes cognition possible. Without generalisation, you would have to relearn every situation from scratch. Without deletion, you would be overwhelmed. Without distortion, you would be unable to hold a coherent model of anything. The filters are the price of having a map at all.
But the filters also explain how a model of the world can become pathological. When deletion is too aggressive, important signals are missed. When generalisation is too rigid, exceptions cannot register. When distortion dominates, the model becomes immune to correction and drifts away from reality.
A good knowledge system, biological or artificial, has to tune these filters carefully. Too open and the model becomes noise. Too closed and it becomes dogma.
The thalamic gate
Neuroscience arrived at the same three-filter picture from a different direction.
The thalamus is a small, dense structure buried deep in the brain. Almost every signal entering the cortex — vision, hearing, touch, proprioception, the lot — passes through it first. The thalamus is not a relay station in the passive sense. It is a gate. It decides what reaches the cortex with enough signal strength to register, what gets attenuated, and what gets dropped entirely. This is thalamic gating, and it is running continuously beneath the floor of conscious experience.
Sitting alongside it, the reticular activating system (RAS) — a diffuse network in the brainstem — controls the general arousal level that determines whether the gate is wide or narrow. When you are alert, the RAS is active, the gate is wide, and more signals reach the cortex. When you are drowsy, the gate narrows. When you sleep, it closes almost entirely so that consolidation can happen without interference.
Above both sits the prefrontal cortex, which provides executive control. When a signal does reach awareness, the prefrontal cortex evaluates it against existing beliefs, prior experience, and current goals. If the signal conflicts with something the brain already takes to be true, the anterior cingulate cortex lights up, the amygdala joins in, and what the person experiences is the subjective feeling of this does not fit. Cognitive dissonance is the name psychologists gave this signal decades before anyone mapped the circuitry.
Lay these three structures against the NLP filters and the correspondence is almost one-to-one:
- Deletions → thalamic gating. A signal either passes or does not.
- Generalisations → prior patterns encoded in cortex and basal ganglia that bias what the gate admits.
- Distortions → prefrontal reconciliation between new input and existing model.
Lay them against Trail's three filters and the correspondence holds again.
Trail's three filters, one by one
Trusted pipeline — deletion by provenance
The first filter is already in production. shouldAutoApprove(candidate) in packages/core/src/queue/policy.ts returns true only when two conditions hold: the candidate has no createdBy user (meaning it originated from a trusted background pipeline — ingest, scheduled recompile, source retraction — rather than from a human) and the candidate's kind is on the trusted list. Everything else flows to a curator.
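A minimal sketch of that check, assuming the candidate shape the essay describes (a createdBy field set only for human submissions, and a kind string). The trusted-kind names here are illustrative assumptions, not the actual list in packages/core/src/queue/policy.ts:

```typescript
// Sketch of the provenance filter. Field names follow the essay's
// description; the trusted-kind list is a hypothetical example.
interface Candidate {
  kind: string;
  createdBy?: string; // present only when a human submitted the candidate
}

// Hypothetical trusted list: kinds emitted by background pipelines.
const TRUSTED_KINDS = new Set(["ingest", "recompile", "source-retraction"]);

function shouldAutoApprove(candidate: Candidate): boolean {
  // Both conditions must hold: machine provenance AND a trusted kind.
  return candidate.createdBy === undefined && TRUSTED_KINDS.has(candidate.kind);
}
```

The two conditions are conjunctive on purpose: a trusted kind submitted by a human still goes to a curator, because provenance, not content, is what this filter trusts.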
This is deletion by provenance. A signal from a known source — the compiler itself, the retraction handler, the scheduled lint — is treated the way a brain treats a signal from one of its own well-calibrated sensory systems. It does not require conscious scrutiny. It passes the gate.
A signal from an unknown or less reliable source — a chat widget, a user-submitted correction, a reader feedback button — does not pass the gate unattended. It goes to conscious attention, which in Trail's case is a human curator in the admin UI.
This is the same policy a child learns to run implicitly. The parent's voice is trusted by default. The stranger's voice is not. Trust is not a static property; it is a learned posture toward classes of signal, updated over time by how those classes have behaved in the past. The brain calibrates this continuously. Trail's policy engine will calibrate it across F19's iterations as trusted-source lists expand and contract based on observed candidate quality.
Confidence threshold — attention and arousal
The second filter lands in F19 iteration 2. Every candidate will carry a confidence score between 0 and 1. Candidates above threshold pass automatically. Candidates below threshold are raised.
This is what neuroscientists would recognise as signal-strength gating, and what NLP would recognise as attentional deletion. Weak signals are deleted. Strong signals cross the threshold. The brain runs this computation at the level of individual neurons via membrane potentials: inputs must reach a certain summed strength to trigger firing. Below threshold, nothing happens. Above threshold, the signal propagates.
The same logic governs memory encoding. Weak, brief, unemotional experiences are rarely consolidated. Strong, repeated, or emotionally salient experiences cross into long-term storage. The amygdala modulates this by stamping experiences that arrived alongside fear, reward, or surprise with extra encoding weight.
Karl Friston's free-energy principle, developed over the last two decades, formalises this into a Bayesian framework. A brain, on this account, is continuously running predictions about incoming signals and updating its internal model only when the evidence exceeds a confidence threshold. A signal that confirms what the model already predicts carries low information and can be processed with minimal updating. A signal that disconfirms the model carries high information and demands attention. The threshold is tunable — more attentive states lower it, fatigue raises it.
Trail's confidence threshold is the same mechanism made explicit. The compiler emits, alongside each proposed candidate, a scalar representing how strongly the evidence supports the claim. High-confidence claims enter the wiki without ceremony. Low-confidence claims wait for a human to look at them. The threshold is configurable, and it will almost certainly need to be tuned per category of knowledge over time — a medical claim deserves a higher bar than a meeting note.
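A per-category threshold check might look like the sketch below. The category names and numeric values are illustrative assumptions; the point is only the shape — a lookup with a default, compared against the compiler's scalar:

```typescript
// Hypothetical per-category confidence thresholds. Values are
// illustrative, not Trail's actual configuration.
const THRESHOLDS: Record<string, number> = {
  medical: 0.95, // high-stakes claims deserve a higher bar
  "meeting-note": 0.6,
};
const DEFAULT_THRESHOLD = 0.8;

function passesConfidence(category: string, confidence: number): boolean {
  // Fall back to the default when a category has no explicit threshold.
  const threshold = THRESHOLDS[category] ?? DEFAULT_THRESHOLD;
  return confidence >= threshold;
}
```

Tuning then becomes a matter of editing a table rather than rewriting policy logic, which is what makes per-category bars practical to adjust as observed candidate quality accumulates.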
This filter is the difference between a system that accumulates verified knowledge and a system that accumulates speculation. Brains that skip it become delusional. Wikis that skip it become garbage heaps.
No contradictions — cognitive dissonance
The third filter depends on F32, the background lint pass that cross-references claims across the compiled wiki. Once F32 is running, the policy engine can check any incoming candidate against existing claims. If the candidate contradicts something the wiki already says, the candidate does not auto-approve — regardless of confidence or pipeline. It gets raised to curator review with the conflict explicitly surfaced.
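Under the assumption that F32 exposes some lookup of conflicting claims, the gate itself is small. Everything in this sketch — the interface names, the lookup method — is hypothetical; what it shows is the routing rule the paragraph above describes: any conflict blocks auto-approval and travels with the candidate to the curator:

```typescript
// Hypothetical shapes: the F32 lint pass is assumed to expose a lookup
// of existing claims that conflict with a candidate's text.
interface Claim {
  id: string;
  text: string;
}

interface LintIndex {
  findContradictions(candidateText: string): Claim[];
}

type GateResult =
  | { decision: "auto-approve" }
  | { decision: "raise"; conflicts: Claim[] };

function contradictionGate(candidateText: string, lint: LintIndex): GateResult {
  const conflicts = lint.findContradictions(candidateText);
  // Any conflict blocks auto-approval, regardless of confidence or
  // pipeline, and the conflicting claims ride along for the curator.
  return conflicts.length > 0
    ? { decision: "raise", conflicts }
    : { decision: "auto-approve" };
}
```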
This is the hard part. In the brain, it is the part that makes learning difficult. When a new fact contradicts an established belief, the brain does not silently overwrite. Synapses that encode the old belief have been reinforced through repetition and are physically stronger than those carrying a single new input. The anterior cingulate cortex registers the mismatch. The amygdala fires. Dopamine pathways pause. The person experiences something like wait, that does not sound right, and they slow down.
This pause is what Elizabeth Loftus's long research programme on false memory has taught us to respect. Memories admitted without this check are memories that can be rewritten by suggestion, repetition, and social pressure. Entire bodies of remembered experience have turned out to be reconstructions built from contradictory inputs that the brain failed to flag at encoding time. The filter matters. Where it weakens, the model of the world loses its grip on the world.
For a wiki compiled by a language model over thousands of sources, the equivalent failure mode is catastrophic in a slower but equally corrosive way. Source A says one thing. Source B says the opposite. Without a contradiction check, both claims land on the wiki under different phrasings, and subsequent queries retrieve whichever compiled more recently. The wiki becomes internally inconsistent, and every query over the inconsistent region returns noise.
With F32 in place, the contradiction does not get silently absorbed. It gets surfaced. A curator looks at both sources. A resolution is written — sometimes in favour of A, sometimes in favour of B, sometimes as a new claim that says both are reported; the current evidence is mixed. The wiki encodes not just what is known but what is contested, which is closer to how scientific knowledge actually accumulates than any auto-merging system could produce.
Why three and not one
None of these filters is sufficient on its own.
A trusted-pipeline filter alone would admit every signal from trusted sources, including the ones with low confidence and the ones that contradict existing wiki claims. The brain-equivalent failure is a person who believes everything their preferred news source tells them, no matter how poorly supported or internally inconsistent.
A confidence threshold alone would admit high-confidence claims from adversarial sources. The brain-equivalent failure is a person who updates freely on any signal that comes in loud and clear, which is why scams work.
A contradiction check alone would admit low-confidence novelties as long as they happened to match whatever the wiki already said. The brain-equivalent failure is confirmation bias: a model that grows by accumulating whatever reinforces itself, never correcting.
The three filters compose. A candidate must come from a trusted pipeline, carry sufficient confidence, and not contradict what the wiki already holds — or else it goes to a human. This is the policy Trail's queue enforces. It is the same policy, in different vocabularies, that Bandler and Grinder named, that thalamic and prefrontal circuits implement, that Friston's free-energy principle formalises. Hundreds of millions of years of nervous-system evolution converged on this shape because no other shape produces a stable model of a complex world.
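The composition itself reduces to one conjunction. The field names and the 0.8 threshold in this sketch are illustrative assumptions; what it shows is that any single failure routes the candidate to a curator:

```typescript
// Illustrative composition of the three filters into one gate decision.
interface GateInput {
  fromTrustedPipeline: boolean; // filter 1: provenance
  confidence: number;           // filter 2: compiler-emitted scalar, 0..1
  hasContradiction: boolean;    // filter 3: from the lint pass
}

function gate(input: GateInput): "auto-approve" | "raise" {
  // All three must pass; any failure means a human looks at it.
  const passes =
    input.fromTrustedPipeline &&
    input.confidence >= 0.8 &&
    !input.hasContradiction;
  return passes ? "auto-approve" : "raise";
}
```

Because the failure path is always "raise" rather than "reject", composing the filters can only make the gate stricter, never lossier: nothing is discarded without a human seeing it.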
The memex, again
Vannevar Bush's 1945 memex essay did not directly address curation. He was writing at a moment when the bottleneck seemed to be retrieval — the explosion of post-war scientific literature outpacing any researcher's ability to find what they needed. His proposed machine was a tool for building associative trails, not for filtering noise.
But read carefully, the memex assumed a curator. The person operating it was the one who decided which frames went on which reel, which trails were worth marking, which annotations were worth attaching. The gate was the human.
Trail inherits that assumption and scales it. The compiler proposes. The policy engine filters. The curator decides. And the wiki — the long-term persistent structure — only admits what passes all three checks, plus any human-reviewed exceptions.
This is what makes a knowledge system a knowledge system. Not the sophistication of the model attached to it. Not the size of the corpus behind it. The architecture of its gate — and whether the three filters on that gate are doing the work that evolution figured out a long time ago was the only way a model of the world keeps a grip on the world.
Trail has three filters. The brain has three filters. This is not a coincidence. It is the shape the problem keeps converging on.
Further reading
- Richard Bandler & John Grinder, The Structure of Magic, 1975 — the original NLP model of generalisation, deletion, and distortion
- Karl Friston, The free-energy principle: a unified brain theory?, Nature Reviews Neuroscience, 2010
- Elizabeth Loftus, Memory: Surprising New Insights Into How We Remember and Why We Forget, 1980, and subsequent work on false memory
- Donald Hebb, The Organization of Behavior, 1949
- For the companion essay on why Trail compiles instead of searches, see How a Brain Actually Remembers