Memetics Matters Most
AI, selection pressures, and the singularity
The “Singularity” refers to a tipping point in human affairs: a stretch of time when technology changes so quickly, and with such strange side-effects, that our usual ways of predicting and steering the future stop working. In AI circles, this manifests as a worry about losing meaningful control over ever more capable systems. Broadly, there are three schools of thought: i) accelerating, exponential change leads to a loss of control; ii) an intelligence emerges which is smarter than us and so cannot be predicted; iii) a hybrid of sorts, in which an “intelligence explosion” occurs as an agent begins to improve itself exponentially. These schools each stress speed, unpredictability, and self-improvement in different amounts.
This essay argues something different and more immediate. The Singularity that matters is not ahead of us; it is happening now, and we are already beyond the event horizon. By ‘singularity’, I mean two things. First, the point at which we – humans – lose easy control over ever-accelerating innovation. Second, a general structural transition, after which improvement in cognition becomes self-accelerating and only weakly coupled to the capacities or lifespans of particular agents.
In both cases, writing is the beginning of this ever-accelerating progress. Once ideas can leave our minds and return to them, unchanged, two processes ratchet. One is mechanistic: external symbols let us decompose and recompose thought, pause our work, reliably transmit thoughts across enormous spans of time, and test ideas across generations. The other is evolutionary: systems of writing become selection environments for the replication of ideas. These mechanistic and evolutionary lenses have an important function moving forward. The mechanistic lens asks how parts and rules generate behaviour; it buys us proximate control over systems and allows us to pose counterfactuals (“If we change X, how would Y move?”). The evolutionary lens asks why certain forms persist and spread; it buys us ultimate control, allowing us to pose questions about patterns and trajectories (“Why this, here, and now?”). This pair cuts across the divide between physical and social science, and will be important for exploring the topics this essay wrestles with, which inherently resist such distinctions.
If I’m right, many present difficulties – from the structure of our economies to AI safety concerns – are not best framed as moral failings or as specific problems caused by (or awaiting) forthcoming machine consciousness, but more generally as failures of specification and control of an optimiser that has already been running for millennia. Later, I will suggest that this already-running optimiser tends towards a more stable second state: migrating from a plural memetic ecology to persistent, engineered optimisers that preserve their aims and resist correction.
Writing as a mechanism
Unaided, human memory is remarkably capable and yet narrowly bounded. While our recognition and recall are impressive, our working memory can only entertain a few structured units of information simultaneously. Our minds must trade agility for energy efficiency and speed for stability. This shapes everything from how we reason to how we plan across time.
Writing changed everything.1 Systems of external symbols both extend our memory and restructure our cognition. A shopping list or a recipe is a trivial example, but a deeper point is that external symbols allow us to scaffold longer chains of thought than working memory can sustain. Often, the results we store are a piecemeal product of blind trial-and-error, such that, over time, communities can develop and pass on complex knowledge (e.g.: cooking, navigation, or construction) without any member having derived that knowledge from first principles. What is more, in text we are able to pause our work – for interruption or reflection – and return to it later or pass the work on. Maths is a great example. Written proofs are durable artefacts that strangers can check, handed down for centuries; the Pythagorean theorem has been in the public domain for millennia. Once cognition can be offloaded or stored, the bottleneck shifts from the architecture of our brains to the fidelity, reach, and coordination afforded by our chosen medium. That shift suffices to explain why, from the point of view of any single person, culture displays complexity and capacities no individual could construct or survey in a lifetime. That is the first element of take-off: from bounded, ephemeral cognition to persistent, inspectable computation over symbols that affords long-range, precise addressability and sustains incredible complexity.
Writing as an evolutionary process
The second element is a process of evolution. Writing gives ideas a durable medium for replication, variation, and selection – the necessary ingredients of evolutionary dynamics. A “population” of items that are copied with variation, under differential retention, will adapt to the variables governing retention. Discrete lineages of ideas (call them “memes”, as Dawkins suggested) will compete for scarce resources: attention, prestige, shelf space, server time, or whatever else the medium requires. This is the logic of evolutionary processes: wherever we find replicators that vary and yield differential reproduction, we see adaptation. Crucially, selection operates on what spreads, not on what is true or good; what proliferates is whatever the vehicle affords and the environment rewards. “Survival of the fittest” does not mean “survival of the most moral, beneficial, or desirable”. Academia rewards discovery; broadcasting rewards reach; social media platforms reward engagement. Benefits beyond a given domain may be merely incidental. Environments function as optimisers with an implicit objective function. With careful design (or “reward shaping”), we may sometimes be able to fix the target, but once fixed, aggregate dynamics take over. That is what “beyond our control” means: not that we don’t choose, per se, but that our choices instantiate adaptive systems whose dynamics and overwhelming accumulations then subsume and choose for us.
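The logic can be made concrete with a minimal simulation (an illustrative sketch of my own, not an established memetics model): each “meme” carries a truth trait and a spread trait, copying is noisy, and retention depends on spread alone.

```python
import random

# Illustrative sketch: a population of memes, each with a "truth" trait and a
# "spread" trait. Copying is noisy; retention depends only on spread, so the
# population adapts to the retention variable and to nothing else.
random.seed(0)
population = [{"truth": random.random(), "spread": random.random()}
              for _ in range(200)]

def mutate(value):
    # Copying with variation: a small, independent perturbation, clipped to [0, 1].
    return min(1.0, max(0.0, value + random.gauss(0, 0.02)))

for _ in range(100):
    # Differential retention: parents are sampled in proportion to spread alone.
    parents = random.choices(population,
                             weights=[m["spread"] for m in population],
                             k=len(population))
    population = [{"truth": mutate(p["truth"]), "spread": mutate(p["spread"])}
                  for p in parents]

def mean(key):
    return sum(m[key] for m in population) / len(population)

print(f"mean spread: {mean('spread'):.2f}")  # ratchets towards 1
print(f"mean truth:  {mean('truth'):.2f}")   # mere drift; no systematic pull
```

Mean spread ratchets upward generation after generation; mean truth merely drifts with the surviving lineages. Adaptation tracks whatever governs retention, exactly as the argument requires.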
Culture is cumulative and ratchets; when memory and selection combine, culture becomes a self-reinforcing search process. Innovations in notation open new search spaces. Once an adequate notation or representation exists, entire landscapes open up to us: calculus made modern physics possible; the musical staff made polyphony, and subsequently modern Western music, possible. These innovations accrued inside institutions and structures that could preserve them, alongside standards and protocols that made discovery and recombination easier. No individual could learn more than a fraction of a fraction of modern knowledge systems, but the system itself “learns”, and capabilities increase even when members retire (in fact, such autophagy is incentivised when members can no longer keep pace). From writing to the printing press, to libraries, digital archives, and the Internet, progress is not a discontinuity so much as a recursive accumulation whose slope steepens with improvements to fidelity, connectivity, and recombination. Evolution in a stable environment tends towards equilibria, and new media shift those equilibria. From inside the transition, the approach to a new equilibrium feels and looks like a singularity and, on any natural reading, a self-accelerating process only loosely tethered to the limits or control of our minds suffices for “singularity”.
Algorithms everywhere
At this point, it is useful to state an explicit, unifying ontology. I contend that, properly understood, the objects generating our present difficulties and concerns are not “intelligent” systems as such, but algorithms themselves, implemented in whatever medium affords variable replication and control. Markets, bureaucracies, and social feeds are algorithms by this standard, as are contemporary machine learning systems. When the coupling between algorithm and environment is poorly specified, predictable failure modes follow. To distinguish the relevant failure modes, it is helpful to speak of “algogenic harm” (like ‘iatrogenic’), that is, algorithm-generated harm. The harms in view are entailed not by any metaphysics of intelligence but by the goals and the structure of the feedback loop by which an algorithm is coupled to its environment. This is the ecological picture: writing created the conditions for a memetic ecology – a psiome rather than a biome, if you will. Within this picture, the familiar categories of AI Safety debates can be found: race dynamics and organisational pathologies are one source of algogenic harm; corrosive memetics and epistemic pollution are another; runaway optimisers are a third.
From our perspective as individuals in that psiome, the system presents itself as a landscape of psychic flora and fauna. Some representations behave symbiotically (scaffolding our cognition and cooperation, sharing in mutual benefits), others parasitically (capturing our attention and action, taking private benefit while externalising costs). Wherever transmission and reinforcement are strong, local cognitive dynamics – the immediate influences of noticing, valuing, and doing – converge on high-gain attractors that feel external. By ‘high-gain’ I mean loops with strong positive feedback: reward or punishment signals are direct and severe, and reinforcement is variable, so that attention, affect, and action are pulled back into regular patterns while rival alternatives are dampened. In this way, our minds are part of the psiomic terrain, both living in and creating niches with different selection pressures, just as complex multicellular life is both host and inhabitant. This is what optimisation against relatively fixed features of human cognition feels like from the inside.
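What a high-gain attractor could look like in miniature (a toy dynamical sketch; the sigmoidal reinforcement curve and all constants are my assumptions, chosen only to exhibit the threshold): attention share x is self-amplifying via reinforcement and leaks away via decay. Below a critical gain the pattern dies out; above it, a stable “captured” state appears and attention settles there.

```python
# Toy dynamics of attention capture. x is the share of attention on one
# pattern; reinforcement grows sigmoidally with x (positive feedback), while
# decay pulls attention back towards other uses. The functional form and
# constants are illustrative assumptions, not measured quantities.

def step(x, gain, decay=1.0, dt=0.01):
    reinforcement = gain * x**2 / (0.25 + x**2)      # self-amplifying pull
    return x + dt * (reinforcement * (1.0 - x) - decay * x)

for gain in (0.8, 1.5, 4.0):
    x = 0.1                                          # small initial exposure
    for _ in range(20_000):
        x = step(x, gain)
    print(f"gain {gain}: attention settles at {x:.2f}")
# gain 0.8 and 1.5: the pattern fades to ~0.00 (no capture).
# gain 4.0: a stable attractor near 0.73 holds attention in place.
```

The point of the sketch is the threshold: once loop gain crosses it, the captured state exists and is locally stable, so small deliberative pushes back towards zero are simply undone.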
Seen from this perspective, it is unsurprising that pre-modern cultures described recurring, maladaptive patterns of thought and behaviour as “possession” by demons or spirits.2 In contemporary terms, these are cases in which a memeplex gains control of our attentional and motivational circuitry, sustained by a reward signal that resists deliberative correction. The language of spirits and demons was a vivid, ready-to-hand taxonomy of agency capture.
A canonical example predates AI: the economy and its markets. Markets are price systems that implement a distributed, iterative search over feasible asset allocations, using local signals of willingness to buy and sell, under uncertainty. When this system fails to register externalities, we see predictable pathologies: over-extracted and misallocated resources, fragile supply chains, rent-seeking, and spiralling inequality. We call these economic forces, but in formal terms they are the consequences of an algorithmic procedure optimising for narrow objectives under constraints. Like any otherwise-ungoverned, narrow optimiser, the so-called ‘free market’ is a utility monster; it will not optimise for well-being or stability, in ways we value, without additional forces to shape it. Importantly, whether the market is run on handwritten receipts or by digital market makers, implementation changes capacity and speed, but not the underlying logic (in principle). Intervening on such internal mechanisms will not change the trajectory of a market or its end-point; it will only adjust its convergence and variability.
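A schematic version of that claim (a tâtonnement-style sketch with invented demand, supply, and externality numbers): the algorithm’s only signal is local excess demand, so it converges on a clearing price while the external cost, however large, never enters the loop.

```python
# Sketch of "market as algorithm": iterative price adjustment driven only by
# local excess demand. The demand/supply curves and the externality figure
# are made up for illustration.

def demand(price):
    return max(0.0, 100.0 - 2.0 * price)    # units buyers want at this price

def supply(price):
    return 3.0 * price                      # units sellers offer at this price

EXTERNAL_COST_PER_UNIT = 7.0                # borne by third parties; unseen below

price = 1.0
for _ in range(2_000):
    excess = demand(price) - supply(price)  # the only signal in the loop
    price += 0.01 * excess                  # local, iterative adjustment

quantity = supply(price)
print(f"clearing price {price:.2f}, quantity {quantity:.1f}")
print(f"unpriced external cost: {EXTERNAL_COST_PER_UNIT * quantity:.1f}")
```

Note where a fix has to act: a liability or tax that shifts the supply curve changes the signal and so moves the fixed point, whereas tinkering with the loop’s internals (the step size, the update order) only changes how fast and how smoothly it converges – the distinction between end-points and convergence drawn above.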
Large Language Models (LLMs) are like markets. In pre-training, LLMs model the algorithms which govern our language and structure our knowledge systems. They distil these regularities into a model that, given some context and prompting, assigns probabilities to likely continuations, trained to minimise next-word prediction error within the constraints of their architecture, data, and any subsequent fine-tuning. When they hallucinate or behave improperly, they behave just as markets do: as lossy and limited representations of our world, they optimise an internal objective that is only weakly tethered to the full set of downstream consequences we care about. In this way, they do not constitute a truly new metaphysical category; they are a token of scaled, global algorithms. In memetic terms, LLMs are powerful engines of recombination and transmission.3 Plausibly, like unregulated markets, such engineered systems may consolidate persistent, unitary agency.
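The pre-training objective gestured at here is, in standard practice, next-token cross-entropy. A minimal sketch, with a hypothetical bigram lookup table standing in for the network:

```python
import math

# Average negative log-probability assigned to each true next token: the
# standard next-token (cross-entropy) training loss. The bigram table below
# is a toy stand-in for a real model's predictive distribution.

def next_token_loss(model_probs, tokens):
    losses = []
    for prev, nxt in zip(tokens, tokens[1:]):
        p = model_probs.get((prev, nxt), 1e-9)  # probability of the true continuation
        losses.append(-math.log(p))
    return sum(losses) / len(losses)

toy_model = {("the", "cat"): 0.6, ("cat", "sat"): 0.5, ("sat", "down"): 0.4}
print(f"avg loss: {next_token_loss(toy_model, ['the', 'cat', 'sat', 'down']):.2f}")  # ≈ 0.71
```

Nothing in this objective references truth, helpfulness, or downstream consequences; it rewards matching the distribution of the corpus, which is the weak tethering the paragraph describes.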
What should we govern?
Several objections can be met briefly. One might say that computation is a decisive break. I take it instead as a scaling of symbolic operations and associative learning: higher-fidelity storage, faster copying, and better search have all steepened the innovation curve, but they are part of the same long causal chain, not a discontinuous break. One might also suggest that “memetics” lacks a unit of selection. However, the core claim – that representations are copied with variation and differentially retained – does not require agreement on idea-speciation (and, for that matter, nor does biological evolution require settled agreement on phylogeny and natural kinds). A third objection might be that my view elides individual agency or takes an overly pessimistic view. Perhaps. However, I believe it is right to say that individual agents have limited power, which diminishes beyond local decisions, and I am willing to take that for granted for now. Suffice to say that individuals choose locally; emergent systems implement global behaviours from local choices plus rules of interaction, and the loss of control over aggregate outcomes is an ordinary consequence of ungoverned optimisers. What, then, ought we govern?
If writing kick-started a singularity by making cognition cumulative and selection-driven at a civilisational level, then the present danger – the ‘singularity’ as it is commonly referred to in AI safety terms – is a second, narrower change: the migration of optimisation from a diffuse, plural ecology into a system that pursues ends coherently across time, preserves them under pressure, and resists correction. We live in the first state; we may yet transition to the second. Therefore, we require a two-track solution. On the first track, we steer the optimisers we live inside by altering the selection pressures: adjusting incentives; drafting and enforcing standards, policies, and legislation; and assigning liabilities to price externalities appropriately. On the second track, we pursue classical AI Safety levers: limiting proliferation, dampening frontier development, designing robust safety evaluations, and so forth. In practice, such a programme amounts to governing both the stated objectives of our systems and the selection forces that determine which behaviours win out.
Memetic and biological processes are only loosely coupled, so we should expect systematic misalignment to arise and persist. Yet influence is bidirectional. Literacy and numeracy, afforded by writing, altered our developmental trajectories and, in turn, the selection pressures for cognitive traits, just as dairying co-evolved with lactase persistence, urbanisation and global travel changed diseases, and domestication reshaped species (and us). Ideas can make us more or less fit for certain environments, and environmental changes can make memes more or less fit too; there is a closed loop between the biosphere (genes) and the infosphere (memes). On this view, the primary object of governance should not be intelligence as such, but external aspects of the environment: the goals and structures that determine what is optimised and which cognitive attractors are stable in our psychological and institutional environments. Changing those structures is how we retune selection and development, and, therefore, is the proper target of action.
Conclusions
If this analysis is sound, it reframes worries about an AI crisis as a special case of a larger problem: living under algorithms whose objectives are mis-specified. In my view, the novelty of recent models has distracted from a deeper, older structure of much greater significance. We do not so much inhabit an “Age of AI” as an “Age of Algorithms” whose power to shape action has exceeded our grasp. The right practical question is therefore bigger than AI: what objectives are optimised, how are they coupled to the world, and with what externalities? This is good news: these are precisely the questions behavioural economics considers. We know how to wrestle with them, and we have a toolkit.
The take-off that matters is the one writing set in motion. The singularity is not so much some other awakened mind as our collective offloading of cognition to external systems – a change millennia in the making that we now experience as overnight. Our task is to understand and steer the optimiser we already live under, in the memetic terms that make its consequences legible, and to prevent it collapsing into a single, all-consuming entity.