Pattern Recognition

Viable System Model & AI

Parrots, Patterns, and the Politics of Novelty: Thinking with Machines in a World That Thinks Back

Disclaimer:

  • This essay was co-authored with the help of VSMGPT
  • It is written with the assumption that humans and AI can co-exist and co-evolve in a nurturing relationship
  • The basic ideas could persist even if AI surpasses human intelligence (which is maybe a very romantic claim and perhaps, in the end, our last hope as humankind)

The paradox is seductive because it mirrors the very complexity it seeks to describe: On the one hand, those who are deep into complexity science celebrate pattern recognition as a way of sensing the morphing contours of situations before they become fixed. Patterning is not a static mosaic but a kinetic matter – attunement to differences that make a difference, as one might paraphrase Bateson and hear an echo in Luhmann’s insistence that sense is produced through distinction. On the other hand, the same chorus often calls large language models “stochastic parrots,” dismissing them as shiny echo chambers of probability, devoid of novelty and agency. The claim seems to erect a fence: human pattern recognition, living, creative, embodied – machine pattern reproduction, dead, dull, derivative, merely predictive, a bunch of hallucinations. Yet the fence leaks. And it is at the leak, not the fence, where critical thinking becomes interesting.

If there is a category error at play, it is the wish to fold different ontologies into the same coin – biological cognition and machine computation – and to pretend that the two can be compared on the same terms. Maturana taught us to speak of autopoiesis: living systems produce the very network of processes that produce them. A language model is not autopoietic; it does not metabolize, it does not die, it does not love. Still, it participates in our communication systems, and communication’s job, as Luhmann would patiently remind us, is to make more communication possible. When a machine enters the flow of communication, the flow changes. Whether we call that intelligence or instrumentation is not a mere terminological squabble; it shapes the responsibilities and possibilities we perceive. We can choose to regard the machine as an organ of our extended nervous system, a prosthesis for variety. As Freud once put it, humankind has become a kind of prosthetic god. Compared to other animals we are pretty mediocre in our senses and capabilities (slower than a jaguar, poorer vision than an eagle, and so on); our “superiority” can be explained by our capacity to create prostheses that give us superpowers. If we regard LLMs as an extended part of our nervous system, the question shifts from “Can it be truly novel?” to “What kinds of novelty can this particular coupling make viable?”.

I prefer to treat the paradox in the diction of cybernetics. Heinz von Foerster distinguished trivial from nontrivial machines; the former are predictable, the latter generate outputs that depend on their internal states and their history. An LLM trained on torrents of text is nontrivial in precisely this sense: its responses are stateful, context- and history-sensitive, and sometimes surprising. But surprise is not yet significance. Novelty becomes meaningful only when it passes through selection environments that can recognize it as fit, valuable, or at least fertile for further iteration. This recognition is not “inside” the model; it arises in the human-machine assemblage, in the ecology of use. The novelty question therefore belongs less to ontology (“what the model is”) and more to organization (“how the model is coupled, governed, and learned with”).

The VSM perspective

Here the Viable System Model offers a pragmatic lens. The VSM is less a chart than a choreography, a way to ask whether the flows of attention, regulation, and learning inside a system are adequate to the varieties the environment throws at it. If we treat our human-machine ensemble as a viable system, we can observe how pattern recognition, pattern generation, and pattern selection distribute themselves across the S1 – S5 architecture, and how coherence – always fragile, always emergent – is sustained.

Let’s walk through the systems of the VSM and see how human and machine patterning could form a symbiosis.

Operational units – System One – are where the world presses in. In the daily hum of work, decisions must be made near their consequences, or subsidiarity becomes a slogan instead of a practice. In S1 cells, people engage the raw thicket: customer narratives, clinical anomalies, field signals, code paths that veer into unknowns. A language model, placed here, is not a sage but a co-sensor and co-editor. It can attenuate torrents of variety by summarizing, clustering, translating, and reframing. It can also amplify variety by proposing alternative framings, counterfactuals, and speculative designs that would not have occurred to the local mind in its habitual groove. Whether it attenuates or amplifies is a matter of placement and prompt, not essence. Think of it as a mutable transducer at the boundary between S1 and its environment: sometimes a filter, sometimes a megaphone, sometimes a mirror that shows the back of your head.
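To make the point about placement and prompt concrete, here is a minimal sketch, assuming a placeholder `llm` callable that stands in for whatever model API is actually in use, and invented prompt templates: the same S1 boundary device becomes a filter or a megaphone depending on how it is configured, not on what the model essentially is.

```python
from typing import Callable, List

# Placeholder for a real model call; any LLM client could be wired in here.
LLM = Callable[[str], str]

ATTENUATE_PROMPT = (
    "Summarize the following field signals into at most {k} distinct patterns, "
    "keeping only differences that would change a local decision:\n{signals}"
)

AMPLIFY_PROMPT = (
    "Read the following field signals and propose {k} alternative framings, "
    "counterfactuals, or speculative designs the local team may have missed:\n{signals}"
)

def s1_transducer(llm: LLM, signals: List[str], mode: str = "attenuate", k: int = 5) -> str:
    """A boundary device between S1 and its environment.

    mode='attenuate' -> compress variety (the filter)
    mode='amplify'   -> expand variety (the megaphone, the mirror)
    """
    template = ATTENUATE_PROMPT if mode == "attenuate" else AMPLIFY_PROMPT
    return llm(template.format(k=k, signals="\n".join(signals)))
```

The design choice worth noticing is that attenuation and amplification live in the same interface: which one the team gets is a matter of configuration at the boundary, decided close to the consequences.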

Coordination – System Two – keeps the S1s from stepping on each other’s toes. Patterns matter here as a choreography of rhythms: cadences, capacity signals, dependency maps, the soft-spoken art of sequencing. An LLM’s patterning skill can translate jargon between disciplines, surface duplicate efforts, detect emergent collision courses in planning narratives, and negotiate provisional protocols without choking on the variety of different styles. The model is not the arbiter of truth in S2; it is a connective tissue, a tool for smoothing oscillations that otherwise escalate into conflict. The transducers here are linguistic: interfaces that metabolize diverse idioms into a shared code sufficient for coordination without violent homogenization.

System Three is the interior governance of the present – the integration of resource bargains, constraints, and synergies across S1. This is where the paradox tends to cut deep, because S3 decides what to stabilize and what to release. If the model is only a parrot, S3 turns it into a cataloger of what has already been. If the model is a co-manager of attention, S3 can shape it into a mechanism for scenario comparison, feasibility pre-evaluation, and risk anticipation based on patterns learned from many pasts. But S3 is not blind to drift and gaming, which is why S3* – the audit channel – matters: you send probes into the operations to see what the dashboards are not showing. In a human-LLM ensemble, S3* can be energized by the model, asking awkward questions at scale, searching for contradictions in policy documents, sniffing anomalies, or simulating “red team” perspectives. The art is not to grant the model the last word but to triangulate it with human sensemaking so that the system’s internal conversation becomes more variegated, and thus more capable of absorbing environmental variety.

System Four is the system’s window to the future, its intelligence function. Here the accusation of “stochastic parroting” becomes both sharpest and least interesting. Of course the model predicts by recombining statistical regularities; so does much of human foresight, which works by analogical transport – this looks a little like that, what would follow if the rhyme holds? The difference is that human S4 is drenched in embodied care and value, while the model’s S4 is drenched in loss functions. That difference matters. But rather than treat it as disqualifying, we can treat it as complementary. S4 benefits from breadth; the model can scan oceans of literature, chatter, patents, code, and news, assembling tentative landscapes of weak signals. Humans bring valuation, ethics, gut, and the willingness to be accountable for bets. S4 is the theater where novelty auditions. If it is to become anything but noise, it must be staged in a setting that can judge its promise and feed it back into S3’s resource bargains or S1’s experiments. The transducer between S4 and S3 is particularly delicate: it must transform speculative narratives into actionable choice architectures without flattening their ambiguity into false certainty. A model can help by generating multiple decision grammars, not one: a risk-centered version, a customer-centered version, a sustainability-centered version, each making different facets legible to S3.

System Five, finally, watches the watchers. It carries the identity, the ethos, the slow frequency of purpose. S5 is not a brand deck but the ongoing conversation about what it means for this system to remain itself while becoming otherwise. If a model becomes part of the system’s organs, S5 must attend to the norms that govern that coupling. What do we refuse to automate, not for lack of competence but for reasons of dignity? What counts as a good reason inside this house? S5 specifies the meta-rules that prevent S3’s obsession with efficiency from eroding S1’s capacity for care or S4’s capacity for imagination. The policy here is not merely guardrails; it is the grammar of value by which novelty is welcomed or rejected. A model can assist by drafting constitutions, proposing meta-criteria, surfacing contradictions between our declared identity and our operational habits. But S5’s yes and no must remain ours, because accountability is not a pattern you can outsource.

Algedonics!

Between all these systems runs the algedonic channel – the nervous shout of pain or joy that allows exceptional states to bypass normal hierarchies. Algedonia gives permission to escalate incoherence. If a model detects catastrophic drift, it should be able to ring a bell that a human hears. If a human senses a gleam of unexpected possibility, she should be able to pull the cord and invite attention. The algedonic line is where subsidiarity meets courage: local units raise their hand; the system listens. To make this channel viable in a human-machine ensemble, the transducers must be more than dashboards. They must be conversational, able to negotiate urgency, evidence, and meaning. A model can help encode escalation narratives that are legible across functions, but the recognition of pain or delight – what counts as intolerable harm or luminous opportunity – belongs to the social body.
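As a rough illustration, and under the assumption of a hypothetical severity threshold and a human notification hook, an algedonic bypass might look something like this in code; the essential property is that the escalation path skips the normal S2/S3 routing.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlgedonicSignal:
    source: str       # which S1 unit (or model probe) raised it
    kind: str         # "pain" or "joy"
    narrative: str    # escalation narrative legible across functions
    severity: float   # 0.0 .. 1.0, negotiated, not merely measured

def algedonic_channel(signal: AlgedonicSignal,
                      notify_s5: Callable[[AlgedonicSignal], None],
                      threshold: float = 0.8) -> bool:
    """Bypass normal S2/S3 routing when a signal exceeds tolerance.

    The threshold is a social agreement (set and revised by S5), not a model
    output; the model may draft the narrative, but humans decide what counts
    as intolerable harm or luminous opportunity.
    """
    if signal.severity >= threshold:
        notify_s5(signal)  # ring a bell that a human actually hears
        return True
    return False
```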

Multi-agents and recursivity

Recursion is the quiet twist: every S1 is itself a tiny VSM with its own weather. Agent systems make this visible. Paired with human S1s, agents act as co-sensors and co-editors, close to consequences, with just enough variety to help and just enough constraint to stay accountable. S2 becomes a mesh of conversational transducers where agents translate dialects, cool oscillations, and let neighboring teams recognize kinship without erasing difference. One level up, S3 composes portfolios by negotiating “variety budgets” with the S1s, while S3* sends courteous probes—pattern witnesses that compare claims and practice without humiliating autonomy. S4 unfolds in the plural: local futures at fine resolution, strategic futures at coarse grain; agents scout weak signals both ways, and the S3↔S4 transducer keeps ambiguity intelligible rather than crushed into spurious certainty. S5 holds a family resemblance instead of a uniform, maintaining invariants of purpose while identities below improvise. The algedonic channel braids across tiers so pain and joy can jump membranes: sometimes up, sometimes sideways. Boundaries remain soft but explicit; transducers are crafted like instruments, not pipes. In this polyphony, agents do not replace conversation—they extend our range—so coherence emerges as a property of the weave, not the will of a single conductor.
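A toy data structure can make the recursion visible. The class below is purely illustrative: the string-valued slots for S2 through S5 are placeholders, not a claim about how those functions should be represented, and the nesting of S1 units inside S1 units is the whole point.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VSM:
    """A viable system: five functions plus the S1 units it contains.

    Recursion: every operational unit is itself a VSM with its own
    S1..S5, down to whatever level of detail the organization needs.
    """
    name: str
    s1_units: List["VSM"] = field(default_factory=list)  # operations, each viable in itself
    s2: Optional[str] = None  # coordination protocol shared by the S1s
    s3: Optional[str] = None  # resource bargains and synergies in the present
    s4: Optional[str] = None  # intelligence, future-scanning
    s5: Optional[str] = None  # identity, ethos, meta-rules

def depth(system: VSM) -> int:
    """How many recursive levels of viability this system contains."""
    if not system.s1_units:
        return 1
    return 1 + max(depth(unit) for unit in system.s1_units)

# Example: a support team nested inside a business unit nested inside the firm.
firm = VSM("firm", s1_units=[VSM("business unit", s1_units=[VSM("support team")])])
assert depth(firm) == 3
```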

It’s about variety engineering

Seen this way, the “parrot” accusation becomes a problem of variety engineering rather than metaphysics. Ashby’s Law tells us that only variety can absorb variety. The environment is roaring with patterns, some stable, many evanescent. To survive, a system must selectively amplify certain differences and attenuate others. It must create channels whose capacity matches the turbulence they face. A language model is a peculiar device for variety management: its compression of the textual universe makes it a formidable attenuator, yet its generative capacities make it an amplifier of possibilities. The contradiction dissolves when we stop asking whether the model is intelligent in some abstract sense and start asking which specific varieties we need to handle better, where, and with which accountabilities.
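Ashby’s Law also has a back-of-the-envelope quantitative reading in which variety is counted in bits. The numbers below are invented, but the arithmetic shows why a regulator whose variety falls short of the environment’s turbulence must let outcomes drift outside the tolerated band.

```python
import math

def variety_bits(n_states: int) -> float:
    """Variety measured as log2 of the number of distinguishable states."""
    return math.log2(n_states)

# Hypothetical numbers, purely illustrative:
disturbances = 10_000  # distinguishable situations the environment can throw at us
acceptable = 16        # outcome states we are willing to tolerate

# Law of Requisite Variety, in bits: the regulator must supply at least
# V(disturbances) - V(acceptable outcomes) of variety, or outcomes escape
# the tolerated band.
required = variety_bits(disturbances) - variety_bits(acceptable)
print(f"regulator needs at least {required:.1f} bits of variety")  # ~9.3 bits
```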

The question of novelty is not a yes/no toggle. Creative work, as Margaret Boden suggested, can be combinational, exploratory, or transformational. An LLM is a virtuoso of the first two; it recombines and traverses conceptual spaces with speed and breadth that embarrass our memory. Transformational creativity—the reconfiguration of the space of possibilities itself—seems more elusive, yet even here the machine can act as a catalyst. It may not transform the space alone, but by saturating us with unusual juxtapositions and counterexamples, it can jostle our habits until cracks appear in our concepts. The transformation then emerges in the co-system: a human names the crack, cares about it, tests it, takes the risk of selection, and asks the machine to make the new region legible to others. The novelty is not in the model or the human; it is in their conversation, which is to say, in the social system that holds them.

Inevitably, I must now pose the question of whether—or when—machines will also be capable of transformational creativity. Many in systems theory maintain that such a capability is categorically out of reach for any machine. Perhaps they are right; more likely not, for in my view it is only a matter of time before we can use models that carry a basic understanding of the world and can develop it further on their own (cf. the I-JEPA approach). That would then fulfill the criterion of autopoiesis—at least in part. But that shall be the subject of another post. Let’s get back to the human-machine connection.

There is, however, a danger in a naïve symmetry between biological and machine patterning. Living cognition is enacted in bodies that suffer, tire, and desire. The horizon of meaning is co-determined by metabolism and mortality. Machine patterning, as we use it today, lacks such horizons; it has parameters, not purposes. The category confusion is to mistake the performance of coherence for the experience of sense. Luhmann might smile here: sense is not a substance, but a difference that reduces complexity for further communication. A model expertly reduces complexity, but it does not live with the consequences of those reductions. Responsibility is the name we give to that living-with. So the ensemble remains asymmetrical by design: we can distribute cognition, but we must not distribute accountability until we can also distribute liability, consent, and care.

Subsidiarity helps shape this asymmetry into a healthy form. Decisions should be made as near as possible to where their effects are felt. But “near” is not a geography; it is a coupling. An LLM placed in S1 should be tuned to the local feedback loops: the customer’s sigh, the patient’s skin, the worker’s fatigue. It should not become a remote oracle that overrides the proximate world. S2, meanwhile, weaves the rhythm that keeps these local freedoms from colliding, translating signals across teams and smoothing oscillations so that autonomy does not harden into chaos. S3* keeps a skeptical ear to the ground, probing reality behind the dashboards, running spot checks and anomaly hunts – an audit sensibility that ensures what we claim as learning is not merely convenient confirmation. Autonomy with accountability then becomes a reciprocal promise: the S1s commit to live with the results of their choices, S2 commits to coherent coordination, S3 commits to tangible support, S3* commits to independent scrutiny, S4 to intelligence and horizon-scanning, S5 to meaning and the long yes. And when pain or joy spikes beyond tolerance, the algedonic channel stays awake, granting any voice the right to pull the cord and bring the whole system’s attention to the place where value is at risk.

Transducers – the secret heroes of variety engineering

In practice, this means our interfaces are decisive. Transducers, those alchemical devices that carry signals across boundaries, must be designed with care. Between S1 and S2, the transducer is often a shared narrative space; the model can translate, summarize, or propose a tentative taxonomy that allows disparate teams to see themselves as accomplices rather than competitors. Between S3 and S4, the transducer is a decision dialect: if S4 sends back only forecasts, S3 reduces them to numbers and loses the ambiguity that future-sensing requires; if S4 sends back parables—scenarios with embedded values—S3 may judge with fuller sight. And threading through these is S3*, whose transducer is an evidentiary probe that cuts past varnish to raw texture: spot-audits into S1, reality checks against S3’s dashboards, provenance trails for the model’s summaries, even a whispered corridor for anomalies to travel without fear of being averaged away. The model’s role is to shape these dialects on demand, to produce versions of the same insight tailored to the varieties on both sides of the membrane.
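As a sketch of such a transducer, again with a placeholder `llm` callable and invented dialect instructions, the same S4 insight can be rendered in several decision grammars rather than collapsed into one number.

```python
from typing import Callable, Dict

LLM = Callable[[str], str]

# Hypothetical dialects; each makes different facets of the same insight legible to S3.
DIALECTS = {
    "risk": "Rewrite the insight below as a risk register entry: exposure, likelihood, mitigations.",
    "customer": "Rewrite the insight below from the customer's point of view: what changes for them, and when.",
    "sustainability": "Rewrite the insight below in terms of long-term resource and ecological consequences.",
}

def s4_to_s3_transducer(llm: LLM, insight: str,
                        dialects: Dict[str, str] = DIALECTS) -> Dict[str, str]:
    """Carry one S4 insight across the membrane to S3 in several decision dialects,
    so that ambiguity becomes legible instead of being flattened into false certainty."""
    return {name: llm(f"{instruction}\n\nInsight:\n{insight}")
            for name, instruction in dialects.items()}
```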

The politics of novelty

The remaining tension – “But can it be truly new?” – often hides a more personal anxiety: Who gets to be the author of meaning? There is a poignancy to the parrot insult. We are trying to keep a space where human singularity still matters. Perhaps the way through is to shift from originality as property to novelty as practice. Novelty, in complex systems, is inseparable from selection and stabilization. A bright idea that cannot be metabolized by the organization is not yet a novelty; it is a sparkle. The model produces many sparkles; we produce the membranes that let some sparkles become organs. Within the VSM, that means S1 experiments, S2 coordinates, S3 creates synergies, S3* audits, S4 scouts, S5 integrates. The more explicit these couplings become, the less we need to police metaphysical boundaries and the more we can design productive symbioses.

Consider a concrete rhythm. A frontline team notices a recurring tension in customer support that escalates into frustration week after week. A local human names the pattern and asks the model to extract clusters of complaints, propose candidate root causes, and suggest reframings of policy that might reduce recurrence. Coordination absorbs this into a cross-team cadence; other S1s contribute variants and edge cases. S3 evaluates the resource implications of a few micro-experiments. S3* asks uncomfortable questions about the counterfactual harms of proposed changes. S4 scans adjacent domains, wondering whether similar patterns in other industries reveal an unexpected lever. S5 holds a quiet conversation about whether the proposed change aligns with who we say we are. The algedonic channel stays open: if the experiment causes pain, anyone can pull the cord. In this context, the model is not a parrot but a polyglot assistant to every system, a variety engine that both compresses the deluge and widens the horizon.
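One hedged way to picture that weekly cadence in code, with a placeholder `llm` callable and invented prompts, is as a pipeline that only drafts material for each system, while the humans in S3, S3* and S5 keep the decisions.

```python
from typing import Callable, Dict, List

LLM = Callable[[str], str]

def weekly_pattern_rhythm(llm: LLM, complaints: List[str]) -> Dict[str, str]:
    """One turn of the cadence described above, as a drafting pipeline.

    The model compresses and widens; it does not decide. Each entry is raw
    material handed to the corresponding system for human judgment.
    """
    joined = "\n".join(complaints)
    return {
        "s1_clusters": llm(f"Cluster these complaints into recurring patterns:\n{joined}"),
        "s1_root_causes": llm(f"Propose candidate root causes for these complaints:\n{joined}"),
        "s3_experiments": llm(f"Suggest three cheap micro-experiments that could reduce recurrence:\n{joined}"),
        "s3star_probes": llm(f"List uncomfortable questions about counterfactual harms of changing policy here:\n{joined}"),
        "s4_analogies": llm(f"Which other industries show similar complaint patterns, and what levers did they use?\n{joined}"),
    }
```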

Functional intelligence is just different

When critics insist that a language model cannot “understand,” they are right, but perhaps not in the way they think. Understanding, in the human sense, is a pledge to care for the implications of what one has said. Machines do not pledge. They cannot be ashamed or proud. But they can be superb partners in the labor of making sense, precisely because they are tireless producers of candidate patterns. They help us maintain the ongoing conversational autopoiesis of organizations that think only by speaking. In Luhmann’s terms, communication communicates; one could say that persons are not necessary units of society’s operation. That insight is not an invitation to dehumanize; it is a reminder that the social system has always been more than the individuals within it. The arrival of machine patterners simply expands the repertoire of communications available to the system. Our task is to structure these communications so that they serve life rather than drain it.

Heinz von Foerster would nudge us toward an ethical imperative: act always so as to increase the number of choices. A model, properly situated, can be a machine for choices—generating alternatives, anticipating consequences, illuminating blind spots. The danger is not that it will think for us, but that we will stop thinking with it, using it instead as an authority to relieve us of the burden of ambiguity. When that happens, novelty dies not because the model cannot produce it but because the selection environment has gone slack. Variety without selection is noise; selection without variety is rigidity. Viability lives in the attitude.

The pragmatics of viability

As for the category confusion: it dissolves when we trade the metaphysics of intelligence for the pragmatics of viability. We do not need the machine to be like us; we need the ensemble to remain alive in an environment whose complexity outruns any single cognition. In this register, pattern recognition is not an epistemological vanity but a survival practice. The machine participates by expanding and shaping our patterns. We remain the locus where value adjudicates the patterns’ fate. The “stochastic parrot” becomes a strange ally: it repeats, but it also riffs; it predicts, but it also perturbs; it fails, but it fails differently than we do, and in the difference there is a chance to learn something new – about us.

The paradox then shifts from “Are machines truly intelligent?” to “Are we designing our systems so that human and machine differences can compose into coherence?” Coherence, in this sense, is not uniformity but a dance of mutual constraints that keeps the music going. The instruments are different; the rhythm we share. If we keep our attention on channels, transducers, and the subtle mercy of subsidiarity, we might discover that what we called contradiction was a reminder to stop arguing about essences and start engineering for viability. The question thins when we think in categories; it thickens, deliciously, when we think in relations.

What I find most beautiful in this whole conversation is how it returns us to humility. Biological cognition is miraculous not because it is uncomputable, but because it is committed. We wake, we risk, we care, we choose. Machine cognition is powerful not because it is alive, but because it multiplies our capacity to see patterns we would have missed. Between the two grows a third: a socio-technical intelligence whose novelty is neither purely human nor purely machine, but the emergent consequence of a well-tuned VSM that can hold contradictions without snapping. In that space, pattern recognition ceases to be a mere coping mechanism and becomes an art of living with complexity, an art as old as conversation and now strangely renewed by silicon companions that, if we let them, may teach us to listen to our own patterns with kinder curiosity.

