Saturday, 27 September 2025

Tokens as Citizens

Popular explanations often describe LLMs as if they were societies of tiny agents: tokens “vote” on the next word, parameters “negotiate,” and neurons “decide.” The AI becomes a bustling democracy of mini-citizens, each with opinions, preferences, and agendas.

Charming — but entirely metaphorical.


The Metaphor Problem

  • Tokens as citizens implies agency, deliberation, and intent.

  • Neurons as decision-makers anthropomorphises statistical computation.

  • The reality is starkly different: tokens are elements in a relational network, and the model computes weighted probabilities, not social consensus.

Treating tokens as actors encourages the mistaken impression that LLMs have opinions, goals, or beliefs.
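The "weighted probabilities, not social consensus" point can be made concrete with a minimal sketch. The vocabulary and logit values below are invented for illustration; a real model produces scores over tens of thousands of tokens, but the mechanism is the same softmax:

```python
import math

# Hypothetical logits a model might assign to a tiny vocabulary
# for the next position in "The cat sat on the ___".
logits = {"mat": 3.1, "roof": 1.4, "moon": -0.5}

# Softmax: each raw score becomes a probability. No token "campaigns"
# or "votes"; the distribution is a pure function of the scores.
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok}: {p:.3f}")
```

The "winner" is simply the token with the largest share of probability mass; there is no deliberation anywhere in the arithmetic.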


Why This Is Misleading

  1. Anthropomorphises mathematics — probabilistic outputs become political actors.

  2. Obscures systemic alignment — what looks like debate is the deterministic evaluation of learned relational patterns: a forward pass over fixed weights, not a negotiation.

  3. Encourages misattribution of responsibility — if a token “votes wrong,” it did not err; the system executed its constraints correctly.

The “society of tokens” metaphor is entertaining, but it smuggles a false ontology into our understanding of computation.


Relational Ontology Footnote

From a relational perspective, the LLM is a network of potentialities actualised in context. Tokens do not deliberate; they are positions in a pattern of correlations. Any appearance of social negotiation is an artefact of metaphor, not mechanism.


Closing Joke (Because Parody)

If tokens really had votes, the AI would be running a parliamentary system with filibusters, coalition negotiations, and scandal over the misuse of semicolons — and yet somehow still auto-completing your grocery list incorrectly.
