We are told that large language models “learn” when exposed to vast amounts of text. The metaphor suggests cognitive growth: LLMs are apprentices, becoming wise through experience, like monks poring over scripture.
Charming — but entirely misleading.
The Metaphor Problem
- “Learning” implies understanding, deliberation, and internalization.
- “Experience” implies consciousness and subjective engagement.
- In reality, a model adjusts numerical weights according to statistical patterns. There is no comprehension, no reflection, no moral insight.
By using the metaphor of learning, we subtly import human cognitive ontology into a mathematical system. Users begin to think models understand what they produce, when all that exists is pattern alignment.
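To make the point concrete, here is a deliberately naive sketch in Python: a bigram counter, which is not how any real LLM is built (real models adjust billions of weights by gradient descent rather than by counting), but the ontological picture is the same. “Training” is incrementing numbers; “generation” is sampling from them. Nothing in either loop reads, reflects, or understands.

```python
from collections import defaultdict
import random

# Toy "training": record which token follows which. No study, no insight;
# just numbers that accumulate co-occurrence patterns from the corpus.
def train(corpus_tokens):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1  # the entirety of the "learning experience"
    return counts

# Toy "generation": sample the next token in proportion to past frequencies.
# The model instantiates correlations among tokens; it has no idea what a
# duck or a soup is.
def generate(counts, start, length=6):
    token, out = start, [start]
    for _ in range(length):
        followers = counts.get(token)
        if not followers:
            break
        token = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(token)
    return " ".join(out)

corpus = "we eat duck soup and we eat more duck soup".split()
model = train(corpus)
print(generate(model, "we"))  # e.g. "we eat duck soup and we"
```

Swap the counter for gradient updates over a few hundred billion parameters and the picture changes in scale, not in kind: weights move, patterns align, and nobody is home.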
Why This Is Misleading
- Anthropomorphises statistical optimisation: it turns arithmetic into mental processes.
- Obscures the relational nature of language: LLMs do not know; they only instantiate relational correlations among tokens.
- Encourages overtrust: if it “learned,” it must understand; if it “understands,” it must be reliable.
The “training” metaphor conceals that LLMs are instantiations of relational constraints learned from large corpora, not apprentices acquiring wisdom.
Relational Ontology Footnote
From a relational perspective, the model is a system of potentialities actualised in a given context. “Training” is a second-order construal of weight adjustment patterns, not a process of comprehension. No agency, no cognition — only alignment of statistical potentials.
Closing Joke (Because Parody)
If LLMs really “learned,” your predictive text would be writing a dissertation on Kant instead of suggesting “duck soup” at every meal.