Tuesday 2 September 2025 by Bradley Kuhn
Big Tech seeks every advantage to convince users that computing is
revolutionized by the latest fad. Large Language Models (LLMs) are merely
the tipping point of the current one.
There's so much to criticize about generative AI, but I focus now merely on the pseudo-scientific rhetoric adopted to describe the LLM-backed, user-interactive systems in common use today. “Ugh, what a convoluted phrase”, you may ask, “why not call them ‘chat bots’ like everyone else?” Because “chat bot” exemplifies the very anthropomorphic hyperbole that concerns me.
Too often, software freedom activists (including me — 😬) have asked us to
police our language as an advocacy tactic. Herein, I seek not to cajole everyone
to end AI anthropomorphism. I suggest rather that, when you
write about the latest Big Tech craze, ask yourself: Is my
rhetoric actually reinforcing the message of the very bad actors that I
seek to criticize?
This work now has interested parties with varied motivations. Researchers,
for example, will usually admit that they “have nothing to contribute to
philosophical debates about whether it is appropriate to …
[anthropomorphize] … machines”. But
researchers also can never resist a nascent area of study — so all
the academic disclaimers do not prevent the “world of
tomorrow” exuberance
expressed
by those whose work is now the flavor of the month (especially after they toiled at it for
decades in relative obscurity). Computer science (CS) is no exception.
The research behind these LLM-backed generative AI systems is (mostly) not actually new. There's just more electricity, CPUs/GPUs, & digital data available now. When given ungodly resources, well-known techniques began yielding novel results. That allowed for quicker incremental (not exponential) improvement. But, a revolution it is not.
I once asked a fellow graduate student: “Do you know why it's wrong when
it's wrong and why it's right when it's right?”[0] She grimaced and
answered: “Not at all. It doesn't think.” Thirty years later, machines
still don't think.
Precisely there lies the danger of anthropomorphization. While we may never know why our fellow humans believe what they believe — after centuries that brought[1] Heraclitus, Aristotle, Aquinas, Bacon, Descartes, Kant, Kierkegaard, and Haack — we do know that people think, and therefore, they are. Computers aren't. Software isn't. When we who are succumb to the capitalist chicanery and erroneously project being unto these systems, we take our first step toward relinquishing our inherent power over these systems.
Counter-intuitively, the most dangerous AI anthropomorphisms are those that criticize rather than laud the systems. The worst of these, “hallucination”, is insidious. Appropriation of a diagnostic term from the DSM-5 into CS literature is abhorrent — prima facie. The term leads the reader to the Bizarro world where programmers are doctors who heal sick programs for the betterment of society. Annoyingly and ironically — even if we did wish to anthropomorphize — LLM-backed generative AI systems almost never hallucinate. If one were to insist on lifting an analogous term from mental illness diagnosis (which I obviously don't recommend), the term is “delusional”. Frankly, having spent hundreds of hours of my life talking with a mentally ill family member who is frequently delusional but has almost never hallucinated — and having had to learn to delineate the two for the purpose of assisting in the individual's care — I find it downright offensive and triggering that either term could possibly be used to describe a thing rather than a person.
Sadly, Big Tech really wants us to jump (not walk) to the conclusion that these systems
are human — or, at least, beloved pets that we can't
imagine living without. Critics like me are easily framed as Luddites
when we've been socially manipulated into viewing — as “almost
human” — these machines poised to replace the artisans, the law enforcers, and the grocery stockers. Like many of you, I read
Asimov as a child. I later cheered during ST:TNG S02E09 (“Measure of a
Man”) when Lawyer Picard established Mr. Data's right to sentience
by shouting:
“Your Honour, Starfleet was founded to seek out new life. Well, there it
sits.”
But, I assure you as someone who has devoted much of my life to
considering the moral and ethical implications of Big Tech: they have
yet to give us Mr. Data — and if they eventually do, that Mr. Data[2]
is
probably going to work for ICE, not Starfleet. Remember, Noonien Soong's
fictional positronic opus was altruistic only because Soong worked in a post-scarcity society.
While I was still working on a draft of this essay, Eryk Salvaggio's essay “Human Literacy” was published. It makes excellent further reading on the points above.
[0] I always find that, in science, the answers to the simplest questions are always the most illuminating. I'm reminded of how Clifford Stoll wrote that the most pertinent question at his PhD physics prelims was “why is the sky blue?”.
[1] I really just picked a list of my favorite epistemologists here that sounded good when stated in a row; I apologize in advance if I left out your favorite from the list.
[2] I realize fellow Star Trek fans will say I was moving my lips and nothing came out but a bunch of gibberish because I forgot about Lore. 😛 I didn't forget about Lore; that, my readers, would have to be a topic for a different blog post.
Posted on Tuesday 2 September 2025 at 12:25 by Bradley Kuhn.
Comment on this post in this discussion forum conversation.
This website and all documents on it are licensed under a
Creative Commons Attribution-Share Alike 3.0 United States License.
#include <std/disclaimer.h>
use Standard::Disclaimer;
from standard import disclaimer
SELECT full_text FROM standard WHERE type = 'disclaimer';
Both previously and presently, I have been employed by and/or done work for various organizations that also have views on Free, Libre, and Open Source Software. As should be blatantly obvious, this is my website, not theirs, so please do not assume views and opinions here belong to any such organization.
— bkuhn
ebb is a (currently) unregistered service mark of Bradley Kuhn.
Bradley Kuhn <bkuhn@ebb.org>