Surely I’m not just an LLM
(EDIT: I’ve rewritten and softened the text below, based on some valuable feedback from people who pointed out that I was wrong. Ah well.)
I just read an Ars Technica article by Benji Edwards titled “The personhood trap: How AI fakes human personality”, with the subtitle “AI assistants don’t have fixed personalities—just patterns of output guided by humans.”
I have lots of things to say, but I’m just going to post this very brief reply:
In my opinion, Benji Edwards falls here into the classic “Cartesian illusion” that trips up many people who talk about humans: the belief that behind the mechanism that generates the words there is some kind of “knower”, essentially a little person inside their heads. The sentence that most clearly communicates this illusion starts like this: “You’re interacting with a system that generates plausible-sounding text based on patterns in training data, not a person with …”

The problem is this: humans are also systems that generate plausible-sounding text based on patterns in training data. The inputs are sensory (including many internal nervous-system inputs), and I’m willing to concede that there are parts of the mind that are not well captured by LLMs. But words like “system”, “plausible-sounding”, and “training data” are red herrings here, representative (I claim) of a misunderstanding of how humans work, and of an unwillingness to admit that in most ways that matter, LLMs generate text in exactly the same way humans do.
So: yes. You learn from patterns in training data, just like an LLM.