LLMs are interesting, but I’m not convinced that they are a model of intelligence, even a simple one. For one, language is a very human lens for understanding intelligence, but it isn’t the only one. Species including rats, pigs, dolphins, apes, crows, and octopuses all exhibit intelligence through complex social structures, memory, tool use, puzzle solving, non-verbal communication, and more, without the ability to use language. Additionally, there are humans who, for a variety of reasons, aren’t capable of using language, but who are no less intelligent for it. Written or spoken language is only one way that we understand and convey information to other humans. Demonstration, tone, drawings and diagrams, gestures, and more are other ways that our understanding is encoded and transferred to others.

LLMs are great at recognizing patterns, but they have no conceptual model of the world, and that absence marks a deep difference between human intelligence and machines. Humans and other species can flexibly adapt their knowledge of one thing to another with little to no additional training. Part of that is their ability to build analogies between similar things in their mind: this new thing is kind of like this other thing I’ve experienced before…I’m going to try this approach and adapt based on what happens, adjusting my analogy as I go. And humans are often able to draw analogies between concepts that might seem rather disparate, even to other humans. Someone may compare designing a digital product to writing music or to the scientific method, depending on their previous experience. In that process, they often find skills that translate to the new context, making it easier and faster to pick up new ones.

At the moment, the poor facsimiles of intelligence exhibited by LLMs actually deepen my interest in how fascinating and wondrous humans are. If we think about how humans learn, they absorb both the mechanics and meaning of words from only a small sample of language modeled for them, reaching independent use typically in under ten years. And even beyond that, their vocabulary and grammar continue to expand and evolve organically as new words and constructions enter the lexicon. And they do all of that without having every piece of material ever authored by humans fed into them. There are efficiencies in this system of learning that we simply haven’t come to understand.

Instead, I’d argue that LLMs are being built to give the illusion of intelligence, like very advanced automatons that can seemingly replicate the output of intelligent life, but not the process. We’ve become, possibly dangerously, enamored of our own creations without considering their limitations or our own. I’d highly recommend reading this article about the nature of LLMs: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
