Stone Mind by Juan J. Ramirez
The Subtle Unfolding: GPT and the Fundamental Essence of Intelligence
How Large Language Models (LLMs) Hint at the Simplicity of Intelligent Life
Like many people, I have been fascinated by the recent surge of Large Language Models (such as OpenAI’s GPT) and their almost unbelievable capacity for knowledge retrieval and reasoning tasks.
Yes, LLMs hallucinate, and they often fail at very simple contextual tasks such as character counting.
But despite these weaknesses, LLMs are a remarkable tool that expands the horizon of what computers can do with the cumulative knowledge we have captured as humans in books, press articles, research papers, and the web in general.
Even with their somewhat choppy reliability, LLMs can piece together information in genuinely intelligent-seeming ways and display the attributes of an intelligent actor.
But what makes LLMs incredibly interesting is the simplicity behind what they do.
LLMs are just character/token prediction machines that rely on their hyper-scaled size, in terms of training data and synapse-like parameters, to exhibit the type of utility they have become popular for.
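To make "just predicting the next token" concrete, here is a deliberately tiny sketch (the corpus and all names are invented for illustration, and this is a count-based bigram table, not GPT): given what came before, what usually comes next?

```python
from collections import Counter, defaultdict

# Toy illustration, not GPT: a bigram model that predicts the next word
# purely from counts. GPT performs the same basic job, predicting the
# next token, but with a vastly larger context window and a learned
# transformer instead of a lookup table.

corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent follower of `word` in the corpus.
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # 'cat' ("cat" follows "the" twice, "mat" once)
```

Scaling this idea up, from a word-pair table to billions of parameters conditioned on thousands of preceding tokens, is essentially the leap that LLMs represent.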
Of course, the way these models are trained depends on multiple layers of tactical complexity, so they are definitely a mind-blowing accomplishment that wouldn’t have been possible a few years ago.
But at the heart of it all, the transformer architecture in LLMs is just a rather simple mechanism for capturing the order of words and how they connect to each other.
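That core mechanism, scaled dot-product attention, fits in a few lines. The sketch below uses tiny hand-made vectors purely for illustration (real models use learned, high-dimensional embeddings and many stacked layers):

```python
import math

# Toy sketch of scaled dot-product attention, the heart of the
# transformer: each token scores every other token, and those scores
# decide how much of each token's value vector flows into the output.

def softmax(xs):
    # Turn raw scores into weights that sum to 1.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # Weighted mix of the value vectors: how tokens "connect".
        mixed = [sum(w * v[i] for w, v in zip(weights, values))
                 for i in range(len(values[0]))]
        outputs.append(mixed)
    return outputs

# Three "tokens" with made-up 2-dimensional embeddings; using the same
# vectors as queries, keys, and values (self-attention, simplified).
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(Q, Q, Q)
print(len(out), len(out[0]))  # 3 tokens out, 2 dimensions each
```

Everything else in a transformer (feed-forward layers, normalization, positional encodings) is scaffolding around this one simple idea applied at enormous scale.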
This relative simplicity is what makes them even more special.
If an LLM is capable of showing early signs of intelligence, then it’s highly likely that the fundamental rules that allow intelligence to emerge are more elemental and primitive than our own conscious perception of intelligence would suggest.
This discovery opens the door to amazing possibilities in our attempts to understand the very fabric of reality.
Is it possible that the elusive fundamental theory of everything could be as simple as the straightforward, scaled-up mathematics of language that makes LLMs possible?
Perhaps the universe is just a system with similar or even simpler rules, and the only reason we have failed to decode its workings is that we have explored in the direction of complexity instead of simplicity.
The promise and potential of LLMs is a mesmerizing and inspiring example of the simplicity from which the world seems to stem.
I’m not sure LLMs will be capable of achieving AGI, but I wouldn’t be surprised if they did. After all, language is the most important interface for human intelligence and the only mechanism that allows us to relay our stream of consciousness.
It would make perfect sense that decoding the mechanics of natural language, plus a little reinforcement learning, is all it takes for artificial general intelligence to happen. Maybe then we could ask that AGI what the meaning of life is, and be surprised by how obvious and elegantly simple the answer turns out to be.