
Introduction
Are we reaching a hard limit on AI progress?
For anyone who has been paying attention (at least to this newsletter), the answer is probably yes.
Cal Newport provides a good timeline of the “scaling hypothesis” for The New Yorker here. The short summary is that Gary Marcus was ridiculed by the machine learning community, including celebrities such as Sam Altman, Elon Musk, and Yann LeCun, when he argued in 2022 that the current pace of AI progress was unsustainable. Now, after the release of GPT-5, he has finally been vindicated. Obviously, exponential technological progress can’t go on forever - that anyone was willing to stand on the other side of that argument is surprising.
To be clear, the problem is not whether, or to what extent, AI is useful. Clearly, AI is useful for some things and not for others. The real problem, as I see it, is that the economy favors the Sam Altmans of the world many times more than the Gary Marcuses of the world. Shareholders in Big Tech companies and venture capitalists are like children; they like to be told captivating stories, not to hear boring facts. When you own a couple of beach houses, private jets, and yachts, choosing to believe in good stories over facts is presumably that much easier than it is for people who are forced to find a sense of meaning and satisfaction in normal, boring reality.
The Hard Limit on AI
A few months ago, Thomas Wolf, Chief Science Officer of the popular AI community Hugging Face, wrote a blog post titled “The Einstein AI Model”. He critiqued a claim made by Anthropic’s CEO Dario Amodei, who said we’ll have a “country of Einsteins sitting in a data center”. Wolf counters that no, we are building “a country of yes-men on servers”:
“I’ve always been a straight-A student. Coming from a small village, I joined the top French engineering school before getting accepted to MIT for PhD. School was always quite easy for me. I could just get where the professor was going, where the exam's creators were taking us and could predict the test questions beforehand.
That’s why, when I eventually became a researcher (more specifically a PhD student), I was completely shocked to discover that I was a pretty average, underwhelming, mediocre researcher. While many colleagues around me had interesting ideas, I was constantly hitting a wall. If something was not written in a book I could not invent it unless it was a rather useless variation of a known theory. More annoyingly, I found it very hard to challenge the status-quo, to question what I had learned. I was no Einstein, I was just very good at school. Or maybe even: I was no Einstein in part *because* I was good at school.
(...)
“The main mistake people usually make is thinking Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student.”
Coming up with ingenious ideas requires asking tough, courageous questions that no one has thought of or dared to ask before. The current political climate in the US is the opposite of a fertile breeding ground for exploring groundbreaking ideas and tough scientific questions.
The AI models of today are straight-A students, but they arguably lack the “Einstein factor”. The metaphor can be extended even further. The top AI models of today are straight-A students who, before the exams, have been studying nonstop without sleep for days, doped up on coffee, Adderall, and Ritalin, while suffering from self-esteem issues and the burden of expectations from strict parents. Even once the AI models pass all tests with flying colors and go on to have prosperous careers as doctors, lawyers, and engineers, they continue to feel insufficient. That is because the AI models are extremely good at performing, but the performance comes with invisible costs, including huge capital spending, climate impacts, poorly paid data annotators who work under demeaning conditions, and “theft” or “stealing”, as one judge recently described the practice of using books from piracy repositories to train AI models. Also, impressive as the models’ auto-generations are, they still lack a certain human quality - which could also be said about many “straight-A students”.
In this post, I try to tackle the limits of AI from the perspective of existential philosophy. The hard limit AI is coming up against is related to the hard problem of consciousness. The hard problem of consciousness means, in a nutshell, that we can observe and measure phenomena in the physical world but cannot explain why physical processes in the brain give rise to subjective experience in the first place. We can measure a human’s heart rate and brain activity, but not what they are subjectively thinking, feeling, and experiencing. Because we can never put our essential “humanness” into a matrix, there is a hard limit on how “human” AI can become. In turn, this makes it very challenging to reach the philosopher’s stone of American AI labs: AGI that eliminates all human labor.
It can be summed up like this: anything that can’t be measured can’t be data, and all the things that make life worth living can’t be measured. Against this background, the hard limit on AI progress is not related to compute bottlenecks or inefficient algorithms, but to the quality of data. Unfortunately, this hard limit pertains not only to AI progress but to the progress of our entire data-driven society.