AI Could be Heading Towards the Trough of Disillusionment
Is AI really heading towards super intelligence? Or the trough of disillusionment?
I wrote a guest post on Michael Spencer’s publication AI Supremacy earlier in June, titled “The Rise of the AGI Profiteers”.
In it, I criticize, in no uncertain terms, how the young OpenAI refugee Leopold Aschenbrenner, in a 165-page manifesto and a 4.5-hour conversation on Dwarkesh Patel’s popular podcast, warns the public against the imminent threat of superintelligent AI.
Aschenbrenner recommends that the US quickly ramp up its compute infrastructure so it can achieve AGI (whatever that means!) before the Chinese Communist Party (CCP) gets there. Apparently, Aschenbrenner is having sleepless nights over the thought of the CCP stealing the model weights or algorithmic secrets behind OpenAI's models, which he claims are military secrets more important to national security than nuclear weapons.
Aschenbrenner’s extreme viewpoints are likely shared to some extent by OpenAI’s executives. The Information reported in March that Microsoft and OpenAI are planning to build a $100 billion data center set to launch in 2028. Sam Altman has been in talks with investors, including the United Arab Emirates, to raise up to $7 trillion to ramp up the semiconductor industry’s chip-manufacturing capacity. And just recently, OpenAI appointed the retired US Army general, former NSA Director, and cybersecurity expert Paul M. Nakasone to its Board of Directors. Edward Snowden commented:
“Do not ever trust OpenAI or its products. There’s only one reason for appointing [an NSA director] to your board. This is a willful, calculated betrayal of the rights of every person on earth. You have been warned.”
OpenAI is far from treating AI as a mere feature in consumer products; it treats it as critical infrastructure, with the view that continuing to build larger and more powerful AI models is not a choice but a necessity for protecting national security interests and the future competitiveness of the US. AI has been promised to supercharge economic growth, scientific discovery, and military capabilities, and to make a decisive impact across all sectors and human endeavors. But taking all these claims at face value is, in my view, a dangerous mistake, considering the questionable trustworthiness of the actors eagerly spreading them.
For example, here’s a brutal but not entirely inaccurate characterization of OpenAI’s venture-capitalist CEO, Sam Altman, from Ed Zitron’s essay, Silicon Valley’s False Prophet:
“Altman has taken advantage of the fact that the tech industry might not have any hyper-growth markets left, knowing that ChatGPT is, much like Altman, incredibly adept at mimicking depth and experience by parroting the experiences of those who have actually done things. Like Altman, ChatGPT consumes information and feeds it back to people in a way that feels superficially satisfying in a way that’s impressive to those who don’t really care about creativity or depth, taking advantage of the fact that the tech ecosystem has become dominated — and funded — by people who don’t really do anything that involves building software.
While ChatGPT isn’t inherently useless, Altman realizes that it’s impossible to generate the kind of funding and hype he needs based on its actual achievements, and that to continue to accumulate power and money, he must speciously hype it to wealthy and powerful people who don’t participate in the creation of anything.
Sam Altman is a monster created by Silicon Valley’s sin of empowering and elevating those that don’t actually build software, which, in turn, led to the greater sin of allowing the tech industry to drift away from fixing the problems of actual human beings. Altman’s manipulative power plays have been so effective because so many of the power players in venture capital and the public markets are disconnected from the process of building software and hardware, making them incapable — or unwilling — to understand that Altman is leading them to a desolate place.”
It’s well known that people experiencing a sense of diminished self-worth tend to find comfort in grandiose ideas, delusional fantasies, and exaggerated self-importance. This is a psychological self-defense mechanism that protects a swollen ego from facing its own trauma and the cold truths of reality.
I wonder if AI will truly be revolutionary, or if the American tech elite is going through something that resembles the manic phase of bipolar disorder after too many years of disappointing, incremental improvements to existing consumer products rather than life-changing breakthroughs. OpenAI’s quest to build AGI could be viewed as a symptom of this. If so, the only essential difference between OpenAI’s internal mission of building a superintelligent AI and a homeless person in a parking lot screaming “I am God” is that the former is deeply connected with capital and political influence that has a vested interest in making the delusion seem real.
My guess is that AI technology is either very near to or has already passed the “Peak of Inflated Expectations” on the Gartner Hype Cycle. When we reach a point in the foreseeable future where the technology struggles to keep progressing and impressing people, it may join NFTs and the metaverse in the cycle’s third stage, the “Trough of Disillusionment”, right before the “Slope of Enlightenment” and, finally, the “Plateau of Productivity”.
No matter what happens, we can be fairly sure that AI’s commercial appeal and technical capabilities cannot keep growing exponentially year after year, regardless of how many large data centers the techno elite gets away with building, for two obvious reasons:
1. AI is not useful for many applications due to hallucination risks
For example, a team of researchers from Stanford University assessed two of the leading AI research tools for lawyers, by Thomson Reuters and LexisNexis, and found that they each hallucinate between 17% and 33% of the time. At the same time, as I wrote about last month, the market for legal AI software is booming. We can’t deny that legal AI tools are valuable and useful for lawyers, but unless the hallucination rate moves close to zero, there is a hard limit on just how useful they can be.
2. AI is not intelligent
The "superpowers" of current large language models come from their training on a sizeable portion of the internet and from memorizing this colossal amount of data. AIs are skilled at solving narrow and clearly pre-defined tasks but they are not able to learn or apply existing knowledge to novel problems in the way that humans are. For example, a new study finds that even state-of-the-models show a complete breakdown in reasoning abilities when prompted to solve simple common-sense problems.
François Chollet, the well-known French AI researcher currently working at Google, says that OpenAI has set back progress towards AGI by 5-10 years by creating an industry standard of closed AI research and by promoting a single-minded focus on LLMs, which has "sucked the oxygen out of the room". I recommend checking out his appearance on Dwarkesh Patel’s podcast below.
If you have better information than I do, please feel free to criticize my views in the comments. I am still trying to learn.
Honestly, every day I wonder about this as I check AI stocks like Nvidia and their market caps. The exuberance, of course, eventually has to normalize, which in investing we call mean reversion.
The second point, besides disillusionment, is of course that the valuation and revenue generation of just a few companies makes up the majority of all the profits. If we assume that OpenAI makes about four times the revenue of Anthropic and can reach $4 billion in revenue by the end of this year, those two, both funded primarily by the cloud giants, join Microsoft and Google as the primary beneficiaries of the revenue generated by frontier models.
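To make the arithmetic behind that assumption explicit, here is a minimal back-of-the-envelope sketch. Both input figures are the rough assumptions stated above, not reported financials:

```python
# Back-of-the-envelope sketch of the revenue-concentration argument.
# Both inputs are the article's assumptions, not reported financials.

openai_revenue = 4.0           # assumed: ~$4B annualized revenue by year's end (in $B)
openai_to_anthropic_ratio = 4  # assumed: OpenAI earns roughly 4x what Anthropic does

anthropic_revenue = openai_revenue / openai_to_anthropic_ratio
combined = openai_revenue + anthropic_revenue

print(f"Implied Anthropic revenue: ${anthropic_revenue:.1f}B")  # ~$1.0B
print(f"Combined OpenAI + Anthropic: ${combined:.1f}B")         # ~$5.0B
```

Under those assumptions, Anthropic lands at roughly $1 billion, so the two labs together account for about $5 billion of frontier-model revenue, on top of whatever their backers Microsoft and Google capture directly.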
Just four companies in the world making the majority of all revenue from generative AI. It seems like Big Tech has far too much control over this momentum and movement, wherever it might lead.