

Tech Is Booming, Trust Is Collapsing
In a debate facilitated by The Economist, Yuval Noah Harari and Mustafa Suleyman sit down for an open discussion about the future of AI. (YouTube link here)
Harari is a historian, the author of best-selling books such as “Sapiens: A Brief History of Humankind”, and one of the best-known thinkers on the dangers of AI. Suleyman is a co-founder of DeepMind and Inflection AI and recently authored the book “The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma”.
Both men are hugely influential thinkers in the AI space: Harari from studying the past and the development of human civilization, Suleyman as a visionary with deep industry knowledge and experience who invests in the future of human-AI relationships. Bringing two people from opposite camps like Harari and Suleyman together in the same room is important.
In fact, as Harari explains in the video around the 22:40 mark, Western countries have the most sophisticated information technology in history, yet we are no longer able to talk with each other. Trust is collapsing. In the world’s richest and most technologically advanced country, people cannot even agree on the most basic facts of democracy, such as the result of the last election. Isn’t that crazy to think about?
In case anyone has wondered whether Futuristic Lawyer takes a more Harari-style or Suleyman-style view on AI dangers, the answer is definitely the former. Development and deployment of AI systems more powerful than GPT-4 should be paused, ideally for longer than six months.
Corporations are good at fixing external problems in the world, not at building trust or promoting mental health. In fact, corporations are bad at working towards any goal that cannot be quantified, measured, weighed, or calculated. The highest priority of any corporation is profit maximization. At every level, a typical company is concerned with attaining more, optimizing, reaching benchmarks, and increasing top-line growth. This constant striving for more is the antithesis of healing, communion, understanding, and depth. Restoring social trust would be millions of times more valuable than developing a new state-of-the-art language model. However, building a more powerful language model is economically viable and quantifiable, whereas restoring social trust is neither.
Many tech entrepreneurs seem to believe that imbuing systems with more intelligence is the key to fixing all of humanity’s problems. But current-day systems, particularly social media networks, are driving the collapse of trust and the polarization across democracies that we witness today. What would happen if we imbued these dysfunctional systems with 10x more intelligence? Catastrophe.
To quote Harari’s closing remark:
“If we, for every dollar and minute we invest in artificial intelligence, we invest another dollar and minute in developing our own consciousness and our own mind, I think we will be okay. But I don’t see it happening. I don’t see this kind of investment in human beings that we are seeing in the machine.”
Reads of the Week
NSF invests millions to unite Indigenous knowledge with Western science – Jeff Tollefson, September 8, 2023 (Nature)
What OpenAI Really Wants – Steven Levy, September 23, 2023 (Wired)