Death To Algorithms - Introduction
The digital transformation has led to great economic prosperity, more convenience, and more opportunities for finding entertainment, knowledge, social connections, and new partners for business or romance. What’s more, it has granted us the ability to communicate seamlessly and in real time across distances and to access unfathomable amounts of information right at our fingertips, perhaps the most defining change to the way humans live since the birth of our species. Figuring out how best to leverage these newfound communicative abilities is a task for us and many generations to come.
Currently, the power of the internet is concentrated among a group of American tech conglomerates that have more in common with empires from the past than with public corporations. These actors have privatized and commercialized the internet, narrowing the space for exploration, experimentation, and thought leadership. By owning most of the internet’s hardware, software, and physical infrastructure (e.g., cables, cellular towers, and satellites), they have effectively installed themselves as the gatekeepers of information, with one converging vision for the internet’s future: they will control it. The cartel formation around the world’s digital infrastructure is not only blocking innovation by preventing new competitors from entering the market; it is also undermining the foundation of democracy.
Any democracy, no matter how it’s structured, rests on a public conversation. “Public” implies that anyone wishing to participate can do so. The “conversation” is only genuine if participants can freely express their opinions, even if they differ from the majority view. Disagreement and dissent should be encouraged, because they force the majority to continually refine and reconsider their viewpoints in the process of defending them. The ongoing discussions serve to sharpen the sword of a well-functioning democracy to cut through nonsense and inefficiency.
The public conversation breaks down if participants can’t agree on hard facts. For example, if one side of the conversation insists that the earth is round and the other side insists that the earth is flat, the parties are essentially having different conversations. A meaningful compromise can never be reached. Under such circumstances, the democratic process ceases to be an effective mechanism for governing society. This was less of a problem before the digital transformation, when people read the same newspapers and books, listened to the same radio stations, and watched the same shows and movies on television. Nowadays, we no longer rely on traditional media outlets and publishing companies to stay informed. The primary gatekeepers of information are now the recommendation algorithms of digital platforms, which automatically distribute personalized information based on its perceived relevance and entertainment value for each user’s profile. As a result, we consume news and opinions in personalized information clusters and tend to discuss different things whenever we try to have a conversation. Most of the time, we are not even trying.
For all the economic progress and endless opportunities the digital transformation has led to, the net result is that a whole lot of people are spending an alarming amount of time staring at screens. Consequently, people are talking less to each other, making less eye contact, touching each other less, and contributing less to the common good. Social withdrawal in favor of screens, algorithms, and fickle digital connections is a driver of existential loneliness and mental health issues, and it amplifies apathy and division on a societal level. Ironically, it’s great for the economy, particularly the attention economy, which will be a major theme of this book.
One survey from the United Kingdom shows that students from their early teen years in secondary school and up to university spend an average of 5.5 hours every day on their smartphones – the equivalent of 25 years of their life if the current trend holds.[1] From the American tech companies’ perspective, this is great news. More screen time means more data which, in turn, means more revenue. YouTube recently celebrated that its short-form video feed, YouTube Shorts, was averaging over 200 billion daily views.[2]
What are the students, and indeed society, getting in return for all of this screen time? Some chunks of it are spent on essential tasks like school, work, and messaging. But the majority of it is spent on entertainment, surrogate intimacy, and unfulfilling but highly addictive online activities. This screen-based adolescence and compulsory smartphone usage comes at a high cost. A consensus is gradually taking shape among researchers from various academic fields that it is a cause of rapid cognitive decline and a worsening mental health crisis among the youth.[3] Unfortunately, the rise of artificial intelligence (AI) will very likely exacerbate these negative impacts. Increasingly, we are outsourcing our thinking and decision-making powers to the AI systems that our American tech giants are employing in an attempt to expand their control of the internet.
What is AI? An old joke originating from the 1970s tells us that “AI is whatever hasn’t been done yet.”[4] The opposite is just as true. As soon as AI “has been done” it tends to fade into the background of our lives and disappear from public discourse. While media headlines, online chatter and coffee break conversations about AI often center around how people use ChatGPT or when we will all be jobless, the kind of AI that permeates our economy and digital lives is generally left out of the conversation.
What we call “predictive AI” forms the backbone of our modern economy. In short, I define predictive AI as the automated process of collecting and analyzing data to make informed decisions and predictions about the future. Data-driven predictions play a role in many of the economically significant activities of our age. They are used for targeted advertising, algorithmic trading, sports betting, personalized pricing and recommendations, hiring new employees, calculating credit scores and making risk profiles on insurance and bank customers, reducing churn for businesses, forecasting the weather, detecting early signs of cancer in patients, anticipating pandemics, planning shipping routes, optimizing energy grids, estimating crop yields in farming, prioritizing themes and plots for new Netflix Originals, and much, much more.
By contrast, AI systems like ChatGPT are defined by their creative output rather than their prediction-making capabilities, so we characterize them as generative AI. ChatGPT’s paradigm-shifting release in November 2022 was based on more than a decade’s worth of steady research progress in the field of deep learning. Generative AI systems that are in wide use today, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, Meta’s Llama, and xAI’s Grok, are all based on predictive modeling of language and pixels, which could theoretically count as another application of predictive AI. However, once these systems are finished with the initial training process and offered as products, we don’t use them for making predictions but for their ability to autogenerate coherent text and images.
Investments in new infrastructure for generative AI continue to skyrocket, and it’s still too soon to foresee what the endgame really looks like. Will these systems benefit humanity anywhere near as much as capital investors claim they will? How much will they add to the existing harms, collective nihilism, and value confusion invoked by recommendation algorithms? Only time will tell.
In this book, I will focus on two sub-categories of predictive AI that carry tremendous social and cultural impact: recommendation algorithms used for entertainment and matchmaking algorithms used for online dating.
My analysis will be centered around three case examples:
1) How Netflix revolutionized the recommender system but flattened culture.
2) How TikTok’s entertainment algorithm outcompeted Facebook’s emphasis on digital connections and weaponized entertainment.
3) How dating apps evolved from Match.com to Tinder and turned relationships into games.
In the final chapter, I will give three concrete and practical recommendations that can be implemented to deal with the outsized global influence and power of American tech companies, the addictive pull of digital platforms, the personalized information clusters that stand in the way of common understanding and agreement, and other challenges raised in this book. Two of these are political recommendations aimed at policy makers, and one is a personal recommendation to anyone reading.
[1] Mark Sellman, “Students ‘Will Spend 25 Years on Their Mobiles’,” The Times, June 22, 2025, https://www.thetimes.com/uk/technology-uk/article/average-young-person-25-years-phone-screen-time-hwt76mnpq.
[2] Neal Mohan, “Neal Mohan at Cannes Lions 2025: What 20 Years of YouTube Reveals about Creativity’s Future,” YouTube Official Blog, June 18, 2025, https://blog.youtube/news-and-events/neal-mohan-cannes-2025.
[3] Valerio Capraro et al., “A Consensus Statement on Potential Negative Impacts of Smartphone and Social Media Use on Adolescent Mental Health,” OSF Preprints, preprint, posted May 15, 2025.
[4] Larry Tesler, “CV: Adages and Coinages,” Larry Tesler (website), accessed November 7, 2025, https://www.nomodes.com/larry-tesler-consulting/adages-and-coinages.


