Announcement for 2025
“Hitherto, data was seen as only the first step in a long chain of intellectual activity. Humans were supposed to distil data into information, information into knowledge, and knowledge into wisdom. However, Dataists believe that humans can no longer cope with the immense flows of data, hence they cannot distil data into information, let alone into knowledge or wisdom. The work of processing data should therefore be entrusted to electronic algorithms, whose capacity far exceeds that of the human brain. In practice, this means that Dataists are sceptical about human knowledge and wisdom, and prefer to put their trust in Big Data and computer algorithms.”
- Yuval Noah Harari in “Homo Deus: A Brief History of Tomorrow”
I have decided to work on a book about predictive algorithms and the consequences of datafying humans and relationships. It will not be a law book but a personal account and a deeply researched, polymathic study of, essentially, AI’s impact on humanity.
I intend to continuously share my notes here on Substack throughout the year in the hopes of getting feedback. Towards the end of 2025, my goal is to publish the book digitally and hand out a gift link to paying subscribers.
As a reflection of this new commitment, I have raised my subscription prices ever so slightly.
For free subscribers, nothing will change. You will continue to receive previews of weekly posts, podcast episodes, and one monthly full-length post.
For paid subscribers, the price for full access to all my work on Substack and a gift link to my e-book at the end of the year is $7.50/month or $75/year.
Additionally, I am offering a new paid tier called Advisory Board Member for $150/year. This tier includes a 2-hour personal consultation with me over 1, 2, 3, or 4 sessions. Here, we will go deep into what you want to get out of Futuristic Lawyer, what you are currently working on, and, if relevant, what kind of challenges you are facing. Over time, I hope to create a private group chat, annual or bi-annual network meetings on Zoom, and other networking opportunities for Advisory Board Members.
Besides these changes, I will continue to publish as normal on different topics that interest me at the intersection of tech, law, IT business, and ethics.
Thanks for reading!
My Tech Prophecy for 2025
Introduction
What an ending to 2024.
OpenAI’s new o3 model achieved a breakout score on ARC-AGI-PUB, the benchmark designed by former Google engineer François Chollet and Zapier co-founder Mike Knoop to measure progress towards AGI with simple tasks that should be hard for LLMs to solve. Bitcoin surpassed a key milestone by reaching more than $100,000 in value per coin. Elon Musk is now a trusted advisor to the President with tremendous political sway. Lina Khan will likely be sacked from her position as Chair of the Federal Trade Commission, and so her admirable efforts to rein in Big Tech's monopoly power will come to a close. American techno-capitalism has won. The boosters have seemingly cleared out the last resistance and can make up their own rules for a new game few people want to play. Like the time when Elon Musk considered buying Dungeons and Dragons.
What happens from here?
In the United States, the tech industry will have more power, while the people who are trying to regulate it and protect democracy and public interests will have less. The rich will accumulate more riches, the poor will have less of whatever they have left, and the middle class will shrink. A civil war vibe is looming in the country, as evidenced by the media and the internet's idolization of an apparently handsome guy who murdered the CEO of a big health insurance company. Socio-economic inequality is causing profound disgruntlement and resentment toward the system, and social media as a communication tool lacks the dimension to pick up on these deeper structural issues and transmit them to users.
Speaking of social media, I predict that TikTok will not be banned, and if it is, not for long. Even if it should be banned permanently, an equally harmful service could quickly emerge and take over its market share. Gurwinder aptly describes TikTok as a new kind of Chinese bioweapon that harms people with an overload of pleasure rather than pain. Regardless, TikTok's pivotal role in Trump's successful reelection campaign - including Trump’s now famous and widely imitated shuffle dance, which was staged by “TikTok Jack”, Trump's 22-year-old Generation Z adviser - makes a permanent TikTok ban in the US unlikely for the next four years. We know that Trump operates by an eye-for-an-eye and a favor-for-a-favor principle: if you treat him nicely on a personal level, you will do well with him; if you don’t, you will not. TikTok has been very nice to Donald Trump.
Darkness aside, since techno-capitalism has seemingly won, and won big, we will see if the wet dreams of tech CEOs in Silicon Valley can be manifested. We will see if Elon’s new DOGE commission can successfully reduce public spending, if Bitcoin and crypto can find a "killer app" or a real-world problem to solve, and, most importantly for the purpose of this post, if AI can live up to its promises.
OpenAI’s latest model, o3, gives us a glimpse of legitimate superintelligence - by definition - as its capabilities exceed the individual intelligence of human experts in most fields. An obvious caveat here is that these results are based on benchmark tests, and it's extremely unlikely that they translate directly to real-world utility, as we will look further into in a future post.
Sam Altman thinks that superintelligence can fix the climate crisis and help to establish a colony in space within a few years, whereas Anthropic CEO Dario Amodei believes it’s only a matter of time before AI can cure all diseases. There are two substantial barriers to achieving something that resembles Silicon Valley’s vision of AGI, besides technical feasibility.
The first barrier is, well, money. o3 is very costly to use (thousands of dollars per query on the ARC-AGI-PUB benchmark in the high-compute configuration), and one can imagine that building and deploying a super-intelligent machine that can essentially solve all of humanity’s challenges will be prohibitively expensive.
Long term, the high price tag on superintelligence could create inequality between those who have money and those who don’t, creating a kind of A team and B team, or in other words, reinforcing the existing structures of socio-economic inequality in American society. Near term, the barrier is simply AI’s scaling problem: after a certain point, spending more capital on incrementally better performance does not make economic sense.
The other major barrier is environmental harm. A single task performed by o3 can emit the same amount of carbon dioxide as five full tanks of gas for a car, indicating that the future energy and resource demands of AI data centers will be outrageous.
The limits on capital and nature’s constraints could very well be permanent barriers to achieving superintelligence as it is advertised. But sidestepping the questions of whether superintelligence is attainable and sustainable, are we ready for it if it should come?
Now that the techno-capitalists have obtained the democratic mandate, and tech innovation can run unhinged in America with less pushback in the shape of annoying laws and finger-pointers, can the tech overlords really manage what they want to build?
Two critical areas that need to be managed well are AI’s impact on studying and on work. If AI does not lead to more productivity and satisfaction for students and workers overall, then AI as a techno-political movement has failed. Against this background, I have dedicated the next two sections to exploring how AI is and should be managed in workplaces and in higher education, and how we can measure whether AI’s impact has been net positive in these areas.
AI’s Impact on Studying