A New Job Market is Opening for Futuristic Thinkers
Running an early-stage newsletter is about acquiring specialized knowledge and mastering the craft of writing, while also searching for a satisfying answer to questions like: What the heck am I doing? What can I offer? What am I trying to accomplish? I believe some resistance and creeping doubt are an unavoidable part of the journey whenever someone strays from the conventional, beaten path.
I personally spend several hours every day, seven days a week, thinking, reading, and writing about AI's impact on society and law. Some days, I am driven by an egoic desire to peek into the future, catch an early glimpse of broad trends, and write about them before they appear. Other days, it’s a bit harder to explain why I do it. But as the iconic tech writer Kevin Kelly advises in an interview with Noah Smith:
“My generic career advice for young people is that if at all possible, you should aim to work on something that no one has a word for. Spend your energies where we don’t have a name for what you are doing, where it takes a while to explain to your mother what it is you do. When you are ahead of language, that means you are in a spot where it is more likely you are working on things that only you can do. It also means you won’t have much competition.”
In the coming years and decades, AI, along with other powerful technologies, is destined to make a substantial impact that cuts deep into the very bones of society, democratic values, and our understanding of what it means to be human. We need futuristic thinkers at the intersection of tech and humanity to understand, advocate for, and steer the direction of new technologies, fulfilling their potential while recognizing and limiting their risks.
In this post, I will give you a preview of what you can expect to read about over the coming months on The Gap.
Work & Meaning
As Dave Karpf wrote in a recent essay, Silicon Valley is the church of Moore’s Law.
Moore's law began as a remarkable prediction by the early computer chip pioneer Gordon Moore, who foresaw in 1965 that the number of transistors on a microchip would double every year, a pace he later revised to roughly every two years. For the next several decades, his prediction held: computer processing power has doubled roughly every two years, and this exponential growth has carried us into the digital age we live in today.
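The doubling rule is easy to see with a few lines of arithmetic. The sketch below assumes a nominal starting point (the Intel 4004's roughly 2,300 transistors in 1971) and a strict two-year doubling; both are simplifying assumptions for illustration, not exact chip history.

```python
# Illustrative arithmetic only: strict Moore's-law doubling from a
# nominal baseline (Intel 4004, ~2,300 transistors, 1971).
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Projected transistor count if counts double every `doubling_years`."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for y in (1971, 1991, 2011, 2021):
    print(y, f"{transistors(y):,.0f}")
```

Twenty-five doublings later, the toy model lands in the tens of billions of transistors, which is roughly the order of magnitude of flagship chips today.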
As OpenAI’s CEO Sam Altman writes in his 2021 essay “Moore’s Law for Everything”:

“In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of ‘everything.’”
This is less of a prediction and more of a promise. OpenAI is building towards a future where AI does “almost everything”. But is that the future we want? Or, if it is indeed inevitable, as evangelists from the church of Silicon Valley claim, how can we find meaning in a world where human labor is increasingly devalued?
Social & Economic Inequality
Technocapitalism is based on the Matthew principle:
For to every one who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away.
— Matthew 25:29, RSV.
In Daron Acemoglu and Simon Johnson’s book “Power and Progress”, the two economists challenge the notion that technological progress is good for society by default. Over the past 1,000 years, technological progress has often benefited a small elite at the expense of wage earners, and today is not much different: automation tends to undermine workers and enrich those who are already rich.
The growing power of tech dynasties is also weakening democratic institutions and leaving them vulnerable to persuasion. Just last week, during the ongoing negotiations over the EU's AI Act, Germany and France strongly opposed binding rules for foundation models, a position that would effectively exempt systems like ChatGPT from the Act's coverage. Europe’s biggest OpenAI competitors, France’s Mistral AI and Germany’s Aleph Alpha, had allegedly leveraged their strong political connections to sway the opinions of those who are supposed to govern them.
In our digital age where humans are increasingly reduced to profits and avatars, it’s easy to sleep on human rights law and transfer important decision-making powers to tech monopolies. Data ownership and more self-governance for users should be important priorities. Additionally, we need mechanisms to disperse wealth and prosperity in the age of AI, so that power and abundance are less concentrated in the hands of a few.
Artistic Rights & Livelihoods
Should AI companies ask for consent and pay license fees when using copyrighted material to train generative AI models? The question is hotly contested. Several court cases are brewing around these issues, such as the Authors Guild class-action lawsuit, which could potentially upend the entire industry.
Technically, generative AI models do not store any of the copyrighted material they are trained on; rather, they extract statistical information and learn to recognize patterns across a huge quantity of training data. Removing any single work, or even a collection of works, is very unlikely to have a meaningful impact on the AI’s output. The copyrighted works are just small drops of water in a vast ocean of data.
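The “drop in the ocean” intuition can be made concrete with a deliberately tiny stand-in for a real model: a character-frequency table fitted on a pile of synthetic documents. This is an illustration of the statistical point only, not how a generative model actually works, and all names and numbers here are invented for the sketch.

```python
# Toy illustration: fit character frequencies on many "documents",
# then refit with one document removed and measure how much the
# learned statistics move.
from collections import Counter
import random
import string

random.seed(0)
docs = ["".join(random.choices(string.ascii_lowercase, k=500)) for _ in range(1000)]

def freq_model(documents):
    """Return per-character probabilities estimated from the corpus."""
    counts = Counter("".join(documents))
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.items()}

full = freq_model(docs)
ablated = freq_model(docs[1:])  # "remove" one work from training

# Largest shift in any single character's learned probability.
drift = max(abs(full[c] - ablated.get(c, 0.0)) for c in full)
print(f"max probability shift after removing one document: {drift:.6f}")
```

With 1,000 documents, the shift is vanishingly small; the point of contention in the lawsuits is whether that statistical framing, rather than a copying framing, is the legally relevant one.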
Creators who are worried about competing with AIs in the future may be better off relying on technical solutions to protect their work from web scraping, such as the data-poisoning tool Nightshade, which adds invisible changes to the pixels of digital art. At scale, this could severely damage the quality of AI image models.
Some AI industry leaders swear by everlasting growth, technological solutions to all problems, and more automation no matter the cost. However, too many abrupt changes in the labor market could have unintended consequences: mass unemployment, together with a collective loss of meaning.
Replacing human workers with automation could also create undesirable second-order effects: lower wages for workers, more capital for Big Tech companies, weakened democratic institutions, and a widening gap between rich and poor.
Finally, the AI-versus-copyright debate is probably leaning in the AI companies' favor. Web scraping can hardly be seen as an act of unauthorized copying under existing legal regimes, and if a high court or a government were to overturn the status quo, it would be an economic disaster for the industry. Realistically, artists will probably have to compete with AI products for years to come, and the law does not give them much ammunition to fight back.
AI’s impact on work and meaning, social and economic inequality, and artistic rights and livelihoods are three broad themes I expect to follow closely and write more about in future posts. Stay tuned.
Reads of the Week
Here are 13 Other Explanations For The Adolescent Mental Health Crisis. None of Them Work. - Jean M. Twenge, October 23, 2023 (After Babel)
To Run My Best Marathon at Age 44, I Had to Outrun My Past - Nicholas Thompson, April 20, 2020 (Wired)