Tech Legal Brief #12 – Generative AI Is the World’s Most Expensive Magic Trick
+ more on AI and copyright litigation.
When we are impressed by AI, we are really impressed by ourselves.
The pretrained models are pattern-recognition tools that have learned to draw inferences from the bulk of creative work humans have produced over the last decades – and, in some cases, centuries. That means that artists, scientists, writers, casual social media posters, and everyone in between have, without knowing it, contributed to making AI what it is today.
So, congratulations everyone, I guess?
But who really deserves the credit? The major AI labs have a vested interest in making AI tools seem magical – capable of much more than just regurgitating the statistical average of human inputs. A new paper by OpenAI, “Early science acceleration experiments with GPT-5”, published in collaboration with scientists from eight universities, argues that GPT-5 has “started to be capable of contributing intellectually to scientific research”. The implicit key claim is that GPT-5 is on a path to transcend human knowledge by creating something completely new that no human has ever thought of before. In other words, foundation models are much more than advanced auto-text generators; they are, in fact, non-sentient, superintelligent beings.
How is generative AI able to produce novel scientific research that could potentially cure diseases, fix the climate crisis, turn humanity into a multi-planetary species, and create art so beautiful and engaging that there will no longer be a need for human artists? These are not predictions I make up for fun, by the way: CEOs such as Sam Altman, Dario Amodei, and Ilya Sutskever, along with many other industry leaders, deeply believe that all of this will be possible by the end of this decade. But the answer is that no one knows exactly how. It’s magic!
The US economy is now staked on this superintelligence bet. If the gigantic investments in AI infrastructure are ever going to pay off, AI tools have to be much more than just tools. They have to perform magic: solve complex scientific problems, reliably and independently take actions on behalf of users, and much more. In short, what currently keeps the states in the US united is an ultrahazardous bet on building Samantha from the movie Her.
The often science-fiction-like discussions about AI’s short-term potential and the grave economic risks are important, but they tend to overshadow an issue that is, in my opinion, even more substantial: Who should profit from AI?
Given that the leading AI models are trained on humanity’s cultural heritage, can we accept that a few American companies should profit from it, while ordinary humans have to pay to access what other humans have essentially created, but in the form of a privately-owned product?
The issue relates to copyright law, but even more so to weak enforcement of antitrust law. Sadly, under the Trump administration’s gracious watch, Big Tech companies and tech billionaires can get away with murder in broad daylight (figuratively, I hope) as long as it is done in the pursuit of outcompeting China (the proper pronunciation is “Gina”) in the AI race. That is why Republican lawmakers have reintroduced the idea of a 10-year moratorium on state AI laws, which would, for example, block California’s comprehensive efforts to regulate the observable harms and risks associated with AI.
That is also at least part of the reason why a federal judge recently ruled that Meta did not create an illegal monopoly in social media by acquiring Instagram and WhatsApp, even though Mark Zuckerberg had explicitly written in emails that “It is better to buy than to compete,” and made other statements that unambiguously indicated the acquisitions were intended to kill rivals.
Matt Stoller writes in Big that “Big Tech is no longer an antitrust problem, but a macroeconomic problem”. The influence and power of Big Tech can no longer be tamed by antitrust law, because the political will is to go all in on AI superintelligence. If the bet fails, the US economy fails with it.
Ben Casselman and Sydney Ember recently reported in The New York Times that the US now runs a two-track economy:
“The U.S. economy in 2025 is split in two: Everything tied to artificial intelligence is booming. Just about everything else is not.”
More on AI and Copyright Litigation
One of the thorny legal issues I have covered most on Futuristic Lawyer is whether the training of cutting-edge AI models constitutes copyright infringement. Ethically and ideologically, I think the answer should be yes. From a legal perspective, the answer may vary depending on the jurisdiction, the details of the case, and the mood of the judge on a given day. Let’s briefly go over a few developments from the last month.
GEMA v. OpenAI
Landgericht München in Germany issued a ruling on November 11 in a dispute between OpenAI and GEMA, the German collecting society that licenses music rights on behalf of musicians, composers, and songwriters. The case concerned the lyrics of nine well-known German songs that OpenAI’s GPT-4 and GPT-4o could reproduce in part. In one instance, GPT-4 reproduced just 25 words of a song, and in other instances, certain words and sentences in the lyrics were hallucinated. Still, Landgericht München ruled that the models’ memorization amounted to an unauthorized reproduction and an unauthorized making available to the public, not covered by the text and data mining (TDM) exception in Article 4 of the EU’s Copyright Directive, since the song lyrics were demonstrably stored in the models’ parameters.
Read more:
GEMA v OpenAI: Memory is Fragile. Garbage Lasts Forever. (Paul Keller/Open Future)
Getty Images v. Stability AI
On November 4, the High Court of England and Wales handed down a much-anticipated decision in a case brought by the stock image repository Getty Images against the open-source-focused AI company Stability AI. Stability had used photographs owned by Getty Images, without permission, to train its diffusion model (a category of generative AI), Stable Diffusion. The court did not address primary copyright infringement because Stability had – allegedly – blocked any prompt that could generate infringing outputs after the lawsuit was brought. The court found that Stability was not liable for secondary copyright infringement (secondary, because Stable Diffusion’s model weights are available for users to download on Hugging Face). According to the judge, Getty did not provide evidence that copyrighted work had been memorized by Stable Diffusion, and the work was thus not stored and reproduced as infringing copies in the model weights.
Read more:
Getty Images v Stability AI: A landmark High Court ruling on AI, copyright, and trade marks (Andres Guadamuz/TechnoLlama)
The Settlement Between Universal Music Group and Udio
In June 2024, the world’s leading record labels – including the “Big Three”: Universal Music Group (UMG), Sony Music, and Warner Music Group (WMG) – filed copyright lawsuits, spearheaded by the Recording Industry Association of America (RIAA), against the AI music platforms Suno and Udio, claiming that the two companies had trained their models on the labels’ copyright-protected music without permission.
On October 29, UMG and Udio announced a strategic partnership that ends UMG’s involvement in the dispute. Under the settlement agreement, Udio will pay UMG an unspecified settlement fee and sign a licensing agreement covering music owned by UMG. Going forward, artists signed to UMG will be compensated both for letting their work be included in the training process and each time an output is produced based on their protected work (Billboard). The new collaborative platform will launch in 2026, and artists and songwriters who wish to participate can do so on an opt-in basis.
Just a few days ago, WMG announced a similar settlement with Udio. These settlements are a win for the artists - and a logical compromise. At the same time, I still don’t see the point of, or the need for, AI-generated music, ethically sourced or not.