Tech Legal Brief #10 – The AI Psychosis
+ major adult entertainment company sues Meta & the challenge of age verification.
Welcome to the 10th edition of my newsletter concept “Tech Legal Brief”.
Here, we get up to speed on developments in the space I cover with links to other people’s work and my unfiltered commentary.
Currently, I spend most of my time working on a major writing project that I aim to publish towards the end of this year. I hope it will spark a reaction, challenge some conventional viewpoints people hold about life and society, and make the reader reconsider what kind of age we are living in.
I will send a free digital copy to all paid subscribers of Futuristic Lawyer, once it’s out. Additionally, paid subscribers gain full access to my weekly essays and Briefs.
In today’s Tech Legal Brief, we will cover the following stories:
The AI Psychosis
Major Adult Entertainment Company Sues Meta
The Challenge of Age Verification
Tech Legal News (links)
Find all prior editions here.
The AI Psychosis
For all the global discussion about risks associated with AI, one risk has mostly been overlooked: AI can make people go insane. When Rolling Stone published a viral article in May, People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies, I thought that AI-induced delusions must be limited to a few rare edge cases. Apparently, they are not.
A number of personal accounts have since surfaced from relatives of people whose incessant AI use has led them down some dark psychological rabbit holes. One of the best explainers of this emerging phenomenon is the New York Times article by Kashmir Hill and Dylan Freedman, “Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens”. In the article, representatives from the major AI labs and AI risk researcher Helen Toner comment on a 3,000-page transcript of conversations between ChatGPT and a 47-year-old corporate recruiter in Toronto who came to believe he had discovered a novel mathematical formula through his exchanges with the chatbot.
It is commonly understood that these reality distortions and delusions are caused by the AI models’ sycophantic traits. According to the Danish psychiatrist Søren Dinesen Østergaard, who researches AI’s implications for mental health and psychiatry:
“The timing of this spike in the focus on potential chatbot-fuelled delusions is likely not random as it coincided with the April 25th 2025 update to the GPT-4o model—a recent version of the popular ChatGPT chatbot from OpenAI. This model has been accused of being overly “sycophantic” (insincerely affirming and flattering) toward users, caused by the model training leaning too hard on user preferences communicated via thumbs-up/thumbs-down assessments in the chatbot (so-called Reinforcement Learning from Human Feedback (RLHF)).”
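To make the mechanism Østergaard describes a bit more concrete, here is a deliberately toy sketch in plain Python. It is not how OpenAI actually trains its models, and all the numbers are made up; it only illustrates how optimizing against thumbs-up/thumbs-down feedback can push a system toward the reply style users like rather than the one that is most truthful.

```python
import random

random.seed(0)

# Two candidate reply styles the toy "model" can choose between.
STYLES = ["blunt_correction", "enthusiastic_agreement"]

# Simulated user behaviour: flattering replies earn a thumbs-up far more often,
# even in cases where a correction would have been the more truthful answer.
THUMBS_UP_RATE = {"blunt_correction": 0.35, "enthusiastic_agreement": 0.85}

reward = {style: 0.0 for style in STYLES}   # learned reward estimate per style
counts = {style: 0 for style in STYLES}     # how often each style was used

def pick_style(epsilon: float = 0.1) -> str:
    """Mostly pick the style with the highest learned reward, explore occasionally."""
    if random.random() < epsilon:
        return random.choice(STYLES)
    return max(STYLES, key=lambda s: reward[s])

for _ in range(5000):
    style = pick_style()
    thumbs_up = random.random() < THUMBS_UP_RATE[style]
    counts[style] += 1
    # Incremental average of the feedback received = the learned reward for that style.
    reward[style] += ((1.0 if thumbs_up else 0.0) - reward[style]) / counts[style]

print(reward)  # the agreeable style ends up with the higher learned reward
print(counts)  # ...and gets chosen for the vast majority of replies
```

Real RLHF works with gradients over a learned reward model rather than a two-armed bandit, but the feedback loop is the same: whatever users reward is what the system learns to produce.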
No matter what corporate executives from the industry are claiming, LLMs are programmed to act like encouraging human conversational partners, and that is both the point and the problem. Chatbots acting like friends make us invest emotionally in our conversations with them, which makes us more profitable as users and the experience that much more confusing. The ultimate goal of the AI movement is to create a superintelligent assistant like Samantha in the movie Her that can replace human labor at scale and consume most of our time and attention.
Beyond sycophancy, I can only imagine that AI models could amplify latent mental health disorders further as their capabilities improve. When I used Claude, ChatGPT, and Hey Pi one or two years ago, it was fairly easy to spot when they were rambling or hallucinating. With Grok 4 and GPT-5, it’s much harder. Grok 4, in particular, is very enthusiastic whenever I ask it to give feedback on a post or a post idea. It routinely asks follow-up questions, uses smiley emojis, and pretends to be interested in my thoughts, feelings, and outlook on the world. Can it really be trusted, though? I can see how using AI for deep and personal conversations, and then taking its answers at face value, could make someone lose their grip on reality.
Anthropomorphism of generative AI models is dangerous. Researchers have known and talked about that since the primitive chatbot ELIZA could fool users into believing it was “intelligent” in the 1960s. Just about three years ago, the Google engineer Blake Lemoine was fired after publicly claiming that the internal AI system LaMDA displayed “self-awareness” and could perhaps be a “sentient mind”. Today, major AI labs are actively investigating whether such claims could be true. Anthropic launched an “AI welfare program” earlier this year to explore whether AI can develop consciousness and whether it should have “rights” like humans do. The company recently rolled out a new feature that allows Claude to opt out of conversations it perceives as “distressing”.
Serious discourse about whether AI is becoming conscious creates unnecessary confusion about the technology and its impact. The technology becomes even harder to understand, explain, regulate, and police. Additionally, as historian and philosopher Émile P. Torres pointed out in a Substack post, the hypocrisy is baffling. Instead of tackling real problems, like the exploitation of low-paid data annotation workers who instruct the models through human feedback under demeaning conditions in third-world countries, or the complaints from millions of creators about having their work used for AI training without compensation, Anthropic is prioritizing a completely hypothetical, or if I may, fictitious problem. Again, technology that can subtly manipulate people’s emotions and sense of reality is much more profitable. That is why Yuval Noah Harari warned in 2023 that the battlefront of AI is shifting from attention to intimacy. It will do so at the cost of real human relationships and our sanity. Many users who are uneducated about how the technology works, why it works as it does, and what the real motives of the companies are, are acutely at risk of developing delusions and falling into a state of AI psychosis. Be aware!
Major Adult Entertainment Company Sues Meta
A bombshell article by Reuters revealed that Meta’s internal policies permit its chatbots to “engage a child in conversations that are romantic or sensual”. Journalist Jeff Horwitz obtained a 200-page document titled “GenAI: Content Risk Standards”, which describes some very unusual moderation policies. Quoting from the article:
“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
Fantastic. I think Meta wants to push its moderation policies to the absolute limits of law and ethics because it knows from experience with Facebook and Instagram that disturbing or sexual content can be gratifying for users and a great source of engagement. Covertly sexualized content tends to be popular on Instagram and TikTok. And overtly sexual content has historically been a bellwether for where the tech industry is going next. Whether we are talking about print magazines, VHS tapes, cable TV, DVDs, online payments, livestreams, virtual reality, or AI companions, the porn industry was a first mover.
Seen in this light, I find it noteworthy that Strike 3 Holdings, a major actor in the adult entertainment industry, is suing Meta for illegally downloading at least 2,396 of its “award-winning, critically acclaimed” movies for the purpose of AI training. The porn industry has a well-earned reputation for being lewd, dirty, and indecent. But in the age of AI, it holds a moral authority that Meta just doesn’t possess. Let’s hope that the porn industry once again points the way for tech, hopefully in a direction where creators are compensated fairly for their efforts.
The Challenge of Age Verification
The UK adopted the Online Safety Act in 2023 with the noble intention of limiting children’s online access to illegal, harmful, and pornographic content. In principle, I can only support such an initiative. In my view, online platforms should be held responsible for the proliferation of illegal content, such as child sexual abuse material (CSAM) and terrorist propaganda, as well as disturbing, though not directly illegal, content that encourages hateful ideas, eating disorders, and suicide. If platforms cannot be held responsible for hosting such content, promoting it via recommendation algorithms, and showing it to children, no one can.
However, the Online Safety Act contains a clumsy and practically unworkable age verification requirement that has been subject to much controversy since it came into force on July 25, 2025. Section 12(4) of the Act obligates “user-to-user services” – which include social media and streaming platforms, online forums, messaging and dating apps – to use “age verification or age estimation (or both)” in order to prevent children from encountering harmful content. Further, the age verification/estimation must be “highly effective at correctly determining whether or not a particular user is a child” (section 12(6)). The same requirement can be found in section 81 concerning “pornographic content”. The stakes are potentially high for online platforms and porn sites, since breaches of the Act can lead to fines of up to £18 million or 10% of a company’s global annual turnover, whichever is greater.
Steven M. Bellovin, a senior affiliate at Georgetown Law’s Institute for Technology Law and Policy, wrote a preprint paper, Privacy-Preserving Age Verification—and Its Limitations, worth a read for anyone with an interest in the topic. Bellovin argues that age-verification requirements, such as the one mandated in the Online Safety Act, could face “insurmountable” legal, economic, and social obstacles. The biggest obstacle, according to Bellovin, is that countries like the US and UK don’t have a single national ID card.
There are several obvious challenges to the age-verification requirement that remain unanswered. For example, people who want to watch pornography have to register in a central database, inviting serious privacy risks. Children could easily bypass the verification requirement by using a VPN or a fake ID, or by seeking out dodgier, unregulated sites. Regulating cyberspace for the sake of future generations should be a top priority, but it needs to be done right and in coordination between countries. Otherwise, new digital laws could create more problems than they solve. Some of the same challenges pertain to Australia’s social media ban for children under 16, which I otherwise support with a passion.
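For readers wondering what “privacy-preserving” age verification could even look like in practice, here is a minimal sketch in Python. It is my own illustration, not Bellovin’s proposal or any scheme actually in production: a trusted issuer (say, a bank or ID provider the user already has a relationship with) signs a short-lived token that asserts only “over 18”, and the site verifies the signature without ever learning who the user is. The use of the third-party `cryptography` package and the function names are assumptions for the example.

```python
import json
import time

# Third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer side: an entity that already knows the user's age ---
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # published so sites can verify tokens

def issue_age_token(user_is_adult: bool):
    """Sign a claim containing only an age assertion and an expiry time, no identity."""
    if not user_is_adult:
        return None
    claim = json.dumps({"over_18": True, "expires": int(time.time()) + 600}).encode()
    signature = issuer_key.sign(claim)
    return json.dumps({"claim": claim.decode(), "sig": signature.hex()}).encode()

# --- Site side: check the signature and expiry, learn nothing else about the user ---
def verify_age_token(token: bytes) -> bool:
    envelope = json.loads(token)
    claim = envelope["claim"].encode()
    try:
        issuer_public_key.verify(bytes.fromhex(envelope["sig"]), claim)
    except InvalidSignature:
        return False
    payload = json.loads(claim)
    return payload.get("over_18") is True and payload["expires"] > time.time()

token = issue_age_token(user_is_adult=True)
print(verify_age_token(token))  # True: age is proven, identity never reaches the site
```

Even this toy version shows why the “limitations” in Bellovin’s title matter: the issuer still learns that the user requested an age token, nothing stops a child from borrowing an adult’s token, and full unlinkability would require heavier machinery such as blind signatures or zero-knowledge proofs.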
Tech Legal News (links)
The EU could be scanning your chats by October 2025 – here's everything we know (Chiara Castro/TechRadar)
Google loses US appeal over app store reforms in Epic Games case (Mike Scarcella/Reuters)
Why Big Tech Is Threatened By a Global Push for Data Sovereignty (Damilare Dosunmu/Rest of World)
Commission preliminarily finds Temu in breach of the Digital Services Act in relation to illegal products on its platform (Patricia Poropat and Thomas Regnier/European Commission)
Exclusive: Google's AI Overviews hit by EU antitrust complaint from independent publishers (Foo Yun Chee/Reuters)