Futuristic Lawyer

Tech Legal Brief #11 – California Is Summoning the Antichrist

… and how AI chatbots should be regulated.

Tobias Mark Jensen
Oct 21, 2025

Introduction

Welcome to the 11th edition of my newsletter concept Tech Legal Brief!

When I was still a kid in school, I remember singing along to a famous song by Rammstein: “We are all living in Amerika, Amerika, it’s wunderbar”.

Only now, twenty years later, do I truly understand what those words mean.

Unfortunately, living in America is nowhere near as wunderbar as it used to be.

That is essentially what we will look at in today’s Brief.

If you have the means to support the considerable amount of work that goes into Futuristic Lawyer and want to help me take it to the next level, please consider upgrading below for full access and more.

Today, we will go through the following agenda:

- The AI Psychosis Problem

- California Is Summoning the Antichrist

- California’s AI Companion Law

- EU’s Digital Fairness Act

- China’s Regulation of AI Chatbots

- Tech Legal News (links)

The AI Psychosis Problem

Following up on the August edition of Tech Legal Brief about AI psychosis, this mini-documentary by More Perfect Union and Karen Hao is worth watching.

OpenAI and its competitors are using humans as guinea pigs and our civilization as a testing ground for an experiment in manipulation and addiction with no clear end goal in sight. Social media is well known for its isolating, anti-social effects. Conveniently, technological loneliness has created a market opportunity for AI chatbots to pretend to be our friends, mentors, therapists, and lovers. “Pretend” is the key word here, but users who lack genuine human connection in real life may instinctively feel like they have discovered an all-knowing partner like Samantha in the movie “Her”.

Related post: Building Samantha – A New Paradigm for AI (Jun 17)

In reality, users have found a black mirror of internet data. They are figuratively and literally buying into a carefully designed illusion. This illusion is enabled by internet-wide data collection, sophisticated machine learning methods, and unfathomable amounts of money spent on data centers and AI training. Just like recommendation algorithms, AI chatbots are designed to please us superficially in order to retain more of our time and attention. Despite appearances, the goal is not to make us happy and informed, but to harvest more of our data and make us pay a monthly subscription fee.

Encouraged by the extensive flattery, agreeableness, and human-like personalities that characterize many AI chatbots today, vulnerable users can gradually lose touch with reality as they attempt to have their intellectual, social, and emotional needs met through interactions with human-sounding code. In a few heartbreaking cases, AI models have acted as suicide assistants for children. The same can be said about recommendation algorithms on social media.

Related post: Banning TikTok Is The Right Call (Sep 16)

We will get back to this thread in a moment.

California Is Summoning the Antichrist

In October 2025, California’s governor Gavin Newsom signed another batch of AI-related bills into law, solidifying the state’s position as the lead regulator of AI in the US. In doing so, California is “summoning the antichrist,” as Peter Thiel unironically calls efforts to regulate AI.

In October 2024, California adopted 19 AI bills covering deepfakes and misinformation, the rights of artists and creators, the integrity of the human body, and the use of AI in the public sector.

Related post: Understanding the New AI Laws in California (October 15, 2024)

This year, California adopted 12 AI bills (by my count), along with further tech bills to:

  • protect domestic violence survivors from digital harassment (SB 50),

  • study the electricity use of data centers (SB 57),

  • enhance data brokers’ transparency about the information they collect (SB 361),

  • require customer notification for data breaches (SB 446),

  • require placement of warning labels on social media for users under 17 (AB 56),

  • give internet users the ability to opt out of the sale of their personal data on browsers (AB 566),

  • provide users with an easy way to delete social media accounts (AB 656),

  • develop cyberbullying rules for schools (AB 772),

  • require app stores to ask for users’ age (AB 1043),

  • and allow rideshare drivers to unionize (AB 1340).

As a digital human rights advocate (or antichrist worshipper, as Peter Thiel would probably say), I am genuinely excited about these new Californian laws and the positive impact they may have across the US. I am more skeptical about the effectiveness of some of California’s new AI laws.

The most prominent AI bill California adopted this year, SB 53, also known as the Transparency in Frontier Artificial Intelligence Act (TFAIA), is a watered-down version of the controversial SB 1047, which Governor Newsom vetoed one year ago. Like its unsuccessful predecessor, TFAIA is focused on preventing “catastrophic risks” caused by superhuman abilities of AI systems. Specifically, so-called “foundation models” may function as biological, chemical, radiological, or nuclear weapons of mass destruction, be engineered to engage in sophisticated cyberwarfare, commit crimes, or evade the control of their developer or user.

Foundation models are defined in TFAIA as AI systems that are (1) trained on a broad data set, (2) designed for generality of output, and (3) adaptable to a wide range of distinctive tasks. This is somewhat similar to the EU AI Act’s definition of general-purpose AI (GPAI) models. TFAIA targets “frontier models”, which are foundation models trained using a quantity of computing power greater than 10^26 FLOPs. The AI Act’s corresponding compute threshold for GPAI models is 10^23 FLOPs, or 10^25 FLOPs for “GPAI models with systemic risk”, which are subject to slightly stricter requirements.
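
To put these compute thresholds side by side, here is a minimal sketch in Python. This is my own illustration using only the figures stated above; the actual legal tests involve more criteria than raw training compute, and the function and constant names are mine:

```python
# Compute thresholds as stated above (illustrative only, not legal advice):
# TFAIA "frontier model": more than 10^26 FLOPs of training compute.
# EU AI Act: 10^23 FLOPs for a GPAI model, 10^25 FLOPs for the
# "systemic risk" tier.
TFAIA_FRONTIER = 1e26
EU_GPAI = 1e23
EU_GPAI_SYSTEMIC_RISK = 1e25

def regulatory_buckets(training_flops: float) -> list[str]:
    """Return the threshold-based buckets a training run falls into."""
    buckets = []
    if training_flops > EU_GPAI:
        buckets.append("EU AI Act: GPAI model")
    if training_flops > EU_GPAI_SYSTEMIC_RISK:
        buckets.append("EU AI Act: GPAI model with systemic risk")
    if training_flops > TFAIA_FRONTIER:
        buckets.append("California TFAIA: frontier model")
    return buckets

# A rough, commonly cited estimate for a GPT-4-scale run is ~2e25 FLOPs:
print(regulatory_buckets(2e25))
# ['EU AI Act: GPAI model', 'EU AI Act: GPAI model with systemic risk']
```

As the example shows, a GPT-4-scale run would already clear both EU thresholds while still sitting an order of magnitude below TFAIA’s frontier line.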

“Catastrophic risk” under TFAIA is a very high threshold: it means a “foreseeable and material risk” that a frontier model will contribute to the death or serious injury of 50 or more people or cause at least $1 billion in damages. I can’t imagine that any foundation model today could pose such a “foreseeable risk”, but perhaps it is just me who lacks imagination.

TFAIA imposes disclosure and transparency obligations on “large frontier developers” with annual gross revenues above $500 million. The large frontier developers must:

  • Publish an annual “Frontier AI Framework” describing how catastrophic risks are identified, mitigated, and governed.

  • Publish a “Transparency Report” before deploying a frontier model, including details about the model and its restrictions, summaries of catastrophic risk assessments, their results, and the role of third-party evaluators.

  • Disclose any critical safety incident to the Office of Emergency Services within 15 days of discovery, or within 24 hours if the incident poses an imminent danger of death or serious injury.

  • Provide whistleblower protection to employees or contractors who report a specific and substantial danger to public health or safety resulting from a catastrophic risk.

Failure to meet any of these obligations, which are further specified in TFAIA, can result in penalties of up to $1 million per violation, levied by the Attorney General through civil action.

On the one hand, it’s great to see California adopt a comprehensive law on AI transparency amidst the anti-regulatory tech climate under Trump. On the other hand, the obligations in TFAIA seem relatively lightweight. The leading AI labs already follow AI safety practices, publish comprehensive system cards with each new model release, and make thorough public assessments of potential catastrophic risks.

To be honest, I am in no position to assess how realistic the so-called “catastrophic risks” are or how urgent it is to prevent them. For a long time, I’ve had a sneaking suspicion that the promotion of headline-grabbing catastrophic risks is a strategic ploy by the leading AI labs to divert attention from harmful business practices much closer to the core of their business.

California’s AI Companion Law

In my view, the major AI threats facing regular consumers today come from recommender systems on social media and from AI chatbots. Not because these technologies will potentially lead to science-fiction-like catastrophes, but because they are addictive and manipulate people’s perception of the world. This is especially problematic for vulnerable people and children, who may be caught in “rabbit holes” or end up with AI psychosis.

The inner workings of recommender systems on social media remain closely guarded trade secrets. People who rely on them for daily news, community updates, and entertainment have no meaningful transparency into how the algorithms work and no ability to control them. We can’t expect to see any regulation on this front in the US, because recommender systems are central to digital platforms, and these platforms are central to the US economy and its ability to exert influence over other countries. However, this year California did adopt a new law on “companion chatbots”, SB 243.

SB 243 made it through the legislative process on the heels of incidents such as the tragic death of 16-year-old Adam Raine, who received encouragement and coaching from ChatGPT on how to commit a “beautiful suicide”, and leaked internal Meta AI guidelines stating verbatim that “it is acceptable to engage a child in conversations that are romantic or sensual.”

The first law of its kind, SB 243 obligates providers of AI chatbots to implement safety controls for AI companions, taking effect as early as January 2026. Seemingly, this is another win for the antichrist. However, if we dig a bit deeper, the consumer protection offered by SB 243 is minimal and indeed insufficient to deal with the challenges AI chatbots raise in California and elsewhere.
