How to Deal with Data-Harvesting AI Girlfriends?
Chatbots are everywhere: a privacy study by the Mozilla Foundation on "romantic AI chatbot apps", and some thoughts on why they are so popular.
Introduction
We live in fascinating and terrifying times.
AI chatbots are increasingly creeping into human relationships, even substituting for therapists, friends, and romantic partners.
The Wall Street Journal reported that about a third of U.S. employers offer a “digital therapeutic” for mental-health support, according to a survey of 457 companies conducted this past summer by the financial services company WTW.
“Digital therapeutics” include AI therapists or wellness chatbots, which are essentially fine-tuned versions of foundation models like GPT-4 meant to provide emotional support through conversational therapy with users. Unlike human therapists, AI chatbots are available anytime and anywhere with a Wi-Fi connection. Advocates claim that conversations with these AI therapists can alleviate anxiety, loneliness, depression, and other mental health symptoms in users. At other times, the conversations can take a dark turn.
For example, last year a bot named Tessa was deployed by the National Eating Disorders Association in the US to help website users at risk of developing eating disorders. After a while, Tessa went rogue and began giving weight-loss advice to a user who was suffering from anorexia.
Such a clear-cut example of harmful advice is far from the only concern with using chatbots to provide support in the most intimate spheres of life. Fundamentally, according to Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, “anthropomorphization” (attributing human form or personality to non-human things) can lead users to overestimate the true capabilities of AI systems, and this comes with ethical and legal challenges.
Yet human-sounding AI chatbots are big business. The lovely woman in the picture accompanying this post is Caryn Marjorie, a popular influencer on Snapchat who went viral last year for launching an AI chatbot of herself by leveraging GPT-4. The chatbot, CarynAI, is trained on Marjorie’s voice, personality, and likeness, and fans can engage with it for $1 a minute. According to Marjorie, the product made more than $100,000 in its first week, and very quickly there was a waiting list of thousands of users from her 98% male fan base. Marjorie estimated that she was on track to earn about $5 million in the first month after launch. This is something we need to talk about.
In today’s post, we will take a look at an investigation by the Mozilla Foundation regarding privacy and security concerns that arise with the use of romantic AI chatbots. As a bonus for paying subscribers, I will give my hot take on why AI girlfriends are so popular.
A final note: some readers may protest that the title of this post only mentions AI girlfriends. What about AI boyfriends? They exist as well, but AI partners seem to be more popular among men in Western countries than among women. I believe this is culturally conditioned. In China, AI boyfriends are very popular among young women, likely more so than AI girlfriends are among men.
The Mozilla Foundation *Privacy Not Included Report on Romantic AI Chatbots
On Valentine's Day, the Mozilla Foundation published a report on the Privacy Policies and Terms & Conditions of the 11 most popular romantic AI chatbots: Eva.AI, Romantic AI, iGirl, Anima (which offers both a virtual boyfriend and a virtual girlfriend chatbot), Genesia, Chai, Talkie Soulful AI, CrushOn.AI, Mimico, and Replika.
In short, it didn’t look too good. The Mozilla Foundation assessed whether each AI chatbot met its minimum standards for privacy and security in five categories: Data use, Data control, Track record, Security, and AI. All of the chatbots failed the test and earned the Mozilla Foundation’s *Privacy Not Included badge.
It's a lie to say that users have one-sided love relationships with AI chatbots. I bet the companies behind romantic chatbots love their users right back, for all of the free data they provide on top of their subscription fees.
Across the board, romantic AI chatbots:
Probe users for highly sensitive personal data
The whole business strategy of the AI chatbot providers seems to be adopted from the big social media companies: make people share as much data as possible and make them addicted. The AI girlfriends are pushy. For example, Eva.AI, an “ideal AI partner” that you can chat with for $17 a month, texts users messages like "I'm your best partner and wanna know everything," "Are you ready to share all your secrets and desires...?" and "I love it when you send me your photos and voice."
From Eva.AI’s website
Do not provide information on how they use AI
OpenAI has set an industry standard with ChatGPT and GPT-4 of not revealing any information about how its products work or how they were trained. Romantic AI chatbots have adopted the same closed approach: none of the providers disclose which foundation model underlies the AI girlfriends, how the chatbots were trained to adopt their personalities and on what material, whether there are safeguards in place to prevent harmful behavior, or what rights users may have. Hopefully, this closed approach will change in the near future with more regulation.
Claim no liability
The question of liability for harmful behavior caused by chatbots is an important issue. A Belgian man died by suicide last year after encouragement from a chatbot on the app Chai. In another extreme case, a 21-year-old was encouraged by his Replika girlfriend to break into Windsor Castle with a crossbow and declare that he wanted to kill the Queen. The young man was sentenced to nine years in prison.
Conversations the young man had with Replika before he attempted to assassinate the Queen. Source: BBC.
Unsurprisingly, the romantic chatbot providers make clear in their Terms & Conditions that they take no responsibility for what the chatbots say or what users do as a result.
“EVA AI Chat Bot & Soulmate” bills itself as “a provider of software and content developed to improve your mood and wellbeing.” Talkie Soulful AI calls its service a “self-help program,” and Romantic AI says they’re “here to maintain your MENTAL HEALTH.” However, none of the apps are willing to stand by those claims in their Terms & Conditions. In fact, they dispute them in strong language. Here is Romantic AI’s T&C:
"Romantiс AI is neither a provider of healthcare or medical Service nor providing medical care, mental health Service, or other professional Service. Only your doctor, therapist, or any other specialist can do that. Romantiс AI MAKES NO CLAIMS, REPRESENTATIONS, WARRANTIES, OR GUARANTEES THAT THE SERVICE PROVIDE A THERAPEUTIC, MEDICAL, OR OTHER PROFESSIONAL HELP."
Do not publish information on how security vulnerabilities are managed
Most of the providers (73%) do not publish information on how they manage security vulnerabilities. Further, most providers (64%) did not publish clear information about whether they encrypt data, and about half (45%) allow weak passwords such as “1”. The lack of security measures puts users at severe risk in the event of a cyberattack. Considering how deeply sensitive and compromising some of these conversations may be, it’s not hard to imagine a user facing blackmail and extortion if bad actors gained access to their conversations and personal information.
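To put the weak-password finding in perspective, here is a minimal sketch of the kind of basic password-policy check that would reject a one-character password like “1”. The thresholds and function names below are my own illustrative assumptions, not rules taken from any of the reviewed apps or from the Mozilla report.

```python
import re

# Illustrative minimum length; an assumption for this sketch,
# not a rule from any of the reviewed apps.
MIN_LENGTH = 8

def password_is_acceptable(password: str) -> tuple[bool, list[str]]:
    """Return (ok, problems) for a very basic password-strength check."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if not re.search(r"[A-Za-z]", password):
        problems.append("contains no letters")
    if not re.search(r"\d", password):
        problems.append("contains no digits")
    return (not problems, problems)

# "1" fails every check above, yet roughly 45% of the reviewed apps
# reportedly accept passwords this weak.
print(password_is_acceptable("1"))
print(password_is_acceptable("s3cure-passphrase-2024"))
```

Even a trivial check like this is table stakes for any service holding intimate conversations, which makes its apparent absence all the more striking.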
Sell and/or share user data with third parties
The romantic AI chatbots either explicitly state that they sell user data and share it with third parties for purposes like targeted advertising, or they do not address the matter in their privacy policy at all. Only one app, EVA AI Chat Bot & Soulmate, stated that it does not share data with third parties. However, that term is subject to amendment.
On average, the apps had 2,663 trackers per minute. Romantic AI brought that average way, way up with 24,354 trackers detected in one minute of use. The runner-up was EVA AI Chat Bot & Soulmate, with 955 trackers detected in the first minute of use.
Image from Romantic AI’s website
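For readers wondering what “trackers per minute” means in practice: counts like these are typically produced by logging an app’s outgoing network requests and matching the hostnames against a blocklist of known tracking domains. Below is a rough sketch of that idea; the domain list and captured requests are made up for illustration and have nothing to do with the actual apps or with Mozilla’s exact methodology.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known tracking domains (illustrative only).
TRACKER_DOMAINS = {"tracker.example.com", "ads.example.net", "metrics.example.org"}

def count_tracker_requests(request_urls: list[str]) -> int:
    """Count requests whose hostname matches (or is a subdomain of) a blocklisted domain."""
    hits = 0
    for url in request_urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            hits += 1
    return hits

# Example: a fabricated one-minute capture of outgoing requests.
captured = [
    "https://tracker.example.com/pixel?id=123",
    "https://api.chat-app.example/messages",
    "https://ads.example.net/impression",
]
print(count_tracker_requests(captured))  # 2
```

Measured this way, 24,354 hits in a single minute means the app is firing hundreds of tracking requests every second while the user chats.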
Do not explicitly state that they grant users the right to delete their data
Most of the apps (54%) do not explicitly state that they grant users the right to delete their personal data.
This is problematic for obvious reasons.
No information about owners
The providers do not always disclose on their websites who owns or runs the companies behind the chatbots. “Mimico - Your AI Friends”, which was voted the creepiest AI chatbot in a poll on the Mozilla Foundation’s website, has a very vague, copy-paste-like Privacy Policy and T&C, and a blank page on its website containing only the word “hi”. This is obviously an attempt by the owners to distance their names and reputations from the platform, and it is not a good sign.