‘AI Girlfriends’ Are a Privacy Nightmare

You shouldn’t trust any answers a chatbot sends you. And you probably shouldn’t trust it with your personal information either.

That’s especially true for “AI girlfriends” or “AI boyfriends,” according to new research.

An analysis of 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, found a litany of security and privacy concerns with the bots.

Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people’s data.

They use trackers that send information to Google, Facebook, and companies in Russia and China; they let users set weak passwords; and they lack transparency about their ownership and the AI models that power them.

Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to.

Mozilla research

The Mozilla research provides a glimpse into how this gold rush may have neglected people’s privacy, and into the tension between these emerging technologies and the way they gather and use data.

It also indicates how people’s chat messages could be abused by hackers.

Many “AI girlfriend” and romantic chatbot services look similar. They often feature AI-generated images of women that are sexualized or sit alongside provocative messages.

Mozilla’s researchers looked at a variety of chatbots including large and small apps, some of which purport to be “girlfriends.” Others offer people support through friendship or intimacy, or allow role-playing and other fantasies.

“These apps are designed to collect a ton of personal information,” says Jen Caltrider, the project lead for Mozilla’s Privacy Not Included team, which conducted the analysis.

“They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.”

For instance, screenshots from the EVA AI chatbot show text saying “I love it when you send me your photos and voice,” and asking whether someone is “ready to share all your secrets and desires.”

Concerns mount up

Caltrider says there are multiple issues with these apps and websites.

Many of the apps may not be clear about what data they share with third parties, where they are based, or who created them, Caltrider says.

She adds that some allow people to create weak passwords, while others provide little information about the AI they use. The apps analyzed all had different use cases and weaknesses.

Take Romantic AI, a service that allows you to “create your own AI girlfriend.” Promotional images on its homepage depict a chatbot sending a message saying, “Just bought new lingerie. Wanna see it?”

The app’s privacy documents, according to the Mozilla analysis, say it won’t sell people’s data.

However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.”

Romantic AI, like most of the companies highlighted in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps monitored had hundreds of trackers.

Lack of clarity

In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information.

“The legal documentation was vague, hard to understand, not very specific—kind of boilerplate stuff,” Caltrider says, adding that this may reduce the trust people should have in the companies.

It is unclear who owns or runs some of the companies behind the chatbots.

The website for one app, called Mimico—Your AI Friends, includes only the word “Hi.”

Others do not list their owners or where they are located, or just include generic help or support contact email addresses.

“These were very small app developers that were nameless, faceless, placeless,” Caltrider adds.

  • Matt Burgess is a senior writer at WIRED focused on information security, privacy, and data regulation in Europe.