
Claude Identity Verification and AI Privacy

Claude now demands photo IDs. Your face, your passport, your chats. All linked?

Selfie and ID scans just to use an AI? Not with us. Here is how you can protect your identity with local models and privacy-friendly AI assistants like xPrivo.

Hand on heart: You chose Claude because you thought it was the safer alternative.

Perhaps you switched from ChatGPT to Claude in February 2026 when OpenAI announced it was deploying its AI in classified Pentagon networks. Maybe Anthropic’s promise of "responsible AI" impressed you. Or you simply thought: "This is the professional option."

And now Claude might be asking for your ID card.

This is no exaggeration. As of April 14, 2026, Anthropic has published an official page in its Help Center mandating identity verification for certain users, processed by a third-party provider called Persona. You submit a government-issued photo ID. Plus a live selfie. Only then are you allowed to continue prompting. You can read the original text directly in the Claude Help Center.

What is happening at Claude right now

Anthropic has outsourced its verification process to the provider Persona Identities, the same KYC company used by Discord.

Physical identification documents such as passports, driver’s licenses, or national ID cards are accepted. Photocopies, digital IDs, and student IDs, however, are rejected. Additionally, a live selfie may be required.

Anthropic emphasizes that the data is stored at Persona, not on Anthropic’s own servers, and that it is not used for model training. Furthermore, Persona is contractually restricted to using the data exclusively for verification purposes.

That sounds reassuring, but it really isn't.

From the moment of verification, there is a direct, legally actionable connection between your legal name, your date of birth, and your passport photo on one hand, and every prompt, every uploaded document, every medical question, every trade secret, and every political opinion on the other. Disturbing, isn't it?

Names, faces, and dates of birth cannot be changed. Once this data is linked to an AI conversation history, that connection is permanent.


A look towards Strasbourg: What happens when an AI knows the user

Before moving on, it is worth looking at what is already possible today when an AI platform knows or can share a user’s identity.

For example, on April 3, 2026, a 37-year-old man from Strasbourg asked ChatGPT how he could obtain a weapon to "kill a secret agent of the CIA, Mossad, or DGSI." A few hours later, the French elite police command RAID was at his door. He was arrested and released a day later because the public prosecutor saw no serious offense. He had "merely questioned an AI."

He later said himself that he wanted to "test the reliability and surveillance of artificial intelligence." The answer came quickly.

The chain that triggered this RAID visit is worth reading carefully: ChatGPT flagged the message. OpenAI's moderation system escalated it to human reviewers. The FBI was notified and filed a report with PHAROS, France's national platform for reporting illegal online content. French law enforcement acted accordingly. All of this happened within a few hours across two continents, triggered by a single chat message in what most users still believe to be a private conversation.

OpenAI’s own documents state: "If we identify users planning to harm others, we forward their conversations to specialized pipelines. There, they are reviewed by a small team authorized to take action, including notifying law enforcement."

This is not an exception.


Those who were "just joking" learn the hard way

The man from Strasbourg was testing something — or at least claimed to be. The infrastructure behind it does not automatically make that distinction.

OpenAI is an American company operating under US law with a direct reporting pipeline to the FBI. The FBI, in turn, maintains established connections to European law enforcement agencies via platforms such as PHAROS. This chain is active. It worked in real time. Every conversation users have with ChatGPT passes through this pipeline and is automatically scanned, potentially manually reviewed, and possibly forwarded to US federal authorities.

Now, one can imagine what becomes possible when a verified government ID is also stored in the profile. It is not an anonymous account that can be blocked. It is a person with a name, face, and address that can be retrieved directly.

Incidentally, Anthropic refused to provide its Claude AI for mass surveillance of citizens when the US Department of Defense inquired. However, that does not make Anthropic innocent regarding surveillance. The public does not know what deals took place behind closed doors afterwards. Subsequently, the Pentagon turned to OpenAI, the company whose product is used daily by hundreds of millions of people as a personal assistant.


"Persona is reputable." But is it safe enough?

Anthropic chose Persona because of its comprehensive privacy controls and security standards. That phrasing sounds reassuring at first.

Persona seems reputable at first glance. But no third-party provider is immune to data breaches: in October 2025, a leak at Discord exposed around 70,000 government IDs that users had submitted there for age verification. The history of the KYC industry is full of promises that failed against reality.

And even if Persona were technically flawless: Under the US CLOUD Act, the US government can compel American companies to hand over data regardless of where the servers are located. Persona is based in the USA. Anthropic is based in the USA. This data is subject to US law.


The irony that is hard to bear

In February 2026, millions of users switched from ChatGPT to Claude because OpenAI had signed a contract for AI use in classified government networks. Anthropic reported record registration numbers and a 60 percent increase in free users since January. Users' reasoning was the same everywhere: "Claude is the more privacy-friendly alternative."

Two months later, Claude became the first major AI provider ever to require an ID for use.

Those who turned away from one provider for privacy reasons now find themselves with a provider that wants to link legal identity with every AI interaction.


What is an exception today will be the standard tomorrow

In a few years, this might even be the standard: no access to AI assistants without an ID. No anonymous prompting. No private work. Everything linked, everything storable, everything potentially deliverable — in real time, across continents.


And for European users, there is more

As a European, you have special protective rights on paper, namely under the GDPR. But these rights only apply if data processing takes place within the European legal area.

Anyone who sends their passport photo to Persona is sending that data out of Europe. It ends up with a US company, on US servers, under US law. Even if Anthropic claims to act in accordance with GDPR: the actual control over this data does not lie in Brussels. It lies in the USA.

On top of that, most users do not know exactly when verification is triggered. Anthropic speaks vaguely of "certain features," "platform integrity checks," and "security measures." When exactly a user is affected remains unclear. The ID request comes eventually — surprising, unavoidable, mandatory.


The only real way out: Anonymous AI

There are exactly two ways to escape this scenario.

First: Use an AI service that simply never asks for identity. No account. No ID. No link between names and chats.

Second: Operate the AI locally on your own hardware. No cloud provider sees the data. No third party provider holds the ID. No US law can access something that has never left your own computer.

Both paths are possible today with xPrivo.

xPrivo is a European initiative designed to enable people to use AI without disclosing their identity. It can be used immediately on the web without an account, ID, or verification, or it can be downloaded and operated entirely locally on your own machine.

Concretely, this means: no linking of legal identity with prompts, no US company storing conversation histories, no vulnerability to KYC data breaches, and no obligation to surrender data under the US CLOUD Act.

xPrivo can optionally be connected to European inference providers like Scaleway or operated completely offline with local models via Ollama. For example, a small Mistral model or the new Gemma 4 model can be run on your own computer and used directly via xPrivo.
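To illustrate how simple fully local inference is, here is a minimal sketch of querying an Ollama server running on your own machine. It assumes Ollama is installed, listening on its default port 11434, and that a model has been pulled (e.g. with `ollama pull mistral`); the model name is only an example. Nothing in this exchange ever leaves your computer.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    # The /api/generate endpoint expects a JSON body with the model name
    # and prompt; stream=False requests one complete JSON response
    # instead of a stream of chunks.
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask_local(model: str, prompt: str) -> str:
    # Send the prompt to the local server and return the generated text.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage would simply be `ask_local("mistral", "Summarize the GDPR in one sentence.")`. No account, no ID, no third party: the request and the answer exist only on localhost.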


Local models are not yet perfect, but they will be

Yes, local language models do not always reach the performance of the largest cloud models from OpenAI, Anthropic, and Google. That is the honest truth. But the gap is closing faster than expected. Open source models like those from Mistral are already more than sufficient for most everyday tasks today.

The more people switch to local, data-sovereign AI, the faster the ecosystem grows and the faster these models improve. Users should not be mere consumers; they should have a say in the direction AI develops. That is the approach we take at xPrivo, where the needs and requirements of our users always shape development.


Conclusion

Anthropic’s decision to introduce identity verification is a step in the wrong direction.

The window in which AI can be used anonymously — without a passport, without a selfie, without a permanent link of identity to one's own thoughts — is closing. Not because of a law. But because the big providers want it that way.

No one has to accept this.

xPrivo is private by design and European-sovereign by design. No account required. No ID. No tracking. Just you and your AI.

You can try it out immediately on the web at xprivo.com — anonymously and without registration — or download the source code and run it yourself.

The tools for anonymous, sovereign AI use already exist. The only question is whether they will be used before the ID becomes mandatory.