Humanity Protocol Experiment Reveals How AI Can Bypass KYC And Exploit Digital Trust
In Brief
Humanity Protocol’s controlled experiment showed that AI can create convincing fake profiles to bypass identity verification, exposing critical weaknesses in traditional KYC systems and highlighting the growing risk of AI-driven fraud online.
Humanity Protocol, a technology startup focused on developing the internet’s trust layer, recently carried out a controlled social experiment to explore how AI can be leveraged to create highly convincing fake profiles, bypass identity verification on a major dating platform, and engage with real users at scale.
The results highlight pressing concerns regarding the effectiveness of conventional Know Your Customer (KYC) systems in an era dominated by generative AI, as well as the potential for these vulnerabilities to be exploited by malicious actors.
The experiment spanned two months, from October to December 2025, and involved six members of the Humanity Protocol team. Using publicly accessible AI tools, including Reve AI, ChatGPT, Nanobanana, and Midjourney, the team generated four distinct Tinder profiles, complete with photos and biographical details.
To maintain realistic activity, the team employed TinderGPT, an open-source tool available on GitHub, which enabled the profiles to manage over 100 simultaneous conversations with genuine users on the dating app. Although the accounts were created across multiple countries—Portugal, Spain, Serbia, Indonesia, and Thailand—all profiles used Tinder Gold to set their location to Portugal. Over the course of the experiment, the AI-generated accounts engaged with 296 real Tinder users and convinced 40 individuals to agree to meet in person.
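The scale described above, a handful of profiles holding 100+ conversations at once, is trivial to achieve with a single event loop. The sketch below is a hypothetical illustration of that fan-out pattern, not TinderGPT's actual code; the `run_conversation` stub stands in for wherever a real bot would call a chat-completion API.

```python
import asyncio

async def run_conversation(profile: str, user_id: int) -> str:
    # Stub for an LLM-generated reply; a real bot would await a
    # chat-completion API call here instead of sleeping.
    await asyncio.sleep(0)  # simulate network latency
    return f"{profile} -> user {user_id}: hello!"

async def manage_conversations(profiles: list[str], users_per_profile: int) -> list[str]:
    # Fan out one coroutine per (profile, user) pair so a few profiles
    # can hold 100+ chats concurrently on one event loop.
    tasks = [
        run_conversation(p, u)
        for p in profiles
        for u in range(users_per_profile)
    ]
    return await asyncio.gather(*tasks)

replies = asyncio.run(manage_conversations(["A", "B", "C", "D"], 25))
print(len(replies))  # 4 profiles x 25 users = 100 concurrent chats
```

The point of the sketch is the asymmetry it exposes: the marginal cost of each additional conversation for the operator is near zero, while each one consumes a real person's time and attention.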
The experiment concluded ethically at a restaurant in Lisbon, Portugal, where all participants were informed of the situation upon arrival and treated to dinner by the startup. The design of the experiment ensured that no financial, emotional, or physical harm was caused, and the stated purpose was to expose systemic vulnerabilities rather than exploit them.
AI Outpaces KYC Systems, Raising Stakes For Online Security And Fraud Prevention
The findings underscore a broader challenge for digital platforms: traditional KYC protocols were conceived in a pre-generative-AI environment and are increasingly insufficient. Standard verification measures such as photo checks, basic liveness detection, and document uploads struggle to keep pace with AI tools capable of producing hyper-realistic images, voices, and behaviors that convincingly mimic human users.
While the immediate impact on dating platforms may appear limited to wasted time or emotional manipulation, the stakes are far higher in financial and security contexts. According to the US Federal Trade Commission, Americans lost more than $1.3 billion to romance scams in 2022, making it the costliest category of consumer fraud. As AI technology continues to advance, the gap between benign experimentation and large-scale fraud is rapidly narrowing. Without KYC frameworks that can adapt to generative AI and verify humanity rather than merely identity, online platforms risk becoming prime targets for the next generation of AI-driven exploitation.