Google’s Mandiant security team warns that North Korean hackers are integrating AI-generated deepfakes into fake video meetings as part of an increasingly sophisticated campaign targeting crypto companies, according to a report released Monday.
Mandiant recently investigated a breach at a fintech company attributed to UNC1069 (also known as “CryptoCore”), a threat actor believed to have close ties to North Korea. The attack involved hijacked Telegram accounts, a fake Zoom meeting, and ClickFix techniques designed to trick victims into executing malicious commands. Investigators also found evidence that AI-generated videos were used in the fake meetings to deceive targets.
According to the report, Mandiant observed UNC1069 deploying these techniques to target both organizations and individuals in the crypto industry, including software companies, developers, venture funds, and their personnel and leadership.
The warning comes amid a continuing rise in crypto thefts linked to North Korea. In mid-December, blockchain analytics firm Chainalysis reported that North Korean hackers stole $2.02 billion in crypto in 2025, a 51% increase over the previous year. The total value of digital assets stolen by Pyongyang-linked groups is estimated at approximately $6.75 billion, even as the number of attacks has declined.
These findings point to a shift in how state-linked cybercriminal groups operate. Instead of running broad phishing campaigns, CryptoCore and similar groups focus on highly personalized attacks that exploit trust in familiar digital interactions such as meeting invitations or video calls. This approach lets hackers steal more value through fewer, more precisely targeted incidents.
According to Mandiant, the attack begins when the victim is contacted via Telegram by someone who appears to be a familiar leader in the crypto space but whose account is actually controlled by hackers. After building trust, the attacker sends a Calendly link to schedule a 30-minute meeting, leading the victim to a fake Zoom call hosted on the group’s infrastructure. In the incident Mandiant investigated, the victim reported seeing a deepfake video of a well-known crypto CEO during the call.
When the meeting starts, the attacker claims there is an audio issue and guides the victim through “troubleshooting” commands, a variation of the ClickFix technique, which execute malware on the victim’s machine. Forensic analysis later uncovered seven different malware strains on the victim’s system, deployed to steal login credentials, browser data, and session tokens for financial theft and impersonation.
Fraser Edwards, co-founder and CEO of decentralized identity company cheqd, says the incident reflects a trend in which hackers increasingly target individuals who rely on online meetings and remote collaboration. He notes that the method’s effectiveness lies in its subtlety: familiar senders, familiar meeting formats, and no attachments or obvious red flags. Trust is exploited before technical defenses can intervene.
Edwards explains that deepfake videos are often deployed during escalation, such as in live calls, where a familiar face can defuse the suspicion raised by unusual requests or technical glitches. The goal isn’t to prolong the interaction but to create just enough realism to prompt the victim to take the next step.
He also emphasizes that AI is now used to support impersonation beyond live calls, including drafting messages, adjusting tone, and mimicking how a known individual communicates with colleagues or friends. This makes everyday messages less suspicious and reduces the likelihood that recipients will pause to verify them.
Edwards warns that risks will continue to grow as AI tools become more integrated into daily communication and decision-making. These systems can send messages, schedule calls, and act on behalf of users at machine speed. If misused or compromised, AI-generated deepfake audio and video could be deployed automatically, transforming impersonation from manual efforts into scalable, automated processes.
He argues that expecting most users to detect deepfakes on their own is unrealistic. Rather than depending on user vigilance, he calls for protections that are on by default, stronger authentication mechanisms, and content authenticity indicators that help users quickly determine whether information is genuine, AI-generated, or unverified, instead of relying on intuition or familiarity.