X Warns Against Creator Payouts Over Undisclosed AI War Videos

Decrypt

In brief

  • X product head Nikita Bier said creators posting undisclosed AI-generated war videos will lose access to the platform’s revenue-sharing program for 90 days.
  • The policy targets AI-generated footage that could mislead users during wartime.
  • Researchers and governments have warned that deepfakes could spread propaganda and misinformation online.

Elon Musk’s social media platform X said it will suspend creators from its revenue-sharing program if they post AI-generated videos depicting armed conflict without clearly disclosing that the footage was created with artificial intelligence.

In a post on Tuesday, X’s head of product Nikita Bier said the company is revising its Creator Revenue Sharing policies to maintain authenticity on the platform’s timeline and “prevent manipulation of the program.”

“During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote. “With today’s AI technologies, it is trivial to create content that can mislead people.”

Creators who violate the rule will lose access to the Creator Revenue Sharing program for 90 days, Bier wrote, and repeat violations will lead to permanent removal from the monetization program. The policy change comes as AI-generated videos claiming to show scenes of escalating violence in the Middle East have circulated following missile strikes by the U.S., Israel, and Iran last week. On Monday, an AI-generated clip on X showing an airstrike on the Burj Khalifa in Dubai was viewed over 8 million times, while another version of the clip was viewed over 42,000 times on Instagram.

We fired 1,800 missiles at the Burj Khalifa.

Every single missile hit the target. pic.twitter.com/pdfHWhN2D8

— Mojtaba Khamenei (@MojtabaSpoof) March 2, 2026

The United Nations has warned that deepfakes and AI-generated media threaten information integrity, particularly in conflict zones, where fabricated images or videos can spread hate or misinformation at scale. That concern was realized during Russia’s invasion of Ukraine, when a deepfake video circulated online appearing to show Ukrainian President Volodymyr Zelensky urging Ukrainian troops to surrender. Officials quickly debunked the video, and Zelensky released a message rejecting the claim.

According to Bier, enforcement will rely on several signals, including posts that receive a Community Note identifying a video as AI-generated, along with metadata or other indicators suggesting the footage was produced with generative AI tools. By tying enforcement to monetization, X’s policy targets the financial incentive creators have to post fake videos that drive clicks and views. “We will continue to refine our policies and product to ensure X can be trusted during these critical moments,” Bier wrote.
