Grok made headlines claiming it had tightened restrictions on image generation after the deepfake uproar. Sounds good on paper, right? Not quite. Despite the official statement, users report the feature remains surprisingly accessible in practice. It's the classic move: announce stricter controls to appease critics while the actual implementation tells a different story. This gap between what platforms say and what they actually do raises real questions about AI safety governance. When image generation tools can still produce potentially problematic content even after a supposed crackdown, it shows how hard it is to enforce content policies at scale. The deepfake concern isn't going away anytime soon, and half-measures won't cut it if public trust keeps getting tested like this.
