Grok made headlines by claiming it had tightened restrictions on image generation after the deepfake uproar. Sounds good on paper, right? Not quite. Despite the official statement, users report that the feature remains surprisingly accessible in practice. It's the classic move: announce stricter controls to appease critics while the actual implementation tells a different story.

This gap between what platforms say and what they actually do raises real questions about AI safety governance. When image generation tools can still produce potentially problematic content even after a supposed crackdown, it shows how hard it is to enforce content policies at scale. The deepfake concern isn't going away anytime soon, and half-measures won't cut it if public trust keeps getting tested like this.