X has announced a significant revision to its Creator Revenue Sharing rules, targeting the growing spread of undisclosed AI-generated content. Under the new policy, users who publish AI-generated videos depicting armed conflicts without clearly labeling them as AI-created will face a 90-day suspension from the platform’s monetization program.

X Creator Revenue Sharing rules now penalize undisclosed AI content

The update is designed to protect authenticity and reduce the risk of misleading or fabricated conflict footage circulating on the platform. As AI video generation tools become more advanced and accessible, distinguishing real-world events from synthetic media has become increasingly difficult.

Under the revised guidelines, creators must clearly disclose when a video showing war, military action, or armed conflict has been generated or significantly altered using AI tools. Failure to do so will not necessarily trigger a full account suspension, but it will immediately remove the creator from revenue sharing eligibility for 90 days.

The penalty specifically targets monetization privileges, signaling that X is tying financial incentives to responsible disclosure.

AI-generated war videos raise misinformation concerns

AI-generated conflict footage has become a growing concern for platforms, governments, and media watchdogs. Hyper-realistic synthetic videos can mislead viewers, fuel panic, or influence public opinion, particularly during ongoing geopolitical crises.

By focusing on armed conflict content, X appears to be addressing one of the most sensitive and high-risk categories for misinformation. The company’s move suggests a recognition that undisclosed synthetic war footage poses greater reputational and societal risks than AI-generated entertainment content.

Monetization enforcement signals stricter authenticity standards

The 90-day revenue suspension introduces a tangible consequence for non-compliance, reinforcing that creator earnings are contingent on transparency. While X has not indicated whether additional penalties could apply for repeated violations, the update signals a tightening approach to content authenticity.

As AI tools continue to reshape digital media, platforms are under pressure to balance innovation with accountability. X’s latest policy change positions monetization rules as a lever to encourage clearer labeling and maintain trust among users.

The effectiveness of the update will likely depend on how consistently it is enforced and how quickly creators adapt to the new AI disclosure standards.