
TikTok has announced updates to its Community Guidelines, which aim to enhance user safety, tackle AI misinformation, and offer clearer, more accessible language.
In response to user feedback calling for simpler explanations and clearer definitions, TikTok has introduced a “rules-at-a-glance” section summarising each policy. These changes simplify the existing rules and aim to better meet community expectations.
While many updates focus on language clarity, some noteworthy adjustments have been made. TikTok is consolidating its policies related to gambling, alcohol, tobacco, drugs, firearms, and dangerous weapons into a single regulated goods and services policy. Other policies, such as those regarding bullying, have also been refined.
A significant revision concerns the platform’s stance on AI-generated content. TikTok previously prohibited AI content that misrepresented authoritative sources, crises, or public figures. The standard has now been broadened to disallow any AI-generated content that is misleading about matters of public importance or could harm individuals. Notably, the prior ban on AI endorsements has been removed.
New guidelines state that TikTok LIVE creators are fully accountable for everything occurring during their streams, including third-party tools like voice-to-text software and real-time translation services. For instance, the creator bears responsibility if a voice-to-text tool generates harmful content.
Additionally, TikTok has updated its search functionality, revealing that search recommendations are personalised based on users’ previous searches and viewed content. The platform has also clarified how comments are sorted, considering various factors, including past user interactions, likes, and reports.
There’s also an update regarding commercial content: brands must disclose any promotional material. In regions where TikTok Shop operates, TikTok will limit the visibility of content that directs users to purchase products off-platform.
TikTok emphasises its commitment to effective moderation practices. It encourages users to report any violations they encounter. The platform stated, “We continue to invest in moderation technologies, including AI, to enhance our proactive enforcement approach. Over 85% of the content removed for Community Guidelines violations is now identified and removed automatically, with 99% of that content taken down before receiving user reports.”
These updated guidelines will take effect on September 13th, 2025. Users can also compare the old and new versions of the guidelines side by side.