“A platform that draws as much scrutiny as it does popularity.” TikTok, a short-form video sharing platform, has consistently faced such mixed assessments. Since its launch in 2017, it has grown into a major app with around 1.2 billion users, yet doubts persist that the platform could be “compelled to hand over data to the Chinese government,” given that its parent company, ByteDance, is based in China. Even after the transfer of TikTok’s U.S. operations to a local investment firm in January, questions about its management continue.
The TikTok Transparency and Accountability Center (TAC) in Singapore’s One North district, which this reporter visited on the 25th of last month, is a facility created to address these concerns. The TikTok Trust and Safety (T&S) team, which operates the center, evaluates whether TikTok content is harmful before it reaches users. Content from Korea is assessed mainly by native Korean-speaking employees. The TAC also serves as an exhibition space, showcasing the company’s commitment to transparency through annual disclosure of its review procedures.
◇Dangerous Items Such as Knives and Cigarettes… AI Makes Decisions in Moments
At the Singapore TAC, visitors can witness firsthand how TikTok’s AI identifies harmful content. When a model knife was brandished threateningly in front of a monitor running TikTok’s AI, the on-screen “dangerous tool index” immediately jumped above 90%. In contrast, when the same knife was held like a microphone to mimic singing, the reading stayed at 0%. A TikTok representative explained, “The AI examines the ‘context’ of potentially dangerous actions,” adding, “89.7% of harmful content globally is removed before anyone ever sees it.”
The standards used to determine harmful content, referred to as “Community Guidelines,” are revised each year by TikTok’s Global Trust and Safety Team (T&S), which includes thousands of experts around the world, and these updates are integrated into the AI system. Thousands of categories, such as “dangerous tools” including beer bottles and knives, “extremist symbols” like ISIS flags, and “drinking, smoking, or criminal activities,” are incorporated into the AI’s evaluation process.
Content the AI flags as too ambiguous to decide is reviewed again by human moderators on the Trust & Safety team. This covers “gray-area” content such as dance videos in revealing attire that involve no sexual acts, or hate speech without violent imagery. The six evaluation categories are: ▲Safety and Public Awareness ▲Mental and Behavioral Health ▲Sensitive Adult Themes ▲Authenticity and Ethics ▲Controlled Items and Business Activities ▲Privacy and Security. TikTok allocates more than 2 billion U.S. dollars (around 2.8 trillion South Korean won) each year to the Trust & Safety team’s safety management budget.

The share of reviews handled by AI has been steadily rising. Still, a local representative said, “AI cannot completely replace human moderators,” adding, “Reviews that capture the cultural nuances of different countries remain a human specialty.” For instance, in the Korean TikTok market, content promoting eating disorders among teenagers and defamatory videos within the growing body of K-pop content have been introduced as new review areas.
For “politically sensitive content,” the area that raises the greatest concern that TikTok operates “according to the preferences of individual governments,” the primary review criterion is currently “harmfulness”: videos are removed based on whether their inflammatory content could lead to false information or violence.
According to TikTok’s transparency report, the number of items removed at the request of the South Korean government rose from 19 in 2022 and 19 in 2023 to 72 in 2024, reaching 113 in the first half of 2025. Although this figure is far lower than that of Türkiye, which filed the most deletion requests in the first half of 2025 (3,340), the rate of increase is notable. TikTok has not disclosed specific reasons for these deletions, but 2024 and 2025, when the South Korean government’s deletion requests rose sharply, coincided with the 22nd National Assembly election, the 9th nationwide simultaneous local elections, and an early presidential election.
A TAC representative also said that fact-checking of false news videos in Korean is handled not within Korea but by “Lead Stories,” an external agency based overseas. This decision stems from the judgment that “South Korea lacks reliable third-party Korean-language fact-checking organizations.”
Accusations that TikTok’s unpredictable rules excessively restrict users’ “freedom of expression” remain a significant issue. A representative from TikTok’s headquarters responded, “While the right to assemble and protest should be protected, whether a given message is suitable for children depends on the situation,” adding, “TikTok believes app guidelines can differ based on ‘who is viewing the content and under what circumstances.’”
◇A Time When AI Monitors AI… “AI Content Filter to Be Implemented in South Korea”
“AI slop” (low-quality AI-generated material), a growing problem across social media platforms, also presents an ambiguous situation for TikTok. Even when AI is used to mass-produce substandard content, strict penalties are difficult to impose if no concrete harm occurs.
Instead, TikTok operates an “AI tagging” system that uses artificial intelligence to detect and label content created by AI. If AI-generated material violates intellectual property rights or spreads false information, TikTok’s T&S team can classify it as harmful. A common example is stolen footage with AI-generated subtitles added.
A TikTok spokesperson also said, “In some countries, a content restriction feature (topic control settings) is being introduced that lets users block AI-generated material from appearing in their feeds on request,” noting, “It will be gradually rolled out in South Korea after initial testing.”