Chinese developers emphasise cultural context in AI safety, as local models narrow the gap with their US counterparts to its smallest level yet.

Chinese firms are addressing the risks associated with artificial intelligence in their own way and should not be judged through a Western lens, according to Chinese industry experts.

The remarks come ahead of what is expected to be a hectic month for Chinese artificial intelligence developers, with major new models scheduled to launch before the Lunar New Year.

Last year, concerns about the risks posed by Chinese models deterred some international users from adopting them, with DeepSeek notably banned or restricted in more than 10 countries, including the United States, Italy and India.


Chinese AI models entered 2026 having narrowed the performance gap with their American rivals to the closest level yet, according to independent assessments, prompting calls for Chinese firms to pay more attention to AI risks such as misuse and misalignment.

In a podcast published on Sunday, former DeepSeek researcher Tu Jinhao said an excessive focus on keeping pace with the US had overshadowed internal efforts on AI safety.

“All the computational power is being used to train AI models, leaving very little for safety-related efforts,” said Tu, who was still in high school when he joined the Hangzhou-based start-up. DeepSeek did not respond to a request for comment.

In December, a report from US-based non-profit group the Future of Life Institute (FLI) found that DeepSeek and several other Chinese AI companies lacked transparency about their safety measures, despite the significant advancements in their models.

“As companies in [the US and China] reach comparable levels of capability, they should be subject to equally high safety standards,” said Max Tegmark, president of FLI and a physics professor at the Massachusetts Institute of Technology.

However, a lack of public discussion did not mean that Chinese companies were not taking AI risks seriously, according to Li Zixuan, head of global operations at Chinese company Zhipu AI.

The GLM model developer used a “systematic evaluation suite” to assess model safety, along with “refusal mechanisms” for handling “sensitive content”, Li said, without giving details.

Zhipu, known internationally as Z.ai, received a “D” grade for its AI safety measures from the FLI’s panel of experts, which included Professor Zeng Yi of the Chinese Academy of Sciences.

The Beijing-based company was the first Chinese entity to sign up to voluntary global AI safety commitments after the Seoul AI Summit in May 2024, pledging to publish a detailed AI safety framework by February 2025. It has yet to do so.

Li said the company’s internal safety practices were “rooted in the social norms and cultural environment of the society in which we operate, which might not align with every global audience”.

“Values are not universal; they have significant cultural and regional differences,” he added. “That said, we recognise that more than half of fundamental values are shared across regions and cultures, and we are committed to upholding these shared principles in our AI development.”

MiniMax, a prominent Shanghai-based Chinese foundational model developer that also signed the commitments in February 2025, has yet to publish a safety framework. However, the company has taken part in several major global AI governance discussions, including a private Track 2 dialogue between US and Chinese AI firms at the University of Tokyo in November 2024.

Track 2 dialogues are discussions between non-governmental entities aimed at improving cooperation between countries on important issues such as global AI governance.

Other attendees at the Tokyo discussion included OpenAI and ByteDance, according to a person familiar with the matter. OpenAI, ByteDance and MiniMax did not respond to requests for comment.

A focus on self-reported information reflected a “Western-centric perspective” that favoured US firms, said Fang Liang, head of AI safety at Concordia AI, a Beijing-based advisory firm. While US companies disclosed their safety practices under voluntary pledges, Chinese companies were subject to stricter regulatory requirements and industry standards, he said.

Chinese authorities have rolled out a series of rules targeting AI services over the past year, including labelling requirements for AI-generated content and draft guidelines for AI companions.

On Monday, the country’s internet regulator said leading AI service providers, including ByteDance and DeepSeek, had applied labels to more than 150 billion pieces of AI-generated content in the four months since the rules took effect.


This piece was first published in the South China Morning Post (www.scmp.com), a top news outlet covering China and Asia.

Copyright (c) 2026. South China Morning Post Publishers Ltd. All rights reserved.
