 
AI companies in China are currently undergoing rigorous government reviews of their large language models to ensure they align with “core socialist values,” according to a report by the Financial Times.
Review by the Cyberspace Administration of China (CAC): The CAC, the country’s primary internet regulator, is leading this review. The evaluation covers a range of companies, from major tech firms like ByteDance and Alibaba to smaller startups. The review process includes testing AI models on their responses to politically sensitive questions and topics related to Chinese President Xi Jinping. The training data and safety processes behind these models are also being scrutinized.
Challenges Faced by AI Companies: An anonymous source at a Hangzhou-based AI company said their model initially failed the review for unspecified reasons, and passed only after months of “guessing and adjusting.” This highlights the difficulty companies face in aligning their models with the government’s stringent requirements.
Balancing Innovation and Censorship: The CAC’s actions reflect China’s attempt to balance its advancements in generative AI with strict adherence to its internet censorship policies. China was among the first countries to establish regulations for generative AI, mandating that AI services promote “core values of socialism” and avoid generating “illegal” content.
Censorship and Filtering Mechanisms: Meeting these censorship requirements involves “security filtering,” which is challenging given that Chinese large language models (LLMs) are often trained on significant amounts of English-language content. This filtering process includes removing “problematic information” from training data and creating databases of sensitive words and phrases.
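At its simplest, the filtering stage described above amounts to screening training documents against a blocklist of sensitive terms. The sketch below illustrates the general idea; the term list, documents, and function names are invented for illustration and do not reflect any real blocklist or pipeline used by Chinese AI companies.

```python
# Minimal sketch of keyword-based filtering of training data.
# SENSITIVE_TERMS and the sample corpus are hypothetical placeholders.

SENSITIVE_TERMS = {"forbidden_topic", "banned_event"}

def filter_documents(docs):
    """Drop any training document containing a sensitive term."""
    kept = []
    for doc in docs:
        text = doc.lower()
        if not any(term in text for term in SENSITIVE_TERMS):
            kept.append(doc)
    return kept

corpus = [
    "A neutral article about weather patterns.",
    "Discussion of the banned_event and its aftermath.",
]
clean = filter_documents(corpus)
```

Real systems would be far more elaborate (phrase variants, transliterations, classifier models rather than literal matching), but the data-scrubbing step reduces to this shape: remove matches before training ever begins.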
Impact on Chatbot Responses: As a result of these regulations, popular Chinese chatbots frequently decline to answer questions on sensitive topics like the 1989 Tiananmen Square protests. During CAC testing, however, models are limited in the number of questions they can outright decline to answer. Therefore, they must generate “politically correct answers” to sensitive inquiries.
Technological Adjustments: An AI expert working on a chatbot in China explained that it’s difficult to entirely prevent LLMs from generating potentially harmful content. Consequently, additional layers are added to the system to replace problematic answers in real-time.
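Such an output-side layer can be pictured as a wrapper that inspects each generated answer and, if it is flagged, substitutes a canned response before the user sees it. This is a hedged sketch of that pattern only: the `generate` stub, `FLAGGED_TERMS` list, and canned reply are invented placeholders, and a real deployment would use a trained moderation model rather than keyword matching.

```python
# Sketch of a real-time answer-replacement layer around a chatbot.
# The keyword check stands in for a real moderation classifier, and
# generate() stands in for an actual LLM call; both are hypothetical.

CANNED_REPLY = "I cannot answer that question."
FLAGGED_TERMS = {"sensitive_topic"}  # hypothetical blocklist

def moderate(answer: str) -> str:
    """Replace a flagged answer with a canned response."""
    if any(term in answer.lower() for term in FLAGGED_TERMS):
        return CANNED_REPLY
    return answer

def generate(prompt: str) -> str:
    # Stand-in for the underlying language model.
    return f"Model answer about {prompt}"

def chatbot(prompt: str) -> str:
    # The moderation layer sits between the model and the user.
    return moderate(generate(prompt))
```

The design point is that the base model is left untouched; compliance is enforced by intercepting and swapping its output at serving time, which matches the expert’s description of layers added on top of the system.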
Challenges from U.S. Sanctions: The combination of stringent regulations and U.S. sanctions, which have restricted access to chips necessary for training LLMs, has made it challenging for Chinese firms to develop and launch ChatGPT-like services. Despite these obstacles, China leads globally in generative AI patents.
Conclusion: China’s approach to regulating AI highlights the country’s efforts to advance its technology while ensuring it adheres to the government’s ideological standards. This balancing act presents significant challenges for AI companies striving to innovate within the confines of stringent regulatory frameworks.
 
