China accelerates AI agent governance amid emerging security risks

Source: Xinhua

Editor: huaxia

2026-05-14 23:21:15

This photo taken on March 11, 2026 shows the screen of a mobile phone running the open-source AI agent OpenClaw at Wuxing District of Huzhou City, east China's Zhejiang Province. (Photo by Yi Fan/Xinhua)

BEIJING, May 14 (Xinhua) -- China is stepping up efforts to regulate and secure artificial intelligence (AI) agents in response to an increase in vulnerabilities tied to emerging open-source technologies.

On May 8, the Cyberspace Administration of China (CAC), the National Development and Reform Commission and the Ministry of Industry and Information Technology (MIIT) jointly issued guidelines on the standardized application and innovative development of AI agents, stressing the principles of safety and controllability, and of standardized, orderly development.

In April, five central departments including the CAC rolled out regulations on AI anthropomorphic interactive services, establishing a risk-based oversight mechanism that mandates security assessments and algorithm filings, and proposing the construction of an AI sandbox security service platform. This marked the country's first articulation of the AI sandbox governance concept.

Meanwhile, the MIIT and other authorities have released guidelines to standardize tech ethics reviews, requiring AI models to maintain robustness, controllability, transparency and accountability. Authorities are also accelerating the development of a national AI security standard system to set clear ground rules for the industry's sound growth.

According to the China National Vulnerability Database of Information Security (CNNVD), 111 vulnerabilities associated with OpenClaw were recorded between April 14 and April 28 alone. These flaws range from access-control errors to critical code-level issues.

Previously, the National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC) and the MIIT had issued a series of high-level warnings about vulnerabilities tied to OpenClaw. The National Computer Virus Emergency Response Center has also detected a large number of counterfeit OpenClaw skill packages embedded with Trojan malware, posing severe risks to users' data security and system stability.

The security challenges posed by AI agents are increasingly recognized as a global concern. In a recent report, the Open Web Application Security Project (OWASP) Foundation listed agent goal hijacking and tool misuse among the core threats facing AI agents.

"OpenClaw-type agents are likely to become the next generation of operating systems," said Tian Suning, co-founder of AsiaInfo, a leading Chinese cybersecurity tech firm. He noted that as core corporate assets shift from traditional personnel and software to data and agents, the ownership and security of these digital entities have become critical issues.

Chinese tech firms are rapidly developing diverse defense systems to mitigate these risks. Liu Longwei, chief security officer of Tuya Smart, a leading AI cloud platform service provider, said the company has equipped its entire workforce with "digital employees" based on modified versions of OpenClaw, and that AI generated 70 percent of the company's code last year. He acknowledged, however, that this has brought additional security pressure. In response, the company has built six layers of defense, covering areas such as system hardening and supply chain security.

"Allowing employees to run unregulated OpenClaw in the workplace is risky, as it weakens security controls and heightens the threat of data exposure," said Liang Hongwei, a senior tech expert at Alibaba Cloud. He recommended elastic cloud deployment and strict adherence to operational principles that prioritize security and compliance in order to prevent data leakage.

Domestic security vendors are also leveraging their technical expertise to bolster the protection of AI agents. The cybersecurity arm of AsiaInfo has introduced the Agent Trust Framework (ATF), a governance model that integrates the concepts of "agent intent alignment" and "human-AI co-governance." The approach seeks to contain risks arising from AI randomness, ensuring that the productivity gains of AI are realized within compliance boundaries.
