AI-generated content rules—how are they shifting policy?
- draft rules set to regulate ChatGPT-like AI systems
- regulatory guidelines poised to limit industry initiatives
- first-tier cities push support of local AI
controlling AI content
ChatGPT and AI-generated content (AIGC) have arrived in the PRC. An array of domestic models generating text, pictures, audio, video, and code have sprung up in recent months. Trying to keep abreast, Beijing issued draft regulations for AIGC systems for public comment in early April. Numerous experts have offered suggestions for improving the rules.
The draft regulations build upon provisions on ‘deepfakes’ issued in December 2022 and algorithm management provisions introduced in March 2022. The striking similarity between the new draft regulations and the existing ones, indeed, makes the need for more rules doubtful, argues Wang Ran 王燃 Tianjin University Law School.
seeking truth from training data
The demanding requirements of articles 4 and 7 of the draft regs spark controversy. They stipulate that ‘content created by generative AI should be true and accurate’, and emphasise the need for training data to maintain ‘truth, accuracy, objectivity, and diversity’. But as ChatGPT users know, while AI excels in many things, veracity is the least of them. Training data scraped from the internet has little chance of meeting such stringent criteria. Several groups of experts hence urge their adjustment or removal.
The draft places burdens on AIGC providers but none on users. It is not the training data or generated content that needs to be true and accurate, says Zan Lingxiao 昝凌霄 Heping District People’s Procuratorate. Using AIGC to spread misinformation is what should be illegal.
To make things worse, the term ‘provider’ is poorly defined, laments Tian Ye 田野 Tianjin University Law School. Is it the developer of the model? What of apps from secondary providers that allow their users to interact with AIGC models? Further categories are needed, argues Tian: direct or secondary users, tech or content providers.
over-regulated...
‘Healthy development and regulated application’ of AIGC is the point, claims CAC (Cyberspace Administration of China). Yet observers find the rules inhibiting rather than encouraging. In an immature industry, some adverse side effects are inevitable and permissible, argues Wang Bin 王彬 Nankai University Law School. Imposed too early, rules hinder an industry from maturing. The state should provide encouragement, he insists. And with rapid advances, rules will quickly be overtaken, some warn.
Article 11 forbids user profiling. Nearly all service providers profile their users, and AIGC should be no different, argues Wang Ran 王燃 Tianjin University Law School. Real-name authentication is another point of contention. Zan Lingxiao comments that if search engines can be used without ID, so should AIGC.
The draft regulation obliges providers to steer users toward responsible use of AIGC, with the aim of preventing over-reliance on AI. AIGC is already a professional tool, says Zhuge Da 诸葛达 Heping District People’s Procuratorate prosecutor; this clause should be removed or limited to recreational AI only.
or under-regulated
Not everyone is calling for laxer rules and warmer support for the industry. ‘Information cocoons’ have alarmed Fang Binxing 方滨兴 Chinese Academy of Engineering, known as ‘the father of China’s Great Firewall’. Content is the concern: AIGC could be used to manipulate its users, says Fang, posing a threat to the state.
Zeng Yi 曾毅, an AI ethics pundit (see profile), underscores the need to moderate AI development. Rapid tech advance, he asserts, outruns state capacity to manage its impact; regulation is always in catch-up mode. AI safety and ethics topped the agenda at a symposium organised by the Beijing Academy of AI on 9-11 June. Speakers included overseas heavyweights Sam Altman, OpenAI CEO, and AI safety advocate Stuart Russell. But halting AI research is impractical, warns Qiu Xipeng 邱锡鹏 Fudan University School of Computer Science and Technology, who leads development of ChatGPT rival MOSS.
whatever next
Some localities have already taken positions on ‘regulating vs promotion’. Beijing, Shanghai, and Shenzhen have swiftly rolled out measures to bolster local AI sectors. For instance, Beijing wants to crowdsource efforts to generate ‘high-quality, safe, compliant’ training datasets. This strategic move may give local firms a head start when the rules kick in and scraping data wholesale from the internet is no longer an option.
Some worry these measures just fuel the ‘AIGC hype’. Disorderly expansion of the sector wastes resources and creates industry bubbles, says Tan Tieniu 谭铁牛, Chinese Academy of Sciences academician and CPPCC (Chinese People's Political Consultative Conference) delegate. Beijing should support basic AI research holistically, he says, and not just blindly jump on the bandwagon of the latest ‘hot topic’ on AIGC.
Time will tell which suggestions CAC will adopt in amendments to the draft. But the final version of the regs will not have the last word. A legislative plan issued by the State Council in early June flagged that an ‘Artificial Intelligence Law’ may soon be in the pipeline.
AI experts
Zeng Yi 曾毅 | Chinese Academy of Sciences Institute of Automation researcher, New Generation AI Governance Professional Committee member, and UNESCO Ad Hoc Expert Group on AI Ethics expert
The AI industry should heed calls for ‘moderating its development and deepening its governance’, argues Zeng. AI companies and state agencies at all levels should set up specialised committees on AI ethics and safety. Ethics should become compulsory in IT curricula, and testing and safety evaluation research should be prioritised over enhancing AI itself. Significant AI risks will likely emerge decades in the future, he says, but the groundwork for ethical safeguards must be laid now. Powerful deep-learning models that dominate the field are inherently unsafe. Future AI models will be judged primarily on their transparency and accountability. This calls for entirely rethinking design ‘from scratch’. If China could pioneer this area of accountable AI, suggests Zeng, it would be a long-term advantage.
Zeng is deputy director of the Research Centre for Brain-inspired Intelligence at the Chinese Academy of Sciences Institute of Automation. Bridging neuroscience and AI with interests in human cognition, ethics and governance, he has contributed to the ‘Beijing Principles’ and governance principles published by the Ministry of Science and Technology. He joined a UNESCO expert group drafting AI ethics guidelines, and a WHO (World Health Organisation) expert group on the Ethics and Governance of Artificial Intelligence for Health.
Zhang Ping 张平 | Peking University Law School professor
Strict rules for training data should be relaxed, argues Zhang. They inevitably collide with copyright law. Zhang suggests using ‘safe harbour rules’ and ex post facto compensation to resolve disputes. Open licensing and collective copyright management may also facilitate the massive licensing of IPRs. It is impractical to expect developers to verify the ‘accuracy’ of training data, she says. Chinese AI needs a flexible legal framework. EU legislation, she notes, is adversarial, setting up trade barriers for US companies. Industry development and tech innovation should be the point of PRC legislation.
Teaching IPR at Peking University Law School since 1991, Zhang added its Institute of AI to her portfolio in 2020. Her interests include IPR, internet law, data trading, and open-source community rules. Visitorships have brought her to Washington, Stanford, and Tokyo. She has served in several advisory bodies to the government, currently as an expert in the Online Dispute Resolution Centre of the China International Economic and Trade Arbitration Commission (CIETAC), an institution under the State Council.
Beijing Academy of Artificial Intelligence (BAAI) | 北京智源人工智能研究院
BAAI is a non-profit research lab. Founded in 2018 by leading firms, universities, and institutes with the support of MoST (Ministry of Science and Technology), it leads research on AIGC large models. The WuDao model, first released in 2020, was the world's largest by parameter count at the time. Its successor, WuDao 3.0, appeared in June 2023. It includes a model evaluation system and generates text, images, and code.
Governance, safety and ethics are a research focus. Its dedicated Research Centre for AI Ethics and Safety co-produced the influential ‘Beijing AI Principles’ in 2019, together with partner institutions including Peking University, Tsinghua University, and institutes under the Chinese Academy of Sciences (not to be confused with the ‘AI 2.0 governance principles’).
Unlike OpenAI, BAAI champions transparency, publishing the source code of most of its models (ChatGPT is closed-source). But it also opposes slowing down AI research (see Zeng Yi profile). Director Huang Tiejun 黄铁军 urges pooling resources to develop powerful AI models domestically.
The institute's annual conference is dubbed the ‘AI spring festival gala’ (AI春晚): the 2023 event showcased international AI celebrities Stuart Russell, Max Tegmark, Geoffrey Hinton, and Sam Altman.