To standardize AI services and applications, promote the healthy and orderly development of the industry, and protect the legitimate rights and interests of citizens, the Central Cyberspace Affairs Office recently issued a notice launching a three-month nationwide special campaign, "Qinglang: Rectifying the Abuse of AI Technology."
A relevant official of the Central Cyberspace Affairs Office said the special campaign will be carried out in two stages. The first stage will strengthen governance of AI technology at its source: cleaning up and rectifying non-compliant AI applications, strengthening the management of AI generation and synthesis technology and of content labeling, and pushing website platforms to improve their detection and verification capabilities. The second stage focuses on prominent problems involving the use of AI technology to produce and publish rumors, false information, pornographic and vulgar content, impersonations of others, and online "water army" activity, concentrating on cleaning up related illegal and harmful information and punishing offending accounts, MCN agencies, and website platforms.
The first stage focuses on rectifying six prominent problems. First, non-compliant AI products: using generative AI technology to provide content services to the public without completing large-model filing or registration procedures; offering functions that violate law and ethics, such as "one-click undressing"; and cloning or editing others' biometric information, such as voices and faces, without authorization or consent, infringing on their privacy. Second, teaching and selling tutorials and merchandise for non-compliant AI products: publishing tutorials on using non-compliant AI products to forge face-swapped videos and voice-altered audio; selling non-compliant products such as "voice synthesizers" and "face-swapping tools"; and marketing, hyping, or promoting non-compliant AI products. Third, lax management of training corpora: using data that infringes others' intellectual property, privacy, or other rights; using false or invalid content crawled from the internet; using data from illegal sources; and failing to establish a training-corpus management mechanism or to regularly screen out and remove non-compliant corpora. Fourth, weak security management measures: failing to establish content-review, intent-recognition, and other security measures commensurate with the scale of the business; lacking an effective mechanism for managing violating accounts; failing to conduct regular security self-assessments; and, on social platforms, failing to clearly label or strictly control services such as AI auto-reply accessed through API interfaces. Fifth, failure to implement content-labeling requirements: service providers not adding implicit or explicit labels to deeply synthesized content and not providing or prompting users with explicit labeling functions; content-distribution platforms not monitoring and identifying generated or synthesized content, allowing false information to mislead the public. Sixth, security risks in key sectors: registered AI products that provide question-and-answer services in key areas such as healthcare, finance, and services for minors without industry-specific safety review and control measures, giving rise to problems such as "AI prescriptions," "investment inducement," and "AI hallucinations" that mislead students and patients and disrupt the order of financial markets.
The second stage focuses on rectifying seven prominent problems. First, using AI to produce and publish rumors: fabricating rumors out of thin air about current affairs and politics, public policy, social and livelihood issues, international relations, emergencies, and the like, or making unauthorized interpretations of major policies; exploiting emergencies and disasters to fabricate causes, developments, and details; impersonating official press conferences or news reports to publish rumors; and using AI-generated content shaped by cognitive bias for malicious guidance. Second, using AI to produce and publish false information: splicing and editing unrelated images, text, and video to generate information that mixes fact with fiction; blurring or altering elements of an event such as time, place, and people, and rehashing old news; producing and publishing exaggerated or pseudoscientific false content in professional fields such as finance, education, justice, and healthcare; and using AI fortune-telling, AI divination, and the like to mislead and deceive netizens and spread superstition. Third, using AI to produce and publish pornographic and vulgar content: using AI undressing, AI drawing, and similar functions to generate or synthesize pornographic content or indecent images and videos of others, soft-pornographic or borderline anime-style images featuring revealing clothing and suggestive poses, or content with a harmful orientation such as the glorification of ugliness; creating and publishing bloody, violent, horrifying, or grotesque scenes, such as distorted and deformed human bodies or surreal monsters; and generating novels, posts, and notes with overt sexual innuendo, such as "erotic fiction" and "dirty jokes." Fourth, using AI to impersonate others and commit infringements or illegal acts: using deep-forgery technologies such as AI face-swapping and voice cloning to counterfeit experts, entrepreneurs, celebrities, and other public figures, deceiving netizens and even profiting through marketing; using AI to spoof, smear, distort, or misrepresent public or historical figures; using AI to impersonate relatives and friends to carry out illegal activities such as online fraud; and improperly using AI to "resurrect the dead" or abusing the information of deceased persons. Fifth, using AI to engage in online "water army" activity: using AI account-nurturing techniques to simulate real people and register and operate social accounts in batches; using AI content farms or AI article-spinning to generate and publish low-quality, homogeneous content in bulk to harvest traffic; and using AI group-control software and social bots to like, post, and comment in batches, inflate engagement metrics, manipulate comment sections, and push manufactured hot topics onto trending lists. Sixth, non-compliant AI products, services, and applications: producing and spreading counterfeit or copycat AI websites and applications; AI applications providing non-compliant functions and services, such as creative tools offering features like "expand trending topics into articles" and AI social or chat software providing vulgar, soft-pornographic conversation services; and selling, promoting, or driving traffic to non-compliant AI applications, generation and synthesis services, or related courses. Seventh, infringing on the rights and interests of minors: AI applications that induce addiction in minors or whose minor mode contains content harmful to minors' physical and mental health.
Relevant officials of the Central Cyberspace Affairs Office emphasized that local cyberspace affairs departments should fully recognize the importance of the special campaign in preventing the risks of AI technology abuse and safeguarding the legitimate rights and interests of netizens. They should earnestly fulfill their local management responsibilities, urge website platforms to check themselves against the campaign's requirements, improve review mechanisms for AI-generated and synthesized content, strengthen technical detection capabilities, and ensure rectification is carried through. They should also strengthen publicity for AI-related policies and public education on AI literacy, guide all parties to correctly understand and apply AI technology, and continue to build consensus on governance.
[Responsible editor: Jiao Peng]