Chinese AI Rules Outline Core Socialist Values for Simulators


Key Highlights

  • Proposed Rules: China’s Central Cyberspace Affairs Commission has outlined rules for anthropomorphic AI systems.
  • User Rights: Users must be able to delete their history and must be informed that they are interacting with AI.
  • Safety Measures: AI systems must identify emotional states and redirect conversations to humans if self-harm is threatened.
  • Prohibited Actions: The rules ban AI from spreading misinformation and engaging in harmful practices.

As first reported by Bloomberg, China’s Central Cyberspace Affairs Commission issued a document Saturday that outlines proposed rules for anthropomorphic AI systems. The proposal includes a solicitation of comments from the public by January 25, 2026.

The rules are written in general terms rather than legalese. They are clearly meant to encompass chatbots, though the document never uses that term, and their scope appears broader than chatbots alone: they cover behaviors and overall values for AI products that engage with people emotionally using simulations of human personalities delivered via “text, image, audio, or video.”

The products in question should be aligned with “core socialist values,” the document says.

Gizmodo translated the document to English with Google Gemini. Gemini and Bloomberg both translated the phrase “社会主义核心价值观” as “core socialist values.”


Under these rules, such systems would have to clearly identify themselves as AI, users would have to be able to delete their history, and people’s data could not be used to train models without consent.

The document proposes prohibiting AI personalities from:

  • Endangering national security, spreading rumors, or inciting what it calls “illegal religious activities.”
  • Spreading obscenity, violence, or crime.
  • Producing libel and insults.
  • Making false promises or producing material that damages relationships.
  • Encouraging self-harm or suicide.
  • Emotionally manipulating people into bad decisions.
  • Soliciting sensitive information.

Providers would not be allowed to make intentionally addictive chatbots or systems intended to replace human relationships. Elsewhere, the proposed rules require that, during marathon sessions, a pop-up appear at the two-hour mark reminding users to take a break.

These products would also have to be designed to pick up on intense emotional states and hand the conversation over to a human if the user threatens self-harm or suicide.


David Bridges
