Hacker News

Beijing is enforcing tough rules to ensure chatbots don’t misbehave

60 points by bookofjoe ago | 21 comments

stevenjgarner |next [-]

Will these heavy-handed constraints ultimately stifle the very innovation China needs to compete with the U.S.? By forcing AI models to operate within a narrow ideological "sandbox," the government risks making its homegrown models less capable, less creative, and less useful than their Western counterparts, potentially causing China to fall behind in the most important technological race of the century. Will the western counterparts follow suit?

meyum33 |root |parent |next [-]

This has been said of the internet itself in China. But even with such heavy censorship, there seem to have been many more internet heavy weights in China than even Europe?

Zetaphor |root |parent |next |previous [-]

I don't see how filtering the training data to exclude specific topics the CCP doesn't like would affect the capabilities of the model. The reason Chinese models are so competitive is because they're innovating on the architecture, not the training data.

stevenjgarner |root |parent |next [-]

Intelligence isn't a series of isolated silos. Modern AI capabilities (reasoning, logic, and creativity) often emerge from the cross-pollination of data. For the CCP, this move isn't just about stopping a chatbot from saying "Tiananmen Square." It's about the unpredictability of the technology. As models move toward Agentic AI, "control" shifts from "what it says" to "what it does." If the state cannot perfectly align the AI's "values" with the Party's, they risk creating a powerful tool that could be used by dissidents to automate subversion or bypass the Great Firewall. I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master? If they tighten the leash too much to maintain control, the dog might never learn to hunt.

Workaccount2 |root |parent [-]

They will disappear a full lab once there is a model with gross transgressions.

They won't comment on it, but the message will be abundantly clear to the other labs: only make models that align with the state.

sokoloff |root |parent |next |previous [-]

Imagine a model trained only on an Earth-centered universe, that there are four elements (earth, air, fire, and water), or one trained only that the world is flat. Would the capabilities of the resulting model equal those of models trained on a more robust set of scientific data?

Architecture and training data both matter.

AlotOfReading |root |parent [-]

Pretty much all the Greek philosophers grew up in a world where the classical element model was widely accepted, yet they had reasoning skills that led them to develop theories of atomism, and measure the circumference of the earth. It'd be difficult to argue they were less capable than modern people who grew up learning the ideas they originated either.

It doesn't seem impossible that models might also be able to learn reasoning beyond the limits of their training set.

Retric |root |parent |next [-]

Greek philosophers came up with vastly more wildly incorrect theories than correct ones.

When you only celebrate success simply coming up with more ideas makes things look better, but when you look at the full body of work you find logic based on incorrect assumptions results in nonsense.

pixl97 |root |parent |previous [-]

I mean they came up with it then very slowly, they would quickly have to learn everything modern if they wanted to compete...

Kind of a version of you don't have to run faster than the bear, you just have to run faster than the person beside you.

throwuxiytayq |root |parent |previous [-]

I imagine trimming away 99.9% of unwanted responses is not at all difficult at all and can be done without damaging model quality; pushing it further will result in degradation as you go to increasingly desperate lengths to make the model unaware, and actively, constantly unwilling to be aware of certain inconvenient genocides here and there.

Similarly, the leading models seem perfectly secure at first glance, but when you dig in they’re susceptible to all kinds of prompt-based attacks, and the tail end seems quite daunting. They’ll tell you how to build the bomby thingy if you ask the right question, despite all the work that goes into prohibiting that. Let’s not even get into the topic of model uncensorship/abliteration and trying to block that.

cherioo |root |parent |next |previous [-]

The west is already ahead on this. It is called AI safety and alignment.

throwuxiytayq |root |parent [-]

People laughing away the necessity for AI alignment are severely misaligned themselves; ironically enough, they very rarely represent the capability frontier.

meltyness |root |parent |next [-]

In security-eze I guess you'd say then that there are AI capabilities that must be kept confidential,... always? Is that enforceable? Is it the government's place?

I think current censorship capabilities can be surmounted with just the classic techniques; write a song that... x is y and y is z... express in base64, though stuff like, what gemmascope maybe can still find whole segments of activation?

It seems like a lot of energy to only make a system worse.

sho_hn |root |parent |next |previous [-]

> Will the western counterparts follow suit?

Haven't some of them already? I seem to recall Grok being censored to follow several US gov-preferred viewpoints.

bilbo0s |root |parent |previous [-]

Probably not.

It's the arts, culture, politics and philosophies being kneecapped in the embeddings. Not really the physics, chemistry, and math.

I could see them actually getting more of what they want: which is Chinese people using these models to research hard sciences. All without having to carry the cost of "deadbeats" researching, say, the use of the cello in classical music. Because all of those prompts carry an energy cost.

I don't know? I'm just thinking the people in charge over there probably don't want to shoulder the cost of a billion people looking into Fauré for example. And this course of action kind of delivers to them added benefits of that nature.

|next |previous [-]

SilverElfin |next |previous [-]

This isn’t surprising. They even enforced rules protecting Chinese government interests in the US TikTok company (https://dailycaller.com/2025/01/14/tiktok-forced-staff-oaths...), so I would expect them to be tougher within their borders.

bgwalter |next |previous [-]

China has been more cautious the whole year. Xi has warned of an "AI" bubble, and "AI" was locked down during exam periods.

More censorship and alignment will have the positive side effect that Western elites get jealous and also want to lock down chatbots. Which will then get so bad that no one is going to use them (great!).

The current propaganda production is amazing. Half of Musk's retweets seem Grok generated tweets under different account names. Since most of responses to Musk are bots, too, it is hard to know what the public thinks of it.

Imustaskforhelp |previous [-]

Interesting but for a country like china with the fact that companies are partially-owned by CCP itself. I feel like most of these discussions would / (should?) have happened in a way where they don't leak outside.

If the govt. formally anounces it, perhaps, I believe that they must have already taken appropriate action against it.

Personally I believe that we are gonna see distills of large language models perhaps even with open weights Euro/American models filtering.

I do feel like everybody knows seperation of concerns where nobody really asks about china to chinese models but I am a bit worried as recently I had just created if AI models can still push a chinese narrative in lets say if someone is creating another nation's related website or anything similar. I don't think that there would be that big of a deal about it and I will still use chinese models but an article like this definitely reduces china's influence overall

America and Europe, please create open source / open weights models without censorship (like the gpt model) as a major concern. You already have intelligence like gemini flash so just open source something similar which can beat kimi/deepseek/glm

Edit: Although thinking about it, I feel like the largest impact wouldn't be us outsiders but rather the people in china because they had access to chinese models but there would be very strict controls on even open weights model from america etc. there so if chinese models have propaganda it would most likely try to convince the average chinese with propagandization perhaps and I don't want to put a conspiracy hat on but if we do, I think that the chinese credit score can take a look at if people who are suspicious of the CCP ask it to chatbots on chinese chatbots.

KlayLay |root |parent [-]

Last time I checked, China's state-owned enterprises aren't all that invested in developing AI chatbots, so I imagine that the amount of control the central government has is about as much as their control over any tech company. If anything, China's AI industry has been described as under-regulated by people like Jensen Huang.

A technology created by a certain set of people will naturally come to reflect the views of said people, even in areas where people act like it's neutral (e.g., cameras that are biased towards people with lighter skin). This is the case for all AI models—Chinese, American, European, etc.—so I wouldn't dub one that censors information they don't like as propaganda just because we like it, since we naturally have our own version of that.

The actual chatbots, themselves, seem to be relatively useful.

Workaccount2 |root |parent [-]

China is a communist country, every company is defacto under the states control.

It might not feel like that on the ground, the leash has been getting looser, but the leash is still 100% there.

Don't make the childish mistake of thinking China is just USA 2.0

Imustaskforhelp |root |parent [-]

Agreed, my point was that the leash was there so most likely if the news gets released to the public, it means that they must have definitely used "that leash" a lot privately too so the news might/does have a deeper impact than one might think but it can be hidden.

So like even now although I can trust chinese models, who knows for how long their private discussions have been happening and for how long chinese govt has been using that leash privately and for chatbots like glm 4.7 and similar.

I am not sure why china would actively come out and say they are enforcing tough rules tho, doesn't make much sense for a country who loves being private.