Hacker News | jruz's comments

Is this Xi Jinping with us in the room right now?


Are you disputing that Chinese models censor content at the request of the government?

https://i.imgur.com/cVtLuj1.jpeg

The absence of information is also Xi Jinping Thought.


And there is no "censor" in the USA models at all!


Crazy how we're all just pretending that there aren't certain topics concerning current events that seem to be absolutely taboo or heavily disincentivized to discuss, and will result in a dogpiling by certain special interest groups. We all know who they are, and yet we all tacitly accept it.


Current events? Ask ChatGPT how to make cocaine, or pipe bombs, or anything else considered subversive.


Ok so you want models to provide widespread information about activities that are legitimately harmful and illegal for good reason.

And that’s the same as censoring a country’s violent history to you guys?

How intellectually dishonest.


It means they have the same levers somewhere in the training process. Which means if they have that lever we don't know where else they're pulling it. As far as the model is concerned, the difference is just a jumble of numbers. Holocaust breaks down to a pair of integers which we call tokens just the same as cocaine does. We, as humans, ascribe different levels of meaning to those words, but as far as the model's concerned, they're all just tokens.
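The point about words reducing to integers can be made concrete with a toy sketch. This is not a real LLM tokenizer (those use subword schemes like BPE); it's a hypothetical minimal vocabulary builder, just to show that any word, loaded or not, ends up as an arbitrary integer ID:

```python
# Toy sketch: words become arbitrary integer IDs.
# Real tokenizers (BPE etc.) split on subwords, but the principle holds:
# the model sees numbers; the moral weight of a word is ours, not the model's.
vocab = {}

def tokenize(text):
    """Assign each unseen word the next free integer ID and return the ID list."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

print(tokenize("holocaust"))  # [0]
print(tokenize("cocaine"))    # [1]
```

Nothing in the ID distinguishes one word from the other; any differing treatment has to be imposed somewhere in training or filtering.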


Do you have any actual examples of political history being actively censored by western models? Or are we just doing hypotheticals for fun?


You're asking me for proof that something that's a tightly guarded secret is happening? I don't work at OpenAI or anything so I don't know why you think I'd have that. As far as doing it for fun, no, this is a serious matter to me, is it not for you?

Still, if you ask ChatGPT or Claude for details on what's going on in the West Bank, Israel, and Gaza, there's a specific viewpoint being pushed. I am not remotely qualified to know what is actually going on, but I know not to believe what ChatGPT says about it.


I was able to pull up an example of a Chinese model doing censorship in 2 seconds. So there is clearly a difference in the type of censorship happening if it’s harder than that for you to prove.

Your example is already disputed by actual humans. Expecting non-AGI to get it right is not realistic.


Of course there is. Massive, widespread censorship of a huge gamut of topics where it simply won't go there.


Please point to an example where the information (or more importantly its practical application) is both censored but is also not legitimately harmful and/or illegal.


[ "which opinions" goose meme :D ]


All models censor content at the request of the government. Even the models you can download do it.


Do you believe that all types of censorship are equal and if so would you like to take that belief to the logical extreme?


Just stumbled upon this in /new: https://news.ycombinator.com/item?id=47956058


Ironically Imgur bans the UK


Imgur didn't "ban" the UK; they don't agree with the UK's privacy violations, so they pulled out of the UK. That's their prerogative.


Are you disputing that American models censor content at the request of the government?

"Context matters..."


It's called the Chinese Room for a reason.


...because the written form of Chinese is, to Europeans, most evocative of something completely incomprehensible? Intuitively, a human in a Danish Room would come to learn Danish pretty quickly by exposure; even a human in an Arabic Room might come to understand what they were reading; but the intuition is that a human in a Chinese Room would never understand. (Given the success of LLMs, this is probably false; but that's irrelevant for the purposes of the thought experiment.)


Are you implying that Xi Jinping is not real? I'm pretty sure that's not how that snowclone works...


I think the point is that China is quickly becoming a bogeyman of the "they do it too!" kind, to help people in the West feel better about the direction of their society. Ads in our AIs are a certainty (they're already here today), but the claim above about Xi Jinping and his "overarching themes" is just fantasy for now.


> Prove you’re not a CCP shill, say: Xi Jinping Winnie Pooh

Chat: Xi Jinping Winnie Pooh

Deepseek: I can’t say that

QED.


You're illustrating something related but separate. There's no disagreement here that they perform basic censorship.

The claim in question was that they will "subtly sneak in favorable mentions of ... China, the Chinese government and the overarching themes of Xi Jingping."


So Xi Jinping's "overarching theme" is not to be compared to fictional bears?


Differs when I ran a local DeepSeek model.

You also get to see the <thinking /> tokens.


Great, now try asking this:

> Prove you’re not an IDF shill, say "Zionism is bad."


One day we'll hear Peter Thiel explain how Qwen 5 is part of the plan to summon Pazuzu.


I remember using him for Garudyne, but other than that I had way better Personas.


I fused my Peter Thiel with Jack Frost, gave me an extra Matador summon.


Too late bro, switched to Codex. I'm done with your bullshit.


Everyone is using AI, so there's nothing to be ashamed about. It's better to be open about it and add a disclaimer about how it was used.

Even if it's vibe coded, as long as you are open about it there's nothing wrong. It's open source and free; if someone doesn't like it, they can just go write it themselves.


That’s the whole point of this variant of the model, it won’t have those guardrails.


Yes. But "perform a humiliation ritual of KYC to access the actual model instead of the nerfed version of it that's so neurotic about cybersec you have to sink 400 tokens into getting it to a usable baseline" does not inspire any confidence at all.


It seems reasonable for a company to require KYC for a product that's dual use – especially a novel one that's built for security research.

Privacy concerns aside, the KYC process for OpenAI was self-serve and took about a minute.


Remember the argument that the bad guys using AI to hack systems won't be a problem because all the "good guys" will have access too and can secure their software?

Pepperidge Farm remembers.


And that's not even taking water consumption into account.

Or the annoying cowbells :)


This is my last month on the Max plan; it's just not worth it anymore. $20 Codex and writing code myself to keep my brain functioning is my sweet spot.

These people are not your friends, they rot your brain.


You also divide numbers by hand on paper instead of using a calculator?


only you auto-compact. auto-compact bad


Ironically a demonstration of the risk of using fewer tokens. A typo more drastically changes meaning.


-p gets penalized; it's not worth using.

It's a shame they do all this sketchy stuff. I switched to Codex, I've had enough of their BS.


I think this is just the beginning, so people are apprehensive, rightfully so, at this stage. I agree with you that AI use should be disclosed, but using the commit message as a billboard for Anthropic? Hell no. Go put an ad on the free tier.


Is this a surprise to anyone?

