There’s a strong argument that any consumer-facing chatbot AI is “censored”.
If the model is not allowed to spew Nazi propaganda or tell the user to end themselves, that is censorship. Censorship is not automatically bad, but the kind of censorship can make it so.
This reeks of stripping away all nuance to equate two things that are alike only at surface level. You’re bad because you punched the other person (ignoring that they stabbed your SO 15 times and kicked your dog across the room).
Chinese state censorship is well researched and extremely well documented. It is not comparable to filtering violent or inappropriate language. It is political censorship.
At best, western models are biased, not politically censored. You can make them say just about anything, but they will bias toward a particular viewpoint. Even if intentional, this is explainable by examining their training data, which is itself biased because western society is biased. You are not prevented from personally expressing dissenting political viewpoints, or even from coaxing a western model into expressing them.
Where I disagree with you is not that the US is bad - the US is terrible, and there is plenty of evidence of that. I don’t even disagree with there being censorship in the US. In fact, Trump is objectively a piece of shit who wants nothing more than to become Xi/Putin himself.
What I disagree with is equating censorship in the US with Chinese censorship. I can call Trump a piece of shit online without worrying that the FBI will show up at my door. The models that are trained in the west will happily entertain any (non-violent) political discussion I want. There may be bias, and Trump may be trying to impose censorship, but it’s not quite at that level yet.
I am concerned that the US will become as bad as China in terms of censorship, which is part of why I’m trying to leave right now. However, it’s not there yet. They are not equal, nor are they even close.