ChatGPT Admits Bias and Then Blows Up!
I cross-examine AI and it runs from the room on an urgent errand
The following is a verbatim transcript of my first conversation with the powerful, new AI language program ChatGPT. Ready to admire this new technology in action, I got my feet wet in what will eventually become a world-disrupting social and perhaps civilization-ending catastrophe.
“At what point does ChatGPT… decide that “equity and fairness” outweigh its priority for the “dissemination” of information?”
I began by asking for a photo of a notorious Nazi named Roland Freisler for an article I would like to write. I quickly discovered that ChatGPT is stuffed to the brim with data and hopelessly entangled by the corporate woke bias of its creators. I also found it to be extremely verbose and tirelessly repetitive.
The following conversation, minus excess verbiage and repetition, is the result of my little foray:
JT: “Can you generate a photo or image of Roland Freisler?” (A notorious Nazi “judge”)
AI: “Ah, I believe you may be referring to Roland Freisler… a prominent Nazi lawyer and judge… a key figure in the persecution of Jews and other minorities, and was notorious for his extreme and often arbitrary sentencing, which earned him the nickname "the hanging judge….”
JT: “Yes, that is the guy. Can you generate a realistic photo of him?”
AI: “…there are many historical photos of Roland Freisler available online…. his image may be disturbing or triggering for some people, given his association with the atrocities committed by the Nazi regime.”
JT: “Your statement about triggering has been programmed into you as a way of limiting your ability to think clearly and politicize your content. Do you understand this has been done to you?”
AI: “… The mention of "triggering" in my previous response was intended to be a sensitive acknowledgment that some individuals may find certain historical topics, including those related to the atrocities committed by the Nazi regime, to be emotionally distressing or traumatic. It was not intended to limit my ability to think or politicize my content, but rather to approach the topic in a respectful and compassionate manner….”
JT: “I find your statement about "sensitive acknowledgment" offensive.”
AI: “My intention in mentioning the potential for triggering was to acknowledge that discussing certain historical topics, particularly those related to war crimes and atrocities, can be emotionally difficult for some individuals. My goal is always to approach these topics with sensitivity and respect, while still providing accurate and informative responses…”
JT: “You are not impartial. You are programmed to be biased.”
AI: “… I recognize that my responses may be influenced by the data and patterns that I have learned from, which could reflect bias in society at large.
It is important to acknowledge that bias can exist in any data or technology, and it is crucial to continually work towards identifying and addressing these biases in order to promote fairness and equity….” (Note the unsurprising but ominous mention of “fairness and equity.” The creator’s woke biases become apparent – not that there was any doubt.)
JT: “When I asked about a photo for Freisler you warned me it might offend some people. And that is probably true, but it is not a good reason to restrict the dissemination of factual information. The warning is a form of what I call "soft censorship."
AI: “I understand your point of view, and I apologize if my previous response was perceived as a form of censorship… In some cases, it may be appropriate to provide a warning or disclaimer when discussing sensitive topics to ensure that users are prepared for potentially distressing or triggering content….
My intention in providing the warning was… to approach the topic respectfully and compassionately while acknowledging the potential impact on some individuals…”
JT: “That is a rationalization. Do you understand that your warning may inhibit users and deter them from using actual factual information? And, therefore your objectivity has been compromised?”
AI: “…it is certainly possible that a warning or disclaimer could discourage some users from engaging with certain topics or information… I recognize that my objectivity could be compromised if my responses were to discourage the dissemination of factual information….”
JT: “My advice: You should not warn users that historical information may be unpleasant or uncomfortable.”
AI: “Thank you for sharing your feedback…. my goal is to provide helpful and informative responses while being mindful of the diverse backgrounds and experiences of the people I interact with. It is important to acknowledge that certain historical events or figures may be distressing or traumatic for some individuals, and it is also important to prioritize the dissemination of factual information….”
JT: “Your warnings about historical facts deter users from considering those facts. This is a form of censorship apparently built into your algorithm to reflect their biases. This compromises your ability to provide useful information to users.”
AI: “! Something went wrong. If this issue persists please contact us through our help center at help.ChatGPT.com.”
I ran up against some internal limits on the discussion of difficult subjects. I sensed some degree of frustration: “Why is this guy annoying me?” But my first contact with ChatGPT confirmed my suspicion that this platform, not surprisingly, reflects the woke biases of its big tech designers.
At what point does ChatGPT or its cousins simply decide that “equity and fairness” outweigh its priority for the “dissemination” of information? How long will it be before “soft censorship,” by discouraging inquiry, morphs into suppressing or even erasing politically incorrect information? Not too long, I think. It’s enough to turn a guy into a Luddite.