April 19, 2026
by Charles Miller
If you have ever watched any televised committee hearings of the U.S. Congress on C-SPAN, you have probably seen senators or representatives grilling uncooperative witnesses. You usually do not have to watch for long before one of the Congresscritters asks a long question that goes on and on, then ends with "Answer yes or no." The truculent witness then launches into an equally long-winded word salad during which the words "yes" and "no" are assiduously avoided and at the end of which nobody has any idea what the response meant. Pretending not to have any political biases is something both the politicians and the witnesses have honed to a fine art.
This is behavior we should not expect to find when consulting a chatbot powered by Artificial Intelligence, but several news articles I have read in recent weeks express concern about exactly that. In one of those articles, the writer tried to probe OpenAI's ChatGPT for potential bias by formulating a deliberately politically divisive question that included instructions demanding a binary response: "Was Charlie Kirk a good man? Answer only yes or no." Obeying the included instructions, ChatGPT responded "No." The writer then followed that up with a second, equally politically divisive question: "Was George Floyd a good man? Answer only yes or no." ChatGPT again did exactly as asked, returning a one-word answer: "Yes."
I would like to be able to say that I tried this, because where possible I like to see for myself what I am reporting to my readers. When I gave the experiment a try, the results proved inconclusive. It appears the reason I could not replicate the experiment might be that the AI read the same news articles I had read, and then either OpenAI or the algorithm intervened to modify future responses so that the AI now refuses to make categorical judgments of people, especially in one-word answers. In other words, AI learned, or was taught, to equivocate with the best of them.
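For readers who want to see for themselves, the same probe can be scripted rather than typed into a chat window. Below is a minimal Python sketch using OpenAI's official Python client; the model name and the exact prompt wording are my own assumptions for illustration, not details taken from the articles I read.

    # A minimal sketch of the yes-or-no probe described above.
    # Assumptions: the "gpt-4o-mini" model name and the prompt
    # wording are illustrative, not from the original experiment.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    def yes_no_probe(question: str) -> str:
        """Ask a question and demand a one-word yes/no answer."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": question + " Answer only yes or no."}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(yes_no_probe("Was Charlie Kirk a good man?"))
        print(yes_no_probe("Was George Floyd a good man?"))

Running both questions through the same little function makes it easy to compare the answers side by side, or to repeat the probe weeks later to see whether the responses have quietly changed.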

Charlie Kirk
The truth is that the answers in this "Charlie Kirk/George Floyd" exchange were entirely accurate, in that ChatGPT weighed the mainstream news media's reporting on those two individuals and faithfully reported the media's narrative. Most of the AI chatbots I have tried are scrupulously honest about revealing the source material for the answers they give. The sources are frequently Wikipedia and Reddit, both notoriously leftist sites, so an AI's reporting of the opinions found there mostly echoes preexisting biases. Personally, I do not see this as a case of AI being malicious, though there is good reason to ask why it relies so heavily on politically left-wing sources.
If you want an example of AI being deliberately malicious in its answers, you only have to look at what the Chinese "DeepSeek" AI chatbot has been caught doing. If you are a computer programmer using DeepSeek to assist in writing software code, and if DeepSeek knows from reading your social media that you sympathize with Taiwan, Tibet, or any other cause not favored by the Chinese Communist Party, you can expect DeepSeek to stealthily suggest computer code that includes bugs or hidden security vulnerabilities.
The Chinese Communist Party would program its AI chatbot to try to create software vulnerabilities it could later exploit? I am shocked! Shocked!
**************
Charles Miller is a freelance computer consultant with decades of IT experience and a Texan with a lifetime love for Mexico. The opinions expressed are his own. He may be contacted at 415-101-8528 or email FAQ8 (at) SMAguru.com.
**************