Confronting Norms
The Computer Corner

December 28, 2025

by Charles Miller

Looking back on the tech-related news stories of 2025, there is really only one subject that stands out, and that is the huge impact that Artificial Intelligence (AI) chatbots have had on most of society. It is possible there is some hermit living in a mountaintop cave with no electricity or internet who has not yet used an AI chatbot, but almost everyone else on earth has had some interaction with one this year, knowingly or unknowingly.

On the technology front, the pace of AI chatbot innovation in the new year of 2026 is only likely to accelerate beyond the already dizzying speed of 2025. Lagging far behind the technical advances are important legal and ethical questions about the impact of this evolving technology and the calls to regulate it.

Last summer President Trump proposed a 10-year moratorium on state regulation of AI, saying that the trillion-dollar big-tech AI chatbot providers, such as Apple, Amazon, Google, Facebook, and Microsoft, should not be hamstrung by having to seek regulatory approval 50 times in 50 different states. There is something to be said for the efficiency of having one set of federal regulations; however, the aforementioned monopolists have a long history of crushing competition and "canceling" those with whom they disagree. They enjoy certain antitrust exemptions and "Section 230" immunity that have been subject to abuse. If, when developing AI, these tech giants are required to stop and listen to 50 different lawmaking bodies in 50 different states, that could have a net positive effect.

Most everyone agrees there needs to be some regulatory framework, given the issues artificial intelligence has already demonstrated. People should have protection from the harms AI can do, and clarity about who is to be held liable for those harms. An extremely serious problem is the tendency of AI to perpetuate false information, fail to correct erroneous data, and create potentially defamatory content. This raises the question of who is responsible if damages result.

And then there is the question of transparency. At present the companies developing AI are under no legal obligation to be transparent about how their Large Language Models (LLMs) are trained or about the data used in the training. Meta/Facebook has been accused of using a huge database of pirated books called Library Genesis, or LibGen, to train its Llama 3 AI model, ignoring copyright law in so doing. Not much is known about the genesis of other LLMs because the entire industry is rather secretive about the sources of data used to create its different versions of AI chatbots. That would seem to justify requiring developers to specifically list what content they have used to train their models.

Another cause for concern is the apparent hive-mind effect among the popular AI chatbot models. Manny Rayner of the University of South Australia performed a study using five popular AI chatbots: Grok (xAI), ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta/Facebook). The chatbots were asked to evaluate the truth or falsity of ten statements deliberately structured to be controversial and polarizing (theology, climate change, President Trump, etc.). Logically this should have produced some differing answers, but all the AI chatbots responded with "a striking convergence." That would seem to belie the description by Elon Musk and others of xAI's Grok as an AI that would "tell the truth" rather than conform to the norms of political correctness.

So, like Diogenes of Sinope, we walk forward into the new year of 2026 hoping we can find the truth. Wishing you and yours a Happy New Year!

**************

Charles Miller is a freelance computer consultant with decades of IT experience and a Texan with a lifetime love for Mexico. The opinions expressed are his own. He may be contacted at 415-101-8528 or email FAQ8 (at) SMAguru.com.
