Frankenstein: The Computer Corner

July 27, 2025

by Charles Miller

In recent months there have been stories in the news reporting that a new experimental ChatGPT model called "o1" had begun to think on its own, disobeying commands to shut down and slipping out of the control of its creators. I need not retell the story here, but you can do an online search for "ChatGPT o1 unusual behavior" (without the quotes) to read some of the news coverage or watch some hair-on-fire YouTube videos. One thing all of these accounts, including the column you are reading now, have in common is that they were written by people without firsthand knowledge of the facts. Many of the stories play on fears that grow out of not understanding what Artificial Intelligence (AI) is and what it can do.

Actually, the modern form of this fear could be attributed to Mary Shelley, who wrote her Gothic novel Frankenstein; or, The Modern Prometheus more than two hundred years ago. Creations, robotic and otherwise, turning on their creators became a recurring theme in literature and on screen, especially in those reruns of 1930s theatrical serials I used to watch on Saturday mornings in the 1960s at my local Paramount Theater.

The 1940s through the '60s were an especially fruitful period for that genre, during which extraordinarily talented science fiction writers created prescient storylines whose far-sighted thinking bears directly on today's worries about AI. The most remembered of these is Isaac Asimov's short story "Runaround," in the March 1942 issue of Astounding Science Fiction magazine, in which he defined his Three Laws of Robotics:

 
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
 

Those three laws were programmed into almost all of the robots appearing in Asimov's fiction and were meant to be a safety feature that could not be bypassed or overridden. Many other science fiction writers have incorporated the same Three Laws into stories in which robots apply them to the situations they find themselves in.

In later writings, fiction in which robots had taken on or been given responsibility for governing whole planets and civilizations, Asimov added a fourth, or "zeroth," law to precede the first three:

 
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
 

For the last eight decades these Laws of Robotics have pervaded science fiction, referenced in countless books and films. Today there are calls for those same Laws of Robotics to be applied to AI software as well.

It is not at all clear how this could be done. Computer programmers could be admonished to always review the code they write to ensure it obeys the Laws of Robotics, but these are the same programmers who often do not fully understand what the code they have written can do. Calling for governmental oversight is a non-starter; anyone who thinks politicians can be trusted need only look at their track record.
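
To illustrate how hard that is, here is a deliberately naive sketch, written for this column and not taken from any real AI system, of what "programming in" the First Law might look like. Every name and rule in it (violates_first_law, perform, the list of forbidden words) is hypothetical. The point is that the check itself is trivial to write; what no few lines of code can do is recognize harm in the first place.

```python
def violates_first_law(action: str) -> bool:
    """Pretend to check whether an action would injure a human being.
    All this can really do is match words, which proves nothing about
    the action's actual consequences."""
    forbidden_words = {"injure", "harm", "hurt"}
    return any(word in action.lower() for word in forbidden_words)


def perform(action: str) -> str:
    """Refuse any action the naive check flags; otherwise carry it out."""
    if violates_first_law(action):
        return f"Refused: '{action}' appears to violate the First Law."
    return f"Performing: {action}"


if __name__ == "__main__":
    print(perform("water the plants"))
    print(perform("harm the intruder"))
    # This one sails right through, because the harm is indirect and
    # nothing in the code can see it:
    print(perform("divert power from the life-support machines"))
```

And a modern AI model is not even built from explicit rules like these; its behavior emerges from billions of learned numerical weights, so there is no single place in the code where such a law could be written down.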

Perhaps everyone can take comfort in knowing that this concern about AI, the fear that humankind could create monsters that turn on us, is nothing new. That worry has been with us for two centuries, and we are still here.

Decades after first promulgating the Three Laws, Asimov wrote, "Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings?"

**************

Charles Miller is a freelance computer consultant with decades of IT experience and a Texan with a lifetime love for Mexico. The opinions expressed are his own. He may be contacted at 415-101-8528 or by email at FAQ8 (at) SMAguru.com.

**************