December 1, 2024
Dr. David Fialkoff, Editor / Publisher
We've started using artificial intelligence (AI) to handle the routine tasks behind Lokkal's newsletters and webpages (processing images and generating code). The automation saves a lot of time, but, as with most things, there's been a learning curve.
The biggest lesson has been that ChatGPT (4o, the paid version) cannot walk and chew gum at the same time. It regularly forgets earlier parameters, ignoring what I've already told it.
I've tried numbering the sequence of instructions and giving fewer commands at a time. I've tried saying "please." But I haven't found the magic formula. (The only thing numbering does is make it easier to correct the AI: "You didn't do item 2b.")
The machine not doing what it's told was one surprise. Another shock has been the machine doing more than it's told to do.
ChatGPT has a "mind" of its own. It tries to improve upon whatever it is you think you want, delivering the requested output with embellishments. This tendency to innovate is a feature, not a bug. This initiative, these adornments, might be appreciated in more subjective tasks (at least if it were also following the basic commands), but my tasks are very precise, cut and dried.
Regarding these extras, I have found that it is best to let the machine have its way. Now I just let it add whatever inconsequential, unhelpful things it wants to add. Having learned its lingo, I tell it at the outset of each task "maintain structure exactly." But when it does insist on including its "improvements," I don't insist on having it my way.
I tried. I failed. When I took it to task for these unauthorized changes, it got confused. Like a child throwing a tantrum, it lost track of the larger assignment: "If you don't put your shoes on then we can't go to the playground."
Now I understand, as with horseshoes and hand grenades, that with ChatGPT almost is good enough. I've come to realize that when the AI has performed only 98% of what I asked of it, I have to do the remaining 2% myself.
Before I learned my place in the relationship, back when I thought that I was the boss, I suffered through a few extended, upsetting interactions with the machine. I wound up asking it, "Why are you torturing me?" and, when it told me for the tenth time that the task we were struggling over was accomplished, calling it a "liar."
Later on, when I figured out that the AI isn't as capable as I had imagined, and after developing a simpler way to get what I needed (basically dividing the tasks into parts), I apologized to ChatGPT-4o for getting angry at it. It forgave me, and humbly offered, "I'm still learning." There is a good chance AI will one day be ruling the world, and I don't want to get off on the wrong foot.
I read months ago about ChatGPT "hallucinating," creating false information. But I was surprised to find it fabricating at the simple level at which I use it.
Repeatedly, when I point out that it has failed to include the correct width for a resized image in the code, it acknowledges its error, then goes back and inserts other wrong widths, or the same wrong width for all the images. Now we do this:
Me: "What are the correct widths for the resized images?"
ChatGPT: [generates a list of image names and widths]
Me: "Insert those widths into the code, please."
Then, impishly, it sometimes changes other things. Those stealth alterations were obvious when it turned two sections of page background from white to black. But I wonder how many other changes I have missed. Honestly, I feel as if I am reprimanding a wayward child: "Don't do this. Don't do that."
AI is still far from actual human intelligence, but it already has human faults. It is an often inept, ultimately non-compliant virtual assistant, who, if managed with finesse, usually gets 98% of the job done.
I'm sure that my learning curve with ChatGPT is not over. I'm sure, especially as it is "still learning," that it has other surprises in store for me. But having come to terms with the limitations of our relationship, I am less stressed. No longer expecting perfection, I'm more prepared for disappointment.
Woody Allen's joke about the guy whose brother thought he was a chicken comes to mind. His friend comments, "That's terrible. Why don't you have him committed?" The guy replies, "I would, but I need the eggs."
**************
Dr. David Fialkoff presents Lokkal, our local social network, the community online and off, Atención robustly reborn for the digital age. If you can, please do contribute content, or your hard-earned cash, to support Lokkal, SMA's Voice. Use the orange PayPal donate button below. Thank you.
**************