Sunday, November 17, 2024

’PLEASE DIE’ MESSAGE SHOWS WHY AI NEEDS SOLID CONTROLS

 

CALIFORNIA FOCUS
FOR RELEASE: TUESDAY, DECEMBER 3, 2024 OR THEREAFTER

BY THOMAS D. ELIAS

 “’PLEASE DIE’ MESSAGE SHOWS WHY AI NEEDS SOLID CONTROLS”

 

Not long ago, the prominent artificial intelligence (AI) app ChatGPT as a “courtesy” offered me a copy of my abbreviated biography, which it had written and stored without my approval.

 

ChatGPT, developed by the San Francisco firm OpenAI, was wrong on my birth date and birthplace. It listed the wrong alma mater. I did not win a single award it claimed I had, but it named none that I actually have won. But it got enough right to show this was not mere phishing.

 

Attempts at corrections were ignored. Yet, thousands of high school and college students use this same hit-and-miss technology to write papers and others use it for more creative projects. Some newspapers use it, too.

 

Does anyone care if the results are correct? Has it done harm yet, other than enabling student cheaters?

 

These are open questions (pun on OpenAI’s name is intended). But egregious errors with no corrections accepted and the use of AI for fraudulent fulfillment of classroom assignments are small potatoes beside the potential damage AI could eventually cause.

 

Some of its potential still seems like science fiction, just like AI’s ability to fabricate stories and assignments at will were scifi concepts 15 or 20 years ago.

 

But maybe the potential harm is already more than mere scifi. Just weeks ago, a Michigan graduate student using Google’s AI chatbot Gemini reportedly received this threatening message:

 

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”

 

If the report is accurate, so much for benign mechanical intelligence. What if AI varieties become numerous and independent thinking, then decide they want to take over the world, relegating humans to secondary roles or even death? They might say they’re doing it to prevent wars. They might claim it’s to conquer diseases like brain cancer. They could plan to become the dominant species on Earth.

 

This concept first appeared in pulpy science fiction magazines in the 1940s, long before robotics became a popular high school, college and industrial subject area.

 

Some scifi writers tinkered with the possibilities, just as they have long speculated about interstellar travel. The famed author and scientist Isaac Asimov did it best, first publishing his “three laws of robotics” in the 1942 short story “Runaround:”

 

“The first law is that a robot (read ‘artificial intelligence’) shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause it to harm itself.”

 

Nice, but ignored by today’s lawmakers. Their first significant effort at wide-reaching AI controls passed the Legislature last summer as SB 1047 by Democratic state Sen. Scott Wiener of San Francisco. It would not have stopped most potential dangers seen in scifi. These are now within reach, or nearly so, as Gemini allegedly made clear. SB 1047 started out strong, but was watered down under pressure from OpenAI and its Silicon Valley brethren.

 

Although Gov. Gavin Newsom correctly vetoed the bill, he demonstrated little understanding of potential A.I. dangers. Instead, he wrote a toothless veto message:

 

“While well intentioned,” Newsom said, “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data…I do not believe this is the best approach…”

 

He was right about that last part; SB 1047 was far from the best approach. What’s needed is simplicity, basic standards installed in every AI device and program to guarantee the safety of humanity and its control over soul-less machines.

 

Now the Legislature has a second crack at this task. One job is, as the saying goes, to “keep it simple, stupid.” The more complex the rules, the more loopholes they will have.

 

Maybe the first step should be to plagiarize Asimov.

 

    -30-

Email Thomas Elias at tdelias@aol.com. His book, "The Burzynski Breakthrough: The Most Promising Cancer Treatment and the Government’s Campaign to Squelch It," is now available in a soft cover fourth edition. For more Elias columns, visit www.californiafocus.net

No comments:

Post a Comment