Monday, September 9, 2024

POSSIBLE NEW A.I. LAW IS NOWHERE NEAR ADEQUATE

 

CALIFORNIA FOCUS
FOR RELEASE: FRIDAY, SEPTEMBER 27, 2024 OR THEREAFTER

 

BY THOMAS D. ELIAS

 “POSSIBLE NEW A.I. LAW IS NOWHERE NEAR ADEQUATE”

 

Scott Wiener will be all smiles if Gov. Gavin Newsom signs his supposedly landmark bill to govern development of new artificial intelligence devices and programs in California.

Newsom will decide whether to sign or veto the measure, also known as SB 1047, this month.

 

This bill was originally intended not only to govern California but to serve as a model for other states to follow; it now falls far short of that. It was so watered down in the legislative process, so dumbed down for the sake of political convenience, that it might as well contain no new rules.

 

Yes, Wiener, a Democratic state senator from San Francisco, sported a big grin when his bill passed, despite its being cut to pieces in the state Assembly. That might be because the pioneering A.I. startups OpenAI and Anthropic sit in his San Francisco district. Helping big-potential hometown businesses by accepting a weaker measure can’t hurt him as he continues his not-exactly-secret quest to take the congressional seat Nancy Pelosi has occupied for decades, whenever she retires.

 

OpenAI is the developer of the widely used A.I. tool ChatGPT, which has often been wrong about a host of things.

 

But here’s the real question for Wiener and the governor who may sign his bill into law: Why set up a complicated, often opaque, so-called protection against harmful robots and mechanical minds when simple rules that could protect against all kinds of problems were laid out about 82 years ago by a leading scholar and science fiction author?

 

In his 1942 short story “Runaround,” Isaac Asimov first put forward his three laws of robotics, which would become staples in his myriad later works, including the famed “Foundation” series.

 

The first law holds that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second law, as Asimov framed it, is that a robot must obey the orders given it by human beings, except where such orders would conflict with the first law. And the third is that a robot must protect its own existence, as long as such protection does not conflict with the first or second laws.
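For readers who like to see rules spelled out, the appeal of Asimov’s scheme is that the three laws form a strict priority ordering, the kind of thing that fits in a few lines. What follows is a minimal sketch in Python, purely illustrative and drawn neither from SB 1047 nor from any real system; the Action class, its true-or-false flags and the permitted() check are all hypothetical simplifications.

    # Illustrative only: Asimov's three laws as a strict priority ordering
    # over a robot's candidate actions (where "do nothing" is itself an action).
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        lets_human_come_to_harm: bool = False  # injures a person, or lets one be harmed
        ordered_by_human: bool = False         # a person instructed the robot to do this
        endangers_robot: bool = False          # doing this would damage the robot itself

    def permitted(action: Action) -> bool:
        # First law outranks everything: no action that leads to a human coming to harm.
        if action.lets_human_come_to_harm:
            return False
        # Second law: obey human orders; the check above already vetoed harmful ones.
        if action.ordered_by_human:
            return True
        # Third law: otherwise, avoid actions that endanger the robot itself.
        return not action.endangers_robot

    # The second law outranks the third, so an ordered, self-endangering act is allowed:
    print(permitted(Action("enter a burning building", ordered_by_human=True, endangers_robot=True)))  # True
    # The first law outranks the second, so a harmful order is refused:
    print(permitted(Action("shove a bystander aside", lets_human_come_to_harm=True, ordered_by_human=True)))  # False

The point is not that a few true-or-false flags could govern real A.I. systems; it is that a short, ordered set of principles is far easier to reason about than pages of carve-outs.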

 

Rather than offering that kind of broad but simple protection, politics interfered. Some opponents questioned even the softened Wiener bill, which eliminated a previously proposed state department specializing in safety measures for A.I. devices in all their forms. Instead, those devices would be submitted for approval to the attorney general’s office, never known for its cybernetic genius.

 

The attorney general, nominally California’s top law enforcement officer, could penalize companies posing an imminent threat or harm. But the bill offers no solid definition of what that means.

 

Backers of the Wiener measure claimed it would create guardrails to keep A.I. programs from shutting down the power grid or causing other sudden disasters. It’s clear some controls are needed, because A.I. is developing fast and in many forms, from taking over most mathematical functions at banks to writing automated news stories.

 

Then there’s the state’s legitimate concern that it not set up rules so tough they threaten to drive out its newest potential high-tech economic engine, one that’s already picking up some of the slack left by companies like Tesla and Toyota, which moved their headquarters to other states.

 

Then there are those who complain that this would-be head-in-the-clouds regulation does nothing about everyday, real-world concerns like privacy and misinformation. For sure, A.I. produces plenty of misinformation, often mangling basics like birth dates and birthplaces and thus complicating some people’s lives. Wiener’s bill offers no recompense for those ills.

 

Why not instead simply adopt Asimov’s rules? They are short and clear, and his vivid imagination made them central features of many novels and stories involving robots with disparate personalities and functions.

 

The advantage of starting with simple rules to govern an industry that has so far had few is that new rules can be added as the need for them is demonstrated, while people and companies are left alone to develop new A.I. functions and wrinkles with little interference from government agencies unless circumstances demand they step in.

 

There’s an old principle that says “start simply,” and if ever there were a situation demanding that, it is the potentially limitless field of artificial intelligence. Just another big decision for the lame-duck Gov. Newsom.

 

    -30-

    Email Thomas Elias at tdelias@aol.com. His book, "The Burzynski Breakthrough: The Most Promising Cancer Treatment and the Government’s Campaign to Squelch It," is now available in a soft cover fourth edition. For more Elias columns, visit www.californiafocus.net
