CALIFORNIA
FOCUS
FOR RELEASE: FRIDAY, JULY 14, 2023, OR THEREAFTER
BY THOMAS D. ELIAS
“A.I. MAKERS MUST CREATE, OBSERVE
NEW LAWS OF ROBOTICS”
Not long ago, the artificial intelligence (A.I.) bot ChatGPT sent me, as a “courtesy,” a copy of my abbreviated biography, which it had written.
ChatGPT, developed by the San Francisco firm OpenAI, was wrong on both my birth date and birthplace. It listed the wrong college as my alma mater. It credited me with awards I never won, while ignoring those I actually did win. Yet it got enough facts right to show this was no mere phishing expedition, but a version of the real new thing.
Attempts at correction were ignored.
All along, I knew this could be dicey: supplying accurate personal information to correct it could have led to identity theft or, worse, directed criminals to my door.
The experience recalled the science fiction stories and novels of Isaac Asimov, who prophetically devised a generally recognized (in Asimov’s fictional future) set of major laws governing intelligent robots.
In his 1942 short story “Runaround,” Asimov first put forward these three laws, which would become staples in his later works:
“First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
These fictitious laws were reminiscent of the U.S. Constitution, open to constant re-interpretation: new questions arose over what constitutes harm and whether sentient robots should be condemned to perpetual second-class, servant status.
It took more than 30 years, but eventually others tried to improve on Asimov’s laws. Altogether, four authors proposed more such “laws” between 1974 and 2013.
All sought ways to prevent robots from conspiring to dominate or eliminate the human race.
The same threat was perceived in May by more than 100 technology leaders, corporate CEOs and scientists who warned that “A.I. poses an existential threat to humanity.” Their 22-word statement read: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” President Biden joined in during a California trip, calling for safety regulations on A.I.
As difficult as it has been to get international cooperation against those other serious threats of pandemics and nuclear weapons, no one can assume A.I. will ever be regulated worldwide, which is the only way to make such rules or laws effective.
The upshot is that a pause, not a permanent halt, in the advancement of A.I. is needed right now.
For A.I. has already permeated essentials of human society, where it is used in college admissions, hiring decisions, generating fake literature and art, police work, and driving cars and trucks.
An old truism suggests that “Anything we can conceive of is probably occurring right now someplace in the universe.” The A.I. corollary might be that if anyone can imagine an A.I. robot doing something, then someday a robot will do it.
And so, without active prevention, someone somewhere will create a machine capable of murdering humans at its own whim. It also means that someday, without regulation, robots able to conspire against human dominance on Earth will be built, maybe by other robots.
Asimov, of course, imagined all this. His novels featured a few renegade robots, but also noble ones like R. Daneel Olivaw, who created and nurtured a (fictitious) benevolent Galactic Empire.
In part, Asimov reacted to events of his day, which saw some humans exterminate other humans on a huge, industrial scale. He witnessed the rise and fall of vicious dictatorships more despotic than any of today’s.
Postulating that robots would advance to stages far beyond even today’s A.I., he conceived a system where they would co-exist peacefully with humans on a large scale.
But no one is controlling A.I. development now, leaving it free to go in any direction, good or evil. Human survival requires limits on this, as Asimov foresaw. If we don’t demand them today, not even a modern Asimov could predict the possible consequences.
-30-
Email Thomas Elias at tdelias@aol.com. His book, "The Burzynski Breakthrough: The Most Promising Cancer Treatment and the Government’s Campaign to Squelch It," is now available in a soft cover fourth edition. For more Elias columns, visit www.californiafocus.net