We don’t allow just anyone to build a plane and fly passengers around, or to design and release medicines, so why should AI models be released into the wild without proper testing and licensing?
That’s been the argument from an increasing number of experts and politicians in recent weeks.
With the United Kingdom holding a global summit on AI safety in autumn, and surveys suggesting around 60% of the public is in favor of regulations, it seems new guardrails are becoming more likely than not.
One particular meme taking hold is the comparison of AI tech to an existential threat like nuclear weaponry, as in a recent 22-word warning from the Center for AI Safety, which was signed by hundreds of scientists:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Extending the analogy, OpenAI CEO Sam Altman is pushing for the creation of a global body like the International Atomic Energy Agency to oversee the tech.
“We talk about the IAEA as a model where the world has said, ‘OK, very dangerous technology, let’s all put (in) some guard rails,’” he said in India this week.
Libertarians counter that overstating the threat is a ploy by the leading AI companies to a) impose authoritarian control and b) strangle competition through regulation.
Princeton computer science professor Arvind Narayanan warned, “We should be wary of Prometheans who want to both profit from bringing the people fire and be trusted as the firefighters.”
Netscape and a16z co-founder Marc Andreessen released a series of essays this week on his techno-utopian vision for AI. He likened AI doomers to “an apocalyptic cult” and claimed AI is no more likely to wipe out humanity than a toaster, because “AI doesn’t want, it doesn’t have goals — it doesn’t want to kill you because it’s not alive.”
That may or may not be true, but then again, we have only a vague understanding of what goes on inside the black box of an AI’s “thought processes.” And as Andreessen himself admits, the planet is full of unhinged humans who can now ask an AI to engineer a bioweapon, launch a cyberattack or manipulate an election. So the technology can be dangerous in the wrong hands even if we avoid the Skynet/Terminator scenario.
The nuclear comparison is probably quite instructive in that people…