Before his sacking as CEO of OpenAI last week — and reinstatement this week — could Sam Altman have been working on an artificial intelligence system so powerful it might threaten the safety of humankind?
It sounds like a horror story worthy of Skynet — the AI system in the Terminator films that took control of the planet and subjugated humanity to death and slavery. And with Microsoft Corp (NASDAQ:MSFT) as its chief backer, OpenAI has access to vast computing resources around the world.
Much of the media speculation behind Altman’s sacking focused on “safety concerns.” Tech news website The Information suggested Altman had been working on a model called Q* (Q-Star) that was developing at such a pace it caused alarm to safety researchers.
‘No Disagreement On Safety’
This was denied last week by several of OpenAI’s board members, and Emmett Shear — interim CEO during Altman’s brief removal — wrote this week that the board “did not remove Sam over any specific disagreement on safety.”
Nevertheless, for many, where there’s smoke there’s fire. So what is Q*, and what triggered the speculation about these safety issues?
Q* was reportedly able to solve basic math problems it had never seen before — a major leap forward in AI capability, if true, though still short of the much-debated artificial general intelligence (AGI) that could perform tasks at or above human levels of ability. A Skynet moment, perhaps?
On Thursday last week, Altman appeared at a conference saying that in the last couple of weeks, he was in the room when OpenAI “pushed the veil of ignorance back and the frontier of discovery forward.” He was sacked the next day.
OpenAI’s AGI Mission
OpenAI’s mission statement on its website reads: “We believe our research will eventually lead to artificial general intelligence, a system that can solve human-level problems. Building safe and beneficial AGI is our mission.”
AGI’s critics were less concerned about existential threats to humankind than about the solving of “human-level problems,” and what impact that might have on issues such as jobs, privacy and cybersecurity.
Writing in Forbes, Nisha…