The Biden Administration is calling on a long list of tech companies and banking firms to tackle the dangers of artificial intelligence (AI).
Gina Raimondo, Secretary of Commerce, announced the creation of the U.S. AI Safety Institute Consortium (AISIC). The effort unites a who’s who of AI developers with academics, researchers and civil society organizations.
The goal is to draft guidelines for AI development and deployment, approaches to risk management, and other safety measures.
“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what this consortium is set up to help us do,” Raimondo said.
The full list of consortium participants is available here.
Key Measures Announced
Among the proposed guidelines is the use of watermarks to signal to the public when audio and visual content is AI-generated.
Several companies have announced in recent weeks that they are stepping up their own AI safety efforts.
Facebook and Instagram owner Meta has said it will start labeling AI-generated images and videos on its platforms. Microsoft’s partner OpenAI has said it will launch a tool to identify AI content.
Earlier this week, European Union officials began a process to compile guidelines for big tech companies to follow. The goal is to safeguard the democratic process from AI-generated misinformation or deepfake imagery that could negatively influence voters. After all, a third of the world’s population will vote in national elections later this year.
The effort came just as Meta found itself under scrutiny due to an altered video of President Joe Biden appearing on Facebook.
Pop star Taylor Swift has also been the target of troubling deepfake videos.
Europe has already drafted sweeping AI legislation that caused something of a stir. Companies balked at the likely compliance costs they’d face, while others worried that more legislation would mean less innovation as developments in the startlingly fast-moving…