Seven leading AI companies agreed to adopt new safety standards during a meeting with President Biden at the White House. Self-regulation sends a signal, but it is not enough.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
These seven leading AI companies agreed to adopt the new safety standards during a meeting with Joe Biden at the White House on Friday, July 21.
The seven Big Tech firms committed to watermarking AI-generated content to make it recognizable and to submitting their systems to testing by independent experts.
"We must be lucid and vigilant about the threats that emerging technologies can pose to our democracy and our values," President Biden said. "This is a great responsibility; we have to get it right."
Is Big Tech self-regulation enough?
The Big Tech announcement is certainly commendable, but corporate self-regulation cannot be the definitive solution.
The commitments brokered by the White House to regulate artificial intelligence are a step in the right direction, but they are insufficient to address all the challenges this technology entails. Proper AI governance requires strong, mandatory regulatory frameworks that protect democracy, citizens’ rights, and security.
The risks to avoid
AI is a constantly evolving field that offers countless opportunities, but we cannot ignore the risks the technology entails. Behind the chatbots and the systems capable of generating fantastical worlds from a string of text lie serious problems that must be addressed.
Automation and the displacement of human workers, the spread of disinformation, privacy violations, and deepfakes are just some of the AI-related problems that require a timely and adequate response.
The companies’ commitments to watermark AI-generated content and to have their systems analyzed by independent experts are positive steps. However, these measures may be interpreted and applied differently by each company.
To ensure effective and uniform regulation, clear and binding legislation is needed: legislation that promotes transparency, protects privacy, and intensifies research into the risks of generative AI.
US behind EU
The United States is clearly lagging behind the European Union on AI regulation. The fact that the European Parliament has already approved the world’s first comprehensive law regulating artificial intelligence demonstrates greater determination to face this challenge. The European regulation categorizes AI applications by risk, imposing limitations on high-risk uses and outright bans on those deemed to carry unacceptable risk. This approach is an example of how AI challenges can be tackled systematically and rigorously.
Original article published on Money.it Italy 2023-07-25 08:00:00. Original title: 7 Big Tech si accordano con la Casa Bianca e mettono un freno allo sviluppo dell’IA