The EU’s AI Act comes into force. Here’s what it contains

Lorenzo Bagnato

1 August 2024 - 20:12


The European Union has issued the world’s most comprehensive legislation on artificial intelligence to date.


The AI Act officially came into force on Thursday, August 1st across the European Union after it was approved at the end of May. It is the first and most comprehensive legislation on artificial intelligence in the world.

The act had been in the works since 2021, but the explosion of the AI frenzy with the release of OpenAI’s ChatGPT increased its urgency.

The framework takes a risk-based approach to artificial intelligence, imposing strong restrictions on AI applications deemed higher-risk. Obligations for “high-risk” AI uses include adequate risk assessment and mitigation, high-quality training datasets, and routine activity logging. The newly created AI Office in Brussels will oversee compliance with these measures.

Examples of "High-risk" activities include the application of artificial intelligence in medical devices, bank loans, and self-driving vehicles.

In some instances, the use of AI is considered “unacceptable” and therefore forbidden by the act. For example, social scoring systems that rank EU citizens based on their personal data are banned. Algorithms that recognize citizens’ emotions are also considered “unacceptable”.

Proponents praised the EU’s approach to AI regulation, saying other major legislative bodies should take note. Eric Loeb, executive vice president of government affairs at Salesforce, praised the risk-based regulatory framework saying it “helps encourage innovation while also prioritizing the safe development and deployment of the technology.”

Generative AI models like ChatGPT or Google’s Gemini are considered “General purpose” technology. Generative AI software, when not open source, shall follow the EU’s copyright law, issue disclosure on their training datasets and methods, and implement adequate cybersecurity protection.

Open-source generative AI models face little to no regulation. These include Mistral 7B, an open-source Large Language Model developed by the French company Mistral.

In general, the AI Act is aimed at US-based tech giants, whose LLMs have overwhelmed the technology world in recent years. Specifically, Microsoft, Alphabet (Google), Amazon, and Meta (Facebook) have invested billions of dollars in start-ups to develop their own AI solutions.

The AI Act has implications that go far beyond the EU. “It applies to any organization with any operation or impact in the EU, which means the AI Act will likely apply to you no matter where you’re located,” Charlie Thompson, senior vice president of EMEA and LATAM for enterprise software firm Appian, told CNBC.

AI software from US tech giants has received strong backlash for its alleged violation of international copyright laws. The datasets these models are trained on are filled with human-made texts and images whose authors have received no recognition. Moreover, several European authorities have found AI models in breach of EU privacy law.

Failure to comply with the AI Act can result in a fine of up to €35 million or 7% of the company’s global turnover, whichever is higher.
