EU agrees landmark rules on artificial intelligence

EU lawmakers have agreed the terms for landmark legislation to regulate artificial intelligence, pushing ahead with enacting the world’s most restrictive regime on the development of the technology.

Thierry Breton, EU commissioner, confirmed in a post on X that a deal had been reached, calling it a historic agreement. “The EU becomes the very first continent to set clear rules for the use of AI,” he wrote. “The AI Act is much more than a rule book — it’s a launch pad for EU start-ups and researchers to lead the global AI race.”

The deal followed years of discussions among member states and members of the European Parliament over how AI should be curbed so that humanity's interests sit at the heart of the legislation. It came after marathon negotiations that began on Wednesday this week.

Details of the deal were still emerging after the announcement. Breton said legislators agreed on a two-tier approach, with “transparency requirements for all general-purpose AI models (such as ChatGPT)” as well as “stronger requirements for powerful models with systemic impacts” across the EU.

Breton said the rules would implement safeguards for the use of AI technology while at the same time avoiding an “excessive burden” for companies.

Among the new rules, legislators agreed to strict limits on the use of facial recognition technology, with narrowly defined exceptions for law enforcement.

The legislation also includes bans on the use of AI for “social scoring” — using metrics to establish how upstanding someone is — and AI systems that “manipulate human behaviour to circumvent their free will”.

The use of AI to exploit those vulnerable because of their age, disability or economic situation is also banned.

Companies that fail to comply with the rules face fines of up to €35mn or 7 per cent of global revenue.

Some tech groups were not pleased. Cecilia Bonefeld-Dahl, director-general of DigitalEurope, which represents the continent’s technology sector, said: “We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head.

“The new requirements — on top of other sweeping new laws like the Data Act — will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers.”

MEPs had spent years arguing over their position before negotiations began with member states and the European Commission, the executive body of the EU. All three — national ministries, parliamentarians and the commission — agreed to a final text on Friday night, allowing the legislation to become law.

European companies have expressed concern that overly restrictive rules on the technology, which is rapidly evolving and gained traction after the popularisation of OpenAI’s ChatGPT, will hamper innovation. In June, dozens of Europe’s largest companies, including France’s Airbus and Germany’s Siemens, said the rules as drafted were too tough to nurture innovation and help local industries.

Last month, the UK hosted a summit on AI safety, leading to broad commitments from 28 nations to work together to tackle the existential risks stemming from advanced AI. That event attracted leading tech figures such as OpenAI’s Sam Altman, who has previously been critical of the EU’s plans to regulate the technology.
