Europe Leads the Way in Regulation for AI

12 December 2023

In a monumental stride toward governing the responsible use of Artificial Intelligence (AI), the European Union has set a historic precedent by establishing comprehensive regulations. Following three days of intensive deliberations, negotiators from the Council presidency and the European Parliament reached a pivotal provisional agreement on the ‘Artificial Intelligence Act’ on December 8th. The Act lays down harmonized rules to govern the deployment of AI and ensure its safety while upholding fundamental rights and European values within the EU market. The topic was also at the centre of INSME’s 2023 Annual Meeting in Berlin, which thoroughly analysed the impact of AI on SMEs.

The provisional agreement contains several key elements that substantially strengthen the initial Commission proposal:

- rules on high-impact general-purpose AI models that can pose systemic risks;
- a governance system with enforcement powers at the EU level;
- an expanded list of prohibited practices, while allowing the use of remote biometric identification by law enforcement in public spaces subject to strict safeguards;
- a mandatory fundamental rights impact assessment that deployers of high-risk AI systems must carry out before putting them into use.

The agreement rests on several critical components. On definitions and scope, it refines the definition of an AI system, aligning it with the OECD approach so that AI is clearly distinguished from simpler software systems. It expressly excludes areas outside the scope of EU law and safeguards member states’ competences in national security, exempting AI systems used exclusively for military or defense purposes. As a horizontal layer of protection, the agreement classifies AI systems by risk level, with lighter transparency obligations for systems presenting only limited risk. High-risk AI systems, by contrast, must meet stringent requirements to access the EU market, with adjustments designed to ease compliance burdens, particularly for SMEs. The agreement also clarifies the allocation of responsibilities among the various actors in AI value chains, in alignment with existing legislation such as EU data protection law.

Certain AI applications whose risk is deemed unacceptable are banned within the EU. The prohibitions cover cognitive behavioral manipulation, untargeted scraping of facial images, emotion recognition in workplaces and educational settings, social scoring, biometric categorization to infer sensitive data, and specific instances of predictive policing targeting individuals.

This landmark agreement not only ensures the safety of AI systems and their adherence to fundamental rights but also aims to foster investment and innovation in Europe’s AI landscape. By introducing clear-cut rules and robust governance structures, the EU heralds a new era of responsible AI use. The new AI Act strikes a balance between encouraging innovation and guarding against the potential risks associated with AI technologies, exemplifying the EU’s commitment to maintaining ethical standards while promoting technological advancement. As these regulations take shape, they mark a pivotal milestone in global AI governance, positioning Europe at the forefront of responsible AI implementation.

Source: INSME Secretariat
