Guest article provided by Rachele Reina, Grand Compliance
Artificial intelligence (AI) stands at the forefront of technological innovation, and the European Union (EU) is on the verge of pioneering the regulation of this transformative field through the AI Act. This ground-breaking legislation represents the world’s first comprehensive AI law, carefully designed to govern the use of AI within the EU. In an age where individuals can interact with computer-generated personas, often shaped by AI itself, ensuring responsible AI usage has become a paramount concern.
The European Union has taken a proactive stance towards AI regulation, aligning it with a broader digital strategy. The overarching goal is to create an environment that promotes the development and application of AI while simultaneously safeguarding the well-being and rights of individuals. The potential benefits of AI adoption are extensive, spanning from improved healthcare to safer transportation, efficient manufacturing, and sustainable energy solutions.
To realize this vision, the European Commission introduced a comprehensive regulatory framework for AI in April 2021. This framework is built upon the evaluation and classification of AI systems, categorizing them based on the risks they pose to users. These risk levels play a pivotal role in determining the extent of regulation, thereby setting global standards for AI governance within the EU.
The European Parliament is at the forefront of efforts to ensure that AI systems used within the EU adhere to strict principles. These principles encompass security, transparency, traceability, non-discrimination, and environmental responsibility. The Parliament advocates for human oversight of AI systems to prevent undesirable outcomes, emphasizing the importance of maintaining human control over AI applications. Furthermore, the Parliament aims to establish a uniform, technology-neutral definition of AI that can be applied to all forthcoming AI systems, providing much-needed clarity and consistency.
The crux of the new regulations lies in the differentiation of AI systems based on the risks they entail. While many AI applications present minimal risk, they still require assessment and compliance. AI systems classified as presenting an unacceptable risk are deemed hazardous to individuals and society and will be prohibited. This category includes cognitive behavioral manipulation targeting individuals or specific vulnerable groups, social scoring, and real-time remote biometric identification systems, including facial recognition. Certain exceptions may be considered, such as “post” remote biometric identification systems, but only with court authorization.
High-risk AI systems include those that could have a detrimental impact on safety or fundamental rights. These fall into two categories: AI systems used in products subject to the EU’s product safety regulations, and AI systems operating in eight specific domains, which require registration in an EU database. All high-risk AI systems must undergo assessments both before entering the market and throughout their lifecycle.
Generative AI, like ChatGPT, must adhere to transparency requirements, such as disclosing that content was generated by AI, designing models to prevent the generation of illegal content, and publishing summaries of copyrighted data used for training.
Limited-risk AI systems must comply with minimal transparency requirements, enabling users to make informed decisions.
On June 14, 2023, Members of the European Parliament (MEPs) endorsed the Parliament’s negotiation stance on the AI Act. Negotiations will now commence with EU member states in the Council to finalize the legislation. The objective is to reach a consensus by the end of this year.
This comprehensive proposal for AI regulation within the EU is a response to the dynamic landscape of AI technology. It seeks to strike a balance between fostering AI innovation and safeguarding fundamental rights and EU values. The proposal focuses on creating a legal framework for trustworthy AI, reflecting the commitment of the European Commission and the aspirations of the European Parliament.
The key objectives include ensuring that AI systems meet fundamental rights and safety requirements, and facilitating the development of a single market for lawful, safe, and trustworthy AI applications while preventing market fragmentation.
To achieve these objectives, the proposal adopts a balanced and proportionate approach, applying regulations based on the level of risk posed by AI systems. It introduces a risk-based framework that categorizes AI applications and sets requirements accordingly. This approach allows for the responsible development and deployment of AI without unduly hindering technological progress or imposing excessive costs.
The proposal aligns with EU legislation applicable to sectors where high-risk AI systems are already used or anticipated. It complements existing EU laws on data protection, consumer protection, non-discrimination, gender equality, and more. The proposal respects the General Data Protection Regulation (GDPR) and the Law Enforcement Directive. It also integrates with sectoral safety legislation to ensure consistency, prevent duplication, and minimize additional burdens for high-risk AI systems that serve as safety components of products.
Furthermore, the proposal enhances the EU’s engagement on the global stage in shaping norms and standards for trustworthy AI, emphasizing the importance of adhering to Union values and interests. It fosters the development of an ecosystem of trust in AI in Europe, furthering the Union’s role in promoting technology that aligns with the well-being of its citizens.