The European Union has finalised a historic deal! Check out this AI Act cheat sheet ahead of its enactment


On Friday, December 8, Europe reached a provisional deal on landmark European Union (EU) rules governing the use of artificial intelligence (AI). The political agreement is seen as a landmark moment for the EU, and the rest of the process to enact the AI Act is now expected to follow. As per reports, the sticking points where the bloc struggled to find consensus were governments’ use of AI in biometric surveillance and how to regulate AI systems such as ChatGPT. Now that the agreement is in place, you should know the key elements of the AI Act and how it may shape the future of this emerging technology.

These key points were shared by Oliver Patel, Enterprise AI Governance Lead at AstraZeneca, on LinkedIn. Posting them as an image titled an ‘AI Act cheat sheet’, he said, “Now the dust has settled on Friday’s announcement of a political agreement on the AI Act, it’s time to delve into the details”. It should also be noted that these points come only from the publicly available text; once the full text becomes available, more items will be added to the list.

Key points to know about the AI Act

With this agreement, the AI Act has become the world’s first comprehensive AI law. It is expected to come into force in 2026. The draft legislation focuses on transparency in data sharing and on data safety, as well as establishing how the regulatory framework will function in tandem with the marketplace.

First, let us take a look at the basics of the AI Act.

  • Definition of AI: It has been aligned with the recently updated OECD definition.
  • The new definition is: An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
  • Extraterritorial: The Act also applies to organizations outside the EU whose AI systems are used within the EU.
  • Exemptions: These define where the AI Act does not apply, including national security, military and defense; R&D; and open source (partially).
  • Compliance grace periods of between 6 and 24 months
  • Risk-based: The Act defines tiers of AI risk. The most dangerous systems are classed as Prohibited AI, followed by High-Risk AI, Limited-Risk AI, and Minimal-Risk AI.
  • Extensive requirements for ‘Providers’ and ‘Users’ of High-Risk AI
  • Generative AI: Specific transparency and disclosure requirements have been introduced.

Prohibited AI system

The proposed amendment to the AI Act sets out the rationale for prohibiting certain AI systems. The text states, “Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child”.

The following are the areas where AI has been prohibited in the AI Act, as per Patel.

  • Social credit scoring systems
  • Emotion recognition systems at work and in education
  • AI used to exploit people’s vulnerabilities (e.g., age, disability)
  • Behavioural manipulation and circumvention of free will
  • Untargeted scraping of facial images for facial recognition
  • Biometric categorization systems using sensitive characteristics
  • Specific predictive policing applications
  • Law enforcement use of real-time biometric identification in public (apart from in limited, pre-authorized situations)

High-risk AI systems

The proposed legislation says the following about High-risk AI systems: High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law.

The following are the areas where the use of AI is considered high-risk and where special emphasis will be placed on regulatory compliance.

  • Medical devices
  • Vehicles
  • Recruitment, HR, and worker management
  • Education and vocational training
  • Influencing elections and voters
  • Access to services (e.g., insurance, banking, credit, benefits, etc.)
  • Critical infrastructure management (e.g., water, gas, electricity, etc.)
  • Emotion recognition systems
  • Biometric identification
  • Law enforcement, border control, migration and asylum
  • Administration of justice
  • Specific products and/or safety components of specific products

With this in mind, the following are the key requirements for high-risk AI, highlighted in the proposed AI Act, shared by Patel.

  • Fundamental rights impact assessment and conformity assessment
  • Registration in a public EU database for high-risk AI systems
  • Implement risk management and quality management system
  • Data governance (e.g., bias mitigation, representative training data, etc.)
  • Transparency (e.g., Instructions for Use, technical documentation, etc.)
  • Human oversight (e.g., explainability, auditable logs, human-in-the-loop, etc.)
  • Accuracy, robustness, and cyber security (e.g., testing and monitoring)

General purpose AI

The original AI Act made no mention of general-purpose AI. However, the amendment proposes the following definition: ‘General purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.

The following key elements have been added for such AI systems.

  • Distinct requirements for General Purpose AI (GPAI) and Foundation Models
  • Transparency for all GPAI (e.g., technical documentation, training data summaries, copyright and IP safeguards, etc.)
  • Additional requirements for high-impact models with systemic risk: model evaluations, risk assessments, adversarial testing, incident reporting, etc.
  • Generative AI: individuals must be informed when interacting with AI (e.g., chatbots); AI content must be labeled and detectable (e.g., deepfakes)

Penalties and enforcement

The proposed AI Act highlights, “In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, applicable to infringements of this Regulation by any operator, and shall take all measures necessary to ensure that they are properly and effectively implemented and aligned with the guidelines issued by the Commission and the AI Office”.

In accordance with that, the following have been established.

  • Up to 7 percent of global annual turnover or €35m for prohibited AI violations
  • Up to 3 percent of global annual turnover or €15m for most other violations
  • Up to 1.5 percent of global annual turnover or €7.5m for supplying incorrect info
  • Caps on fines for SMEs and startups
  • European ‘AI Office’ and ‘AI Board’ established centrally at the EU level
  • Market surveillance authorities in EU countries to enforce the AI Act
  • Any individual can make complaints about non-compliance
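The fine caps above pair a percentage of global turnover with a fixed euro amount. In GDPR-style penalty regimes the applicable ceiling is generally the higher of the two; assuming the AI Act follows the same pattern (the final text should be checked), a rough illustration of how the cap scales with company size might look like this — the function name and the “higher of the two” rule are assumptions for the sketch:

```python
def fine_cap(global_turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Illustrative penalty ceiling: the higher of a share of global annual
    turnover or a fixed amount (assumed GDPR-style rule, not the Act's text)."""
    return max(global_turnover_eur * pct, fixed_eur)

# Prohibited-AI tier: up to 7% of turnover or €35m
print(fine_cap(2_000_000_000, 0.07, 35_000_000))  # large firm: the 7% share dominates
print(fine_cap(100_000_000, 0.07, 35_000_000))    # smaller firm: the €35m figure applies
```

For a company with €2bn turnover the percentage-based cap (€140m) exceeds the fixed €35m, while for a €100m-turnover company the fixed amount is the larger figure — which is why both numbers appear in each bullet above.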
