Artificial Intelligence (AI) is increasingly available and widely used in various forms. AI-enabled tools, such as ChatGPT, are expected to have a significant impact across industries. To ensure the human-centric and ethical development of AI in Europe, to promote the uptake of AI and to address the risks associated with certain uses of the technology, the European Commission adopted a proposal for the world’s first comprehensive set of rules on Artificial Intelligence. Having obtained the opinions of the European Economic and Social Committee, the European Committee of the Regions and the European Central Bank, the Council of the European Union is currently conducting its First Reading of the proposed Act.
Scope
The AI Act, in its current proposed form, is expected to cover a broad range of AI systems, including both standalone software and AI integrated into other products or services. It will apply to AI systems used within the EU, as well as to systems used outside the EU where they affect people or organizations within the EU.
The proposed Act aims to lay the foundations by establishing uniform definitions in relation to the use of AI systems and to be as technology-neutral and future-proof as possible, taking into account the fast technological and market developments related to AI.
The proposed Act further aims to address potential risks associated with AI systems and ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
Risk-Based Approach
One of the central elements of the AI Act is the classification of AI systems according to the level of risk they pose: Minimal or No Risk AI systems, Limited Risk AI systems, High-Risk AI systems and Unacceptable-Risk AI systems.
Unacceptable-Risk AI systems include systems that contravene EU values, such as those with a significant potential to manipulate individuals through subliminal or purposefully manipulative techniques, that exploit people’s vulnerabilities, or that are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics). Such systems are expected to be strictly prohibited.
High-Risk AI systems are defined as AI applications that pose significant potential risks to individuals’ safety, fundamental rights, or societal well-being. Examples of high-risk AI systems include those used in critical infrastructure, transportation, healthcare, law enforcement, and employment.
Conformity Assessment, Certification and EU-Wide Use
To ensure compliance, developers and operators of High-Risk AI systems must undergo a mandatory conformity assessment. The proposed AI Act sets out the framework for the establishment of a notified authority in each Member State to act as an independent third party in conformity assessment procedures, and sets out in detail the harmonised conformity assessment procedures to be followed for each type of High-Risk AI system. The notified bodies will issue certificates confirming the conformity of AI systems with EU requirements.
AI System providers will assume responsibility for their systems’ conformity with EU regulations. Providers will be obliged to draw up a written EU Declaration of Conformity for each AI System they provide and to make it available to the notified authorities of all Member States in which the relevant AI System will be available. The EU Declaration of Conformity identifies the AI System and its Provider, confirms that the System conforms with the AI Act, and refers to the standards and/or specifications with which it complies, as well as the details of the notified body that performed the conformity assessment.
Next Steps
Once the First Reading by the Council of the European Union is completed, the European Parliament is expected to commence its own First Reading of the proposed AI Act, before negotiations on the final form of the regulation can begin between the Parliament and the Council of the European Union.
Should you wish to find out more, contact our team at info@paraschou.com.cy.