The same elements and techniques that power the socio-economic benefits of AI can also create new risks and negative consequences for individuals and society. In response, in April 2021 the European Commission published a draft AI Regulation that is human-centric and respects fundamental rights, with the aim of promoting trust in AI.
Below is a summary of the main aspects of this draft AI Regulation:
1. Scope. The Regulation imposes obligations not only on organisations that provide AI systems, but also on distributors and on organisations that use these systems. In terms of territorial scope, besides applying within the EU, both providers and users of high-risk AI systems may be required to comply with it even where they are established outside the EU.
2. Prohibitions. The Regulation sets out a number of prohibited AI practices. These include, among others, using AI systems to exploit information about a person or group in order to target their vulnerabilities, and general-purpose scoring of individuals where the scoring leads to systematic detrimental treatment of certain persons or groups.
3. Risk approach. The Regulation classifies a number of uses of AI as high-risk, which in practice means that organisations will have to analyse the risks of each AI initiative to determine whether it constitutes a high-risk activity.
4. Data governance and data quality. Organisations must ensure that high-risk AI systems are trained and tested with high-quality data sets, which must be relevant, representative, free of errors, complete, and statistically adequate.
5. Transparency. The Regulation imposes a number of transparency obligations in relation to AI systems, including informing individuals that they are interacting with an AI system, that their personal data is being processed by an emotion recognition system, or that audio-visual content has been artificially created or modified where such content resembles existing persons, objects or events and falsely appears to be authentic.
6. Monitoring obligations. Both providers and users are obliged to monitor the operation of AI systems, keep records of the logs, and interrupt their use in the event of malfunctions.
7. Conformity assessments. AI systems providers must perform a conformity assessment of any high-risk AI system to demonstrate compliance with the relevant provisions of the Regulation.
8. Fines. Infringing certain provisions of the Regulation may lead to fines of up to €30 million or 6% of annual global turnover.
These are the key points of the draft Regulation published on 21 April. The Regulation must now go through the full EU legislative process, which means its approval could be delayed for several years or even stall entirely, as has happened with other initiatives such as the long-awaited ePrivacy Regulation.
In the meantime, now is the perfect time for companies to start designing internal control frameworks for their AI initiatives so that, unlike what happened with the GDPR, this time the Regulation catches them with their homework done. To this end, it is highly advisable to design a framework for the ethical evaluation of AI initiatives that, in addition to establishing a system of good practices within the organisation, can serve as a baseline for subsequent evaluation processes and controls based on the final version of the AI Regulation.