It establishes rules for the development, commercialisation and use of AI-driven products, services, and systems. The first draft was published on 21 April 2021. The Act also aims to include “measures in support of innovation”, such as the use of AI regulatory sandboxes. Scientific research falls outside the scope of the Act, and general purpose AI systems (image or speech recognition, audio or video generation, pattern detection, question answering, and translation) should not be considered within scope.
The Act takes a risk-based approach, categorising all AI into three risk categories of activity: unacceptable risk (e.g. social scoring), high risk (e.g. medical devices and consumer creditworthiness assessments), and low risk.
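As a rough illustration of this tiering, the sketch below models it as a simple classification. The tier descriptions and the "spam filter" low-risk example are assumptions added for illustration; the other examples come from the article itself, and a real classification would follow the Act's annexes rather than a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """Invented labels summarising the three tiers described above."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to strict obligations"
    LOW = "permitted with minimal obligations"

# Illustrative mapping; "spam filter" is an invented low-risk example.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "medical device": RiskTier.HIGH,
    "consumer creditworthiness assessment": RiskTier.HIGH,
    "spam filter": RiskTier.LOW,
}

print(EXAMPLES["social scoring"].value)  # -> "prohibited outright"
```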
The premise behind social scoring is that an AI system would assign a starting score to every individual, which would increase or decrease depending on certain actions or behaviours. Such a score would not necessarily be relevant or fair, depending on the variables in the model (e.g. using gender could generate “financial exclusion and discrimination”). The Act draws a distinction between social scoring and “lawful evaluation practices of natural persons”, permitting the latter. Accordingly, processing an individual’s financial information to ascertain their eligibility for insurance policies may be permitted, although this deserves special consideration and is high risk.
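To make the mechanism concrete, here is a minimal, hypothetical sketch of the scoring logic described above. The starting score, the behaviours and the adjustments are all invented for illustration and do not come from the Act.

```python
# Hypothetical illustration of the social-scoring mechanism described above.
# All values (starting score, behaviours, adjustments) are invented.

STARTING_SCORE = 500

# Invented mapping of observed behaviours to score adjustments.
ADJUSTMENTS = {
    "paid_bill_on_time": +10,
    "missed_loan_payment": -25,
    "traffic_violation": -15,
}

def social_score(behaviours: list[str]) -> int:
    """Return the score after applying each observed behaviour in order."""
    score = STARTING_SCORE
    for behaviour in behaviours:
        score += ADJUSTMENTS.get(behaviour, 0)
    return score

# Example: one individual's score after three observed events.
print(social_score(["paid_bill_on_time", "missed_loan_payment", "traffic_violation"]))
# -> 470
```

The fairness concern is visible even in this toy version: if a protected attribute such as gender fed into the adjustment table, the output could drive “financial exclusion and discrimination” regardless of an individual's actual behaviour.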
On 29 November 2021, a Compromise Text was published, providing further detail on the obligations that providers of high-risk AI systems must adhere to. Its Annex III outlines eight areas considered to be high risk: biometric systems; critical infrastructure and protection of the environment; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of private services and public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes.
The draft Act currently includes an obligation that high-risk AI systems have data sets which are ‘free of errors’, but it has been questioned whether that is achievable. As a result, the EU Committee on Industry, Research and Energy has recently proposed amending some of the standards to what it considers more realistic: “High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, assessment, validation and testing data sets considering the latest state-of-the-art measures, according to the specific market segment or scope of application… Unsupervised learning and reinforcement learning shall be developed on the basis of training data sets that meet the quality criteria referred to in paragraphs 2 to 5… Providers of high-risk AI systems that utilise data collected and/or managed by third parties may rely on representations from those third parties with regard to quality criteria referred to in paragraph 2, points (a), (b) and (c)… Training, validation and testing data sets are designed with the best possible efforts to ensure that they are relevant, representative, and appropriately vetted for errors in view of the intended purpose of the AI system. In particular, they shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used.”
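The sketch below suggests what such “best possible efforts” vetting might look like in practice: checking a training set for missing values and for representation of each group the system is intended to be used on. The column names, threshold and data are all invented for illustration; the proposed amendment prescribes no particular method.

```python
# Hypothetical data-vetting sketch; thresholds and column names are invented.
import pandas as pd

MIN_GROUP_SHARE = 0.25  # invented threshold for "representative"

def vet_training_data(df: pd.DataFrame, group_column: str) -> list[str]:
    """Return human-readable findings about errors and group representation."""
    findings = []

    # Vet for errors: flag columns containing missing values.
    for column in df.columns:
        missing = df[column].isna().sum()
        if missing:
            findings.append(f"{column}: {missing} missing value(s)")

    # Statistical properties per group: flag under-represented groups.
    shares = df[group_column].value_counts(normalize=True)
    for group, share in shares.items():
        if share < MIN_GROUP_SHARE:
            findings.append(f"group '{group}' is only {share:.1%} of the data")

    return findings

# Example with an invented data set.
data = pd.DataFrame({
    "age": [34, 51, None, 29, 45],
    "group": ["A", "A", "A", "A", "B"],
})
for finding in vet_training_data(data, "group"):
    print(finding)
# -> age: 1 missing value(s)
# -> group 'B' is only 20.0% of the data
```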
The Act envisages that providers of high-risk AI systems who place those systems on the EU market will register them in the EU database referred to in Article 60.
Finally, questions remain as to when the Act will be adopted and begin to apply to organisations. The GDPR was proposed in 2012 and only became applicable in 2018.
More on https://bit.ly/3a2T2Zm