EU AI Legislation
The EU Artificial Intelligence Act became law on 13 March 2024. Other jurisdictions are moving at different speeds. China already has AI laws in place that prohibit the generation of fake news, and since 15 August 2023 restrictions have applied to generative AI applications, including a requirement that they adhere to China's core socialist values. In the UK, the Artificial Intelligence (Regulation) Bill (a private member's bill) is, as of April 2024, at the committee stage in the House of Lords. The bill proposes a UK AI Authority and AI Officers to oversee the safe and ethical use of AI, but it has yet to pass to the House of Commons and so may never become law. On 1 April 2024 the UK and USA signed a Memorandum of Understanding to work together on developing tests for advanced AI models, and in March 2024 the UN passed resolution A/78/L.49 calling for safe, secure and trustworthy AI systems. Neither the memorandum nor the resolution carries any legal force.
The EU Act, by contrast, does put powers in place to regulate the use of AI. Applications of AI are assigned to one of three risk categories.
- First, applications and systems that create an unacceptable risk, such as the government-run Social Credit System of China, which tracks businesses and individuals and ranks them by their trustworthiness. This category also includes untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, which appears to cover the activity of companies such as Clearview AI, whose facial image scraping and sharing has already attracted legal challenges. Real-time biometric identification in public spaces is allowed only for limited law enforcement use cases, such as searching for missing persons or identifying suspects of serious crimes. Using AI to assess the risk of an individual committing a crime (shades of Minority Report) is banned.
- Secondly, high-risk applications: these involve any degree of profiling from personal data, for example a CV-scanning tool that ranks job applicants. They will be subject to compliance rules requiring risk management, data governance and detailed record keeping.
- Finally, applications not explicitly banned or listed as high-risk are largely left unregulated.
General purpose AI (GPAI) models are defined as systems trained on large amounts of data using self-supervision at scale, capable of serving a variety of purposes, both for direct use and for integration into other AI systems. GPAI models are not regulated in themselves, but their use within high-risk AI applications is: such use falls under the category of the application making use of them, even if the GPAI model is released under a free and open licence.
An AI Office will be set up to govern and implement the legislation; it will assess compliance and investigate complaints relating to infringements of the Act. The Act is already law but will apply after 6 months (September 2024) for prohibited AI systems, after 12 months for GPAI, and after 24 months for high-risk AI systems.
Within the UK, the ICO has issued a consultation response to the EU Act, broadly supporting the concept of regulating AI through a risk-based approach.