There is no doubt that the tool that will shape our generation in the coming years is artificial intelligence ("AI"). In recent years, the development of AI has accelerated rapidly at a global level. Although AI is not a subject of study exclusive to the 21st century, given that 20th-century scientists were already studying and developing AI systems (e.g., the Turing machine), the advances achieved to date have made it a tool that is easily accessible, open to the public, and with a direct effect on society. The emergence of regulation for this tool is therefore indispensable.
On December 9, 2023, the European Parliament, jointly with other institutions of the European Union ("EU"), reached a political agreement on the final version of the European Artificial Intelligence Act (the "AI Act"), which will not come into force immediately but is expected to be fully applicable in EU Member States as of approximately 2026. While the EU's main intention in issuing this law is to ensure that AI systems used in European countries are safe and respect fundamental rights, we cannot lose sight of the fact that the EU equally intends this regulation to serve as the basis for the development of AI laws in other jurisdictions.
The AI Act regulates the use of AI systems according to the risks they pose to their users, imposing heavier mandatory obligations on those systems that pose greater risks to users' rights. Accordingly, the regulation establishes four categories of AI with the following characteristics:
- Unacceptable-risk AI. This category includes AI systems that pose a threat to individuals, including systems that manipulate the behavior of persons or specific vulnerable groups, perform discriminatory classification of individuals, or involve real-time biometric identification. Such AI systems will be prohibited as a general rule, with specific exceptions for biometric identification systems.
- High-risk AI. These are further divided into two categories, both of which involve a negative impact on the safety or fundamental rights of users: (i) AI systems used in products subject to EU product safety legislation; and (ii) AI systems that, by reason of their content, must be registered in an EU database. AI systems of this type will be allowed; however, they will be subject to continuous evaluation by a third party.
- Limited-risk AI. These systems must comply with minimum transparency requirements that allow users to make informed decisions when using them. They comprise generative AI systems such as ChatGPT, which must disclose to the user that the content is being generated by an AI and must be designed to prevent the generation of illegal content.
- Minimal-risk AI. These systems represent a minimal risk for users, and therefore compliance with the applicable behavioral requirements will be voluntary.
The AI Act applies to users, importers, distributors, manufacturers, and providers who place AI systems on the market, provided that the system is used or has effects in the European Union, regardless of its physical location. Non-compliance entails very significant penalties for the companies involved, which can amount to a percentage of the company's turnover.
We consider the regulation of AI fundamental for protecting the fundamental rights and security of users, whose numbers grow every day. We therefore recommend closely following the implementation of the AI Act and the regulations issued globally to meet the needs of the technology industry, which sooner or later should be studied in our country for the development of a robust Mexican AI regulation.
For further information, we suggest consulting the following source: European Commission, Press Release, "Commission welcomes political agreement on Artificial Intelligence Act," Brussels, December 9, 2023. Accessed January 10, 2024. https://ec.europa.eu/commission/presscorner/detail/es/ip_23_6473
Juan Ignacio Ferrer R.