Introduction
The Internet has become a potent force driving a comprehensive revolution in the present era. It has streamlined communication, transportation, education, healthcare and production processes, making them more accessible and efficient. Artificial intelligence (AI) has pushed the boundaries of how computers operate and function, making human lives easier.
AI enables machines to act seamlessly and perform many tasks that would otherwise require human intelligence. The year 2022 marked a significant milestone in the widespread integration of AI technology, with the Generative Pre-trained Transformer (GPT) playing a central role in driving this transformation. OpenAI's ChatGPT, a prominent application of GPT technology, captured the imagination of many and became almost synonymous with AI for a broader audience due to its natural language processing capabilities.
AI is particularly useful for improving prediction, optimising operations and resource allocation, and personalising services. However, the implications of AI systems for fundamental rights protected under the EU Charter of Fundamental Rights, as well as the safety risks for users when AI technologies are embedded in products and services, are raising concerns.
Most notably, AI systems may jeopardise fundamental rights such as the right to non-discrimination, freedom of expression, human dignity, personal data protection and privacy. As AI development accelerates, efforts to regulate it are intensifying as well. Each nation is striving to establish its own regulatory framework, potentially creating further divisions within the global digital market.
European Union’s proposed AI Act
Given the fast development of these technologies in recent years, AI regulation has become a central policy question in the European Union (EU). Policymakers pledged to develop a ‘human-centric’ approach to AI – to ensure that Europeans can benefit from new technologies developed and functioning according to the EU’s values and principles. In its 2020 White Paper on Artificial Intelligence, the European Commission committed to promoting the uptake of AI and addressing the risks associated with certain uses of this new technology.
While the European Commission initially adopted a soft-law approach, publishing its non-binding 2019 Ethics Guidelines for Trustworthy AI and accompanying Policy and Investment Recommendations, it has since shifted toward a legislative approach, calling for the adoption of harmonised rules for the development, placing on the market and use of AI systems.
The European Commission tabled a proposal for an EU regulatory framework on artificial intelligence in April 2021. The draft AI act is the first-ever attempt to enact a horizontal regulation for AI. The proposed legal framework focuses on the specific utilisation of AI systems and the associated risks. The Act affects providers, deployers, importers, distributors and product manufacturers.
Purpose of the Act
The Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:
- Ensure that AI systems placed on the Union market and used are safe and respect existing laws on fundamental rights and Union values;
- Ensure legal certainty to facilitate investment and innovation in AI;
- Enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems;
- Facilitate the development of a single market for lawful, safe and trustworthy AI applications, and prevent market fragmentation.
Scope of the Act
The new rules will apply primarily to providers of AI systems established within the EU or in a third country who place AI systems on the EU market or put them into service in the EU, as well as to users of AI systems located in the EU. To prevent circumvention of the regulation, the new rules also apply to providers and users of AI systems located in a third country where the output produced by those systems is used in the EU. However, the draft regulation does not apply to AI systems developed or used exclusively for military purposes, nor to public authorities in a third country or international organisations using AI systems in the framework of international agreements for law enforcement and judicial cooperation.
The proposed definition for AI
Article 3(1) of the draft Act states that:
Artificial intelligence system means: …software that is developed with [specific] techniques and approaches [listed in Annex 1] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.
Annex 1 of the proposal lays out a list of techniques and approaches that are used today to develop AI. Accordingly, the notion of ‘AI system’ will refer to a range of software-based technologies that encompasses ‘machine learning’, ‘logic and knowledge-based systems’, and ‘statistical’ approaches. This broad definition covers AI systems that can be used on a stand-alone basis or as a component of a product. Furthermore, the proposed legislation aims to be future-proof and cover current and future AI technological developments.
However, stakeholders have expressed concern with the definition of AI in the proposed Act. The Big Data Value Association stresses that the definition of AI systems is quite broad and will cover far more than what is subjectively understood as AI – including the simplest search, sorting and routing algorithms, which will consequently be subject to new rules. Access Now, an association defending users' digital rights, argues that the definitions of 'emotion recognition' and 'biometric categorisation' are technically flawed, and recommends adjustments.
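To make the breadth concern concrete, the following minimal sketch (purely illustrative, not drawn from the Act or from any stakeholder submission) shows the kind of trivial rule-based software the Big Data Value Association has in mind: it involves no machine learning, yet it generates 'recommendations' for a human-defined objective and could arguably fall within the proposed definition.

```python
# A purely illustrative sketch: a trivial rule-based product
# recommender. It uses only plain filtering and sorting, yet it
# produces 'recommendations ... for a given set of human-defined
# objectives', so under the broad Article 3(1) wording it could
# arguably count as an 'AI system'.

def recommend_products(products, max_price):
    """Return products at or under a price threshold, cheapest first."""
    affordable = [p for p in products if p["price"] <= max_price]
    # Simple sorting -- exactly the kind of basic algorithm
    # stakeholders warn would be swept in by the new rules.
    return sorted(affordable, key=lambda p: p["price"])

catalogue = [
    {"name": "laptop", "price": 900},
    {"name": "mouse", "price": 20},
    {"name": "monitor", "price": 150},
]
print(recommend_products(catalogue, max_price=200))
# [{'name': 'mouse', 'price': 20}, {'name': 'monitor', 'price': 150}]
```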
The risk-based approach
The use of AI, with its specific characteristics (e.g. opacity, complexity, dependency on data, autonomous behaviour), can adversely affect several fundamental rights and users’ safety. The draft AI act hence distinguishes between AI systems posing
- unacceptable risk,
- high risk,
- limited risk, and
- low or minimal risk.
AI applications will be regulated only as strictly as necessary to address the specific level of risk they pose (see the illustrative sketch below).
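The sketch below illustrates this tiered logic. The example mappings are simplified assumptions for exposition only; the Act itself assigns tiers through Article 5, Article 6, Annex III and the transparency provisions, not through a lookup table like this.

```python
# An illustrative sketch of the draft act's four-tier, risk-based
# approach. Tier assignments here are hypothetical examples echoing
# systems discussed in this article, not an authoritative
# classification under the Act.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (Title II, Article 5)"
    HIGH = "permitted subject to strict requirements (Title III, Article 6)"
    LIMITED = "permitted subject to transparency obligations"
    MINIMAL = "permitted with no additional obligations"

# Hypothetical example systems and the tiers they would plausibly fall
# under: deep-fake *detection* is designated high risk, while chatbots
# and deep fakes themselves carry only transparency duties.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "deep-fake detection system": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "AI-generated deep-fake content": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```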
Unacceptable risk: prohibited AI practices
Title II (Article 5) of the proposed AI act explicitly bans harmful AI practices that are considered a clear threat to people's safety, livelihoods and rights, because of the 'unacceptable risk' they create. These include, for example, AI systems deploying subliminal or manipulative techniques, systems exploiting the vulnerabilities of specific groups, social scoring by public authorities and, with narrow exceptions, 'real-time' remote biometric identification in publicly accessible spaces for law enforcement purposes.
High Risk: regulated high-risk AI systems
Title III (Article 6) of the proposed AI act regulates 'high-risk' AI systems that could adversely affect people's safety or their fundamental rights. The draft text distinguishes between two categories of high-risk AI systems: those used as safety components of products (or which are themselves products) covered by EU product-safety legislation, and stand-alone AI systems deployed in sensitive areas listed in Annex III, such as education, employment, law enforcement and migration.
Limited Risk: transparency obligation
AI systems presenting 'limited risk' – such as systems that interact with humans (e.g. chatbots), emotion recognition systems, biometric categorisation systems, and AI systems that generate or manipulate image, audio or video content (e.g. deep fakes) – will be subject to a limited set of transparency obligations.
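As a concrete illustration of what such a transparency duty might look like in practice, the hypothetical sketch below attaches a disclosure to every chatbot reply. The Act mandates that users be informed they are interacting with an AI system; this particular mechanism (a disclosure field on each reply) is an assumption for illustration, not a pattern prescribed by the Act.

```python
# A hypothetical sketch of operationalising the 'limited risk'
# transparency duty for a chatbot. The disclosure mechanism shown
# here is an assumption; the Act requires disclosure but does not
# prescribe any specific technical implementation.

from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    disclosure: str  # shown to the user alongside every reply

def reply(user_message: str) -> BotReply:
    # Placeholder for the chatbot's actual response logic.
    answer = f"Echo: {user_message}"
    return BotReply(
        text=answer,
        disclosure="You are interacting with an automated system, not a human.",
    )

msg = reply("What are my refund rights?")
print(msg.disclosure)
print(msg.text)
```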
Low or minimal risk: no obligation
All other AI systems presenting only low or minimal risk can be developed and used in the EU without conforming to any additional legal obligations. However, the proposed AI act envisages the creation of codes of conduct that encourage providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems.
Access Now argues that the provisions concerning prohibited AI practices (Article 5) are too vague, and proposes a wider ban on the use of AI to categorise people based on physiological, behavioural or biometric data, for emotion recognition, as well as for dangerous uses in the context of policing, migration, asylum and border management.
Impact of the EU AI Act
The Act will exert a profound influence across various dimensions – the EU community, economy, technological landscape and investment climate – extending even to non-EU nations. A significant facet of this impact is the change it brings to users' rights, which is a key goal of the Act.
Impact on human rights
Human rights take a central role within the framework of the Act, which has substantial potential to affect them on a wide scale. Against the backdrop of numerous global human rights violations, its primary impetus is to leverage technology to mitigate such abuses, within both the physical and digital spheres. Consequently, the Act places significant emphasis on the regulation and close examination of high-risk AI systems, given their capacity to jeopardise fundamental rights. Of particular concern are discrimination and gender bias, which underscore the need for stringent oversight and regulation of these systems.
In his article on ‘The EU Artificial Intelligence Act and its Human Rights Limitations’, Louis Holbrook highlights key aspects of the legislation. One notable feature is the Act’s approach to AI systems involved in generating or manipulating content, particularly deep fakes. It emphasises consumers’ right to be informed when interacting with AI instead of humans, imposing an obligation on deep-fake creators to disclose artificial manipulation, except when necessary for freedom of expression and the arts.
However, a concern arises because the Act designates systems capable of detecting deep fakes as 'high risk' while categorising deep fakes themselves as 'limited risk'. Moreover, the Act provides no explicit sanctions for non-compliance with the disclosure obligation, which raises privacy concerns, especially in cases of non-consensual pornographic deep-fake content that disproportionately affects women.
Impact on the economy
The EU AI Act is poised to enhance consumer trust significantly, thereby fostering increased adoption in alignment with the Act's overarching and specific goals. This heightened trust is expected to drive a surge in utilisation, even in the face of potential employment-related concerns. Furthermore, the Act's emphasis on standardisation is anticipated to give consumers greater confidence in embracing AI products. National market surveillance authorities will be responsible for assessing operators' compliance with the obligations and requirements for high-risk AI systems.
However, these developments are also expected to bring compliance costs, a potential slowdown in innovation, and effects on market competitiveness. Technology firms will need to invest more and broaden their resource allocation to meet the standards laid out by the Act, and smaller companies seeking to enter the AI sector could face challenges because of the regulatory requirements.
Impact on investment
The development of AI products requires resources, and some tech firms are financed through shareholder investment, debentures and other instruments. The entry into force of the EU AI Act will affect this sector too: the nature of the regulation and of market forces will determine the effect. Investment priorities may shift, leading to increased investment in responsible AI.
The regulations set by the EU AI Act could spur investment in AI research and development, especially in areas that align with the Act's objectives and compliance requirements. Companies may allocate resources to ensure their AI systems meet the established standards, contributing to the growth of responsible and ethical AI technologies, and may adjust their investment strategies to favour AI technologies that are more likely to comply with the regulations. This could redirect funds away from high-risk or ethically problematic AI applications, potentially shaping the direction of technological advancement.
Impact on developers, providers and distributors of AI outside the EU
The EU AI Act has an extra-territorial effect. As stated earlier, the scope of the Act covers providers of AI systems established within the EU or in a third country placing AI systems on the EU market or putting them into service in the EU, as well as users of AI systems located in the EU. This means an AI product that is produced by a tech company outside the EU is also affected by the regulation.
Developers outside the EU, in countries with their own AI regulations, will have to build their products to comply both with the laws of their countries and with those of the EU if the product is to be used in the EU. This may lead to fragmentation: if the EU AI Act's requirements differ significantly from regulations in other regions, AI development practices could splinter, making it more complex for developers to navigate differing standards in various markets. The method of enforcement will also affect developers outside the EU, unless special enforcement rules are made for such developers.
Next phase
The next stage is the trilogue, in which the European Parliament negotiates with the Council of the EU and the European Commission. The trilogue serves to reach a provisional agreement on a legislative proposal that is acceptable to both Parliament and the Council – collectively referred to as the co-legislators.
Conclusion
The Act aims to improve the functioning of the internal market by establishing a consistent legal framework for the development, marketing and use of artificial intelligence, in line with the Union's values. The Regulation pursues several overriding public interests, including a high level of protection of health, safety and fundamental rights. It also secures the free cross-border movement of AI-based goods and services, preventing Member States from imposing restrictions on the development, marketing and use of AI systems unless expressly authorised by the Regulation.