Analysing Risk-Based Approach in the Draft EU Artificial Intelligence Act

Keywords: artificial intelligence, AI systems, large language models, generative AI systems, foundation models, general purpose AI systems, Draft Artificial Intelligence Act, risk-based approach, conformity assessment procedure, audit of AI systems

Abstract

The article delves into the risk-based approach underpinning the draft EU Artificial Intelligence Act. Anticipated to be approved by the end of 2023, this regulation is poised to serve as a cornerstone in the European Union's legal framework for governing the development and deployment of artificial intelligence systems (AI systems). However, the ever-evolving technological landscape continues to present novel challenges to legislators, necessitating ongoing solutions that will span years to come. Moreover, the widespread proliferation of foundation models and general purpose AI systems over the past year underscores the need to refine the initial risk-based approach concept. The study comprehensively examines the inherent issues within the risk-based approach, including the delineation of AI system categories, their classification according to the degree of risk to human rights, and the establishment of optimal legal requirements for each subset of these systems. The research concludes that the construction of a more adaptable normative legal framework mandates differentiation of requirements based on risk levels, as well as across all stages of an AI system's lifecycle and levels of autonomy. The paper also delves into the challenges associated with extending the risk-oriented approach to encompass foundation models and general purpose AI systems, offering distinct analyses for each.

Author Biographies

Dmitryi Kuteynikov, Tyumen State University

Candidate of Sciences (Law)

Osman Izhaev, Tyumen State University

Candidate of Sciences (Law), Senior Researcher

Published
2023-10-31
How to Cite
Kuteynikov D., & Izhaev O. (2023). Analysing Risk-Based Approach in the Draft EU Artificial Intelligence Act. Legal Issues in the Digital Age, 4(3), 97–116. https://doi.org/10.17323/2713-2749.2023.3.97.116