On the Transparency of Artificial Intelligence Algorithms from a Legal Perspective

Keywords: algorithm, artificial intelligence, AI ethics, transparency, trust, accountability

Abstract

In the current era of active practical development of artificial intelligence (AI), lawyers face the question of how to resolve the ‘black box’ problem, i.e. the incomprehensibility and unpredictability of the decisions that AI makes. Developing rules that maintain the transparency and comprehensibility of AI algorithms enables artificial intelligence to be incorporated into conventional legal frameworks, thereby eliminating the threat to the concept of legal liability. In private law, protecting consumers from major online platforms makes algorithm transparency a key issue, transforming the obligation to provide information to consumers, which can now be described by the formula ‘to know + to understand’. In public law, states are likewise unable to adequately protect citizens from the harm caused by their dependence on algorithmic applications in the provision of public services; the only way to counter this is through knowledge and understanding of how algorithms work. Fundamentally new regulation, including requirements for algorithm transparency, is required to bring the use of AI within a legal framework. Experts are actively discussing the development of a regulatory framework that would establish a system for observing, monitoring and provisionally authorising the use of AI technologies. Measures are being developed for an ‘algorithmic accountability policy’ and a ‘transparency through design’ framework, which address issues throughout AI development, with an emphasis on ongoing stakeholder engagement and organisational openness, as well as the implementation of explainable AI systems. On the whole, the proposed approaches to regulating AI and ensuring transparency are quite similar, as are the predictions about the role of transparent algorithms in building trust in AI. Of particular interest is the concept of ‘algorithmic sovereignty’, which refers to a democratic state’s ability to govern the development, deployment and impact of AI systems in accordance with its own legal, cultural and ethical norms. This model is designed to promote the harmonious coexistence of different states, which in turn fosters the harmonious coexistence of humanity and AI. Overall, although the use of AI differs ideologically in the private and public spheres, the transparency of algorithms is equally important in both and ultimately increases the likelihood that AI can be brought under effective legal regulation.

Author Biography

Elvira V. Talapina, Institute of State and Law, Russian Academy of Sciences

Doctor of Sciences (Law), Chief Researcher, Institute of State and Law, Russian Academy of Sciences, 10 Znamenka Str., Moscow 119019, Russia, talapina@mail.ru

Published
2025-12-12
How to Cite
Talapina E. V. (2025). On the Transparency of Artificial Intelligence Algorithms from a Legal Perspective. Legal Issues in the Digital Age, 6(4), 4–24. https://doi.org/10.17323/2713-2749.2025.4.4.24
Section
Artificial Intelligence and Law