Transparency in Public Administration in the Digital Age: Legal, Institutional, and Technical Mechanisms
Abstract
The article offers a comprehensive analysis of the highly topical issue of ensuring the transparency and explainability of public administration bodies amid the ever-increasing introduction of automated decision-making and artificial intelligence systems into their operations. The authors focus on the legal, organisational, and technical mechanisms designed to implement the principles of transparency and explainability, as well as on the challenges to their operation. The purpose is to describe the existing and proposed approaches in a comprehensive and systematic manner, to identify the key risks caused by the non-transparency of automated decision-making systems, and to critically evaluate the potential of various tools to minimise such risks. The methodological basis of the study comprises general scientific methods (analysis, synthesis, the systems approach) and specific methods of legal science, including formal legal and comparative legal analysis.
The work explores the conceptual foundations of the principle of transparency in public administration under technological transformation. In particular, it examines the "black box" problem, which undermines trust in state institutions and creates obstacles to judicial protection. It analyses preventive (ex ante) legal mechanisms, such as mandatory disclosure of the use of automated decision-making systems, of the order and logic of their operation, and of information on the data used, as well as the introduction of pre-audit, certification, and human rights impact assessment procedures. Subsequent (ex post) legal mechanisms are also reviewed, including the evolving concept of a "right to explanation" of a particular decision, the use of counterfactual explanations, and ensuring that users have access to the data that gave rise to a particular automated decision. The authors pay particular attention to the inextricable link between legal requirements and institutional and technical solutions.
The main conclusion is that none of the mechanisms under review is universally applicable. The desired effect can be achieved only through their comprehensive application, their adaptation to the specific context and level of risk, and the close integration of legal norms with technical standards and practical tools. The study highlights the need to further improve legislation detailing the responsibilities of developers and operators of automated decision-making systems, and to foster a culture of transparency and responsibility that maintains the accountability of public administration in the interests of society and every citizen.
Copyright (c) 2025 Kabytov P.P., Nazarov N.A.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors submitting manuscripts for consideration for publication in the Journal accept its policy on licensing, copyright, open access, and the use of repositories.