https://lida.hse.ru/issue/feed
Legal Issues in the Digital Age
Dilyara Kurbanova (lawjournal@hse.ru)
ISSN 2713-2749

“Legal Issues in the Digital Age” is an open-access academic quarterly e-publication providing comprehensive analysis of law in the digital world. The Journal is international in scope; its primary objective is to address the legal issues raised by continually evolving digital technologies and the immediate responses such developments demand.

The target audience of the Journal comprises university professors, post-graduates, research scholars, the expert community, legal practitioners and others interested in modern law and its interaction with information technologies.

The Journal has been published by the National Research University Higher School of Economics (Moscow) since 2020.

Publisher’s address: 20 Myasnitskaya Str., Moscow, Russia 101000.

https://lida.hse.ru/article/view/27516
In Search of the Regulatory Optimum for Digital Platforms: A Comparative Analysis
Alexey Koshel (koshel@hse.ru), Yaroslav Kuzminov (kouzminov@hse.ru), Ekaterina Kruchinskaia (ekruchinskaya@hse.ru), Bogdan Lesiv (blesiv@hse.ru)

The rapid growth of digital platforms and ecosystems has become a significant economic phenomenon on a global scale. This growth is due to the ability of these platforms to provide additional and flexible opportunities that are mutually beneficial for sellers, buyers, and platform workers. As a result, the activities of digital platforms have a positive impact on the gross domestic product of countries worldwide. The study focuses on the regulatory frameworks for digital platforms in Russia and around the world, including the rights and obligations of owners, operators, and users arising from their participation in market transactions. It does not cover digital platforms used in the public sector, or social media and messaging services. The research systematically and integrally applies comparative legal, formal logical, formal doctrinal, and historical legal methods, as well as analytical, synthetic, and hermeneutical methods. Based on the source material, the authors propose a hypothesis of three stages in the growth of platform regulation globally and in Russia. Analysis of this three-stage evolution of e-commerce regulation shows that various branches of law commonly affect different areas of social relations, and different types of platforms, inconsistently. This inconsistency manifests itself in legal gaps and conflicts of legal rules, which make benefits for stakeholders spontaneous rather than the result of systematic interaction within the regulatory framework. The authors identify a major source of legal uncertainty: the absence of standardized terms and harmonized regulatory principles that account for the cross-industry nature of the digital economy. Lessons from global jurisdictions and the three stages of e-commerce regulation reveal that, in its latest phase, the platform economy requires a system of tailored legal definitions to manage its multifaceted activities.
The article proposes conceptual structures that may be employed in the Russian legal system, reflecting the multidimensional nuances of civil, tax, competition, information, and administrative law. Additionally, the authors develop a balanced scheme of general principles that would ensure transparent interaction of digital platforms with society, the state, and economic entities.

Published 2025-07-02. Copyright (c) 2025 Koshel A.S., Kuzminov Y.I., Kruchinskaia E.V., Lesiv B.V.

https://lida.hse.ru/article/view/27517
Shaping Artificial Intelligence Regulatory Model: International and Domestic Experience
Vladimir Buryaga (buriaga@mail.ru), Veronika Djuzhoma (vdzhuzhoma@mail.ru), Egor Artemenko (artemenkoea@gmail.com)

The article analyses AI regulatory models in Russia and other countries. The authors discuss key regulatory trends, principles and mechanisms, with a special focus on balancing incentives for technological development against the minimization of AI-related risks. Attention centres on three principal approaches: “soft law”, experimental legal regimes (ELR) and technical regulation. The research methodology covers a comparative legal analysis of AI-related strategic documents and legislative initiatives, such as the national strategies approved by the U.S., China, India, the United Kingdom, Germany and Canada, as well as regulations and codes of conduct. The authors also explore domestic experience, including the 2030 National AI Development Strategy and the AI Code of Conduct, as well as the use of ELR under the Federal Law “On Experimental Legal Regimes for Digital Innovation in the Russian Federation”. The main conclusions can be summed up as follows. The vast majority of countries, including the Russian Federation, have opted for “soft law” (codes of conduct, declarations), which provides flexible regulation while avoiding excessive administrative barriers. Experimental legal regimes are crucial for validating AI applications, as they allow technologies to be tested in a controlled environment; in Russia, ELR are widely used in transportation, healthcare and logistics. Technical regulation, including standardization, helps foster security and confidence in AI, and the article notes the widespread development of national and international standards in this field. Special regulation (along the lines of the European Union AI Act) has not yet become widespread; a draft law based on the risk-oriented approach is currently under discussion in Russia. The authors argue for gradual, iterative development of the legal framework for AI, so that rigid regulatory barriers do not emerge prematurely. They also note the importance of international cooperation and of adapting best practices to shape an efficient regulatory system.

Published 2025-07-02. Copyright (c) 2025 Buriaga V.O., Djuzhoma V.V., Artemenko E.A.

https://lida.hse.ru/article/view/27528
Trust in Artificial Intelligence: Regulatory Challenges and Prospects
Svetlana Vashurina (svashurina@hse.ru)

The last few years have witnessed a rapid penetration of artificial intelligence (AI) into many walks of life, including medicine, the judicial system, public governance and other important activities. Despite the multiple benefits of these technologies, their widespread dissemination raises serious concerns as to whether they are trustworthy. The article analyses the key factors behind public mistrust in AI while discussing ways to build confidence.
To understand the reasons for mistrust, the author draws on historical context, social research findings and judicial practice. Special focus is placed on the security of AI use, its visibility to users, and responsibility for decision-making. The author also discusses the current regulatory models in this area, including the development of a universally applicable legal framework, regulatory sandboxes and sectoral self-regulation mechanisms, with multidisciplinary collaboration and adaptation of the existing legal system as key factors in this process. Only such an approach will produce balanced development and use of AI systems in the interests of all stakeholders, from vendors to end users. The study employs general methods of analysis, synthesis and systematization, as well as special legal (comparative legal and historical legal) research methods. In analyzing the available data, the author argues for a comprehensive approach to making AI trustworthy. The following hypothesis is proposed on the basis of the study’s findings: trust in AI is a cornerstone of efficient regulation of AI development and use in various areas. The author is convinced that, once AI is made transparent, safe and reliable, with human oversight secured through adequate regulation, the state will be able to maintain purposeful collaboration between humans and technology, setting the stage for AI use in critical infrastructure affecting the life, health and basic rights and interests of individuals.

Published 2025-07-02. Copyright (c) 2025 Vashurina S.S.

https://lida.hse.ru/article/view/27529
Informational Privacy in the Age of Artificial Intelligence: A Critical Analysis of India’s DPDP Act, 2023
Usha Tandon (utandon26@gmail.com), Neeral Kumar Gupta (neeraj_6336700@yahoo.co.in)

Informational privacy, often referred to as data privacy or data protection, concerns an individual’s right to control how their personal information is collected, used and shared. Recent developments in AI have captivated the world, and the Indian population, too, is living through a cyber-revolution. India is gradually becoming dependent on technology for the majority of services obtained in daily life. Use of the Internet and the Internet of Things leaves digital footprints that generate big data. This data can be personal as well as non-personal in nature. Such data about individuals can be utilised to understand their socio-economic profile, culture and lifestyle, and to reveal personal information such as love life, health, well-being, sexual preferences, sexual orientation and various other individual traits. Issues like data breaches, however, have also exposed users of information technology to risks such as cyber-crime and other fraudulent practices. This article critically analyses the recently enacted Digital Personal Data Protection Act, 2023 (DPDP Act) in the light of the following questions: How does it tackle the issues of informational privacy and data processing? What measures does the DPDP Act envisage for the protection of informational privacy? How are individual rights to data protection balanced against the legitimate state interest in ensuring the safety and security of the nation? Is this right available only against the State, or against non-State actors as well?
Having critically analysed the DPDP Act, the article calls for its further refinement in various areas. In particular, it suggests that the Act should require critical decisions based on personal data to undergo human review, ensuring they are not solely the result of automated data processing.

Published 2025-07-02. Copyright (c) 2025 Tandon U., Gupta N.K.

https://lida.hse.ru/article/view/27530
Smart Digital Facial Recognition Systems in the Context of Individual Rights and Freedoms
Oleg Stepanov (soa-45@mail.ru), Denis Basangov (d_basang@mail.ru)

The authors discuss the problem of digital facial recognition technologies in the context of the implementation of individual rights and freedoms. The analysis focuses on whether the use of such technologies is legitimate and on the interpretation of the provisions laying down the underlying procedures. The authors note the significant range of goals already addressed by smart digital systems at the goal-setting stage: the economy, business, robotics, geological research, biophysics, mathematics, avionics, security systems, healthcare, etc. Growing volumes of data and a broader range of technologically complex decision-making objectives require systematization of traditional methods and the development of new decision-making methodologies and algorithms. Progress in machine learning and neural networks will transform today’s digital technologies into self-sustained, self-learning systems intellectually superior to the human mind. Video surveillance coupled with smart facial recognition serves above all public security purposes and can considerably impact modern society. The authors analyse legal approaches to upholding human rights as digital facial recognition systems are increasingly introduced into social practice in Russia, the European Union, the United Kingdom, the United States and China. The purpose of the article is to shed light on the statutory regulation of AI systems used for remote biometric identification of persons. Methods: formal logic, comparison, analysis, synthesis, correlation, generalization. Conclusions: the analysis confirms that facial recognition technologies are progressing considerably faster than their legal regulation. Deployment of such technologies makes possible ongoing surveillance, a form of collecting information on the private life of individuals. Accounting for these factors requires amending national law to define the status of, and rules of procedure for, such data, as well as the ways of informing natural persons that information associated with them is being processed.

Published 2025-07-02. Copyright (c) 2025 Stepanov O.A., Basangov D.A.

https://lida.hse.ru/article/view/27531
Brain-Computer Interface 5.0: Potential Threats, Computational Law and Protection of Digital Rights
Said Gulyamov (said.gulyamov1976@g.mail.com)

The development of neurotechnologies has reached a critical point: direct readout and modulation of brain activity has moved from experimental studies to commercial applications, urgently requiring adequate legal and technological safeguards.
The relevance of this study is prompted by the rapid development of the fifth-generation brain-computer interface (BCI 5.0), a technology that offers unprecedented direct access to neural processes while creating fundamentally new threats to the digital rights of individuals. Existing legal mechanisms have proved inadequate for regulating the entirely new risks of manipulation of consciousness, unauthorized access to neural data and compromised cognitive autonomy. The study focuses on legal and technological mechanisms for protecting digital rights in the context of the introduction of fifth-generation neural interface technologies, including analysis of regulatory gaps, technical vulnerabilities and possible security guarantees. Methodologically, the study rests on a multidisciplinary approach bringing together neuroscience, law and information technology, on comparative analysis of regulatory frameworks, and on inductive derivation of specific regulatory mechanisms. The main hypothesis is that legacy data-protection mechanisms designed for biometric and telecommunication technologies are structurally inadequate for BCI 5.0, and that digital rights can be protected only by a hybrid system combining special legal provisions with technological guarantees delivered through mechanisms of computational law. The author puts forward a minimum set of viable security and confidentiality standards, comprehensive cryptography and blockchain-based applications, as well as detailed legislative recommendations for ethical and safe neurotechnological development with firm guarantees of fundamental human rights in the digital age. The findings are of considerable practical value to legislators, neurotechnology developers, regulatory bodies and advocacy organizations, offering specific evidence-based tools and mechanisms to strike an effective balance between innovative development and the imperatives of protecting human dignity, mental autonomy and cognitive freedom.

Published 2025-07-02. Copyright (c) 2025 Gulyamov S.S.

https://lida.hse.ru/article/view/27532
Transparency in Public Administration in the Digital Age: Legal, Institutional, and Technical Mechanisms
Pavel Kabytov (kapavel.v@yandex.ru), Nikita Nazarov (naznikitaal@gmail.com)

The article offers a comprehensive analysis of the highly relevant topic of ensuring the transparency and explainability of public administration bodies amid the ever-increasing introduction of automated decision-making (ADM) and artificial intelligence systems into their operations. The authors focus on the legal, organisational and technical mechanisms designed to implement the principles of transparency and explainability, as well as on the challenges to their operation. The purpose is to describe the existing and proposed approaches in a comprehensive and systematic manner, to identify the key risks caused by the non-transparency of ADM systems, and to evaluate critically the potential of various tools to minimise such risks. The methodological basis of the study comprises general scientific methods (analysis, synthesis, the systems approach) and specialised methods of legal science, including doctrinal and comparative legal analysis. The work explores the conceptual foundations of the principle of transparency of public administration under conditions of technological transformation.
In particular, it explores the “black box” problem, which undermines trust in state institutions and obstructs judicial protection. It analyses preventive (ex ante) legal mechanisms, such as mandatory disclosure of the use of ADM systems, of the order and logic of their operation and of the data used, and the introduction of preliminary audit, certification and human rights impact assessment procedures. Legal mechanisms for ex post follow-up are also reviewed, including the evolving concept of the “right to explanation” of a particular decision, the use of counterfactual explanations, and ensuring that users have access to the data that gave rise to a particular automated decision. The authors pay particular attention to the inextricable link between legal requirements and institutional and technical solutions. The main conclusion is that none of the mechanisms under review is universally applicable; the necessary effect can be reached only through their comprehensive application, adaptation to the specific context and level of risk, and close integration of legal norms with technical standards and practical tools. The study highlights the need to further improve laws detailing the responsibilities of developers and operators of ADM systems, and to foster a culture of transparency and responsibility that keeps public administration accountable in the interests of society and every citizen.

Published 2025-07-02. Copyright (c) 2025 Kabytov P.P., Nazarov N.A.

https://lida.hse.ru/article/view/27534
The Artificial Intelligence Influence on Structure of Power: Long-Term Transformation
Vladimir Nizov (vnizov12@gmail.com)

Integration of artificial intelligence (AI) into public administration marks a pivotal shift in the structure of political power, transcending mere automation to catalyze a long-term transformation of governance itself. The author argues that AI deployment disrupts the classical foundations of liberal democratic constitutionalism (particularly the separation of powers, parliamentary sovereignty, and representative democracy) by enabling the emergence of algorithmic authority (algocracy), in which decision-making is centralized in opaque, technocratic systems. Drawing on political theory, comparative case studies, and interdisciplinary analysis, the researcher traces how AI reconfigures power dynamics through three interconnected processes: the erosion of transparency and accountability due to algorithmic opacity; the marginalization of legislative bodies as expertise and data-driven rationality come to dominate policymaking; and ideological divergence in AI governance, reflecting competing visions of legitimacy and social order. The article highlights that AI’s influence extends beyond technical efficiency, fundamentally altering the balance of interests among social groups and institutions. While algorithmic governance promises procedural fairness and optimized resource allocation, it risks entrenching epistocratic rule, in which authority is concentrated in knowledge elites or autonomous systems, thereby undermining democratic participation. Empirical examples, such as AI-driven predictive policing and legislative drafting tools, illustrate how power consolidates in executive agencies and technocratic networks, bypassing traditional checks and balances.
The study examines the paradox of trust in AI systems: while citizens in authoritarian regimes exhibit high acceptance of algorithmic governance, democracies grapple with legitimacy crises as public oversight diminishes. The author contends that the “new structure of power” will hinge on reconciling AI’s transformative potential with safeguards for human dignity, pluralism, and constitutionalism. The article proposes a reimagined framework for governance, one that decentralizes authority along lines of thematic expertise rather than institutional branches while embedding ethical accountability into algorithmic design. The long-term implications demand interdisciplinary collaboration, adaptive legal frameworks, and a redefinition of democratic legitimacy in an era when power is increasingly exercised by code rather than by humans.

Published 2025-07-02. Copyright (c) 2025 Nizov V.A.