• Ph. D. (C) Dmitry Kuteynikov
  • Ph. D. Osman Izhaev
  • Dr. Valerian Lebedev
  • Ph. D. (C) Sergey Zenin

Keywords:

Automated decision-making systems, Accountability, Transparency, Human rights, Controllers


This article analyzes the legal approaches of Europe and the USA that underlie the various measures taken to minimize the risks of human rights violations when automated decision-making systems are used in society. The methodological basis of the research comprised general scientific methods of cognition, namely the principles of objectivity and consistency, as well as induction and deduction. Europe and the USA apply different concepts of legal regulation concerning algorithmic accountability and transparency. In Europe, auditing of automated decision-making systems is conducted through personal data protection legislation. The study concludes that this legislation does not currently impose a legal obligation on controllers to disclose technical information, i.e., to open the black box to the personal data subject. This may change in the long run, once legislative bodies adopt acts specifying the provisions of the General Data Protection Regulation (GDPR), which currently requires the controller to provide the personal data subject affected by an algorithm with meaningful information about the logic of the decisions made. This approach is characterized by the primacy of the interests of personal data subjects over those who derive economic and other benefits from the use of algorithms. The review of US law shows that there is no comprehensive legal act regulating algorithmic accountability and transparency. Certain regulatory requirements are contained in various anti-discrimination acts governing specific areas of human activity. The study concludes that anti-discrimination laws are not a suitable tool for resolving the issues arising from the application of algorithms. Several current legislative initiatives at the federal and state levels were also analyzed, which propose introducing a mandatory impact assessment for automated decision-making systems. These initiatives involve the disclosure of a certain list of information about the operation of the algorithm. It is noted that the US approach is preferable for entities that use algorithms for their own benefit.



How to cite

Kuteynikov, Dmitry, Osman Izhaev, Valerian Lebedev, and Sergey Zenin. 2020. «BLACK BOX: TRANSPARENCY AND ACCOUNTABILITY OF AUTOMATED DECISIONMAKING SYSTEMS». Revista Inclusiones, March, 324-33.
