Electronic Democracy and Digital Justice: Driving Principles for AI Regulation in the Prism of Human Rights





Human Rights, Artificial Intelligence, Electronic Democracy, Digital Justice, Regulation


A growing debate in several European fora is paving the way for future rules for Artificial Intelligence (AI). A principles-based approach prevails, with various lists of principles drawn up in recent years. These lists, which are often built on human rights, are only a starting point for future regulation. It is now necessary to move forward, turning abstract principles into a context-based response to the challenges of AI. This article therefore places the principles and operational rules of the current European and international human rights framework in the context of AI applications in two core, and little-explored, areas of digital transformation: electronic democracy and digital justice. Several binding and non-binding legal instruments are available for each of these areas, but they were adopted in a pre-AI era, which limits their effectiveness in providing an adequate and specific response to the challenges of AI. Although the existing guiding principles remain valid, their application should be reconsidered in the light of the social and technical changes induced by AI. To contribute to the ongoing debate on future AI regulation, this article outlines a contextualised application of the principles governing e-democracy and digital justice in view of current and future AI applications.


No statistical data available.

Author Biography

Alessandro Mantelero, Polytechnic University of Turin (Italy).

Alessandro Mantelero is Aggregate Professor of Private Law at the Polytechnic University of Turin, Director of Privacy and Faculty Fellow at the Nexa Center for Internet and Society, and Research Consultant at the Sino-Italian Research Center for Internet Torts at Nanjing University of Information Science & Technology.


AGRE, P.E. Surveillance and Capture: Two Models of Privacy. The Information Society 10, 101, 1994.

AI NOW INSTITUTE. AI Now 2017 Report. New York, 2017. Available at https://assets.contentful.com/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf. Accessed: 26 oct. 2017.

ALETRAS, N. et al. Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective. PeerJ Computer Science 2, 93, 2016, doi:10.7717/peerj-cs.93.

BARRETT, L. Reasonably Suspicious Algorithms: Predictive Policing at the United States Border. New York University Review of Law & Social Change 41, 327, 2017.

BENNETT MOSES, L.; CHAN, J. Algorithmic Prediction in Policing: Assumptions, Evaluation, and Accountability. Policing and Society 28, 806, 2018.

BRAUNEIS, R.; GOODMAN, E.P. Algorithmic Transparency for the Smart City. Yale Journal of Law & Technology 20, 103, 2018.

BUKOVSKA, B. Spotlight on Artificial Intelligence and Freedom of Expression #SAIFE. Organization for Security and Co-operation in Europe, 2020. Available at https://www.osce.org/files/f/documents/9/f/456319_0.pdf. Accessed: 11 aug. 2020.

BURRELL, J. How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms. Big Data & Society 3(1), 2016, doi: 10.1177/2053951715622512.

BYCHAWSKA-SINIARSKA, D. Protecting the Right to Freedom of Expression under the European Convention on Human Rights. Council of Europe, 2017.

CARUANA, R. et al. Intelligible Models for Healthcare: Predicting Pneumonia Risk and Hospital 30-Day Readmission. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015.

CEPEJ – European Commission for the Efficiency of Justice. European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, 2018.

CITRON, D.K.; CALO, R. The Automated Administrative State: A Crisis of Legitimacy. Emory Law Journal 70(4), 2021.

CLAY, T. (ed). L’arbitrage en ligne. Rapport du Club des Juristes, 2019. Available at https://www.leclubdesjuristes.com/les-commissions/larbitrage-en-ligne/. Accessed: 30 may 2020.

COHEN, J.E. Between Truth and Power. The Legal Construction of Informational Capitalism. New York, Oxford University Press, 2019.

Committee of Ministers of the Council of Europe. The 12 Principles of Good Governance enshrined in the Strategy on Innovation and Good Governance at local level, 2008.

Council of Bars & Law Societies of Europe. CCBE Considerations on the Legal Aspects of Artificial Intelligence, 2020.

Council of Europe – Ad hoc Committee on Artificial Intelligence (CAHAI). Feasibility Study, CAHAI(2020)23, 2020. Available at https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da. Accessed: 29 jul. 2021.

Council of Europe – Committee of the Convention for the Protection of Individuals with regard to the Processing of Personal Data (Convention 108). Guidelines on Artificial Intelligence and Data Protection, 2019. Available at https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8. Accessed: 20 feb. 2020.

Council of Europe, Directorate General of Democracy and Political Affairs and Directorate of Democratic Institutions. Project «Good Governance in the Information Society», CM(2009)9 Addendum 3, 2009.

Council of Europe Parliamentary Assembly. Resolution 2254 (2019)1. Media freedom as a condition for democratic elections, 2019a.

Council of Europe Parliamentary Assembly. Resolution 2300 (2019)1. Improving the protection of whistle-blowers all over Europe, 2019b.

Council of Europe, Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. Profiling and Convention 108+: Suggestions for an update, T-PD(2019)07BISrev, 2019.

Council of Europe, Directorate General of Democracy – European Committee on Democracy and Governance. The Compendium of the most relevant Council of Europe texts in the area of democracy, 2016.

Council of Europe, Directorate General of Democracy and Political Affairs – Directorate of Democratic Institutions. Guidelines on transparency of e-enabled elections, 2011.

Council of Europe. Additional Protocol to the European Charter of Local Self-Government on the right to participate in the affairs of a local authority, 2009a.

______. Algorithms and Human Rights. Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications, 2018. Available at https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html. Accessed: 5 may 2018.

______. Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine, 1997.

______. Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes, 2019a.

______. Guidelines for civil participation in political decision making, CM(2017)83-final, 2017a.

______. Modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data (Convention 108+), 2018.

______. Recommendation CM/Rec(2001)10 on the European Code of Police Ethics, 2001.

______. Recommendation CM/Rec(2004)15 on electronic governance (“e-governance”), 2004.

______. Recommendation CM/Rec(2007)15 on measures concerning media coverage of election campaigns, 2007.

______. Recommendation CM/Rec(2009)1 on electronic democracy (e-democracy), 2009b.

______. Recommendation CM/Rec(2009)2 on the evaluation, auditing and monitoring of participation and participation policies at local and regional level, 2009c.

______. Recommendation CM/Rec(2010)13 on the protection of individuals with regard to automatic processing of personal data in the context of profiling, 2010.

______. Recommendation CM/Rec(2011)7 on a new notion of media, 2011.

______. Recommendation CM/Rec(2014)7 on the protection of whistleblowers, 2014.

______. Recommendation CM/Rec(2016)4 on the protection of journalism and safety of journalists and other media actors, 2016a.

______. Recommendation CM/Rec(2016)5 on Internet freedom, 2016b.

______. Recommendation CM/Rec(2017)5 on standards for e-voting, 2017b.

______. Recommendation CM/Rec(2018)1 on media pluralism and transparency of media ownership, 2018a.

______. Recommendation CM/Rec(2018)4 on the participation of citizens in local public life, 2018b.

______. Recommendation CM/Rec(2019)3 on supervision of local authorities’ activities, 2019b.

______. Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems, 2020b.

______. Study on the Human Rights Dimensions of Automated Data Processing Techniques (in Particular Algorithms) and Possible Regulatory Implications, 2018c. Available at https://rm.coe.int/algorithms-and-humanrights-en-rev/16807956b5. Accessed: 15 jan. 2019.

Council of Europe. Graphical visualisation of the distribution of strategic and ethical frameworks relating to artificial intelligence, 2020a.

CRAWFORD, K.; JOLER, V. Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources, 2018. Available at http://www.anatomyof.ai. Accessed: 27 dec. 2019.

CUMMINGS, M.L. et al. Chatham House Report. Artificial Intelligence and International Affairs: Disruption Anticipated, 2018. Available at https://www.chathamhouse.org/sites/default/files/publications/research/2018-06-14-artificial-intelligence-international-affairs-cummings-roff-cukier-parakilas-bryce.pdf. Accessed: 21 mar. 2020.

DIAKOPOULOS, N. Algorithmic Accountability Reporting: On the Investigation of Black Boxes, 2013. Available at https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2. Accessed: 18 mar. 2018.

EDWARDS L.; VEALE, M. Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For. Duke Law & Technology Review 16, 18, 2017.

EU Code of Practice on Disinformation, 2018. Available at https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation. Accessed: 24 mar. 2021.

European Commission for Democracy through Law (Venice Commission). Joint Report of the Venice Commission and of the Directorate of Information Society and Action against Crime of the Directorate General of Human Rights and Rule of Law (DGI) on Digital Technologies and Elections, 2019.

European Commission, Directorate-General for Communications Networks, Content and Technology. A Multi-Dimensional Approach to Disinformation. Report of the Independent High-Level Group on Fake News and Online Disinformation, 2018.

European Commission. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final, 2021.

European Union Agency for Fundamental Rights. #BigData: Discrimination in Data-Supported Decision Making, 2018.

European Union Agency for Fundamental Rights. Data Quality and Artificial Intelligence – Mitigating Bias and Error to Protect Fundamental Rights, 2019.

European Union Agency for Fundamental Rights. Preventing Unlawful Profiling Today and in the Future: A Guide, 2018.

EYKHOLT, K. et al. Robust Physical-World Attacks on Deep Learning Visual Classification. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018. Available at https://openaccess.thecvf.com/content_cvpr_2018/papers/Eykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf. Accessed: 23 apr. 2021.

FAYE JACOBSEN, A. The Right to Public Participation. A Human Rights Law Update. Issue Paper, 2013. Available at https://www.humanrights.dk/publications/right-public-participation-human-rights-law-update. Accessed: 14 jan. 2021.

FERRYMAN, K.; PITCAN, M. Fairness in Precision Medicine, 2018. Available at https://datasociety.net/wp-content/uploads/2018/02/Data.Society.Fairness.In_.Precision.Medicine.Feb2018.FINAL-2.26.18.pdf. Accessed: 8 apr. 2018.

GANDY Jr., O.H.; NEMORIN, S. Toward a Political Economy of Nudge: Smart City Variations. Information, Communication & Society 22, 2112, 2019.

GESLEY, J. Regulation of Artificial Intelligence in Selected Jurisdictions, 2019. Available at https://www.loc.gov/law/help/artificial-intelligence/index.php. Accessed: 30 dec. 2019.

GOODMAN, E.; POWLES, J. Urbanism Under Google: Lessons from Sidewalk Toronto. Fordham Law Review 88(2), 457, 2019.

GRABER, C.B. Artificial Intelligence, Affordances and Fundamental Rights. In Hildebrandt M.; O’Hara K. (eds) Life and the Law in the Era of Data-Driven Agency. Edward Elgar, 2020.

HAGENDORFF, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines 30, 99, 2020.

HILDEBRANDT, M.; GUTWIRTH, S. (eds). Profiling the European Citizen: Cross-Disciplinary Perspectives. Dordrecht, Springer, 2008.

HILDEBRANDT, M. Algorithmic Regulation and the Rule of Law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376, 2018a.

______. Primitives of Legal Protection in the Era of Data-Driven Platforms. Georgetown Law Technology Review 2, 252, 2018b.

______. Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning. Theoretical Inquiries in Law 20, 83, 2019.

______. The Issue of Bias. The Framing Powers of Machine Learning. In Pelillo, M.; Scantamburlo, T. (eds) Machines We Trust. Perspectives on Dependable AI (MIT Press: Cambridge, MA) 2021.

Independent High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI, 2019. Available at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthyai. Accessed: 2 mar. 2020.

JOBIN, A.; IENCA, M.; VAYENA, E. The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence 1, 389, 2019.

KAMINSKI, M.E.; MALGIERI, G. Multi-Layered Explanations from Algorithmic Impact Assessments in the GDPR. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, 2020, doi: 10.1145/3351095.3372875.

KOLKMAN, D. The (in)Credibility of Algorithmic Models to Non-Experts. Information, Communication & Society 1, 2020, doi: 10.1080/1369118X.2020.1761860.

LOIDEAIN, N.N.; ADAMS, R. From Alexa to Siri and the GDPR: The Gendering of Virtual Personal Assistants and the Role of Data Protection Impact Assessments. Computer Law & Security Review 36, 2020, doi: 10.1016/j.clsr.2019.105366.

LYNSKEY, O. Criminal Justice Profiling and EU Data Protection Law: Precarious Protection from Predictive Policing. Int J Law Context, 15, 162, 2019.

MAISLEY, N. The International Right of Rights? Article 25(a) of the ICCPR as a Human Right to Take Part in International Law-Making. Eur. J. Int. Law 28, 89, 2017.

MANHEIM, K.; KAPLAN, L. Artificial Intelligence: Risks to Privacy and Democracy. Yale Journal of Law & Technology 21, 106, 2019.

MANTELERO, A. AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment. Computer Law & Security Review 34(4), 754, 2018.

______. Artificial Intelligence and Data Protection: Challenges and Possible Remedies. Report on Artificial Intelligence. Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of personal data, 2019. Available at https://rm.coe.int/2018-lignes-directrices-sur-l-intelligence-artificielle-et-la-protecti/168098e1b7. Accessed: 20 feb. 2020.

______. Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection. Computer Law & Security Review 32(2), 238, 2016.

______. Regulating AI Within the Human Rights Framework: A Roadmapping Methodology. In Czech et al. (eds) European Yearbook on Human Rights 2020, 477-502, 2020.

______; ESPOSITO, M.S. An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems. Computer Law & Security Review 41, 2021, doi: 10.1016/j.clsr.2021.105561.

MANTELERO, A.; VACIAGO, G. Data Protection in a Big Data Society. Ideas for a Future Regulation. Digital Investigation 15, 104, 2015.

MEHR, H. Artificial Intelligence for Citizen Services and Government, 2017. Available at https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf. Accessed: 15 mar. 2021.

MEIJER A.; WESSELS M. Predictive Policing: Review of Benefits and Drawbacks. Int J Publ Admin 42, 1031, 2019.

MIKHAYLOV, S.J.; ESTEVE, M.; CAMPION, A. Artificial Intelligence for the Public Sector: Opportunities and Challenges of Cross-Sector Collaboration. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, 2018, doi: 10.1098/rsta.2017.0357.

MITTELSTADT, B. From Individual to Group Privacy in Big Data Analytics. Philosophy & Technology 30, 475, 2017.

NEMITZ, P. Constitutional Democracy and Technology in the Age of Artificial Intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, 2018, doi: 10.1098/rsta.2018.0089.

NUNEZ, C. Artificial Intelligence and Legal Ethics: Whether AI Lawyers Can Make Ethical Decisions. Tulane Journal of Technology and Intellectual Property 20, 189, 2017.

OECD. Recommendation of the Council on Artificial Intelligence, 2019.

OSOBA, O.A.; WELSER, W. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence, 2017. Available at https://www.rand.org/pubs/research_reports/RR1744.html. Accessed: 20 may 2020.

OSWALD, M. Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Power. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, 2018, doi: 10.1098/rsta.2017.0359.

PASQUALE, F. The Black Box Society. The Secret Algorithms That Control Money and Information. Harvard University Press, 2015.

PASQUALE, F.; CASHWELL, G. Prediction, Persuasion, and the Jurisprudence of Behaviourism. University of Toronto Law Journal, 68, 2018.

PERRY, W.L. et al. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, 2013. Available at https://www.rand.org/pubs/research_reports/RR233.html. Accessed: 30 mar. 2020.

Privacy International. Smart Cities: Utopian Vision, Dystopian Reality, 2017. Available at https://privacyinternational.org/sites/default/files/2017-12/Smart%20Cities-Utopian%20Vision%2C%20Dystopian%20Reality.pdf. Accessed: 12 may 2020.

RAAB, C.D. Information Privacy, Impact Assessment, and the Place of Ethics. Computer Law & Security Review 37, 2020, doi:10.1016/j.clsr.2020.105404.

RANCHORDÁS, S. Nudging Citizens through Technology in Smart Cities. International Review of Law, Computers & Technology 34(2), 254, 2020.

RE, R.M.; SOLOW-NIEDERMAN, A. Developing Artificially Intelligent Justice. Stanford Technology Law Review 22, 242, 2019.

RICHARDSON, R.; SCHULTZ, J.M.; CRAWFORD, K. Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review 94, 42, 2019.

ROSENBAUM, D.P. The Limits of Hot Spots Policing. In Weisburd, D.; Braga, A.A. (eds) Police Innovation: Contrasting Perspectives. Cambridge University Press, 2006.

SAVAGET, P.; CHIARINI, T.; EVANS, S. Empowering Political Participation through Artificial Intelligence. Science and Public Policy 46, 369, 2019.

SCHRAG, Z. M. Ethical Imperialism. Institutional Review Boards and the Social Sciences 1965-2009. Baltimore, Johns Hopkins University Press, 2017.

SELBST, A. D. Disparate Impact in Big Data Policing. Georgia Law Review 52(1), 109, 2017.

SELBST, A. D.; BAROCAS, S. The Intuitive Appeal of Explainable Machines. Fordham L. Rev. 87, 1085, 2018.

SUNSTEIN, C.R. The Ethics of Nudging. Yale Journal on Regulation 32, 412, 2015.

______. Why Nudge? The Politics of Libertarian Paternalism. Yale University Press, 2015.

SUNSTEIN, C.R.; THALER, R.H. Libertarian Paternalism Is Not an Oxymoron. University of Chicago Law Review 70, 1159, 2003.

TAYLOR, L.; FLORIDI, L.; VAN DER SLOOT, B. (eds). Group Privacy: New Challenges of Data Technologies. Cham, Springer, 2017.

THALER, R. H.; SUNSTEIN, C. R. Nudge. Improving Decisions about Health, Wealth, and Happiness. New Haven, Yale University Press, 2008.

The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information. Joint Declaration on “Fake News,” Disinformation and Propaganda, 2017.

TUBARO, P.; CASILLI, A. A.; COVILLE, M. The Trainer, the Verifier, the Imitator: Three Ways in Which Human Platform Workers Support Artificial Intelligence. Big Data & Society 7(1), 2020, doi: 10.1177/2053951720919776.

UN Committee on Economic, Social and Cultural Rights (CESCR). General Comment No. 1: Reporting by States Parties, 27 July 1981.

UN Human Rights Committee (HRC). CCPR General Comment No. 25: The right to participate in public affairs, voting rights and the right of equal access to public service (Art. 25), CCPR/C/21/Rev.1/Add.7, 12 July 1996.

UNESCO. Draft Text of the Recommendation on the Ethics of Artificial Intelligence, 2021. Available at https://unesdoc.unesco.org/ark:/48223/pf0000377897. Accessed: 3 sep. 2021.

VAN BRAKEL, R.; DE HERT, P. Policing, Surveillance and Law in a Pre-Crime Society: Understanding the Consequences of Technology Based Strategies. Cahiers Politiestudies, Jaargang 3, 163, 2011.

VEALE, M.; BINNS, R. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society 4(2), 2017, doi:10.1177/2053951717743530.

VERBEEK, P-P. Understanding and Designing the Morality of Things, Chicago-London, The University of Chicago Press, 2011.

VERONESE, A.; NUNES LOPES ESPIÑEIRA LEMOS, A. Trayectoria normativa de la inteligencia artificial en los países de Latinoamérica con un marco jurídico para la protección de datos: límites y posibilidades de las políticas integradoras. Revista Latinoamericana de Economía y Sociedad Digital 2, 2021. Available at https://revistalatam.digital/article/210207/. Accessed: 27 aug. 2021.

WACHTER, S. Affinity Profiling and Discrimination by Association in Online Behavioural Advertising. Berkeley Tech. L.J., 35(2), 367, 2021.

WEST, S.M.; WHITTAKER, M.; CRAWFORD, K. Discriminating Systems. Gender, Race, and Power in AI, 2019. Available at https://ainowinstitute.org/discriminatingsystems.pdf. Accessed: 15 may 2019.

ZALNIERIUTE, M.; BENNETT MOSES, L.; WILLIAMS, G. The Rule of Law and Automation of Government Decision-Making. The Modern Law Review 82(3), 425, 2019.

ZAVRŠNIK, A. Algorithmic Justice: Algorithms and Big Data in Criminal Justice Settings. European Journal of Criminology 1, 2019, doi:10.1177/1477370819876762.

ZUIDERVEEN BORGESIUS, F. Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence. The International Journal of Human Rights 24(10), 1572, 2020.




How to Cite

Mantelero, A. (2022). Electronic Democracy and Digital Justice: Driving Principles for AI Regulation in the Prism of Human Rights. Direito Público, 18(100). https://doi.org/10.11117/rdp.v18i100.6199