<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" dtd-version="1.1" specific-use="sps-1.9" article-type="research-article" xml:lang="en">
    <front>
        <journal-meta>
            <journal-id journal-id-type="publisher-id">rdp</journal-id>
            <journal-title-group>
                <journal-title>Revista Direito Público</journal-title>
                <abbrev-journal-title abbrev-type="publisher">Rev. Dir. Publico</abbrev-journal-title>
            </journal-title-group>
            <issn pub-type="epub">2236-1766</issn>
            <publisher>
                <publisher-name>Instituto Brasileiro de Ensino, Desenvolvimento e Pesquisa</publisher-name>
            </publisher>
        </journal-meta>
        <article-meta>
            <article-id pub-id-type="doi">10.11117/rdp.v18i100.6199</article-id>
            <article-categories>
                <subj-group subj-group-type="heading">
                    <subject>Assunto Especial</subject>
                    <subj-group>
                        <subject>Dossiê – Inteligência Artificial, Ética e Epistemologia</subject>
                    </subj-group>
                </subj-group>
            </article-categories>
            <title-group>
                <article-title>Electronic Democracy and Digital Justice: Driving Principles for AI Regulation in the Prism of Human Rights</article-title>
                <trans-title-group xml:lang="pt">
                    <trans-title>Democracia Eletrônica e Justiça Digital: Princípios Condutores para a Regulação da IA no Prisma dos Direitos Humanos</trans-title>
                </trans-title-group>
            </title-group>
            <contrib-group>
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">0000-0001-6020-0571</contrib-id>
                    <name>
                        <surname>MANTELERO</surname>
                        <given-names>ALESSANDRO</given-names>
                    </name>
                    <xref ref-type="aff" rid="aff01"/>
                    <xref ref-type="fn" rid="fn118"/>
                    <xref ref-type="corresp" rid="c01"/>
                </contrib>
            </contrib-group>
            <aff id="aff01">
                <institution content-type="orgname">Instituto Politécnico de Turim</institution>
                <addr-line>
                    <named-content content-type="city">Turim</named-content>
                </addr-line>
                <country country="IT">Italy</country>
                <institution content-type="original">Instituto Politécnico de Turim. Turim, Itália.</institution>
            </aff>
            <author-notes>
                <fn fn-type="other" id="fn118">
                    <label>Alessandro Mantelero</label>
                    <p>Associate Professor of Private Law and Law &amp; Technology at the Polytechnic University of Turin, and Council of Europe Scientific Expert on AI, data protection and human rights. He is Associate Editor of Computer Law &amp; Security Review and member of the Editorial Board of European Data Protection Law Review.</p>
                </fn>
                <corresp id="c01">E-mail: <email>alessandro.mantelero@polito.it</email>
                </corresp>
            </author-notes>
            <pub-date publication-format="electronic" date-type="pub">
                <day>0</day>
                <month>0</month>
                <year>2023</year>
            </pub-date>
            <pub-date publication-format="electronic" date-type="collection">
                <season>Oct-Dec</season>
                <year>2021</year>
            </pub-date>
            <volume>18</volume>
            <issue>100</issue>
            <fpage>29</fpage>
            <lpage>62</lpage>
            <permissions>
                <license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/" xml:lang="en">
                    <license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License which permits unrestricted non-commercial use, distribution, and reproduction in any medium provided the original work is properly cited.</license-p>
                </license>
            </permissions>
            <abstract>
                <title>ABSTRACT</title>
                <p>A growing debate in several European fora is paving the way for future rules for Artificial Intelligence (AI). A principles-based approach prevails, with various lists of principles drawn up in recent years. These lists, which are often built on human rights, are only a starting point for a future regulation. It is now necessary to move forward, turning abstract principles into a context-based response to the challenges of AI. This article therefore places the principles and operational rules of the current European and international human rights framework in the context of AI applications in two core, and little explored, areas of digital transformation: electronic democracy and digital justice. Several binding and non-binding legal instruments are available for each of these areas, but they were adopted in a pre-AI era, which affects their effectiveness in providing an adequate and specific response to the challenges of AI. Although the existing guiding principles remain valid, their application should therefore be reconsidered in the light of the social and technical changes induced by AI. To contribute to the ongoing debate on future AI regulation, this article outlines a contextualised application of the principles governing e-democracy and digital justice in view of current and future AI applications.</p>
            </abstract>
            <trans-abstract xml:lang="pt">
                <title>RESUMO</title>
                <p>Um debate crescente em vários fóruns europeus está abrindo caminho para futuras regras para Inteligência Artificial (IA). Prevalece uma abordagem baseada em princípios, com várias listas de princípios elaboradas nos últimos anos. Essas listas, muitas vezes, são baseadas nos Direitos Humanos, sendo ponto de partida para uma futura regulamentação. Agora é necessário avançar, transformando princípios abstratos em uma resposta baseada em contexto para os desafios da IA. Este artigo, portanto, coloca os princípios e as regras operacionais do atual quadro europeu e internacional de Direitos Humanos no contexto das aplicações de IA em duas áreas centrais e pouco exploradas da transformação digital: democracia eletrônica e justiça digital. Há disponíveis instrumentos jurídicos vinculativos e não vinculativos para cada uma dessas áreas, mas foram adotados numa era pré-IA, o que afeta a sua eficácia na resposta adequada e específica aos desafios da IA. Embora os princípios orientadores existentes permaneçam válidos, sua aplicação deve, portanto, ser reconsiderada à luz das mudanças sociais e das técnicas induzidas pela IA. Para contribuir para o debate em curso sobre a futura regulamentação da IA, este artigo descreve uma aplicação contextualizada dos princípios que regem a e-democracia e a justiça digital, tendo em vista as aplicações de IA atuais e futuras.</p>
            </trans-abstract>
            <kwd-group xml:lang="en">
                <title>KEYWORDS</title>
                <kwd>Human Rights</kwd>
                <kwd>Artificial Intelligence</kwd>
                <kwd>Electronic Democracy</kwd>
                <kwd>Digital Justice</kwd>
                <kwd>Regulation</kwd>
            </kwd-group>
            <kwd-group xml:lang="pt">
                <title>PALAVRAS-CHAVE</title>
                <kwd>Direitos humanos</kwd>
                <kwd>inteligência artificial</kwd>
                <kwd>democracia eletrônica</kwd>
                <kwd>justiça digital</kwd>
                <kwd>regulamento</kwd>
            </kwd-group>
            <counts>
                <fig-count count="0"/>
                <table-count count="0"/>
                <equation-count count="0"/>
                <ref-count count="127"/>
                <page-count count="34"/>
            </counts>
        </article-meta>
    </front>
    <body>
        <p>SUMMARY: 1 AI challenges and human rights; 2 AI and electronic democracy; 2.1 Participation and good governance; 2.2 Elections; 3 AI and digital justice; 3.1 ADRs and court decisions; 3.2 Crime prevention; 4 Conclusions; References.</p>
        <sec>
            <title>1 AI CHALLENGES AND HUMAN RIGHTS</title>
            <p>Artificial Intelligence (AI) is part of our daily life. It is used to moderate public debate, fashion the social environment and support human decision-makers in various fields, including justice. AI is therefore a component of many decision-making processes affecting individuals and groups, actively shaping our communities and personal lives<xref ref-type="fn" rid="fn02">2</xref>. This means that AI is no longer a mere technical or marketing trend but a regulatory issue<xref ref-type="fn" rid="fn03">3</xref>, given the social consequences and, in some cases, legal effects.</p>
            <p>To correctly frame this debate, it is important to keep in mind the difference between natural and artificial intelligence, where the latter is nothing more than a data-driven and mathematical form of information processing<xref ref-type="fn" rid="fn04">4</xref>. AI is not able to think, elaborate concepts or develop theories of causality: AI merely takes a pattern recognition approach to order huge amounts of data and infer new information and correlations.</p>
            <p>Data dependence is both the strength and the weakness of these systems. Poor data undermines the quality of their results<xref ref-type="fn" rid="fn05">5</xref>, datafication can only partially represent reality<xref ref-type="fn" rid="fn06">6</xref> and incredibly large datasets and complex AI solutions often do not allow human decision makers to inspect and check the ‘reasoning’ of the machine<xref ref-type="fn" rid="fn07">7</xref>. The upshot of these technical and structural constraints can be summed up under three main headings: bias, obscurity, and ownership.</p>
            <p>Regarding bias, the design and development of AI tools can be affected by different biases that, in many cases, differ from human bias<xref ref-type="fn" rid="fn08">8</xref>. Bias does not only concern the much debated data quality (for example selection bias)<xref ref-type="fn" rid="fn09">9</xref>, but also the methodologies adopted (<italic>e.g.</italic>, pre-processing and data cleaning biases, measurement bias, bias in survey methodologies)<xref ref-type="fn" rid="fn10">10</xref>, the target of investigation (<italic>e.g.</italic>, historical bias in pre-existing data-sets and under- or over-representation of certain groups in new data-sets), and the psychological attitude of the data scientists (<italic>e.g.</italic>, confirmation bias).</p>
            <p>This brief listing of potential biases also reveals the human component of AI solutions, often underestimated in a misleading comparison between humans and machines. This dichotomy understates the role of human intervention in AI data processing<xref ref-type="fn" rid="fn11">11</xref> and the intentional or unintentional transposition of developers’ views into the AI reference values used for classification<xref ref-type="fn" rid="fn12">12</xref>.</p>
            <p>As for obscurity, this concerns both the AI tools used and the way they impact on individuals, whose circumstances are analysed and represented through them. Not only is the way some AI applications actually function and process information unknown<xref ref-type="fn" rid="fn13">13</xref>, even to data scientists, but individuals are often unaware of their being dynamically grouped on the basis of unseen correlations and inferences, without being able to know the identity of the other members of the group. Obscurity therefore entails two different consequences: first, data scientists are unable to clearly justify the specific decisions suggested by AI; and second, people are passively scrutinised by AI without having a meaningful or effective role in AI design or the opportunity to voice their collective interests<xref ref-type="fn" rid="fn14">14</xref>.</p>
            <p>This level of obscurity and the limitations to democratic participation in AI development are heightened by a third feature of many AI products: ownership. The proprietary nature of the algorithms used and, in certain cases, of the data silos used to train and implement them means that intellectual property rights are a further barrier to access to the architecture of these applications and to public oversight<xref ref-type="fn" rid="fn15">15</xref>.</p>
            <p>These three inherent constraints – bias, obscurity, and ownership – have a direct impact on the challenges of AI and its social acceptance in monitoring and governing human activities (<italic>e.g.</italic>, smart cities),<xref ref-type="fn" rid="fn16">16</xref> offering personalised services (<italic>e.g.</italic>, predictive medicine)<xref ref-type="fn" rid="fn17">17</xref> and, more generally, supporting humans in the decision-making process.</p>
            <p>Issues surrounding data-intensive solutions and their use in decision-making processes concern a variety of interests related to human rights and freedoms<xref ref-type="fn" rid="fn18">18</xref>. To address the growing concern about the potential impact of AI on human rights and freedoms, several initiatives have been proposed at local, national and international levels, and a variety of guidelines have been drawn up by NGOs, research centres and corporate entities. Several proposals have focused on ethics<xref ref-type="fn" rid="fn19">19</xref>, often blurring the line between law and ethics and describing human rights and freedoms as ethical values, with their consequent ‘ethicisation’ and relativisation.</p>
            <p>This emphasis on the ethical dimension can entail the risk of extending to the field of data science an ethical imperialism whose effects are already known in biomedicine and the social sciences<xref ref-type="fn" rid="fn20">20</xref>. In this regard, previous experience in ethical assessment of scientific research suggests that careful consideration should be given to the distinction between ethical and legal values and the differences between ethical approaches<xref ref-type="fn" rid="fn21">21</xref>. Several documents providing guidelines on AI refer to ethics in a fairly broad and indefinite manner, with no clarification (or justification) of the ethical framework used<xref ref-type="fn" rid="fn22">22</xref>.</p>
            <p>Ethical responses to uncertainty in a rapidly changing technological and social environment may paradoxically become a new source of ambiguity. Discretionary and, in some cases, interest-based values risk weakening the legal framework or indirectly redefining it without following an appropriate procedure as required by the regulatory process<xref ref-type="fn" rid="fn23">23</xref>.</p>
            <p>Without underestimating the role of ethics in technology development, these considerations suggest a more balanced integration of law and ethics in AI regulation, based on the emphasis on the role of human rights as the universal cornerstone of the future architecture of AI regulation. From a regulatory perspective, the main challenge is to contextualise the legal principles and provisions enshrined in international human rights instruments, drafted in a pre-AI era, within the current scenario where predictive policing tools, automated digital propaganda and other new AI-based applications are reshaping many aspects of our society and human relations.</p>
            <p>Regulatory initiatives have been proposed in several countries<xref ref-type="fn" rid="fn24">24</xref>, many of them referring explicitly to all or some human rights. However, these are often generic statements without a proper contextualisation of the rights and freedoms considered. Although it is relatively easy to agree on a general list of rights and freedoms that should underpin AI development, these lists do little to advance the regulatory process, since general principles, such as transparency or participation, can be interpreted in many different ways.</p>
            <p>An effective contribution to the human rights debate in this field can therefore only come from a proper contextualisation of these guiding principles within the AI scenario. This means placing such rules, including the operational ones, in the context of the changes to society produced by AI and providing a more refined and specific formulation of the guiding principles with a view to possible future AI regulation.</p>
            <p>This contextualisation of the guiding principles and rules can provide a more refined and elaborate formulation, taking into account the specific nature of AI products and services, and helping to better address the challenges arising from AI.</p>
            <p>From a methodological perspective, an analysis of international legally binding instruments is the obligatory starting point in defining the existing legal framework, identifying its guiding values and verifying whether this framework and its principles properly address the issues raised by AI, with a view to preserving the harmonisation of the existing legal framework in the fields of democracy and justice.</p>
            <p>The methodology adopted is therefore necessarily deductive, extracting the guiding principles from the variety of regulations concerning the fields in question. The theoretical basis of this approach relies on the assumption that the general principles provided by international human rights instruments should underpin all human activities, including AI-based innovation<xref ref-type="fn" rid="fn25">25</xref>.</p>
            <p>These guiding principles should be considered within the scenario of AI-driven transformation, which in many cases requires adaptation. They remain valid, but their implementation must be reconsidered in the light of the social and technical changes brought about by AI. This will deliver a more contextualised and granular application of these principles so that they can make a concrete contribution to the shape of future AI regulation.</p>
            <p>Against this background, the following sections examine two critical areas of AI application: electronic democracy and digital justice. While in other areas, such as data protection and biomedicine, the specific nature of the sectors and recent soft-law regulatory initiatives<xref ref-type="fn" rid="fn26">26</xref> make it possible to draft some provisions for future AI regulation<xref ref-type="fn" rid="fn27">27</xref>, in these two realms this is much more difficult. In addition, key principles that can be seen as guiding elements of future AI regulation, such as transparency and explainability<xref ref-type="fn" rid="fn28">28</xref>, are open to varying interpretations and implementations, given the higher political significance of both democracy and justice. The analysis therefore focuses on high-level principles and their contextualisation, resulting in a more limited elaboration of key guiding provisions.</p>
        </sec>
        <sec>
            <title>2 AI AND ELECTRONIC DEMOCRACY</title>
            <p>Democracy covers an extremely wide array of societal and legal issues<xref ref-type="fn" rid="fn29">29</xref>, most of them likely to be implemented with the support of ICT<xref ref-type="fn" rid="fn30">30</xref>. In this scenario, AI can play an important role in the present and future development of digital democracy in an information society.</p>
            <p>The broad dimension of this topic makes it difficult to identify a single binding sector-specific legal instrument for reference. Several international instruments deal with democracy and its different aspects, starting with the UN Declaration of Human Rights and the International Covenant on Civil and Political Rights. Similarly, in the European context, key principles for democracy are present in several international sources.</p>
            <p>Based on Article 25 ICCPR, we can identify two main areas of intervention related to electronic democracy: (i) participation<xref ref-type="fn" rid="fn31">31</xref> and good governance, and (ii) elections. Undoubtedly, it is difficult or impossible to draw a clear dividing line between these fields, as they are interconnected in various ways. AI can have an impact on all of them: participation (<italic>e.g.</italic>, citizen engagement, participation platforms), good governance (<italic>e.g.</italic>, e-government, decision-making processes, smart cities), the pre-electoral phase (<italic>e.g.</italic>, financing, targeting and profiling, propaganda), elections (<italic>e.g.</italic>, prediction of election results, e-voting), and the post-election period (<italic>e.g.</italic>, electoral dispute resolution).</p>
            <p>As in any classification, this distinction involves a degree of discretion. It is worth pointing out here that this is a functional classification based on different AI impacts, with no intention to provide a legal or political representation of democracy and its different key elements. The relationship between participation, good governance, and elections can therefore be considered from different angles and shaped in different ways, unifying certain areas or further subdividing them.</p>
            <p>Participation is expressed both through taking part in the democratic debate and through the electoral process, but the way that AI tools interact with participation in these two cases differs and there are distinct international legal instruments specific to the electoral process.</p>
            <sec>
                <title>2.1 Participation and good governance</title>
                <p>The right to participate in public affairs (Article 25 Covenant) is based on a broad concept of public affairs<xref ref-type="fn" rid="fn32">32</xref>, which includes public debate and dialogue between citizens and their representatives, with a close link to freedom of expression, assembly, and association<xref ref-type="fn" rid="fn33">33</xref>. In this respect, AI is relevant from two different perspectives: as a means to participation and as the subject of participatory decisions.</p>
                <p>Considering AI as a means, technical and educational barriers can undermine the exercise of the right to participate. Participation tools based on AI should therefore consider the risks of under-representation and lack of transparency in participative processes (for example platforms for the drafting of bills). At the same time, AI is also the subject of participatory decisions, since these include decisions on the development of AI in general and its use in public affairs.</p>
                <p>AI-based participative platforms (<italic>e.g.</italic>, Consul, Citizenlab, Decidim<xref ref-type="fn" rid="fn34">34</xref>) can make a significant contribution to the democratic process, facilitating citizen interaction, the prioritisation of objectives, and collaborative approaches in decision-making<xref ref-type="fn" rid="fn35">35</xref> on topics of general interest at different levels (neighbourhood, municipality, metropolitan area, region, country)<xref ref-type="fn" rid="fn36">36</xref>.</p>
                <p>Specific issues arise in relation to AI tools for democratic participation (including those for preventing and fighting corruption<xref ref-type="fn" rid="fn37">37</xref>), which are associated with the following four main areas: transparency, accountability, inclusiveness, and openness. In this regard, the general principles set out in international binding instruments have an important implementation in the Recommendation CM/Rec(2009)1 of the Committee of Ministers of the Council of Europe to member states on electronic democracy (e-democracy), which provides a basis for further elaboration of the guiding principles in the field of AI with regard to democracy.</p>
                <p>Transparency is a requirement for the use of technological applications for democratic purposes<xref ref-type="fn" rid="fn38">38</xref>. This principle is common to other fields, such as healthcare<xref ref-type="fn" rid="fn39">39</xref>, but is a context-based notion. While in healthcare transparency is closely related to self-determination, here it is not only a requirement for citizens’ self-determination with respect to a technical tool but is also a component of the democratic participatory process<xref ref-type="fn" rid="fn40">40</xref>. Transparency no longer has an individual dimension but assumes a collective dimension as a guarantee of the democratic process.</p>
                <p>In this context, the use of AI-based solutions for e-democracy must be transparent in respect of their logic and functioning (<italic>e.g.</italic>, content selection in participatory platforms) providing clear, easily accessible, intelligible, and updated information about the AI tools used and their justification<xref ref-type="fn" rid="fn41">41</xref>.</p>
                <p>Moreover, the implementation of this notion of transparency should also consider the range of different users of these tools, adopting an accessible approach<xref ref-type="fn" rid="fn42">42</xref> from the early stages of the design of AI tools. This is to ensure effective transparency with regard to vulnerable and impaired groups, giving added value to accessibility in this context.</p>
                <p>Transparency and accessibility are closely related to the nature of the architecture used to build AI systems. Open source and open standards can therefore contribute to democratic oversight of the most critical AI applications<xref ref-type="fn" rid="fn43">43</xref>. There are cases where openness is affected by limitations, due to the nature of the specific AI application (for example crime prevention). In these cases, auditability, as well as certification schemes, play a more important role than they already do in relation to AI systems in general<xref ref-type="fn" rid="fn44">44</xref>.</p>
                <p>In the context of AI applications to foster democratic participation, an important role can also be played by interoperability<xref ref-type="fn" rid="fn45">45</xref>, as it facilitates integration between different services/platforms for e-democracy and at different geographical levels. This aspect is already relevant for e-democracy in general<xref ref-type="fn" rid="fn46">46</xref>, and should therefore be extended to the design of AI-based systems.</p>
                <p>Another key principle in e-democracy is accountability. In this regard, AI service providers and entities using AI-based solutions for e-democracy should adopt forms of algorithm vigilance that promote the accountability of all relevant stakeholders, continuously assessing and documenting the expected impacts on individuals and society in each phase of the AI system lifecycle, so as to ensure compliance with human rights, the rule of law and democracy<xref ref-type="fn" rid="fn47">47</xref>.</p>
                <p>Finally, given the role of media in the context of democratic participation<xref ref-type="fn" rid="fn48">48</xref>, AI applications must not compromise the confidentiality and security of communications and protection of journalistic sources and whistle-blowers<xref ref-type="fn" rid="fn49">49</xref>.</p>
                <p>In addressing the different aspects of developing AI solutions for democratic participation, a first consideration is that a democratic approach is incompatible with a techno-determinist approach. AI solutions to address societal problems should therefore be the result of an inclusive process. Hence, legal values such as the protection of minorities, pluralism and diversity should be a necessary consideration in the development of these solutions.</p>
                <p>From a democratic perspective, the first question we should ask is: do we really need an AI-based solution to a given problem as opposed to other options<xref ref-type="fn" rid="fn50">50</xref>, considering the potential impact of AI on rights and freedoms? If the answer to this question is yes, the next step is to examine value-embedding in AI development<xref ref-type="fn" rid="fn51">51</xref>.</p>
                <p>The proposed AI solutions must be designed from a human rights-oriented perspective, ensuring full respect for human rights and fundamental freedoms, including the adoption of assessment tools and procedures for this purpose<xref ref-type="fn" rid="fn52">52</xref>. In the case of AI applications with a high impact on human rights and freedoms, such as electoral processes, legal compliance should be assessed in advance. In addition, AI systems for public tasks should be auditable and, where not excluded by competing prevailing interests, audits should be publicly available.</p>
                <p>Another important aspect to be considered is the public-private partnership that frequently characterises AI services for citizens<xref ref-type="fn" rid="fn53">53</xref>, weighing up the choice between in-house and third-party solutions, including the many different combinations of these two extremes. In this regard, when AI solutions are fully or partially developed by private companies, transparency of contracts and clear rules on access and use of citizens’ data have a critical value in terms of democratic oversight.</p>
                <p>Restrictions on access and use of citizens’ data are not only relevant from a data protection perspective (principles of data minimisation and purpose limitation) but more generally with regard to the bulk of data generated by a community, which also includes non-personal data and aggregated data. This issue should be considered as a component of democracy in the digital environment, where the collective dimension of the digital resources generated by a community should entail forms of citizen control and oversight, as happens for the other resources of a territory/community.</p>
                <p>The considerations already expressed above on openness as a key element of democratic participation tools should be recalled here, given their impact on the design of AI systems. Furthermore, the design, development and deployment of these systems should also consider the adoption of an environmentally friendly and sustainable strategy<xref ref-type="fn" rid="fn54">54</xref>.</p>
                <p>Finally, it is worth noting that while AI design is a key component of these systems, design is not neutral. Values can be embedded in technological artefacts<xref ref-type="fn" rid="fn55">55</xref>, including AI systems. These values can be chosen intentionally and, in the context of e-democracy, this must be based on a democratic process. But they may also be unintentionally embedded into AI solutions, due to the cultural, social and gender composition of AI developer teams. For this reason, inclusiveness has an added value here, in terms of inclusion and diversity<xref ref-type="fn" rid="fn56">56</xref> in AI development.</p>
                <p>The principles discussed for e-democracy can be repeated with regard to good governance<xref ref-type="fn" rid="fn57">57</xref>. This is the case with smart cities and sensor-based environmental management, where open, transparent and inclusive decision-making processes play a central role<xref ref-type="fn" rid="fn58">58</xref>. Similarly, the use of AI to supervise the activities of local authorities<xref ref-type="fn" rid="fn59">59</xref>, for auditing and anticorruption purposes<xref ref-type="fn" rid="fn60">60</xref>, should be based on openness (open source software), transparency and auditability.</p>
                <p>More generally, AI can be used in government/citizen interaction to automate citizens’ inquiries and information requests<xref ref-type="fn" rid="fn61">61</xref>. However, in these cases, it is important to guarantee the right to know that one is interacting with a machine<xref ref-type="fn" rid="fn62">62</xref> and to have a human contact point. Moreover, access to public services must not depend on the provision of data that is unnecessary or disproportionate to the purpose.</p>
                <p>Special attention should also be paid to the potential use of AI in human-machine interaction to implement nudging strategies<xref ref-type="fn" rid="fn63">63</xref>. Here, due to the complexity and obscurity of the technical solutions adopted, AI can increase the passive role of citizens and negatively affect the democratic decision-making process. Instead, an approach based on conscious and active participation in community goals should be preferred and better supported by AI participation tools. Where adopted, nudging strategies should still follow an evidence-based approach.</p>
                <p>Finally, the use of AI systems in governance tasks raises challenging questions about the relationship between human decision-makers and the role of AI in the decision-making process<xref ref-type="fn" rid="fn64">64</xref>. These issues are more relevant with regard to the functions that have a high impact on individual rights and freedoms, as in the case of jurisdictional decisions<xref ref-type="fn" rid="fn65">65</xref>.</p>
            </sec>
            <sec>
                <title>2.2 Elections</title>
                <p>The impact of AI on electoral processes is broad and concerns the pre-election, election, and post-election phases in different ways. However, an analysis focused on the stages of the electoral process does not adequately highlight the different ways in which AI solutions interact with it.</p>
                <p>The influence of AI is therefore better represented by the following distinction: AI for the electoral process (e-voting, predictions of results, and electoral dispute resolution) and AI for electoral campaigns (micro-targeting and profiling, propaganda and fake news). While in the first area AI is mainly a technological improvement of an existing process, in the field of electoral campaigning AI-based profiling and propaganda raise new concerns that are only partially addressed by the existing legal framework. In addition, several documents have emphasised the active role of states in creating an enabling environment for freedom of expression<xref ref-type="fn" rid="fn66">66</xref>.</p>
                <p>As regards the technological implementation of e-democracy (e-voting, prediction of results, and electoral dispute resolution), some of the key principles mentioned with regard to democratic participation are also relevant here. Accessibility, transparency, openness, risk management and accountability (including the adoption of certification and auditing procedures) are fundamental elements of the technological solutions adopted in these stages of the electoral process<xref ref-type="fn" rid="fn67">67</xref>.</p>
                <p>As regards AI for campaigning (micro-targeting and profiling, propaganda and fake news), some of the issues raised concern the processing of personal data in general. The principles set out in Convention 108+ can therefore be applied and properly contextualised<xref ref-type="fn" rid="fn68">68</xref>.</p>
                <p>More specific and new responses are needed in the case of propaganda and disinformation<xref ref-type="fn" rid="fn69">69</xref>. Here the existing binding and non-binding instruments do not lay down specific provisions, given the novelty of disinformation based on new forms of communication, such as social networks, which differ from traditional media<xref ref-type="fn" rid="fn70">70</xref> and often bypass the professional mediation of journalists.</p>
                <p>However, general principles, such as the principle of non-interference by public authorities on media activities to influence elections<xref ref-type="fn" rid="fn71">71</xref>, can be extended to these new forms of propaganda and disinformation. Considering the use of AI to automate propaganda, future AI regulation should extend the scope of the general principles of non-interference to AI-based systems used to provide false, misleading and harmful information. In addition, to prevent such interference, states<xref ref-type="fn" rid="fn72">72</xref> and social media providers should adopt a by-design approach to increase their resilience to disinformation and propaganda.</p>
                <p>Similarly, the obligation to cover election campaigns in a fair, balanced, and impartial manner<xref ref-type="fn" rid="fn73">73</xref> should entail obligations for media and social media operators regarding the transparency of the logic of the algorithms used for content selection,<xref ref-type="fn" rid="fn74">74</xref> ensuring pluralism and diversity of voices<xref ref-type="fn" rid="fn75">75</xref>, including critical ones<xref ref-type="fn" rid="fn76">76</xref>.</p>
                <p>Moreover, states and intermediaries should promote and facilitate access to tools to detect disinformation and non-human agents, as well as support independent research on the impact of disinformation and projects offering fact-checking services to users<xref ref-type="fn" rid="fn77">77</xref>.</p>
                <p>Given the important role played by advertising in disinformation and propaganda, the criteria used by AI-based solutions for political advertising should be transparent<xref ref-type="fn" rid="fn78">78</xref>, auditable and provide equal conditions to all the political parties and candidates<xref ref-type="fn" rid="fn79">79</xref>. In addition, intermediaries should review their advertising models to ensure that they do not adversely affect the diversity of opinions and ideas<xref ref-type="fn" rid="fn80">80</xref>.</p>
            </sec>
        </sec>
        <sec>
            <title>3 AI AND DIGITAL JUSTICE</title>
            <p>As in the case of democracy, the field of justice is a broad domain and analysing the whole spectrum of the consequences of AI on justice would be too ambitious. In line with the scope of this contribution, this section sets out to describe the main challenges associated with the use of AI in digital justice and the principles which, based on international legally binding instruments, can contribute to its future regulation.</p>
            <p>This analysis is facilitated by the European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment, adopted by the CEPEJ in 2018, which directly addresses the relationship between justice and AI. Although this non-binding instrument is classed as an ethical charter, to a large extent it concerns legal principles enshrined in international instruments.</p>
            <p>Guiding principles for the development of AI in the field of digital justice can be derived from the following binding instruments: the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, the Convention for the Protection of Human Rights and Fundamental Freedoms, the International Convention on the Elimination of All Forms of Racial Discrimination, and the Convention on the Elimination of All Forms of Discrimination against Women<xref ref-type="fn" rid="fn81">81</xref>.</p>
            <p>Given the range of types and purposes of operations in this field and the various professional figures and procedures involved, this section makes a functional distinction between two areas: (i) judicial decisions and alternative dispute resolutions (ADRs) and (ii) crime prevention/prediction. Before analysing and contextualising the key principles relating to these two areas, we should offer some general observations, which may also apply to the action of the public administration as a whole<xref ref-type="fn" rid="fn82">82</xref>.</p>
            <p>First of all, it is worth noting that – compared to human decisions, and more specifically judicial decisions – the logic behind AI systems does not resemble legal reasoning. Instead, these systems simply execute code based on a data-centric and mathematical/statistical approach.</p>
            <p>In addition, AI error rates are close to, or lower than, human ones in fields such as image labelling, but they are higher for more complicated decision-making tasks. This is the case with legal reasoning in problem solving<xref ref-type="fn" rid="fn83">83</xref>. At the same time, while the misclassification of an image of a cat may have limited adverse effects, errors in legal decisions<xref ref-type="fn" rid="fn84">84</xref> have a high impact on the rights and freedoms of individuals.</p>
            <p>It is worth pointing out that the difference between errors in human and machine decision-making has an important consequence in terms of scale: while human error affects only individual cases, poor design and bias in AI inevitably affect all people in the same or similar circumstances, with AI tools being applied to a whole series of cases. This may cause group discrimination, adversely affecting individuals belonging to different traditional and non-traditional categories<xref ref-type="fn" rid="fn85">85</xref>.</p>
            <p>Given the textual nature of legal documents, natural language processing (NLP) can play an important role in AI applications for the justice sphere<xref ref-type="fn" rid="fn86">86</xref>. This raises several critical issues surrounding commercial solutions developed with a focus on the English-speaking market, making them less effective in legal environments that use languages other than English<xref ref-type="fn" rid="fn87">87</xref>. Moreover, legal decisions are often characterised by implicit, unexpressed reasoning, which may be amenable to expert systems but not to language-based machine learning tools. Finally, the presence of general clauses requires prior knowledge of the relevant legal interpretation and continual updates, which cannot be derived from text mining.</p>
            <p>All these constraints suggest a more careful and critical adoption of AI in the field of justice than in other domains and, with regard to court decisions and ADRs, suggest distinguishing between cases characterised by routine and fact-based evaluations and cases characterised by a significant margin for legal reasoning and discretion<xref ref-type="fn" rid="fn88">88</xref>.</p>
            <sec>
                <title>3.1 ADRs and court decisions</title>
                <p>Several so-called Legal Tech AI products do not have a direct impact on the decision-making processes in courts or alternative dispute resolutions (ADRs), but rather facilitate content and knowledge management, organisational management, and performance measurement<xref ref-type="fn" rid="fn89">89</xref>. These applications include, for example, tools for contract categorisation, detection of divergent or incompatible contractual clauses, e-discovery, drafting assistance, law provision retrieval, and assisted compliance review. In addition, some applications can provide basic problem-solving functions based on standard questions and standardised situations (<italic>e.g.</italic>, legal chatbots).</p>
                <p>Although in such cases AI has an impact on legal practice and legal knowledge that raises various ethical issues<xref ref-type="fn" rid="fn90">90</xref>, the potential adverse consequences for human rights, democracy and the rule of law are limited. To a large extent, they are related to inefficiencies or flaws of these systems.</p>
                <p>In the case of content and knowledge management, including research and document analysis, these flaws can generate incomplete or inaccurate representations of facts or situations, but they affect meta-products – the results of a research tool – which need to be interpreted and adequately justified when used in court. Liability rules, in the context of product liability, for instance, can address these issues.</p>
                <p>In addition, bias (poor case selection, misclassification, etc.) affecting standard text-based computer-assisted search tools for the analysis of legislation, case-law, and literature<xref ref-type="fn" rid="fn91">91</xref> can be countered by suitable education and training of legal professionals, and the transparency of AI systems (that is, the description of their logic, potential bias and limitations) can reduce the negative consequences.</p>
                <p>Transparency should also characterise the use by courts of AI for legal research and document analysis. Judges must be transparent as to which decisions depend on AI and how the results provided by AI are used to contribute to the arguments, in line with the principles of fair trial and equality of arms<xref ref-type="fn" rid="fn92">92</xref>.</p>
                <p>Finally, transparency can play an important role with regard to AI-based legal chatbots, making users aware of their logic and the resources used (for example, the list of cases analysed). Full transparency should also include the sources used to train these algorithms and access to the database used to provide answers. Where these databases are private, third-party audits should be available to assess the quality of the datasets and how potential biases have been addressed, including the risk of under- or over-representation of certain categories (non-discrimination).</p>
                <p>Further critical issues affect AI applications designed to automate alternative dispute resolution or to support judicial decisions. Here, the distinction between codified justice and equitable justice<xref ref-type="fn" rid="fn93">93</xref> suggests that, for decision-making purposes, AI should be circumscribed to cases characterised by routine and fact-based evaluations. This makes it important to carry out further research on the classification of the different kinds of decisional processes, in order to identify those routinised applications of legal reasoning that can be entrusted to AI, while in any case preserving human oversight, which also safeguards the legal creativity of decision-makers<xref ref-type="fn" rid="fn94">94</xref>.</p>
                <p>Regarding equitable justice, as the literature points out<xref ref-type="fn" rid="fn95">95</xref>, its logic is more complicated than the simple outcome of individual cases. Expressed and unexpressed values and considerations, both legal and non-legal, characterise the reasoning of the courts and are not replicable by the logic of AI. ML-based systems are not able to perform legal reasoning. They extract inferences by identifying patterns in legal datasets, which is not the same as the elaboration of legal reasoning.</p>
                <p>Considering the wider context of the social role of courts, jurisprudence is an evolving system, open to new societal and political issues. Path-dependent AI tools could therefore stymie this evolutionary process: the deductive and path-dependent nature of certain AI solutions can undermine the important role of human decision-makers in the evolution of law in practice and legal reasoning.</p>
                <p>Moreover, at the individual level, path-dependency may also entail the risk of ‘deterministic analyses’<xref ref-type="fn" rid="fn96">96</xref>, prompting the resurgence of deterministic doctrines to the detriment of doctrines of individualisation of the sanction and with prejudice to the principle of rehabilitation and individualisation in sentencing.</p>
                <p>In addition, in several cases, including ADR, both the mediation between the parties’ demands and the analysis of the psychological component of human actions (fault, intentionality) require emotional intelligence that AI systems do not have.</p>
                <p>These concerns are reflected in the existing legal framework provided by the international legal instruments. The Universal Declaration of Human Rights (Articles 7 and 10), the International Covenant on Civil and Political Rights (Article 14), the Convention for the Protection of Human Rights and Fundamental Freedoms (Article 6) and also the Charter of Fundamental Rights of the European Union (Article 47) stress the following key requirements with regard to the exercise of judicial power: equal treatment before the law, impartiality, independence and competency. AI tools do not possess these qualities, and this limits their contribution to the decision-making process as carried out by courts.</p>
                <p>As stated by the European Commission for the Efficiency of Justice, ‘the neutrality of algorithms is a myth, as their creators consciously or unintentionally transfer their own value systems into them’. Many cases of bias in AI applications confirm that these systems too often – albeit in many cases unintentionally – provide a partial representation of society and individual cases, which is not compatible with the principles of equal treatment before the law and non-discrimination<xref ref-type="fn" rid="fn97">97</xref>. Data quality and other forms of quality assessment (impact assessment, audits, etc.) can reduce this risk but, given the importance of the interests potentially affected in the event of biased decisions, the risks remain high in the case of equitable justice and seem disproportionate to the benefits, which lie largely in efficiency gains for the justice system<xref ref-type="fn" rid="fn98">98</xref>.</p>
                <p>Further concerns affect the principles of fair trial and of equality of arms<xref ref-type="fn" rid="fn99">99</xref>, when court decisions are based on the results of proprietary algorithms whose training data and structure are not publicly available<xref ref-type="fn" rid="fn100">100</xref>. A broad notion of transparency might address these issues in relation to the use of AI in judicial decisions, but the transparency of AI – a challenging goal in itself – cannot address the other structural and functional objections cited above.</p>
                <p>In addition, data scientists can shape AI tools in different ways in the design and training phases, so that, were AI tools to become an obligatory part of the decision-making process, governments selecting the tools to be used by the courts could indirectly interfere with the independence of judges.</p>
                <p>This risk is not eliminated by the fact that the judge remains free to disregard AI decisions by providing specific reasons. Although human oversight is an important element<xref ref-type="fn" rid="fn101">101</xref>, its effective impact may be undermined by the psychological or utilitarian (cost-efficient) propensity of the human decision-maker to take advantage of the solution provided by AI<xref ref-type="fn" rid="fn102">102</xref>.</p>
            </sec>
            <sec>
                <title>3.2 Crime prevention</title>
                <p>The complexity of crime detection and prevention has stimulated research in AI applications to facilitate human activities. In recent years, several solutions<xref ref-type="fn" rid="fn103">103</xref> and a growing literature have been developed in the field of predictive policing, which is a proactive data-driven approach to crime prevention. Essentially, the available solutions pursue two different goals: to predict where and when crimes might occur or to predict who might commit a crime<xref ref-type="fn" rid="fn104">104</xref>.</p>
                <p>These two purposes have a distinct potential impact on human rights and freedoms, which is more pronounced when AI is used for individual predictions. However, in both cases, we can repeat here the considerations about the general challenges related to AI (obscurity, intellectual property rights, large-scale data collection<xref ref-type="fn" rid="fn105">105</xref>, etc.) discussed in the previous sections and partially addressed by transparency, data quality, data protection, auditing and other measures<xref ref-type="fn" rid="fn106">106</xref>. It is worth noting that the role of transparency in the judicial context could be limited so as not to frustrate the deterrent effect of these tools<xref ref-type="fn" rid="fn107">107</xref>. Full transparency could therefore be replaced by auditing and oversight by independent authorities.</p>
                <p>Leaving aside the organisational aspects regarding the limitation of police officers’ self-determination in the performance of their duties, the main issues with regard to the use of AI to predict crime on a geographic and temporal basis concern the impact of these tools on the right to non-discrimination<xref ref-type="fn" rid="fn108">108</xref>. Self-fulfilling bias, community bias<xref ref-type="fn" rid="fn109">109</xref> and historical bias<xref ref-type="fn" rid="fn110">110</xref> can produce forms of stigmatisation for certain groups and the areas where they typically live.</p>
                <p>Where data analysis is used to classify crimes and infer evidence on criminal networks, proprietary solutions raise issues in terms of respect for the principles of fair trial and of equality of arms with regard to the collection and use of evidence. Moreover, if the daily operations of police departments are guided by predictive software, this raises a problem of accountability for the strategies adopted, as they are partially determined by the software and hence by the software developer companies, rather than by the police.</p>
                <p>A sharper conflict with human rights arises in the area of predictive policing tools that use profiling to support individual forecasting. Quite apart from the question of data processing and profiling<xref ref-type="fn" rid="fn111">111</xref>, these solutions can also adversely affect the principle of presumption of innocence, procedural fairness, and the right to non-discrimination<xref ref-type="fn" rid="fn112">112</xref>.</p>
                <p>While non-discrimination issues could be partially addressed, the remaining conflicts seem more difficult to resolve. From a human rights standpoint and in terms of proportionality (including the right to respect for private and family life)<xref ref-type="fn" rid="fn113">113</xref>, the risk of prejudice to these principles seems high and not adequately countered by evidence of benefits for individual and collective rights and freedoms<xref ref-type="fn" rid="fn114">114</xref>. In the light of future AI regulation, this calls for careful consideration of these issues, taking into account the distinction between the technical possibilities of AI solutions and their concrete benefits in safeguarding and enhancing human rights and freedoms.</p>
                <p>Finally, from a wider and comprehensive human rights perspective, the focus on crime by data-driven AI tools drives a short-term factual approach that underrates the social issues that are often crime-related and require long-term social strategies involving the effective enhancement of individual and social rights and freedoms<xref ref-type="fn" rid="fn115">115</xref>.</p>
            </sec>
        </sec>
        <sec sec-type="conclusions">
            <title>4 CONCLUSIONS</title>
            <p>The latest wave of AI development is having a growing transformative impact on society and raises new questions in several fields, from predictive medicine and media content moderation to the quantified self and judicial systems.</p>
            <p>With a view to preserving the harmonisation of the existing legal framework in the field of human rights, this article sets out to contribute to the debate on future AI regulation by building on existing binding instruments, contextualising their principles and providing key regulatory guidance in the fields of electronic democracy and digital justice.</p>
            <p>This approach is based on the assumption that all human activities, including innovation through AI, should be underpinned by the general international principles on human rights. Moreover, only the human rights framework can provide a universal reference for the regulation of AI, while other yardsticks (for example ethics) do not have the same global dimension, are more context-dependent and characterised by a variety of theoretical approaches.</p>
            <p>The findings of this analysis show that only a limited number of common principles (for example individual self-determination, non-discrimination, human oversight) are shared across the areas examined. This is due to several factors.</p>
            <p>First, some principles are sector specific. This is the case, for instance, with the independence of judges or the principles of fair trial and equality of arms, which concern justice alone<xref ref-type="fn" rid="fn116">116</xref>.</p>
            <p>Second, some guiding principles are shared by different areas, but with different nuances in each context. This is true for transparency, which is often regarded as pivotal in AI regulation, but takes on different meanings in different regulatory contexts.</p>
            <p>Transparency, as a means to control the power over data in the hands of public and private entities, is crucial with regard to AI applications for democratic participation and good governance. In the context of justice, transparency has a more complex significance, being vital to safeguard fundamental rights and freedoms (<italic>e.g.</italic>, use of AI in the courts), but also requiring limitation to avoid prejudicing competing interests (<italic>e.g.</italic>, crime detection and prevention in predictive policing).</p>
            <p>We can therefore conclude that transparency is a guiding principle, but we must go beyond a mere claim for transparency as a key principle for AI regulation. As with other key principles (such as participation, inclusion, democratic oversight, and openness), a proper contextualisation is needed, with provisions that take into account the different contexts in which they operate.</p>
            <p>Third, some principles are different, but belong to the same conceptual area, assuming various nuances in the different contexts. This is the case with accountability and guiding principles on risk management in general. Here the level of detail and related requirements can be more or less elaborate. While, for instance, in the field of data protection there are several provisions implementing these principles with a significant degree of detail<xref ref-type="fn" rid="fn117">117</xref>, in the case of democracy and justice these principles are less developed in data-intensive applications such as AI.</p>
            <p>Finally, there are certain components of an AI regulatory strategy that are not principles, but operational approaches and solutions, common to the different areas though requiring context-based development. This is the case with the important role played by education and training.</p>
            <p>Such considerations suggest that only partial harmonisation is achievable. The framework of future international AI regulation should therefore be based on a legally binding instrument that includes both general provisions – focusing on common principles and operational solutions – and more specific sectoral provisions, covering those principles that are relevant only in a given field, or cases where the same principle is contextualised differently in different fields.</p>
            <p>The analysis carried out in the previous sections has also confirmed that the existing framework based on human rights can provide an appropriate and common context for the development of more specific binding instruments to regulate AI, in line with the principles enshrined in the international legal instruments and capable of effectively addressing the issues raised by AI.</p>
            <p>With a view to the future regulation of AI, this study leaves open a number of gaps, largely because in broad areas such as democracy and justice differing options and interpretations are available, depending on the political and societal vision of the future relationship between humans and machines. Further investigation in the field of human rights and AI, as well as the ongoing debate at international and regional level, will contribute to bridging these gaps.</p>
        </sec>
    </body>
    <back>
        <fn-group>            
            <fn fn-type="other" id="fn02">
                <label>2</label>
                <p>For an analysis of the different impacts of AI on individuals and society see <xref ref-type="bibr" rid="B46">Council of Europe, 2018c</xref>; <xref ref-type="bibr" rid="B84">MANTELERO-ESPOSITO, 2021</xref>; <xref ref-type="bibr" rid="B127">ZUIDERVEEN BORGESIUS, 2020</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn03">
                <label>3</label>
                <p>See <xref ref-type="bibr" rid="B55">European Commission, 2021</xref>; Council of Europe – Ad hoc <xref ref-type="bibr" rid="B17">Committee on Artificial Intelligence (CAHAI), 2020</xref>; <xref ref-type="bibr" rid="B45">Council of Europe, 2020b</xref>; Council of Europe – Committee of the Convention for the Protection of Individuals with regards to Processing of Personal Data (Convention 108), 2019; <xref ref-type="bibr" rid="B92">OECD, 2019</xref>; <xref ref-type="bibr" rid="B118">UNESCO, 2021</xref>. See also <xref ref-type="bibr" rid="B122">VERONESE-NUNES LOPES ESPIÑEIRA LEMOS, 2021</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn04">
                <label>4</label>
                <p>See <xref ref-type="bibr" rid="B71">HILDEBRANDT, 2021</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn05">
                <label>5</label>
                <p>See <xref ref-type="bibr" rid="B57">European Union Agency for Fundamental Rights, 2019</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn06">
                <label>6</label>
                <p>See <xref ref-type="bibr" rid="B01">AGRE, 1994</xref>; <xref ref-type="bibr" rid="B70">HILDEBRANDT, 2019</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn07">
                <label>7</label>
                <p>See <xref ref-type="bibr" rid="B75">KOLKMAN, 2020</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn08">
                <label>8</label>
                <p>See <xref ref-type="bibr" rid="B49">CUMMINGS et al., 2018</xref>, 2; <xref ref-type="bibr" rid="B10">CARUANA et al., 2015</xref>; <xref ref-type="bibr" rid="B59">EYKHOLT et al., 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn09">
                <label>9</label>
                <p>See <xref ref-type="bibr" rid="B02">AI Now Institute, 2017</xref>, 4 and 16-17.</p>
            </fn>
            <fn fn-type="other" id="fn10">
                <label>10</label>
                <p>See <xref ref-type="bibr" rid="B120">VEALE-BINNS, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn11">
                <label>11</label>
                <p>See <xref ref-type="bibr" rid="B115">TUBARO-CASILLI-COVILLE, 2020</xref>; <xref ref-type="bibr" rid="B48">CRAWFORD-JOLER, 2018</xref>; <xref ref-type="bibr" rid="B76">LOIDEAIN-ADAMS, 2020</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn12">
                <label>12</label>
                <p>See also <xref ref-type="bibr" rid="B124">WEST-WHITTAKER-CRAWFORD, 2019</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn13">
                <label>13</label>
                <p>See <xref ref-type="bibr" rid="B107">SELBST, 163</xref>; <xref ref-type="bibr" rid="B08">BURRELL, 2016</xref>; <xref ref-type="bibr" rid="B06">BRAUNEIS-GOODMAN, 2018</xref>, 131.</p>
            </fn>
            <fn fn-type="other" id="fn14">
                <label>14</label>
                <p>See also <xref ref-type="bibr" rid="B65">GRABER, 2020</xref>; <xref ref-type="bibr" rid="B82">MANTELERO, 2016</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn15">
                <label>15</label>
                <p>See <xref ref-type="bibr" rid="B96">PASQUALE, 2015</xref>, 193.</p>
            </fn>
            <fn fn-type="other" id="fn16">
                <label>16</label>
                <p>See also <xref ref-type="bibr" rid="B99">Privacy International, 2017</xref>; <xref ref-type="bibr" rid="B64">GOODMAN-POWLES, 2019</xref>. See also <xref ref-type="bibr" rid="B14">COHEN, 2019</xref>, 62-3.</p>
            </fn>
            <fn fn-type="other" id="fn17">
                <label>17</label>
                <p>See <xref ref-type="bibr" rid="B61">FERRYMAN-PITCAN, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn18">
                <label>18</label>
                <p>See <xref ref-type="bibr" rid="B84">MANTELERO-ESPOSITO, 2021</xref>; <xref ref-type="bibr" rid="B30">Council of Europe, 2018c</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn19">
                <label>19</label>
                <p>See <xref ref-type="bibr" rid="B73">JOBIN-IENCA-VAYENA, 2019</xref>; <xref ref-type="bibr" rid="B66">HAGENDORFF, 2020</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn20">
                <label>20</label>
                <p>See <xref ref-type="bibr" rid="B106">SCHRAG, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn21">
                <label>21</label>
                <p>See <xref ref-type="bibr" rid="B71">HILDEBRANDT, 2021</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn22">
                <label>22</label>
                <p>See <xref ref-type="bibr" rid="B100">RAAB, 2020</xref>. See also <xref ref-type="bibr" rid="B72">Independent High-Level Group on Artificial Intelligence, 2019</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn23">
                <label>23</label>
                <p>See <xref ref-type="bibr" rid="B90">NEMITZ, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn24">
                <label>24</label>
                <p>See <xref ref-type="bibr" rid="B63">GESLEY, 2019</xref>; <xref ref-type="bibr" rid="B47">Council of Europe, 2020a</xref>. See also <xref ref-type="bibr" rid="B79">MANHEIM-KAPLAN, 2019</xref>, 160.</p>
            </fn>
            <fn fn-type="other" id="fn25">
                <label>25</label>
                <p>See <xref ref-type="bibr" rid="B45">Council of Europe, 2020b</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn26">
                <label>26</label>
                <p>See for example <xref ref-type="bibr" rid="B18">Council of Europe – Committee of the Convention for the Protection of Individuals with regard to Processing of Personal Data, 2019</xref>; <xref ref-type="bibr" rid="B11">CEPEJ, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn27">
                <label>27</label>
                <p>See <xref ref-type="bibr" rid="B83">MANTELERO, 2020</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn28">
                <label>28</label>
                <p><italic>E.g.</italic>, <xref ref-type="bibr" rid="B108">SELBST-BAROCAS, 2018</xref>; <xref ref-type="bibr" rid="B51">EDWARDS-VEALE, 2017</xref>; <xref ref-type="bibr" rid="B50">DIAKOPOULOS, 2013</xref>. See also <xref ref-type="bibr" rid="B74">KAMINSKI-MALGIERI, 2020</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn29">
                <label>29</label>
                <p><italic>E.g.</italic>, <xref ref-type="bibr" rid="B23">Council of Europe, Directorate General of Democracy – European Committee on Democracy and Governance, 2016</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn30">
                <label>30</label>
                <p><italic>E.g.</italic>, <xref ref-type="bibr" rid="B19">Council of Europe Directorate General of Democracy and Political Affairs and Directorate of Democratic Institutions, 2009</xref>; <xref ref-type="bibr" rid="B25">Council of Europe, 2009a</xref>, Article 2.2.iii.</p>
            </fn>
            <fn fn-type="other" id="fn31">
                <label>31</label>
                <p>For a more detailed analysis see <xref ref-type="bibr" rid="B60">Faye Jacobsen, 2013</xref>. See also <xref ref-type="bibr" rid="B78">MAISLEY, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn32">
                <label>32</label>
                <p>See <xref ref-type="bibr" rid="B117">UN Human Rights Committee, 1996</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn33">
                <label>33</label>
                <p>See also <xref ref-type="bibr" rid="B116">UN Committee on Economic, Social and Cultural Rights, 1981</xref>, para 5.</p>
            </fn>
            <fn fn-type="other" id="fn34">
                <label>34</label>
                <p>Information on these platforms is available at <ext-link ext-link-type="uri" xlink:href="https://decidim.org/">https://decidim.org/</ext-link>; <ext-link ext-link-type="uri" xlink:href="https://consulproject.org/en/">https://consulproject.org/en/</ext-link>; <ext-link ext-link-type="uri" xlink:href="https://www.citizenlab.co/">https://www.citizenlab.co/</ext-link>. Accessed: 29 dec. 2019.</p>
            </fn>
            <fn fn-type="other" id="fn35">
                <label>35</label>
                <p>See also <xref ref-type="bibr" rid="B29">Council of Europe, 2017a</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn36">
                <label>36</label>
                <p>See also <xref ref-type="bibr" rid="B35">Council of Europe, 2009c</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn37">
                <label>37</label>
                <p>See United Nations, Convention against Corruption, 2003, Article 13.</p>
            </fn>
            <fn fn-type="other" id="fn38">
                <label>38</label>
                <p>See <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, para 6.</p>
            </fn>
            <fn fn-type="other" id="fn39">
                <label>39</label>
                <p>See <xref ref-type="bibr" rid="B27">Council of Europe, 1997</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn40">
                <label>40</label>
                <p>See also <xref ref-type="bibr" rid="B29">Council of Europe, 2017a</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn41">
                <label>41</label>
                <p>See <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, para 6 and Appendix, para P.57. See also <xref ref-type="bibr" rid="B40">Council of Europe, 2016b</xref>, Appendix, paras 2.1.3 and 3.2. On the importance of justification see <xref ref-type="bibr" rid="B69">Hildebrandt, 2018b</xref>, 271-3.</p>
            </fn>
            <fn fn-type="other" id="fn42">
                <label>42</label>
                <p>See also <xref ref-type="bibr" rid="B43">Council of Europe, 2018b</xref>, Appendix, para B.IV.</p>
            </fn>
            <fn fn-type="other" id="fn43">
                <label>43</label>
                <p>See also <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, para 6 and Appendix, paras G.58 and P.54.</p>
            </fn>
            <fn fn-type="other" id="fn44">
                <label>44</label>
                <p>It is worth underlining that auditing and certification schemes also play an important role for open-source AI architectures, since open-source status does not in itself imply the absence of bias or other shortcomings. See also <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, Appendix, paras P.55 and G.57.</p>
            </fn>
            <fn fn-type="other" id="fn45">
                <label>45</label>
                <p>See also <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, Appendix, paras P.56, G.56, 59 and 60.</p>
            </fn>
            <fn fn-type="other" id="fn46">
                <label>46</label>
                <p>See also <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, para 6.</p>
            </fn>
            <fn fn-type="other" id="fn47">
                <label>47</label>
                <p>See also <xref ref-type="bibr" rid="B45">Council of Europe, 2020b</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn48">
                <label>48</label>
                <p>See <xref ref-type="bibr" rid="B39">Council of Europe, 2016a</xref>, Appendix, para 2; <xref ref-type="bibr" rid="B28">Council of Europe Parliamentary Assembly, 2019a</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn49">
                <label>49</label>
                <p>See also <xref ref-type="bibr" rid="B21">Council of Europe Parliamentary Assembly, 2019b</xref>; <xref ref-type="bibr" rid="B38">Council of Europe, 2014</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn50">
                <label>50</label>
                <p>See also <xref ref-type="bibr" rid="B45">Council of Europe, 2020b</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn51">
                <label>51</label>
                <p>See also <xref ref-type="bibr" rid="B18">Council of Europe, 2019a</xref>, para 7.</p>
            </fn>
            <fn fn-type="other" id="fn52">
                <label>52</label>
                <p>See <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, paras 5 and 6, and Appendix, para G.67. See also <xref ref-type="bibr" rid="B80">Mantelero, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn53">
                <label>53</label>
                <p>See <xref ref-type="bibr" rid="B88">MIKHAYLOV-ESTEVE-CAMPION, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn54">
                <label>54</label>
                <p>See also <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, Appendix, para P.58.</p>
            </fn>
            <fn fn-type="other" id="fn55">
                <label>55</label>
                <p>See also <xref ref-type="bibr" rid="B121">VERBEEK, 2011</xref>, 41-65.</p>
            </fn>
            <fn fn-type="other" id="fn56">
                <label>56</label>
                <p>See also <xref ref-type="bibr" rid="B45">Council of Europe, 2020b</xref>, Appendix, para 3.5.</p>
            </fn>
            <fn fn-type="other" id="fn57">
                <label>57</label>
                <p>See also <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, Appendix, para P.4; <xref ref-type="bibr" rid="B32">Council of Europe, 2004</xref>; <xref ref-type="bibr" rid="B15">Committee of Ministers of the Council of Europe, 2008</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn58">
                <label>58</label>
                <p>See also <xref ref-type="bibr" rid="B99">Privacy International, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn59">
                <label>59</label>
                <p>See also <xref ref-type="bibr" rid="B44">Council of Europe, 2019b</xref>, Appendix, paras 4 and 9.</p>
            </fn>
            <fn fn-type="other" id="fn60">
                <label>60</label>
                <p>See also <xref ref-type="bibr" rid="B105">SAVAGET-CHIARINI-EVANS, 2019</xref> (discussing the Brazilian case of the ‘Operação Serenata de Amor’).</p>
            </fn>
            <fn fn-type="other" id="fn61">
                <label>61</label>
                <p>See <xref ref-type="bibr" rid="B86">MEHR, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn62">
                <label>62</label>
                <p>See also <xref ref-type="bibr" rid="B18">Council of Europe – Committee of the Convention for the Protection of Individuals with regard to Processing of Personal Data (Convention 108), 2019</xref>, para 2.11.</p>
            </fn>
            <fn fn-type="other" id="fn63">
                <label>63</label>
                <p>On the use of nudging in the smart city context, see <xref ref-type="bibr" rid="B101">Ranchordás, 2019</xref>; <xref ref-type="bibr" rid="B62">Gandy-Nemorin, 2019</xref>. See generally <xref ref-type="bibr" rid="B109">Sunstein, 2015a</xref> and <xref ref-type="bibr" rid="B110">2015b</xref>; <xref ref-type="bibr" rid="B113">Thaler-Sunstein, 2008</xref>; <xref ref-type="bibr" rid="B111">Sunstein-Thaler, 2003</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn64">
                <label>64</label>
                <p>See also <xref ref-type="bibr" rid="B12">CITRON-CALO, 2021</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn65">
                <label>65</label>
                <p>See Section 3.</p>
            </fn>
            <fn fn-type="other" id="fn66">
                <label>66</label>
                <p>See <xref ref-type="bibr" rid="B42">Council of Europe, 2018a</xref>; <xref ref-type="bibr" rid="B114">The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression et al., 2017</xref>. See also <xref ref-type="bibr" rid="B40">Council of Europe, 2016b</xref>, Appendix, paras 1.5, 2.1 and 3; <xref ref-type="bibr" rid="B54">European Commission for Democracy through Law (Venice Commission), 2019</xref>, para 151.E; <xref ref-type="bibr" rid="B07">Bukovska, 2020</xref>. See also <xref ref-type="bibr" rid="B09">Bychawska-Siniarska, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn67">
                <label>67</label>
                <p>See <xref ref-type="bibr" rid="B41">Council of Europe, 2017b</xref>, Appendix I, paras 1, 2, 32, and 35-40. See also <xref ref-type="bibr" rid="B24">Council of Europe, Directorate General of Democracy and Political Affairs – Directorate of Democratic Institutions, 2011</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn68">
                <label>68</label>
                <p>See <xref ref-type="bibr" rid="B36">Council of Europe, 2010</xref>; <xref ref-type="bibr" rid="B18">Council of Europe, Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, 2019</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn69">
                <label>69</label>
                <p>See <xref ref-type="bibr" rid="B79">MANHEIM-KAPLAN, 2019</xref>; <xref ref-type="bibr" rid="B56">European Commission, Directorate-General for Communications Networks, Content and Technology, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn70">
                <label>70</label>
                <p>See also <xref ref-type="bibr" rid="B37">Council of Europe, 2011</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn71">
                <label>71</label>
                <p>See <xref ref-type="bibr" rid="B33">Council of Europe, 2007</xref>, para I.1.</p>
            </fn>
            <fn fn-type="other" id="fn72">
                <label>72</label>
                <p>See also The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression et al., ‘Joint Declaration on “Fake News,” Disinformation and Propaganda’, para 2.c.</p>
            </fn>
            <fn fn-type="other" id="fn73">
                <label>73</label>
                <p>See <xref ref-type="bibr" rid="B33">Council of Europe, 2007</xref>, para II.1.</p>
            </fn>
            <fn fn-type="other" id="fn74">
                <label>74</label>
                <p>See also The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression et al., Appendix, paras 2.1.3 and 2.3.5.</p>
            </fn>
            <fn fn-type="other" id="fn75">
                <label>75</label>
                <p>See also <xref ref-type="bibr" rid="B52">EU Code of Practice on Disinformation, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn76">
                <label>76</label>
                <p>See also <xref ref-type="bibr" rid="B39">Council of Europe, 2016a</xref>, Appendix, para 15.</p>
            </fn>
            <fn fn-type="other" id="fn77">
                <label>77</label>
                <p>See also The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression et al., para 4.e; <xref ref-type="bibr" rid="B57">European Commission for Democracy through Law (Venice Commission), 2019</xref>, para 151.D.</p>
            </fn>
            <fn fn-type="other" id="fn78">
                <label>78</label>
                <p>See also <xref ref-type="bibr" rid="B20">Council of Europe Parliamentary Assembly, 2019a</xref>, paras 9.2 and 11.1; <xref ref-type="bibr" rid="B57">European Commission for Democracy through Law (Venice Commission), 2019</xref>, paras 151.A and 151.B.</p>
            </fn>
            <fn fn-type="other" id="fn79">
                <label>79</label>
                <p>See also <xref ref-type="bibr" rid="B33">Council of Europe, 2007</xref>, para II.5.</p>
            </fn>
            <fn fn-type="other" id="fn80">
                <label>80</label>
                <p>See also The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression et al., para 4.e.</p>
            </fn>
            <fn fn-type="other" id="fn81">
                <label>81</label>
                <p>See also, with regard to the EU area, the Charter of Fundamental Rights of the European Union.</p>
            </fn>
            <fn fn-type="other" id="fn82">
                <label>82</label>
                <p>See Section 2.</p>
            </fn>
            <fn fn-type="other" id="fn83">
                <label>83</label>
                <p>See also <xref ref-type="bibr" rid="B93">OSOBA-WELSER, 2017</xref>, 18. See also <xref ref-type="bibr" rid="B49">Cummings et al., 2018</xref>, 13.</p>
            </fn>
            <fn fn-type="other" id="fn84">
                <label>84</label>
                <p>See <xref ref-type="bibr" rid="B03">Aletras et al., 2016</xref>. See also <xref ref-type="bibr" rid="B97">Pasquale-Cashwell, 2018</xref>; <xref ref-type="bibr" rid="B68">Hildebrandt, 2018a</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn85">
                <label>85</label>
                <p>See <xref ref-type="bibr" rid="B123">WACHTER, 2021</xref>; <xref ref-type="bibr" rid="B89">MITTELSTADT, 2017</xref>, 485. See also <xref ref-type="bibr" rid="B112">TAYLOR-FLORIDI-VAN DER SLOOT, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn86">
                <label>86</label>
                <p>But see <xref ref-type="bibr" rid="B95">OSWALD, 2018</xref>; <xref ref-type="bibr" rid="B97">PASQUALE-CASHWELL, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn87">
                <label>87</label>
                <p>See <xref ref-type="bibr" rid="B16">Council of Bars &amp; Law Societies of Europe, 2020</xref>, 29.</p>
            </fn>
            <fn fn-type="other" id="fn88">
                <label>88</label>
                <p>See the following Section on the distinction between codified justice and equitable justice.</p>
            </fn>
            <fn fn-type="other" id="fn89">
                <label>89</label>
                <p>See <xref ref-type="bibr" rid="B11">CEPEJ, 2018</xref>, Appendix II.</p>
            </fn>
            <fn fn-type="other" id="fn90">
                <label>90</label>
                <p>See also <xref ref-type="bibr" rid="B91">NUNEZ, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn91">
                <label>91</label>
                <p>See the notion of e-justice in <xref ref-type="bibr" rid="B34">Council of Europe, 2009b</xref>, Appendix, para 38.</p>
            </fn>
            <fn fn-type="other" id="fn92">
                <label>92</label>
                <p>See also <xref ref-type="bibr" rid="B11">CEPEJ, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn93">
                <label>93</label>
                <p>See <xref ref-type="bibr" rid="B102">RE-SOLOW-NIEDERMAN, 2019</xref>, 252-4.</p>
            </fn>
            <fn fn-type="other" id="fn94">
                <label>94</label>
                <p>See also <xref ref-type="bibr" rid="B13">Clay, 2019</xref>, 58. In this regard, for example, a legal system that awards compensation for physical injuries on the basis of the actual patrimonial damage could be automated, but such automation would be unable to reconsider the foundations of the legal reasoning and extend compensation to non-patrimonial and existential damages.</p>
            </fn>
            <fn fn-type="other" id="fn95">
                <label>95</label>
                <p>See <xref ref-type="bibr" rid="B102">RE-SOLOW-NIEDERMAN, 2019</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn96">
                <label>96</label>
                <p>See <xref ref-type="bibr" rid="B11">CEPEJ, 2018</xref>, 9.</p>
            </fn>
            <fn fn-type="other" id="fn97">
                <label>97</label>
                <p>See also <xref ref-type="bibr" rid="B11">CEPEJ, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn98">
                <label>98</label>
                <p>See also <xref ref-type="bibr" rid="B45">Council of Europe, 2020b</xref>, Appendix, para 11. See also <xref ref-type="bibr" rid="B97">PASQUALE-CASHWELL, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn99">
                <label>99</label>
                <p>See also <xref ref-type="bibr" rid="B11">CEPEJ, 2018</xref>, Appendix I, para 138.</p>
            </fn>
            <fn fn-type="other" id="fn100">
                <label>100</label>
                <p>See also <xref ref-type="bibr" rid="B11">CEPEJ, 2018</xref>, Appendix I, para 131.</p>
            </fn>
            <fn fn-type="other" id="fn101">
                <label>101</label>
                <p>See also <xref ref-type="bibr" rid="B125">ZALNIERIUTE-BENNETT MOSES-WILLIAMS, 2019</xref>. In the case of administrative decisions, this propensity may be reinforced by the threat of potential sanctions for taking a decision that ignores results produced by analytics; <xref ref-type="bibr" rid="B18">Council of Europe – Committee of the Convention for the Protection of Individuals with regard to Processing of Personal Data, 2019</xref>, para 3.4.</p>
            </fn>
            <fn fn-type="other" id="fn102">
                <label>102</label>
                <p>See also <xref ref-type="bibr" rid="B12">CITRON-CALO, 2021</xref>; <xref ref-type="bibr" rid="B81">MANTELERO, 2019</xref>; <xref ref-type="bibr" rid="B06">BRAUNEIS-GOODMAN, 2018</xref>, 127.</p>
            </fn>
            <fn fn-type="other" id="fn103">
                <label>103</label>
                <p>See <xref ref-type="bibr" rid="B126">ZAVRŠNIK, 2019</xref>; <xref ref-type="bibr" rid="B58">European Union Agency for Fundamental Rights, 2018</xref>, 98-100; <xref ref-type="bibr" rid="B93">OSOBA-WELSER, 2017</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn104">
                <label>104</label>
                <p>For a taxonomy of predictive methods, see <xref ref-type="bibr" rid="B98">PERRY et al., 2013</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn105">
                <label>105</label>
                <p>See also <xref ref-type="bibr" rid="B31">Council of Europe, 2001</xref>, Appendix, para 42.</p>
            </fn>
            <fn fn-type="other" id="fn106">
                <label>106</label>
                <p>See also <xref ref-type="bibr" rid="B103">RICHARDSON-SCHULTZ-CRAWFORD, 2019</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn107">
                <label>107</label>
                <p>See also <xref ref-type="bibr" rid="B94">OSWALD, 2018</xref>; <xref ref-type="bibr" rid="B04">Barrett, 2017</xref>, 361-2.</p>
            </fn>
            <fn fn-type="other" id="fn108">
                <label>108</label>
                <p>See <xref ref-type="bibr" rid="B56">European Union Agency for Fundamental Rights, 2018</xref>, 10.</p>
            </fn>
            <fn fn-type="other" id="fn109">
                <label>109</label>
                <p>See also <xref ref-type="bibr" rid="B04">BARRETT, 2017</xref>, 358-9.</p>
            </fn>
            <fn fn-type="other" id="fn110">
                <label>110</label>
                <p>See <xref ref-type="bibr" rid="B05">BENNETT MOSES-CHAN, 2018</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn111">
                <label>111</label>
                <p>See <xref ref-type="bibr" rid="B77">LYNSKEY, 2019</xref>; <xref ref-type="bibr" rid="B85">MANTELERO-VACIAGO, 2015</xref>; <xref ref-type="bibr" rid="B67">Hildebrandt-Gutwirth, 2008</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn112">
                <label>112</label>
                <p>See also <xref ref-type="bibr" rid="B31">Council of Europe, 2001</xref>, Appendix, paras 47 and 49.</p>
            </fn>
            <fn fn-type="other" id="fn113">
                <label>113</label>
                <p>See <xref ref-type="bibr" rid="B119">VAN BRAKEL-DE HERT, 2011</xref>, 183.</p>
            </fn>
            <fn fn-type="other" id="fn114">
                <label>114</label>
                <p>See <xref ref-type="bibr" rid="B87">MEIJER-WESSELS, 2019</xref>.</p>
            </fn>
            <fn fn-type="other" id="fn115">
                <label>115</label>
                <p>See also <xref ref-type="bibr" rid="B104">ROSENBAUM, 2006</xref>, 245-66.</p>
            </fn>
            <fn fn-type="other" id="fn116">
                <label>116</label>
                <p>See also the principles of equitable access and of beneficence in the health sector, or the principle that public authorities must not interfere in the media to influence elections and the obligation to treat all political parties and candidates equally in electoral advertising.</p>
            </fn>
            <fn fn-type="other" id="fn117">
                <label>117</label>
                <p>See <xref ref-type="bibr" rid="B26">Council of Europe, 2018a</xref>, and <xref ref-type="bibr" rid="B18">Council of Europe – Committee of the Convention for the Protection of Individuals with regard to Processing of Personal Data, 2019</xref>.</p>
            </fn>
        </fn-group>
        <ref-list>
            <title>REFERENCES</title>

            <ref id="B01">

                <mixed-citation>AGRE, P.E. Surveillance and Capture: Two Models of Privacy. <italic>The Information Society</italic> 10, 101, 1994.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>AGRE</surname>
                            <given-names>P.E.</given-names>
                        </name>
                    </person-group>
                    <article-title>Surveillance and Capture: Two Models of Privacy</article-title>
                    <source>The Information Society</source>
                    <volume>10</volume>
                    <issue>101</issue>
                    <year>1994</year>

                </element-citation>
            </ref>
            <ref id="B02">

                <mixed-citation>AI NOW INSTITUTE. AI Now 2017 Report. New York, 2017. Available at https://assets.contentful.com/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf. Accessed: 26 oct. 2017.</mixed-citation>

                <element-citation publication-type="report">
                    <person-group person-group-type="author">
                        <collab>AI NOW INSTITUTE</collab>
                    </person-group>
                    <source>AI Now 2017 Report</source>
                    <publisher-loc>New York</publisher-loc>
                    <year>2017</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://assets.contentful.com/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf">https://assets.contentful.com/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf</ext-link></comment>
                    <date-in-citation content-type="access-date">26 oct. 2017</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B03">

                <mixed-citation>ALETRAS, N. et al. Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective. <italic>PeerJ Computer Science</italic> 2, 93, 2016, doi:10.7717/peerj-cs.93.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>ALETRAS</surname>
                            <given-names>N.</given-names>
                        </name>
                        <etal/>
                    </person-group>
                    <article-title>Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective</article-title>
                    <source>PeerJ Computer Science</source>
                    <volume>2</volume>
                    <issue>93</issue>
                    <year>2016</year>
                    <pub-id pub-id-type="doi">10.7717/peerj-cs.93</pub-id>

                </element-citation>
            </ref>
            <ref id="B04">

                <mixed-citation>BARRETT, L. Reasonably Suspicious Algorithms: Predictive Policing at the United States Border. <italic>New York University Review of Law &amp; Social Change</italic> 41, 327, 2017.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>BARRETT</surname>
                            <given-names>L.</given-names>
                        </name>
                    </person-group>
                    <article-title>Reasonably Suspicious Algorithms: Predictive Policing at the United States Border</article-title>
                    <source>New York University Review of Law &amp; Social Change</source>
                    <volume>41</volume>
                    <issue>327</issue>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B05">

                <mixed-citation>BENNETT MOSES, L.; CHAN, J. Algorithmic Prediction in Policing: Assumptions, Evaluation, and Accountability. <italic>Policing and Society</italic> 28, 806, 2018.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>BENNETT MOSES</surname>
                            <given-names>L.</given-names>
                        </name>
                        <name>
                            <surname>CHAN</surname>
                            <given-names>J.</given-names>
                        </name>
                    </person-group>
                    <article-title>Algorithmic Prediction in Policing: Assumptions, Evaluation, and Accountability</article-title>
                    <source>Policing and Society</source>
                    <volume>28</volume>
                    <fpage>806</fpage>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B06">

                <mixed-citation>BRAUNEIS, R.; GOODMAN, E.P. Algorithmic Transparency for the Smart City. <italic>Yale J.L. &amp; Tech</italic>. 20, 103, 2018.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>BRAUNEIS</surname>
                            <given-names>R.</given-names>
                        </name>
                        <name>
                            <surname>GOODMAN</surname>
                            <given-names>E.P.</given-names>
                        </name>
                    </person-group>
                    <article-title>Algorithmic Transparency for the Smart City</article-title>
                    <source>Yale J.L. &amp; Tech</source>
                    <volume>20</volume>
                    <fpage>103</fpage>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B07">

                <mixed-citation>BUKOVSKA, B. Spotlight on Artificial Intelligence and Freedom of Expression #SAIFE. Organization for Security and Co-operation in Europe, 2020. Available at https://www.osce.org/files/f/documents/9/f/456319_0.pdf. Accessed: 11 aug. 2020.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <name>
                            <surname>BUKOVSKA</surname>
                            <given-names>B.</given-names>
                        </name>
                    </person-group>
                    <source>Spotlight on Artificial Intelligence and Freedom of Expression #SAIFE</source>
                    <publisher-name>Organization for Security and Co-operation in Europe</publisher-name>
                    <year>2020</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://www.osce.org/files/f/documents/9/f/456319_0.pdf">https://www.osce.org/files/f/documents/9/f/456319_0.pdf</ext-link></comment>
                    <date-in-citation content-type="access-date">11 aug. 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B08">

                <mixed-citation>BURRELL, J. How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms. <italic>Big Data &amp; Society</italic> 3(1), 2016, doi: 10.1177/2053951715622512.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>BURRELL</surname>
                            <given-names>J.</given-names>
                        </name>
                    </person-group>
                    <article-title>How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms</article-title>
                    <source>Big Data &amp; Society</source>
                    <volume>3</volume>
                    <issue>1</issue>
                    <year>2016</year>
                    <pub-id pub-id-type="doi">10.1177/2053951715622512</pub-id>

                </element-citation>
            </ref>
            <ref id="B09">

                <mixed-citation>BYCHAWSKA-SINIARSKA, D. Protecting the Right to Freedom of Expression under the European Convention on Human Rights. Council of Europe, 2017.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>BYCHAWSKA-SINIARSKA</surname>
                            <given-names>D.</given-names>
                        </name>
                    </person-group>
                    <source>Protecting the Right to Freedom of Expression under the European Convention on Human Rights</source>
                    <publisher-name>Council of Europe</publisher-name>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B10">

                <mixed-citation>CARUANA, R. et al. Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. <italic>Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</italic>, 2015.</mixed-citation>

                <element-citation publication-type="confproc">
                    <person-group person-group-type="author">
                        <name>
                            <surname>CARUANA</surname>
                            <given-names>R.</given-names>
                        </name>
                        <etal/>
                    </person-group>
                    <article-title>Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission</article-title>
                    <conf-name>Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</conf-name>
                    <conf-date>2015</conf-date>
                    <year>2015</year>

                </element-citation>
            </ref>
            <ref id="B11">

                <mixed-citation>CEPEJ – European Commission for the Efficiency of Justice. European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, 2018.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>CEPEJ – European Commission for the Efficiency of Justice</collab>
                    </person-group>
                    <source>European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment</source>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B12">

                <mixed-citation>CITRON, D.K.; CALO, R. The Automated Administrative State: A Crisis of Legitimacy. <italic>Emory Law Journal</italic> 70(4), 2021.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>CITRON</surname>
                            <given-names>D.K.</given-names>
                        </name>
                        <name>
                            <surname>CALO</surname>
                            <given-names>R.</given-names>
                        </name>
                    </person-group>
                    <article-title>The Automated Administrative State: A Crisis of Legitimacy</article-title>
                    <source>Emory Law Journal</source>
                    <volume>70</volume>
                    <issue>4</issue>
                    <year>2021</year>

                </element-citation>
            </ref>
            <ref id="B13">

                <mixed-citation>CLAY, T. (ed.). <italic>L’arbitrage en ligne. Rapport du Club des Juristes</italic>, 2019. Available at https://www.leclubdesjuristes.com/les-commissions/larbitrage-en-ligne/. Accessed: 30 may 2020.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="editor">
                        <name>
                            <surname>CLAY</surname>
                            <given-names>T.</given-names>
                        </name>
                    </person-group>
                    <source>L’arbitrage en ligne. Rapport du Club des Juristes</source>
                    <year>2019</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://www.leclubdesjuristes.com/les-commissions/larbitrage-en-ligne/">https://www.leclubdesjuristes.com/les-commissions/larbitrage-en-ligne/</ext-link></comment>
                    <date-in-citation content-type="access-date">30 may 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B14">

                <mixed-citation>COHEN, J.E. <italic>Between Truth and Power. The Legal Construction of Informational Capitalism</italic>. New York, Oxford University Press, 2019.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>COHEN</surname>
                            <given-names>J.E.</given-names>
                        </name>
                    </person-group>
                    <source>Between Truth and Power. The Legal Construction of Informational Capitalism</source>
                    <publisher-loc>New York</publisher-loc>
                    <publisher-name>Oxford University Press</publisher-name>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B15">

                <mixed-citation>Committee of Ministers of the Council of Europe. The 12 Principles of Good Governance enshrined in the Strategy on Innovation and Good Governance at local level, 2008.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Committee of Ministers of the Council of Europe</collab>
                    </person-group>
                    <source>The 12 Principles of Good Governance enshrined in the Strategy on Innovation and Good Governance at local level</source>
                    <year>2008</year>

                </element-citation>
            </ref>
            <ref id="B16">

                <mixed-citation>Council of Bars &amp; Law Societies of Europe. CCBE Considerations on the Legal Aspects of Artificial Intelligence, 2020.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Bars &amp; Law Societies of Europe</collab>
                    </person-group>
                    <source>CCBE Considerations on the Legal Aspects of Artificial Intelligence</source>
                    <year>2020</year>

                </element-citation>
            </ref>
            <ref id="B17">

                <mixed-citation>Council of Europe – Ad hoc Committee on Artificial Intelligence (CAHAI). Feasibility Study, CAHAI(2020)23, 2020. Available at https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da. Accessed: 29 jul. 2021.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <collab>Council of Europe – Ad hoc Committee on Artificial Intelligence (CAHAI)</collab>
                    </person-group>
                    <source>Feasibility Study, CAHAI(2020)23</source>
                    <year>2020</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da">https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da</ext-link></comment>
                    <date-in-citation content-type="access-date">29 jul. 2021</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B18">

                <mixed-citation>Council of Europe – Committee of the Convention for the Protection of Individuals with regard to Processing of Personal Data (Convention 108). Guidelines on artificial intelligence and data protection, 2019. Available at https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8. Accessed: 20 feb. 2020.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <collab>Council of Europe – Committee of the Convention for the Protection of Individuals with regard to Processing of Personal Data (Convention 108)</collab>
                    </person-group>
                    <source>Guidelines on artificial intelligence and data protection</source>
                    <year>2019</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8">https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8</ext-link></comment>
                    <date-in-citation content-type="access-date">20 feb. 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B19">

                <mixed-citation>Council of Europe Directorate General of Democracy and Political Affairs and Directorate of Democratic Institutions, Project «Good Governance in the Information Society», CM(2009)9 Addendum 3, 2009.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe Directorate General of Democracy and Political Affairs and Directorate of Democratic Institutions</collab>
                    </person-group>
                    <source>Project «Good Governance in the Information Society», CM(2009)9 Addendum 3</source>
                    <year>2009</year>

                </element-citation>
            </ref>
            <ref id="B20">

                <mixed-citation>Council of Europe Parliamentary Assembly. Resolution 2254 (2019). Media freedom as a condition for democratic elections, 2019a.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe Parliamentary Assembly</collab>
                    </person-group>
                    <source>Resolution 2254 (2019). Media freedom as a condition for democratic elections</source>
                    <year>2019a</year>

                </element-citation>
            </ref>
            <ref id="B21">

                <mixed-citation>Council of Europe Parliamentary Assembly. Resolution 2300 (2019). Improving the protection of whistle-blowers all over Europe, 2019b.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe Parliamentary Assembly</collab>
                    </person-group>
                    <source>Resolution 2300 (2019). Improving the protection of whistle-blowers all over Europe</source>
                    <year>2019b</year>

                </element-citation>
            </ref>
            <ref id="B22">

                <mixed-citation>Council of Europe, Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. Profiling and Convention 108+: Suggestions for an update, T-PD(2019)07BISrev, 2019.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe, Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data</collab>
                    </person-group>
                    <source>Profiling and Convention 108+: Suggestions for an update, T-PD(2019)07BISrev</source>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B23">

                <mixed-citation>Council of Europe, Directorate General of Democracy – European Committee on Democracy and Governance. The Compendium of the most relevant Council of Europe texts in the area of democracy, 2016.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe, Directorate General of Democracy – European Committee on Democracy and Governance</collab>
                    </person-group>
                    <source>The Compendium of the most relevant Council of Europe texts in the area of democracy</source>
                    <year>2016</year>

                </element-citation>
            </ref>
            <ref id="B24">

                <mixed-citation>Council of Europe, Directorate General of Democracy and Political Affairs – Directorate of Democratic Institutions. Guidelines on transparency of e-enabled elections, 2011.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe, Directorate General of Democracy and Political Affairs – Directorate of Democratic Institutions</collab>
                    </person-group>
                    <source>Guidelines on transparency of e-enabled elections</source>
                    <year>2011</year>

                </element-citation>
            </ref>
            <ref id="B25">

                <mixed-citation>Council of Europe. Additional Protocol to the European Charter of Local Self-Government on the right to participate in the affairs of a local authority, 2009a.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Additional Protocol to the European Charter of Local Self-Government on the right to participate in the affairs of a local authority</source>
                    <year>2009a</year>

                </element-citation>
            </ref>
            <ref id="B26">

                <mixed-citation>______. Algorithms and Human Rights. Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications, 2018. Available at https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html. Accessed: 5 may 2018.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Algorithms and Human Rights. Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications</source>
                    <year>2018</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html">https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html</ext-link></comment>
                    <date-in-citation content-type="access-date">5 may 2018</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B27">

                <mixed-citation>______. Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine, 1997.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine</source>
                    <year>1997</year>

                </element-citation>
            </ref>
            <ref id="B28">

                <mixed-citation>______. Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes, 2019a.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes</source>
                    <year>2019a</year>

                </element-citation>
            </ref>
            <ref id="B29">

                <mixed-citation>______. Guidelines for civil participation in political decision making, CM(2017)83-final, 2017a.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Guidelines for civil participation in political decision making, CM(2017)83-final</source>
                    <year>2017a</year>

                </element-citation>
            </ref>
            <ref id="B30">

                <mixed-citation>______. Modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data (Convention 108+), 2018.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data (Convention 108+)</source>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B31">

                <mixed-citation>______. Recommendation CM/Rec(2001)10 on the European Code of Police Ethics, 2001.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2001)10 on the European Code of Police Ethics</source>
                    <year>2001</year>

                </element-citation>
            </ref>
            <ref id="B32">

                <mixed-citation>______. Recommendation CM/Rec(2004)15 on electronic governance (“e-governance”), 2004.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2004)15 on electronic governance (“e-governance”)</source>
                    <year>2004</year>

                </element-citation>
            </ref>
            <ref id="B33">

                <mixed-citation>______. Recommendation CM/Rec(2007)15 on measures concerning media coverage of election campaigns, 2007.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2007)15 on measures concerning media coverage of election campaigns</source>
                    <year>2007</year>

                </element-citation>
            </ref>
            <ref id="B34">

                <mixed-citation>______. Recommendation CM/Rec(2009)1 on electronic democracy (e-democracy), 2009b.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2009)1 on electronic democracy (e-democracy)</source>
                    <year>2009b</year>

                </element-citation>
            </ref>
            <ref id="B35">

                <mixed-citation>______. Recommendation CM/Rec(2009)2 on the evaluation, auditing and monitoring of participation and participation policies at local and regional level, 2009c.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2009)2 on the evaluation, auditing and monitoring of participation and participation policies at local and regional level</source>
                    <year>2009c</year>

                </element-citation>
            </ref>
            <ref id="B36">

                <mixed-citation>______. Recommendation CM/Rec(2010)13 on the protection of individuals with regard to automatic processing of personal data in the context of profiling, 2010.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2010)13 on the protection of individuals with regard to automatic processing of personal data in the context of profiling</source>
                    <year>2010</year>

                </element-citation>
            </ref>
            <ref id="B37">

                <mixed-citation>______. Recommendation CM/Rec(2011)7 on a new notion of media, 2011.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2011)7 on a new notion of media</source>
                    <year>2011</year>

                </element-citation>
            </ref>
            <ref id="B38">

                <mixed-citation>______. Recommendation CM/Rec(2014)7 on the protection of whistleblowers, 2014.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2014)7 on the protection of whistleblowers</source>
                    <year>2014</year>

                </element-citation>
            </ref>
            <ref id="B39">

                <mixed-citation>______. Recommendation CM/Rec(2016)4 on the protection of journalism and safety of journalists and other media actors, 2016a.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2016)4 on the protection of journalism and safety of journalists and other media actors</source>
                    <year>2016a</year>

                </element-citation>
            </ref>
            <ref id="B40">

                <mixed-citation>______. Recommendation CM/Rec(2016)5 on Internet freedom, 2016b.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2016)5 on Internet freedom</source>
                    <year>2016b</year>

                </element-citation>
            </ref>
            <ref id="B41">

                <mixed-citation>______. Recommendation CM/Rec(2017)5 on standards for e-voting, 2017b.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2017)5 on standards for e-voting</source>
                    <year>2017b</year>

                </element-citation>
            </ref>
            <ref id="B42">

                <mixed-citation>______. Recommendation CM/Rec(2018)1 on media pluralism and transparency of media ownership, 2018a.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2018)1 on media pluralism and transparency of media ownership</source>
                    <year>2018a</year>

                </element-citation>
            </ref>
            <ref id="B43">

                <mixed-citation>______. Recommendation CM/Rec(2018)4 on the participation of citizens in local public life, 2018b.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2018)4 on the participation of citizens in local public life</source>
                    <year>2018b</year>

                </element-citation>
            </ref>
            <ref id="B44">

                <mixed-citation>______. Recommendation CM/Rec(2019)3 on supervision of local authorities’ activities, 2019b.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2019)3 on supervision of local authorities’ activities</source>
                    <year>2019b</year>

                </element-citation>
            </ref>
            <ref id="B45">

                <mixed-citation>______. Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems, 2020b.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems</source>
                    <year>2020b</year>

                </element-citation>
            </ref>
            <ref id="B46">

                <mixed-citation>______. Study on the Human Rights Dimensions of Automated Data Processing Techniques (in Particular Algorithms) and Possible Regulatory Implications, 2018c. Available at https://rm.coe.int/algorithms-and-humanrights-en-rev/16807956b5. Accessed: 15 jan. 2019.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Study on the Human Rights Dimensions of Automated Data Processing Techniques (in Particular Algorithms) and Possible Regulatory Implications</source>
                    <year>2018c</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://rm.coe.int/algorithms-and-humanrights-en-rev/16807956b5">https://rm.coe.int/algorithms-and-humanrights-en-rev/16807956b5</ext-link></comment>
                    <date-in-citation content-type="access-date">15 jan. 2019</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B47">

                <mixed-citation>______. Graphical visualisation of the distribution of strategic and ethical frameworks relating to artificial intelligence, 2020a.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Council of Europe</collab>
                    </person-group>
                    <source>Graphical visualisation of the distribution of strategic and ethical frameworks relating to artificial intelligence</source>
                    <year>2020a</year>

                </element-citation>
            </ref>
            <ref id="B48">

                <mixed-citation>CRAWFORD, K.; JOLER, V. Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources, 2018. Available at http://www.anatomyof.ai. Accessed: 27 dec. 2019.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <name>
                            <surname>CRAWFORD</surname>
                            <given-names>K.</given-names>
                        </name>
                        <name>
                            <surname>JOLER</surname>
                            <given-names>V.</given-names>
                        </name>
                    </person-group>
                    <source>Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources</source>
                    <year>2018</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="http://www.anatomyof.ai">http://www.anatomyof.ai</ext-link></comment>
                    <date-in-citation content-type="access-date">27 dec. 2019</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B49">

                <mixed-citation>CUMMINGS, M.L. et al. Chatham House Report. Artificial Intelligence and International Affairs: Disruption Anticipated, 2018. Available at https://www.chathamhouse.org/sites/default/files/publications/research/2018-06-14-artificial-intelligence-international-affairs-cummings-roff-cukier-parakilas-bryce.pdf. Accessed: 21 mar. 2020.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <name>
                            <surname>CUMMINGS</surname>
                            <given-names>M.L.</given-names>
                        </name>
                        <etal/>
                    </person-group>
                    <source>Chatham House Report. Artificial Intelligence and International Affairs: Disruption Anticipated</source>
                    <year>2018</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://www.chathamhouse.org/sites/default/files/publications/research/2018-06-14-artificial-intelligence-international-affairs-cummings-roff-cukier-parakilas-bryce.pdf">https://www.chathamhouse.org/sites/default/files/publications/research/2018-06-14-artificial-intelligence-international-affairs-cummings-roff-cukier-parakilas-bryce.pdf</ext-link></comment>
                    <date-in-citation content-type="access-date">21 mar. 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B50">

                <mixed-citation>DIAKOPOULOS, N. <italic>Algorithmic Accountability Reporting: On the Investigation of Black Boxes</italic>, 2013. Available at https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2. Accessed: 18 mar. 2018.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>DIAKOPOULOS</surname>
                            <given-names>N.</given-names>
                        </name>
                    </person-group>
                    <source>Algorithmic Accountability Reporting: On the Investigation of Black Boxes</source>
                    <year>2013</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2">https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2</ext-link></comment>
                    <date-in-citation content-type="access-date">18 mar. 2018</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B51">

                <mixed-citation>EDWARDS, L.; VEALE, M. Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For. <italic>Duke Law &amp; Technology Review</italic> 16, 18, 2017.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>EDWARDS</surname>
                            <given-names>L.</given-names>
                        </name>
                        <name>
                            <surname>VEALE</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <article-title>Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For</article-title>
                    <source>Duke Law &amp; Technology Review</source>
                    <volume>16</volume>
                    <fpage>18</fpage>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B52">

                <mixed-citation>EU Code of Practice on Disinformation, 2018. Available at https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation. Accessed: 24 mar. 2021.</mixed-citation>

                <element-citation publication-type="webpage">
                    <source>EU Code of Practice on Disinformation</source>
                    <year>2018</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation">https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation</ext-link></comment>
                    <date-in-citation content-type="access-date">24 mar. 2021</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B53">

                <mixed-citation>European Commission for Democracy through Law (Venice Commission). Joint Report of the Venice Commission and of the Directorate of Information Society and Actions Against Crime of the Directorate General of Human Rights and Rule of Law (DGI) on Digital Technologies and Elections, 2019.</mixed-citation>

                <element-citation publication-type="report">
                    <person-group person-group-type="author">
                        <collab>European Commission for Democracy through Law (Venice Commission)</collab>
                    </person-group>
                    <source>Joint Report of the Venice Commission and of the Directorate of Information Society and Actions Against Crime of the Directorate General of Human Rights and Rule of Law (DGI) on Digital Technologies and Elections</source>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B54">

                <mixed-citation>European Commission, Directorate-General for Communications Networks, Content and Technology. A Multi-Dimensional Approach to Disinformation: Report of the Independent High-Level Group on Fake News and Online Disinformation, 2018.</mixed-citation>

                <element-citation publication-type="report">
                    <person-group person-group-type="author">
                        <collab>European Commission, Directorate-General for Communications Networks, Content and Technology</collab>
                    </person-group>
                    <source>A Multi-Dimensional Approach to Disinformation: Report of the Independent High-Level Group on Fake News and Online Disinformation</source>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B55">

                <mixed-citation>European Commission. Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final, 2021.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>European Commission</collab>
                    </person-group>
                    <source>Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts</source>
                    <publisher-name>COM(2021) 206 final</publisher-name>
                    <year>2021</year>

                </element-citation>
            </ref>
            <ref id="B56">

                <mixed-citation>European Union Agency for Fundamental Rights. #BigData: Discrimination in Data-Supported Decision Making, 2018.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>European Union Agency for Fundamental Rights</collab>
                    </person-group>
                    <source>#BigData: Discrimination in Data-Supported Decision Making</source>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B57">

                <mixed-citation>European Union Agency for Fundamental Rights. Data Quality and Artificial Intelligence – Mitigating Bias and Error to Protect Fundamental Rights, 2019.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>European Union Agency for Fundamental Rights</collab>
                    </person-group>
                    <source>Data Quality and Artificial Intelligence – Mitigating Bias and Error to Protect Fundamental Rights</source>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B58">

                <mixed-citation>European Union Agency for Fundamental Rights. Preventing Unlawful Profiling Today and in the Future: A Guide, 2018.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>European Union Agency for Fundamental Rights</collab>
                    </person-group>
                    <source>Preventing Unlawful Profiling Today and in the Future: A Guide</source>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B59">

                <mixed-citation>EYKHOLT, K. et al. Robust Physical-World Attacks on Deep Learning Visual Classification. <italic>2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition</italic>, 2018. Available at https://openaccess.thecvf.com/content_cvpr_2018/papers/Eykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf. Accessed: 23 apr. 2021.</mixed-citation>

                <element-citation publication-type="confproc">
                    <person-group person-group-type="author">
                        <name>
                            <surname>EYKHOLT</surname>
                            <given-names>K.</given-names>
                        </name>
                        <etal/>
                    </person-group>
                    <source>Robust Physical-World Attacks on Deep Learning Visual Classification</source>
                    <conf-name>2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition</conf-name>
                    <year>2018</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://openaccess.thecvf.com/content_cvpr_2018/papers/Eykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf">https://openaccess.thecvf.com/content_cvpr_2018/papers/Eykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf</ext-link></comment>
                    <date-in-citation content-type="access-date">23 apr. 2021</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B60">

                <mixed-citation>FAYE JACOBSEN, A. The Right to Public Participation. A Human Rights Law Update. Issue Paper, 2013. Available at https://www.humanrights.dk/publications/right-public-participation-human-rights-law-update. Accessed: 14 jan. 2021.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>FAYE JACOBSEN</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <source>The Right to Public Participation. A Human Rights Law Update</source>
                    <publisher-name>Issue Paper</publisher-name>
                    <year>2013</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://www.humanrights.dk/publications/right-public-participation-human-rights-law-update">https://www.humanrights.dk/publications/right-public-participation-human-rights-law-update</ext-link></comment>
                    <date-in-citation content-type="access-date">14 jan. 2021</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B61">

                <mixed-citation>FERRYMAN, K.; PITCAN, M. <italic>Fairness in Precision Medicine</italic>, 2018. Available at https://datasociety.net/wp-content/uploads/2018/02/Data.Society.Fairness.In_.Precision.Medicine.Feb2018.FINAL-2.26.18.pdf. Accessed: 8 apr. 2018.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>FERRYMAN</surname>
                            <given-names>K.</given-names>
                        </name>
                        <name>
                            <surname>PITCAN</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <source>Fairness in Precision Medicine</source>
                    <year>2018</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://datasociety.net/wp-content/uploads/2018/02/Data.Society.Fairness.In_.Precision.Medicine.Feb2018.FINAL-2.26.18.pdf">https://datasociety.net/wp-content/uploads/2018/02/Data.Society.Fairness.In_.Precision.Medicine.Feb2018.FINAL-2.26.18.pdf</ext-link></comment>
                    <date-in-citation content-type="access-date">8 apr. 2018</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B62">

                <mixed-citation>GANDY Jr., O.H.; NEMORIN, S. Toward a Political Economy of Nudge: Smart City Variations. <italic>Information, Communication &amp; Society</italic> 22, 2112, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>GANDY</surname>
                            <given-names>O.H.</given-names>
                            <suffix>Jr.</suffix>
                        </name>
                        <name>
                            <surname>NEMORIN</surname>
                            <given-names>S.</given-names>
                        </name>
                    </person-group>
                    <article-title>Toward a Political Economy of Nudge: Smart City Variations</article-title>
                    <source>Information, Communication &amp; Society</source>
                    <volume>22</volume>
                    <fpage>2112</fpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B63">

                <mixed-citation>GESLEY, J. Regulation of Artificial Intelligence in Selected Jurisdictions, 2019. Available at https://www.loc.gov/law/help/artificial-intelligence/index.php. Accessed: 30 dec. 2019.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>GESLEY</surname>
                            <given-names>J.</given-names>
                        </name>
                    </person-group>
                    <source>Regulation of Artificial Intelligence in Selected Jurisdictions</source>
                    <year>2019</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://www.loc.gov/law/help/artificial-intelligence/index.php">https://www.loc.gov/law/help/artificial-intelligence/index.php</ext-link></comment>
                    <date-in-citation content-type="access-date">30 dec. 2019</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B64">

                <mixed-citation>GOODMAN, E.; POWLES, J. Urbanism Under Google: Lessons from Sidewalk Toronto. <italic>Fordham Law Review</italic> 88(2), 457, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>GOODMAN</surname>
                            <given-names>E.</given-names>
                        </name>
                        <name>
                            <surname>POWLES</surname>
                            <given-names>J.</given-names>
                        </name>
                    </person-group>
                    <article-title>Urbanism Under Google: Lessons from Sidewalk Toronto</article-title>
                    <source>Fordham Law Review</source>
                    <volume>88</volume>
                    <issue>2</issue>
                    <fpage>457</fpage>
                    <lpage>457</lpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B65">

                <mixed-citation>GRABER, C.B. Artificial Intelligence, Affordances and Fundamental Rights. In Hildebrandt, M.; O’Hara, K. (eds) <italic>Life and the Law in the Era of Data-Driven Agency</italic>. Edward Elgar, 2020.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>GRABER</surname>
                            <given-names>C.B.</given-names>
                        </name>
                    </person-group>
                    <chapter-title>Artificial Intelligence, Affordances and Fundamental Rights</chapter-title>
                    <person-group person-group-type="editor">
                        <name>
                            <surname>Hildebrandt</surname>
                            <given-names>M.</given-names>
                        </name>
                        <name>
                            <surname>O’Hara</surname>
                            <given-names>K.</given-names>
                        </name>
                    </person-group>
                    <source>Life and the Law in the Era of Data-Driven Agency</source>
                    <publisher-name>Edward Elgar</publisher-name>
                    <year>2020</year>

                </element-citation>
            </ref>
            <ref id="B66">

                <mixed-citation>HAGENDORFF, T. The Ethics of AI Ethics: An Evaluation of Guidelines. <italic>Minds and Machines</italic> 30, 99, 2020.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>HAGENDORFF</surname>
                            <given-names>T.</given-names>
                        </name>
                    </person-group>
                    <article-title>The Ethics of AI Ethics: An Evaluation of Guidelines</article-title>
                    <source>Minds and Machines</source>
                    <volume>30</volume>
                    <fpage>99</fpage>
                    <year>2020</year>

                </element-citation>
            </ref>
            <ref id="B67">

                <mixed-citation>HILDEBRANDT, M.; GUTWIRTH, S. (eds) <italic>Profiling the European Citizen: Cross-Disciplinary Perspectives</italic>, Dordrecht, 2008.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="editor">
                        <name>
                            <surname>HILDEBRANDT</surname>
                            <given-names>M.</given-names>
                        </name>
                        <name>
                            <surname>GUTWIRTH</surname>
                            <given-names>S.</given-names>
                        </name>
                    </person-group>
                    <source>Profiling the European Citizen: Cross-Disciplinary Perspectives</source>
                    <publisher-name>Dordrecht</publisher-name>
                    <year>2008</year>

                </element-citation>
            </ref>
            <ref id="B68">

                <mixed-citation>HILDEBRANDT, M. Algorithmic Regulation and the Rule of Law. <italic>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</italic>, 376, 2018a.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>HILDEBRANDT</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <article-title>Algorithmic Regulation and the Rule of Law</article-title>
                    <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>
                    <volume>376</volume>
                    <year>2018a</year>

                </element-citation>
            </ref>
            <ref id="B69">

                <mixed-citation>______. Primitives of Legal Protection in the Era of Data-Driven Platforms. <italic>Georgetown Law Technology Review</italic> 2, 252, 2018b.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>HILDEBRANDT</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <article-title>Primitives of Legal Protection in the Era of Data-Driven Platforms</article-title>
                    <source>Georgetown Law Technology Review</source>
                    <volume>2</volume>
                    <fpage>252</fpage>
                    <year>2018b</year>

                </element-citation>
            </ref>
            <ref id="B70">

                <mixed-citation>______. Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning. <italic>Theoretical Inquiries in Law</italic> 20, 83, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>HILDEBRANDT</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <article-title>Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning</article-title>
                    <source>Theoretical Inquiries in Law</source>
                    <volume>20</volume>
                    <fpage>83</fpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B71">

                <mixed-citation>______. The Issue of Bias. The Framing Powers of Machine Learning. In Pelillo, M.; Scantamburlo, T. (eds) <italic>Machines We Trust. Perspectives on Dependable AI</italic> (MIT Press: Cambridge, MA) 2021.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>HILDEBRANDT</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <chapter-title>The Issue of Bias. The Framing Powers of Machine Learning</chapter-title>
                    <person-group person-group-type="editor">
                        <name>
                            <surname>Pelillo</surname>
                            <given-names>M.</given-names>
                        </name>
                        <name>
                            <surname>Scantamburlo</surname>
                            <given-names>T.</given-names>
                        </name>
                    </person-group>
                    <source>Machines We Trust. Perspectives on Dependable AI</source>
                    <publisher-name>MIT Press</publisher-name>
                    <publisher-loc>Cambridge, MA</publisher-loc>
                    <year>2021</year>

                </element-citation>
            </ref>
            <ref id="B72">

                <mixed-citation>Independent High-Level Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI, 2019. Available at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthyai. Accessed: 2 mar. 2020.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <collab>Independent High-Level Group on Artificial Intelligence</collab>
                    </person-group>
                    <source>Ethics Guidelines for Trustworthy AI</source>
                    <year>2019</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthyai">https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthyai</ext-link></comment>
                    <date-in-citation content-type="access-date">2 mar. 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B73">

                <mixed-citation>JOBIN, A.; IENCA, M.; VAYENA, E. The Global Landscape of AI Ethics Guidelines. <italic>Nature Machine Intelligence</italic> 1, 389, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>JOBIN</surname>
                            <given-names>A.</given-names>
                        </name>
                        <name>
                            <surname>IENCA</surname>
                            <given-names>M.</given-names>
                        </name>
                        <name>
                            <surname>VAYENA</surname>
                            <given-names>E.</given-names>
                        </name>
                    </person-group>
                    <article-title>The Global Landscape of AI Ethics Guidelines</article-title>
                    <source>Nature Machine Intelligence</source>
                    <volume>1</volume>
                    <fpage>389</fpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B74">

                <mixed-citation>KAMINSKI, M. E.; MALGIERI, G. Multi-Layered Explanations from Algorithmic Impact Assessments in the GDPR. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, 2020, doi: 10.1145/3351095.3372875.</mixed-citation>

                <element-citation publication-type="confproc">
                    <person-group person-group-type="author">
                        <name>
                            <surname>KAMINSKI</surname>
                            <given-names>M. E.</given-names>
                        </name>
                        <name>
                            <surname>MALGIERI</surname>
                            <given-names>G.</given-names>
                        </name>
                    </person-group>
                    <source>Multi-Layered Explanations from Algorithmic Impact Assessments in the GDPR</source>
                    <conf-name>Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</conf-name>
                    <publisher-name>Association for Computing Machinery</publisher-name>
                    <year>2020</year>
                    <pub-id pub-id-type="doi">10.1145/3351095.3372875</pub-id>

                </element-citation>
            </ref>
            <ref id="B75">

                <mixed-citation>KOLKMAN, D. The (in)Credibility of Algorithmic Models to Non-Experts. <italic>Information, Communication &amp; Society</italic> 1, 2020, doi: 10.1080/1369118X.2020.1761860.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>KOLKMAN</surname>
                            <given-names>D.</given-names>
                        </name>
                    </person-group>
                    <article-title>The (in)Credibility of Algorithmic Models to Non-Experts</article-title>
                    <source>Information, Communication &amp; Society</source>
                    <volume>1</volume>
                    <year>2020</year>
                    <pub-id pub-id-type="doi">10.1080/1369118X.2020.1761860</pub-id>

                </element-citation>
            </ref>
            <ref id="B76">

                <mixed-citation>LOIDEAIN, N. N.; ADAMS, R. From Alexa to Siri and the GDPR: The Gendering of Virtual Personal Assistants and the Role of Data Protection Impact Assessments. <italic>Computer Law &amp; Security Review</italic> 36, 2020, doi: 10.1016/j.clsr.2019.105366.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>LOIDEAIN</surname>
                            <given-names>N. N.</given-names>
                        </name>
                        <name>
                            <surname>ADAMS</surname>
                            <given-names>R.</given-names>
                        </name>
                    </person-group>
                    <article-title>From Alexa to Siri and the GDPR: The Gendering of Virtual Personal Assistants and the Role of Data Protection Impact Assessments</article-title>
                    <source>Computer Law &amp; Security Review</source>
                    <volume>36</volume>
                    <year>2020</year>
                    <pub-id pub-id-type="doi">10.1016/j.clsr.2019.105366</pub-id>

                </element-citation>
            </ref>
            <ref id="B77">

                <mixed-citation>LYNSKEY, O. Criminal Justice Profiling and EU Data Protection Law: Precarious Protection from Predictive Policing. <italic>Int J Law Context</italic>, 15, 162, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>LYNSKEY</surname>
                            <given-names>O.</given-names>
                        </name>
                    </person-group>
                    <article-title>Criminal Justice Profiling and EU Data Protection Law: Precarious Protection from Predictive Policing</article-title>
                    <source>Int J Law Context</source>
                    <volume>15</volume>
                    <fpage>162</fpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B78">

                <mixed-citation>MAISLEY, N. The International Right of Rights? Article 25(a) of the ICCPR as a Human Right to Take Part in International Law-Making. <italic>Eur. J. Int. Law</italic> 28, 89, 2017.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MAISLEY</surname>
                            <given-names>N.</given-names>
                        </name>
                    </person-group>
                    <article-title>The International Right of Rights? Article 25(a) of the ICCPR as a Human Right to Take Part in International Law-Making</article-title>
                    <source>Eur. J. Int. Law</source>
                    <volume>28</volume>
                    <fpage>89</fpage>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B79">

                <mixed-citation>MANHEIM, K.; KAPLAN, L. Artificial Intelligence: Risks to Privacy and Democracy. <italic>Yale Journal of Law &amp; Technology</italic> 21, 106, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MANHEIM</surname>
                            <given-names>K.</given-names>
                        </name>
                        <name>
                            <surname>KAPLAN</surname>
                            <given-names>L.</given-names>
                        </name>
                    </person-group>
                    <article-title>Artificial Intelligence: Risks to Privacy and Democracy</article-title>
                    <source>Yale Journal of Law &amp; Technology</source>
                    <volume>21</volume>
                    <fpage>106</fpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B80">

                <mixed-citation>MANTELERO, A. AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment. <italic>Computer Law &amp; Security Review</italic> 34(4), 754, 2018.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MANTELERO</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <article-title>AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment</article-title>
                    <source>Computer Law &amp; Security Review</source>
                    <volume>34</volume>
                    <issue>4</issue>
                    <fpage>754</fpage>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B81">

                <mixed-citation>______. Artificial Intelligence and Data Protection: Challenges and Possible Remedies. Report on Artificial Intelligence. Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, 2019. Available at https://rm.coe.int/2018-lignes-directrices-sur-l-intelligence-artificielle-et-la-protecti/168098e1b7. Accessed: 20 feb. 2020.</mixed-citation>

                <element-citation publication-type="report">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MANTELERO</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <source>Artificial Intelligence and Data Protection: Challenges and Possible Remedies. Report on Artificial Intelligence. Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data</source>
                    <year>2019</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://rm.coe.int/2018-lignes-directrices-sur-l-intelligence-artificielle-et-la-protecti/168098e1b7">https://rm.coe.int/2018-lignes-directrices-sur-l-intelligence-artificielle-et-la-protecti/168098e1b7</ext-link></comment>
                    <date-in-citation content-type="access-date">20 feb. 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B82">

                <mixed-citation>______. Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection. <italic>Computer Law &amp; Security Review</italic> 32(2), 238, 2016.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MANTELERO</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <article-title>Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection</article-title>
                    <source>Computer Law &amp; Security Review</source>
                    <volume>32</volume>
                    <issue>2</issue>
                    <fpage>238</fpage>
                    <year>2016</year>

                </element-citation>
            </ref>
            <ref id="B83">

                <mixed-citation>______. Regulating AI Within the Human Rights Framework: A Roadmapping Methodology. In Czech et al. (eds) <italic>European Yearbook on Human Rights 2020</italic>, 477-502, 2020.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MANTELERO</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <chapter-title>Regulating AI Within the Human Rights Framework: A Roadmapping Methodology</chapter-title>
                    <person-group person-group-type="editor">
                        <name>
                            <surname>Czech</surname>
                        </name>
                        <etal/>
                    </person-group>
                    <source>European Yearbook on Human Rights 2020</source>
                    <fpage>477</fpage>
                    <lpage>502</lpage>
                    <year>2020</year>

                </element-citation>
            </ref>
            <ref id="B84">

                <mixed-citation>______; ESPOSITO, M. S. An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems. <italic>Computer Law &amp; Security Review</italic> 41, 2021, doi: 10.1016/j.clsr.2021.105561.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MANTELERO</surname>
                            <given-names>A.</given-names>
                        </name>
                        <name>
                            <surname>ESPOSITO</surname>
                            <given-names>M.S.</given-names>
                        </name>
                    </person-group>
                    <article-title>An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems</article-title>
                    <source>Computer Law &amp; Security Review</source>
                    <volume>41</volume>
                    <year>2021</year>
                    <pub-id pub-id-type="doi">10.1016/j.clsr.2021.105561</pub-id>

                </element-citation>
            </ref>
            <ref id="B85">

                <mixed-citation>MANTELERO, A.; VACIAGO, G. Data Protection in a Big Data Society. Ideas for a Future Regulation. <italic>Digital Investigation</italic> 15, 104, 2015.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MANTELERO</surname>
                            <given-names>A.</given-names>
                        </name>
                        <name>
                            <surname>VACIAGO</surname>
                            <given-names>G.</given-names>
                        </name>
                    </person-group>
                    <article-title>Data Protection in a Big Data Society. Ideas for a Future Regulation</article-title>
                    <source>Digital Investigation</source>
                    <volume>15</volume>
                    <fpage>104</fpage>
                    <year>2015</year>

                </element-citation>
            </ref>
            <ref id="B86">

                <mixed-citation>MEHR, H. Artificial Intelligence for Citizen Services and Government, 2017. Available at https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf. Accessed: 15 mar. 2021.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MEHR</surname>
                            <given-names>H.</given-names>
                        </name>
                    </person-group>
                    <source>Artificial Intelligence for Citizen Services and Government</source>
                    <year>2017</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf">https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf</ext-link></comment>
                    <date-in-citation content-type="access-date">15 mar. 2021</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B87">

                <mixed-citation>MEIJER, A.; WESSELS, M. Predictive Policing: Review of Benefits and Drawbacks. <italic>Int J Publ Admin</italic> 42, 1031, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MEIJER</surname>
                            <given-names>A.</given-names>
                        </name>
                        <name>
                            <surname>WESSELS</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <article-title>Predictive Policing: Review of Benefits and Drawbacks</article-title>
                    <source>Int J Publ Admin</source>
                    <volume>42</volume>
                    <fpage>1031</fpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B88">

                <mixed-citation>MIKHAYLOV, J.; ESTEVE, M.; CAMPION, A. Artificial Intelligence for the Public Sector: Opportunities and Challenges of Cross-Sector Collaboration. <italic>Phil. Trans. R. Soc. A</italic>, 376, 2018, doi: 10.1098/rsta.2017.0357.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MIKHAYLOV</surname>
                            <given-names>J.</given-names>
                        </name>
                        <name>
                            <surname>ESTEVE</surname>
                            <given-names>M.</given-names>
                        </name>
                        <name>
                            <surname>CAMPION</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <article-title>Artificial Intelligence for the Public Sector: Opportunities and Challenges of Cross-Sector Collaboration</article-title>
                    <source>Phil. Trans. R. Soc. A</source>
                    <volume>376</volume>
                    <year>2018</year>
                    <pub-id pub-id-type="doi">10.1098/rsta.2017.0357</pub-id>

                </element-citation>
            </ref>
            <ref id="B89">

                <mixed-citation>MITTELSTADT, B. From Individual to Group Privacy in Big Data Analytics. <italic>Philosophy &amp; Technology</italic> 30, 475, 2017.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>MITTELSTADT</surname>
                            <given-names>B.</given-names>
                        </name>
                    </person-group>
                    <article-title>From Individual to Group Privacy in Big Data Analytics</article-title>
                    <source>Philosophy &amp; Technology</source>
                    <volume>30</volume>
                    <fpage>475</fpage>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B90">

                <mixed-citation>NEMITZ, P. Constitutional Democracy and Technology in the Age of Artificial Intelligence. <italic>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</italic> 378, 2018, doi: 10.1098/rsta.2018.0089.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>NEMITZ</surname>
                            <given-names>P.</given-names>
                        </name>
                    </person-group>
                    <article-title>Constitutional Democracy and Technology in the Age of Artificial Intelligence</article-title>
                    <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>
                    <volume>378</volume>
                    <year>2018</year>
                    <pub-id pub-id-type="doi">10.1098/rsta.2018.0089</pub-id>

                </element-citation>
            </ref>
            <ref id="B91">

                <mixed-citation>NUNEZ, C. Artificial Intelligence and Legal Ethics: Whether AI Lawyers Can Make Ethical Decisions. <italic>Tulane Journal of Technology and Intellectual Property</italic> 20, 189, 2017.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>NUNEZ</surname>
                            <given-names>C.</given-names>
                        </name>
                    </person-group>
                    <article-title>Artificial Intelligence and Legal Ethics: Whether AI Lawyers Can Make Ethical Decisions</article-title>
                    <source>Tulane Journal of Technology and Intellectual Property</source>
                    <volume>20</volume>
                    <fpage>189</fpage>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B92">

                <mixed-citation>OECD, Recommendation of the Council on Artificial Intelligence, 2019.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>OECD</collab>
                    </person-group>
                    <source>Recommendation of the Council on Artificial Intelligence</source>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B93">

                <mixed-citation>OSOBA, O.A.; WELSER, W. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence, 2017. Available at https://www.rand.org/pubs/research_reports/RR1744.html. Accessed: 20 may 2020.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <name>
                            <surname>OSOBA</surname>
                            <given-names>O.A.</given-names>
                        </name>
                        <name>
                            <surname>WELSER</surname>
                            <given-names>W.</given-names>
                        </name>
                    </person-group>
                    <source>An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence</source>
                    <year>2017</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://www.rand.org/pubs/research_reports/RR1744.html">https://www.rand.org/pubs/research_reports/RR1744.html</ext-link></comment>
                    <date-in-citation content-type="access-date">20 may 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B94">

                <mixed-citation>OSWALD, M. Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Power. <italic>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</italic> 376, 2018, doi: 10.1098/rsta.2017.0359.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>OSWALD</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <article-title>Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Power</article-title>
                    <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>
                    <volume>376</volume>
                    <year>2018</year>
                    <pub-id pub-id-type="doi">10.1098/rsta.2017.0359</pub-id>

                </element-citation>
            </ref>
            <ref id="B96">

                <mixed-citation>PASQUALE, F. <italic>The Black Box Society. The Secret Algorithms That Control Money and Information</italic>. Harvard University Press, 2015.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>PASQUALE</surname>
                            <given-names>F.</given-names>
                        </name>
                    </person-group>
                    <source>The Black Box Society. The Secret Algorithms That Control Money and Information</source>
                    <publisher-name>Harvard University Press</publisher-name>
                    <year>2015</year>

                </element-citation>
            </ref>
            <ref id="B97">

                <mixed-citation>PASQUALE, F.; CASHWELL, G. Prediction, Persuasion, and the Jurisprudence of Behaviourism. <italic>University of Toronto Law Journal</italic>, 68, 2018.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>PASQUALE</surname>
                            <given-names>F.</given-names>
                        </name>
                        <name>
                            <surname>CASHWELL</surname>
                            <given-names>G.</given-names>
                        </name>
                    </person-group>
                    <article-title>Prediction, Persuasion, and the Jurisprudence of Behaviourism</article-title>
                    <source>University of Toronto Law Journal</source>
                    <volume>68</volume>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B98">

                <mixed-citation>PERRY, W.L. et al. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, 2013. Available at https://www.rand.org/pubs/research_reports/RR233.html. Accessed: 30 mar. 2020.</mixed-citation>

                <element-citation publication-type="webpage">
                    <person-group person-group-type="author">
                        <name>
                            <surname>PERRY</surname>
                            <given-names>W.L.</given-names>
                        </name>
                        <etal/>
                    </person-group>
                    <source>Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations</source>
                    <year>2013</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://www.rand.org/pubs/research_reports/RR233.html">https://www.rand.org/pubs/research_reports/RR233.html</ext-link></comment>
                    <date-in-citation content-type="access-date">30 mar. 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B99">

                <mixed-citation>Privacy International, Smart Cities: Utopian Vision, Dystopian Reality, 2017. Available at https://privacyinternational.org/sites/default/files/2017-12/Smart%20Cities-Utopian%20Vision%2C%20Dystopian%20Reality.pdf. Accessed: 12 may 2020.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>Privacy International</collab>
                    </person-group>
                    <source>Smart Cities: Utopian Vision, Dystopian Reality</source>
                    <year>2017</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://privacyinternational.org/sites/default/files/2017-12/Smart%20Cities-Utopian%20Vision%2C%20Dystopian%20Reality.pdf">https://privacyinternational.org/sites/default/files/2017-12/Smart%20Cities-Utopian%20Vision%2C%20Dystopian%20Reality.pdf</ext-link></comment>
                    <date-in-citation content-type="access-date">12 may 2020</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B100">

                <mixed-citation>RAAB, C.D. Information Privacy, Impact Assessment, and the Place of Ethics. <italic>Computer Law &amp; Security Review</italic> 37, 2020, doi: 10.1016/j.clsr.2020.105404.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>RAAB</surname>
                            <given-names>C.D.</given-names>
                        </name>
                    </person-group>
                    <article-title>Information Privacy, Impact Assessment, and the Place of Ethics</article-title>
                    <source>Computer Law &amp; Security Review</source>
                    <volume>37</volume>
                    <year>2020</year>
                    <pub-id pub-id-type="doi">10.1016/j.clsr.2020.105404</pub-id>

                </element-citation>
            </ref>
            <ref id="B101">

                <mixed-citation>RANCHORDÁS, S. Nudging Citizens through Technology in Smart Cities. <italic>International Review of Law, Computers &amp; Technology</italic> 34(2), 254, 2020.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>RANCHORDÁS</surname>
                            <given-names>S.</given-names>
                        </name>
                    </person-group>
                    <article-title>Nudging Citizens through Technology in Smart Cities</article-title>
                    <source>International Review of Law, Computers &amp; Technology</source>
                    <volume>34</volume>
                    <issue>2</issue>
                    <fpage>254</fpage>
                    <year>2020</year>

                </element-citation>
            </ref>
            <ref id="B102">

                <mixed-citation>RE, R.M.; SOLOW-NIEDERMAN, A. Developing Artificially Intelligent Justice. <italic>Stanford Technology Law Review</italic> 22, 242, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>RE</surname>
                            <given-names>R.M.</given-names>
                        </name>
                        <name>
                            <surname>SOLOW-NIEDERMAN</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <article-title>Developing Artificially Intelligent Justice</article-title>
                    <source>Stanford Technology Law Review</source>
                    <volume>22</volume>
                    <fpage>242</fpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B103">

                <mixed-citation>RICHARDSON, R.; SCHULTZ, J.M.; CRAWFORD, K. Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. <italic>New York University Law Review</italic> 94, 42, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>RICHARDSON</surname>
                            <given-names>R.</given-names>
                        </name>
                        <name>
                            <surname>SCHULTZ</surname>
                            <given-names>J.M.</given-names>
                        </name>
                        <name>
                            <surname>CRAWFORD</surname>
                            <given-names>K.</given-names>
                        </name>
                    </person-group>
                    <article-title>Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice</article-title>
                    <source>New York University Law Review</source>
                    <volume>94</volume>
                    <fpage>42</fpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B104">

                <mixed-citation>ROSENBAUM, D. P. The limits of hot spots policing. In Weisburd D.; Braga A.A. (eds) <italic>Police innovation: contrasting perspectives</italic>, Cambridge University Press, 2006.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>ROSENBAUM</surname>
                            <given-names>D. P.</given-names>
                        </name>
                    </person-group>
                    <chapter-title>The limits of hot spots policing</chapter-title>
                    <person-group person-group-type="editor">
                        <name>
                            <surname>Weisburd</surname>
                            <given-names>D.</given-names>
                        </name>
                        <name>
                            <surname>Braga</surname>
                            <given-names>A.A.</given-names>
                        </name>
                    </person-group>
                    <source>Police innovation: contrasting perspectives</source>
                    <publisher-name>Cambridge University Press</publisher-name>
                    <year>2006</year>

                </element-citation>
            </ref>
            <ref id="B105">

                <mixed-citation>SAVAGET, P.; CHIARINI, T.; EVANS, S. Empowering Political Participation through Artificial Intelligence. <italic>Science and Public Policy</italic> 46, 369, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>SAVAGET</surname>
                            <given-names>P.</given-names>
                        </name>
                        <name>
                            <surname>CHIARINI</surname>
                            <given-names>T.</given-names>
                        </name>
                        <name>
                            <surname>Evans</surname>
                            <given-names>S.</given-names>
                        </name>
                    </person-group>
                    <article-title>Empowering Political Participation through Artificial Intelligence</article-title>
                    <source>Science and Public Policy</source>
                    <volume>46</volume>
                    <issue>369</issue>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B106">

                <mixed-citation>SCHRAG, Z. M. <italic>Ethical Imperialism. Institutional Review Boards and the Social Sciences 1965-2009</italic>. Baltimore, Johns Hopkins University Press, 2017.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>SCHRAG</surname>
                            <given-names>Z. M.</given-names>
                        </name>
                    </person-group>
                    <source>Ethical Imperialism. Institutional Review Boards and the Social Sciences 1965-2009</source>
                    <publisher-loc>Baltimore</publisher-loc>
                    <publisher-name>Johns Hopkins University Press</publisher-name>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B107">

                <mixed-citation>SELBST, A. D. Disparate Impact in Big Data Policing. <italic>Georgia Law Review</italic> 52(1), 109, 2017.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>SELBST</surname>
                            <given-names>A. D.</given-names>
                        </name>
                    </person-group>
                    <article-title>Disparate Impact in Big Data Policing</article-title>
                    <source>Georgia Law Review</source>
                    <volume>52</volume>
                    <issue>1</issue>
                    <fpage>109</fpage>
                    <lpage>109</lpage>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B108">

                <mixed-citation>SELBST, A. D.; BAROCAS, S. The Intuitive Appeal of Explainable Machines. <italic>Fordham L. Rev</italic>. 87, 1085, 2018.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>SELBST</surname>
                            <given-names>A. D.</given-names>
                        </name>
                        <name>
                            <surname>BAROCAS</surname>
                            <given-names>S.</given-names>
                        </name>
                    </person-group>
                    <article-title>The Intuitive Appeal of Explainable Machines</article-title>
                    <source>Fordham L. Rev</source>
                    <volume>87</volume>
                    <issue>1085</issue>
                    <year>2018</year>

                </element-citation>
            </ref>
            <ref id="B109">

                <mixed-citation>SUNSTEIN, C. R. The Ethics of Nudging, <italic>Yale Journal on Regulation</italic> 32, 412, 2015.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>SUNSTEIN</surname>
                            <given-names>C. R.</given-names>
                        </name>
                    </person-group>
                    <article-title>The Ethics of Nudging</article-title>
                    <source>Yale Journal on Regulation</source>
                    <volume>32</volume>
                    <issue>412</issue>
                    <year>2015</year>

                </element-citation>
            </ref>
            <ref id="B110">

                <mixed-citation>______. <italic>Why Nudge? The Politics of Libertarian Paternalism</italic>. Yale University Press, 2015.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>SUNSTEIN</surname>
                            <given-names>C. R.</given-names>
                        </name>
                    </person-group>
                    <source>Why Nudge? The Politics of Libertarian Paternalism</source>
                    <publisher-name>Yale University Press</publisher-name>
                    <year>2015</year>

                </element-citation>
            </ref>
            <ref id="B111">

                <mixed-citation>SUNSTEIN, C. R.; THALER, R. H. Libertarian Paternalism Is Not an Oxymoron. <italic>University of Chicago Law Review</italic> 70, 1159, 2003.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>SUNSTEIN</surname>
                            <given-names>C. R.</given-names>
                        </name>
                        <name>
                            <surname>THALER</surname>
                            <given-names>R.H.</given-names>
                        </name>
                    </person-group>
                    <article-title>Libertarian Paternalism Is Not an Oxymoron</article-title>
                    <source>University of Chicago Law Review</source>
                    <volume>70</volume>
                    <issue>1159</issue>
                    <year>2003</year>

                </element-citation>
            </ref>
            <ref id="B112">

                <mixed-citation>TAYLOR, L.; FLORIDI, L.; van der Sloot, B. (eds). <italic>Group Privacy: New Challenges of Data Technologies</italic>. Cham, Springer, 2017.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="editor">
                        <name>
                            <surname>TAYLOR</surname>
                            <given-names>L.</given-names>
                        </name>
                        <name>
                            <surname>FLORIDI</surname>
                            <given-names>L.</given-names>
                        </name>
                        <name>
                            <surname>van der Sloot</surname>
                            <given-names>B.</given-names>
                        </name>
                    </person-group>
                    <source>Group Privacy: New Challenges of Data Technologies</source>
                    <publisher-loc>Cham</publisher-loc>
                    <publisher-name>Springer</publisher-name>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B113">

                <mixed-citation>THALER, R. H.; SUNSTEIN, C. R. <italic>Nudge. Improving Decisions about Health, Wealth, and Happiness</italic>. New Haven, Yale University Press, 2008.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>THALER</surname>
                            <given-names>R. H.</given-names>
                        </name>
                        <name>
                            <surname>SUNSTEIN</surname>
                            <given-names>C. R.</given-names>
                        </name>
                    </person-group>
                    <source>Nudge. Improving Decisions about Health, Wealth, and Happiness</source>
                    <publisher-loc>New Haven</publisher-loc>
                    <publisher-name>Yale University Press</publisher-name>
                    <year>2008</year>

                </element-citation>
            </ref>
            <ref id="B114">

                <mixed-citation>The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information. Joint Declaration on “Fake News,” Disinformation and Propaganda, 2017.</mixed-citation>

                <element-citation publication-type="legal-doc">
                    <person-group>
                        <collab>United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression</collab>
                        <collab>Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media</collab>
                        <collab>Organization of American States (OAS) Special Rapporteur on Freedom of Expression</collab>
                        <collab>African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information</collab>
                    </person-group>
                    <source>Joint Declaration on “Fake News,” Disinformation and Propaganda</source>
                    <year>2017</year>

                </element-citation>
            </ref>
            <ref id="B115">

                <mixed-citation>TUBARO, P.; CASILLI, A. A.; COVILLE, M. The Trainer, the Verifier, the Imitator: Three Ways in Which Human Platform Workers Support Artificial Intelligence. <italic>Big Data &amp; Society</italic> 7(1), 2020, doi: 10.1177/2053951720919776.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>TUBARO</surname>
                            <given-names>P.</given-names>
                        </name>
                        <name>
                            <surname>CASILLI</surname>
                            <given-names>A. A.</given-names>
                        </name>
                        <name>
                            <surname>COVILLE</surname>
                            <given-names>M.</given-names>
                        </name>
                    </person-group>
                    <article-title>The Trainer, the Verifier, the Imitator: Three Ways in Which Human Platform Workers Support Artificial Intelligence</article-title>
                    <source>Big Data &amp; Society</source>
                    <volume>7</volume>
                    <issue>1</issue>
                    <year>2020</year>
                    <pub-id pub-id-type="doi">10.1177/2053951720919776</pub-id>

                </element-citation>
            </ref>
            <ref id="B116">

                <mixed-citation>UN Committee on Economic, Social and Cultural Rights (CESCR). General Comment No. 1: Reporting by States Parties, 27 July 1981.</mixed-citation>

                <element-citation publication-type="report">
                    <person-group person-group-type="author">
                        <collab>UN Committee on Economic, Social and Cultural Rights (CESCR)</collab>
                    </person-group>
                    <source>General Comment No. 1: Reporting by States Parties</source>
                    <day>27</day>
                    <month>07</month>
                    <year>1981</year>

                </element-citation>
            </ref>
            <ref id="B117">

                <mixed-citation>UN Human Rights Committee (HRC). CCPR General Comment No. 25: The right to participate in public affairs, voting rights and the right of equal access to public service (Art. 25), CCPR/C/21/Rev.1/Add.7, 12 July 1996.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>UN Human Rights Committee (HRC)</collab>
                    </person-group>
                    <source>CCPR General Comment No. 25: The right to participate in public affairs, voting rights and the right of equal access to public service (Art. 25)</source>
                    <comment>CCPR/C/21/Rev.1/Add.7</comment>
                    <day>12</day>
                    <month>07</month>
                    <year>1996</year>

                </element-citation>
            </ref>
            <ref id="B118">

                <mixed-citation>UNESCO. Draft Text of the Recommendation on the Ethics of Artificial Intelligence, 2021. Available at https://unesdoc.unesco.org/ark:/48223/pf0000377897. Accessed: 3 sep. 2021.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <collab>UNESCO</collab>
                    </person-group>
                    <source>Draft Text of the Recommendation on the Ethics of Artificial Intelligence</source>
                    <year>2021</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://unesdoc.unesco.org/ark:/48223/pf0000377897">https://unesdoc.unesco.org/ark:/48223/pf0000377897</ext-link></comment>
                    <date-in-citation content-type="access-date">3 sep. 2021</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B119">

                <mixed-citation>VAN BRAKEL, R.; DE HERT, P. Policing, Surveillance and Law in a Pre-Crime Society: Understanding the Consequences of Technology Based Strategies. <italic>Cahiers Politiestudies, Jaargang</italic> 3, 163, 2011.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>VAN BRAKEL</surname>
                            <given-names>R.</given-names>
                        </name>
                        <name>
                            <surname>De Hert</surname>
                            <given-names>P.</given-names>
                        </name>
                    </person-group>
                    <article-title>Policing, Surveillance and Law in a Pre-Crime Society: Understanding the Consequences of Technology Based Strategies</article-title>
                    <source>Cahiers Politiestudies, Jaargang</source>
                    <volume>3</volume>
                    <issue>163</issue>
                    <year>2011</year>

                </element-citation>
            </ref>
            <ref id="B120">

                <mixed-citation>VEALE, M.; BINNS, R. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. <italic>Big Data &amp; Society</italic> 4(2), 2017, doi:10.1177/2053951717743530.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>VEALE</surname>
                            <given-names>M.</given-names>
                        </name>
                        <name>
                            <surname>BINNS</surname>
                            <given-names>R.</given-names>
                        </name>
                    </person-group>
                    <article-title>Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data</article-title>
                    <source>Big Data &amp; Society</source>
                    <volume>4</volume>
                    <issue>2</issue>
                    <year>2017</year>
                    <pub-id pub-id-type="doi">10.1177/2053951717743530</pub-id>

                </element-citation>
            </ref>
            <ref id="B121">

                <mixed-citation>VERBEEK, P-P. <italic>Understanding and Designing the Morality of Things</italic>. Chicago-London, The University of Chicago Press, 2011.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>VERBEEK</surname>
                            <given-names>P-P.</given-names>
                        </name>
                    </person-group>
                    <source>Understanding and Designing the Morality of Things</source>
                    <publisher-loc>Chicago-London</publisher-loc>
                    <publisher-name>The University of Chicago Press</publisher-name>
                    <year>2011</year>

                </element-citation>
            </ref>
            <ref id="B122">

                <mixed-citation>VERONESE, A.; NUNES LOPES ESPIÑEIRA LEMOS, A. Trayectoria normativa de la inteligencia artificial en los países de Latinoamérica con un marco jurídico para la protección de datos: límites y posibilidades de las políticas integradoras. <italic>Revista Latinoamericana de Economía y Sociedad Digital</italic> 2, 2021. Available at https://revistalatam.digital/article/210207/. Accessed: 27 aug. 2021.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>VERONESE</surname>
                            <given-names>A.</given-names>
                        </name>
                        <name>
                            <surname>NUNES LOPES ESPIÑEIRA LEMOS</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <article-title>Trayectoria normativa de la inteligencia artificial en los países de Latinoamérica con un marco jurídico para la protección de datos: límites y posibilidades de las políticas integradoras</article-title>
                    <source>Revista Latinoamericana de Economía y Sociedad Digital</source>
                    <volume>2</volume>
                    <year>2021</year>
                    <comment>available at <ext-link ext-link-type="uri" xlink:href="https://revistalatam.digital/article/210207/">https://revistalatam.digital/article/210207/</ext-link></comment>
                    <date-in-citation content-type="access-date">27 aug. 2021</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B123">

                <mixed-citation>WACHTER, S. Affinity Profiling and Discrimination by Association in Online Behavioural Advertising. <italic>Berkeley Tech. L.J</italic>., 35(2), 367, 2021.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>WACHTER</surname>
                            <given-names>S.</given-names>
                        </name>
                    </person-group>
                    <article-title>Affinity Profiling and Discrimination by Association in Online Behavioural Advertising</article-title>
                    <source>Berkeley Tech. L.J</source>
                    <volume>35</volume>
                    <issue>2</issue>
                    <fpage>367</fpage>
                    <lpage>367</lpage>
                    <year>2021</year>

                </element-citation>
            </ref>
            <ref id="B124">

                <mixed-citation>WEST, S.M.; WHITTAKER, M.; CRAWFORD, K. Discriminating Systems. Gender, Race, and Power in AI, 2019. Available at https://ainowinstitute.org/discriminatingsystems.pdf. Accessed: 15 may 2019.</mixed-citation>

                <element-citation publication-type="book">
                    <person-group person-group-type="author">
                        <name>
                            <surname>WEST</surname>
                            <given-names>S.M.</given-names>
                        </name>
                        <name>
                            <surname>WHITTAKER</surname>
                            <given-names>M.</given-names>
                        </name>
                        <name>
                            <surname>Crawford</surname>
                            <given-names>K.</given-names>
                        </name>
                    </person-group>
                    <source>Discriminating Systems. Gender, Race, and Power in AI</source>
                    <year>2019</year>
                    <comment>Available at <ext-link ext-link-type="uri" xlink:href="https://ainowinstitute.org/discriminatingsystems.pdf">https://ainowinstitute.org/discriminatingsystems.pdf</ext-link></comment>
                    <date-in-citation content-type="access-date">15 may 2019</date-in-citation>

                </element-citation>
            </ref>
            <ref id="B125">

                <mixed-citation>ZALNIERIUTE, M.; BENNETT MOSES, L.; WILLIAMS, G. The Rule of Law and Automation of Government Decision-Making. <italic>The Modern Law Review</italic> 82(3), 425, 2019.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>ZALNIERIUTE</surname>
                            <given-names>M.</given-names>
                        </name>
                        <name>
                            <surname>Bennett Moses</surname>
                            <given-names>L.</given-names>
                        </name>
                        <name>
                            <surname>Williams</surname>
                            <given-names>G.</given-names>
                        </name>
                    </person-group>
                    <article-title>The Rule of Law and Automation of Government Decision-Making</article-title>
                    <source>The Modern Law Review</source>
                    <volume>82</volume>
                    <issue>3</issue>
                    <fpage>425</fpage>
                    <lpage>425</lpage>
                    <year>2019</year>

                </element-citation>
            </ref>
            <ref id="B126">

                <mixed-citation>ZAVRŠNIK, A. Algorithmic Justice: Algorithms and Big Data in Criminal Justice Settings. <italic>European Journal of Criminology</italic> 1, 2019, doi:10.1177/1477370819876762.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>ZAVRŠNIK</surname>
                            <given-names>A.</given-names>
                        </name>
                    </person-group>
                    <article-title>Algorithmic Justice: Algorithms and Big Data in Criminal Justice Settings</article-title>
                    <source>European Journal of Criminology</source>
                    <volume>1</volume>
                    <year>2019</year>
                    <pub-id pub-id-type="doi">10.1177/1477370819876762</pub-id>

                </element-citation>
            </ref>
            <ref id="B127">

                <mixed-citation>ZUIDERVEEN BORGESIUS, F. Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence. <italic>The International Journal of Human Rights</italic> 24(10), 1572, 2020.</mixed-citation>

                <element-citation publication-type="journal">
                    <person-group person-group-type="author">
                        <name>
                            <surname>ZUIDERVEEN BORGESIUS</surname>
                            <given-names>F.</given-names>
                        </name>
                    </person-group>
                    <article-title>Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence</article-title>
                    <source>The International Journal of Human Rights</source>
                    <volume>24</volume>
                    <issue>10</issue>
                    <fpage>1572</fpage>
                    <lpage>1572</lpage>
                    <year>2020</year>

                </element-citation>
            </ref>
        </ref-list>
    </back>
</article>
