Regulation of artificial intelligence
Guidelines and laws to regulate AI

Regulation of artificial intelligence involves developing public sector policies and laws as part of the broader regulation of algorithms. This evolving landscape includes efforts by international organizations like the IEEE and OECD. Since 2016, many AI ethics guidelines have been introduced to ensure social control and balance fostering innovation with managing risks. Organizations deploying AI are crucial in creating trustworthy AI by following principles and mitigating risks. Regulation through review boards can also address challenges related to the AI control problem, promoting responsible AI development and deployment.

Background

According to Stanford University's 2025 AI Index, legislative mentions of AI rose 21.3% across 75 countries since 2023, marking a ninefold increase since 2016. U.S. federal agencies introduced 59 AI-related regulations in 2024—more than double the number in 2023.89

There is currently no broad consensus on the degree or mechanics of AI regulation. Several prominent figures in the field, including Elon Musk, Sam Altman, Dario Amodei, and Demis Hassabis, have publicly called for immediate regulation of AI.10111213 In 2023, following the release of GPT-4, Elon Musk and others signed an open letter urging a moratorium on the training of more powerful AI systems.14 Others, such as Mark Zuckerberg and Marc Andreessen, have warned about the risk of preemptive regulation stifling innovation.1516

In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".18 A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.19 In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".2021

Perspectives

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI.22 Regulation is now generally considered necessary [by whom?] to both encourage AI and manage associated risks.232425 Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems,26 regulation of artificial superintelligence,27 the risks and biases of machine-learning algorithms, the explainability of model outputs,28 and the tension between open source AI and unchecked AI use.293031

There have been both hard law and soft law proposals to regulate AI.32 Some legal scholars have noted that hard law approaches to AI regulation have substantial challenges.3334 Among the challenges, AI technology is rapidly evolving leading to a "pacing problem" where traditional laws and regulations often cannot keep up with emerging applications and their associated risks and benefits.3536 Similarly, the diversity of AI applications challenges existing regulatory agencies, which often have limited jurisdictional scope.37 As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because soft laws can be adapted more flexibly to meet the needs of emerging and evolving AI technology and nascent applications.3839 However, soft law approaches often lack substantial enforcement potential.4041

Cason Schmit, Megan Doerr, and Jennifer Wagner proposed the creation of a quasi-governmental regulator by leveraging intellectual property rights (i.e., copyleft licensing) in certain AI objects (i.e., AI models and training datasets) and delegating enforcement rights to a designated enforcement entity.42 They argue that AI can be licensed under terms that require adherence to specified ethical practices and codes of conduct (e.g., soft law principles).43

Prominent youth organizations focused on AI, namely Encode Justice, have also issued comprehensive agendas calling for more stringent AI regulations and public-private partnerships.4445

AI regulation could derive from basic principles. A 2020 Berkman Klein Center for Internet & Society meta-review of existing sets of principles, such as the Asilomar Principles and the Beijing Principles, identified eight such basic principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values.46 AI law and regulations have been divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.47 A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction.48 The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national,49 and international levels50 and in a variety of fields, from public service management51 and accountability52 to law enforcement,5354 healthcare (especially the concept of a Human Guarantee),5556575859 the financial sector,60 robotics,6162 autonomous vehicles,63 the military64 and national security,65 and international law.6667

Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 entitled "Being Human in an Age of AI", calling for a government commission to regulate AI.68

As a response to the AI control problem

Main article: AI control problem

Regulation of AI can be seen as a positive social means of managing the AI control problem (the need to ensure long-term beneficial AI), with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanism techniques like brain-computer interfaces being seen as potentially complementary.6970 Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into AI safety,71 together with the possibility of differential intellectual progress (prioritizing protective strategies over risky strategies in AI development) or conducting international mass surveillance to perform AGI arms control.72 For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as for addressing other major threats to human well-being, such as subversion of the global financial system, until a true superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger.73 Regulation of conscious, ethically aware AGIs focuses on how to integrate them with existing human society and can be divided into considerations of their legal standing and of their moral rights.74 Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI.75

Global guidance

The development of a global governance board to regulate AI development was suggested at least as early as 2017.76 In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.77 In 2019, the Panel was renamed the Global Partnership on AI.7879

The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence (2019).80 The 15 founding members of the Global Partnership on Artificial Intelligence are Australia, Canada, the European Union, France, Germany, India, Italy, Japan, the Republic of Korea, Mexico, New Zealand, Singapore, Slovenia, the United States and the UK. As of 2023, the GPAI had 29 members.81 The GPAI Secretariat is hosted by the OECD in Paris, France. GPAI's mandate covers four themes, two of which are supported by the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, namely, responsible AI and data governance. A corresponding centre of excellence in Paris will support the other two themes, on the future of work and on innovation and commercialization. GPAI also investigated how AI can be leveraged to respond to the COVID-19 pandemic.82

The OECD AI Principles83 were adopted in May 2019, and the G20 AI Principles in June 2019.848586 In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'.87 In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.88

At the United Nations (UN), several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics.89 In partnership with INTERPOL, UNICRI's Centre issued the report AI and Robotics for Law Enforcement in April 201990 and the follow-up report Towards Responsible AI Innovation in May 2020.91 At the 40th session of UNESCO's General Conference in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence". In pursuit of this goal, UNESCO forums and conferences on AI were held to gather stakeholder views. A draft text of a Recommendation on the Ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and included a call for legislative gaps to be filled.92 UNESCO tabled the international instrument on the ethics of AI for adoption at its General Conference in November 2021;93 this was subsequently adopted.94 While the UN is making progress with the global management of AI, its institutional and legal capability to manage the AGI existential risk is more limited.95

An initiative of the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, AI for Good is a global platform which aims to identify practical applications of AI to advance the United Nations Sustainable Development Goals and scale those solutions for global impact. It is an action-oriented, global and inclusive United Nations platform fostering the development of AI to positively impact health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities.

Recent research has indicated that countries will also begin to use artificial intelligence as a tool for national cyberdefense. AI is a new factor in the cyber arms industry, as it can be used for defense purposes. Therefore, academics urge that nations should establish regulations for the use of AI, similar to how there are regulations for other military industries.96

In recent years, academic researchers have made more efforts to promote multilateral dialogue and policy development, advocating for the adoption of international frameworks that govern the deployment of AI in military and cybersecurity contexts, with a strong emphasis on human rights and international humanitarian law. Initiatives such as the Munich Convention process, which brought together scholars from institutions including the Technical University of Munich, Rutgers University, Stellenbosch University, Ulster University, and University of Edinburgh, have called for a binding international agreement to protect human rights in the age of AI.97[non-primary source needed]

Regional and national regulation

The regulatory and policy landscape for AI is an emerging issue in regional and national jurisdictions globally, for example in the European Union98 and Russia.99 Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI.100101 These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.102103

Different countries have approached the problem in different ways. Regarding the three largest economies, it has been said that "the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach."104

Australia

In October 2023, the Australian Computer Society, Business Council of Australia, Australian Chamber of Commerce and Industry, Ai Group (aka Australian Industry Group), Council of Small Business Organisations Australia, and Tech Council of Australia jointly published an open letter calling for a national approach to AI strategy.105 The letter backs the federal government establishing a whole-of-government AI taskforce.106

Additionally, in August 2024, the Australian government set a Voluntary AI Safety Standard, which was followed by a Proposals Paper later in September of that year, outlining potential guardrails for high-risk AI that could become mandatory. These guardrails include areas such as model testing, transparency, human oversight, and record-keeping, all of which may be enforced through new legislation. Australia has not yet passed AI-specific laws, but existing statutes such as the Privacy Act 1988, Corporations Act 2001, and Online Safety Act 2021 contain provisions that apply to AI use.107

In September 2024, a bill was also introduced that would grant the Australian Communications and Media Authority (ACMA) powers to regulate AI-generated misinformation. Several agencies, including the ACMA, ACCC, and Office of the Australian Information Commissioner, are all expected to play roles in future AI regulation.108

Brazil

On September 30, 2021, the Brazilian Chamber of Deputies (Câmara dos Deputados) approved the Brazilian Legal Framework for Artificial Intelligence (Marco Legal da Inteligência Artificial). This legislation aimed to regulate AI development and usage while promoting research and innovation in ethical AI solutions that prioritize culture, justice, fairness, and accountability.109 The 10-article bill established several key objectives: developing ethical principles for AI, promoting sustained research investment, and removing barriers to innovation. Article 4 specifically emphasized preventing discriminatory AI solutions, ensuring plurality, and protecting human rights.

When the bill was first released to the public, it faced substantial criticism, raising alarm over its lack of critical provisions. The underlying issue is that the bill failed to thoroughly and carefully address principles of accountability, transparency, and inclusivity. Article VI establishes subjective liability, meaning that any individual harmed by an AI system who wishes to receive compensation must identify the responsible stakeholder and prove that there was an error in the machine's life cycle. Scholars emphasize that it is legally untenable to make individuals responsible for proving algorithmic errors, given the high degree of autonomy, unpredictability, and complexity of AI systems.110 This also drew attention to ongoing problems with facial recognition systems in Brazil leading to unjust arrests by the police, which would imply that, were this bill adopted, individuals would have to prove and justify such machine errors.

The main controversy of this draft bill was directed at three proposed principles. First, the non-discrimination principle111 suggests only that AI must be developed and used in a way that mitigates the possibility of abusive and discriminatory practices. Second, the pursuit-of-neutrality principle lists recommendations for stakeholders to mitigate biases, but imposes no obligation to achieve this goal. Lastly, the transparency principle states that a system's transparency is only necessary when there is a high risk of violating fundamental rights. As a result, the Brazilian Legal Framework for Artificial Intelligence lacks binding, obligatory clauses and instead consists largely of relaxed guidelines. In fact, experts emphasize that this bill may make accountability for discriminatory AI biases even harder to achieve. Compared to the EU's proposal of extensive risk-based regulations, the Brazilian bill has 10 articles proposing vague and generic recommendations.

The Brazilian AI Bill lacks the diverse perspectives that characterized earlier Brazilian internet legislation. When Brazil drafted the Marco Civil da Internet (Brazilian Internet Bill of Rights) in the 2000s, it used a multistakeholder approach that brought together various groups—including government, civil society, academia, and industry—to participate in dialogue, decision-making, and implementation. This collaborative process helps capture different viewpoints and trade-offs among stakeholders with varying interests, ultimately improving transparency and effectiveness in AI regulation.112

In May 2023, a new bill was passed, superseding the 2021 bill. It calls for risk assessments of AI systems before deployment and distinguishes "high risk" and "excessive risk" systems. The latter are characterized by their potential to expose or exploit vulnerabilities and will be subject to regulation by the Executive Branch.113

Canada

The Pan-Canadian Artificial Intelligence Strategy (2017) is supported by federal funding of Can$125 million with the objectives of increasing the number of outstanding AI researchers and skilled graduates in Canada, establishing nodes of scientific excellence at the three major AI centres, developing 'global thought leadership' on the economic, ethical, policy and legal implications of AI advances and supporting a national research community working on AI.114 The Canada CIFAR AI Chairs Program is the cornerstone of the strategy. It benefits from funding of Can$86.5 million over five years to attract and retain world-renowned AI researchers.115 The federal government appointed an Advisory Council on AI in May 2019 with a focus on examining how to build on Canada's strengths to ensure that AI advancements reflect Canadian values, such as human rights, transparency and openness. The Advisory Council on AI has established a working group on extracting commercial value from Canadian-owned AI and data analytics.116 In 2020, the federal government and Government of Quebec announced the opening of the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, which will advance the cause of responsible development of AI.117 In June 2022, the government of Canada started a second phase of the Pan-Canadian Artificial Intelligence Strategy.118 In November 2022, Canada introduced the Digital Charter Implementation Act (Bill C-27), which proposes three acts that have been described as a holistic package of legislation for trust and privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence & Data Act (AIDA).119120

In September 2023, the Canadian Government introduced a Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems. The code, based initially on public consultations, seeks to provide interim guidance to Canadian companies on responsible AI practices. Ultimately, it is intended to serve as a stopgap until formal legislation, such as the Artificial Intelligence and Data Act (AIDA), is enacted.121122 In November 2024, the Canadian government also announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI) as part of a 2.4 billion CAD federal AI investment package. This includes 2 billion CAD to support a new AI Sovereign Computing Strategy and the AI Computing Access Fund, which aims to bolster Canada’s advanced computing infrastructure. Further funding includes 700 million CAD for domestic AI development, 1 billion CAD for public supercomputing infrastructure, and 300 million CAD to assist companies in accessing new AI resources.123

China

Further information: Artificial intelligence industry in China

The regulation of AI in China is mainly governed by the State Council of the People's Republic of China's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Chinese Communist Party and the State Council of the PRC urged the governing bodies of China to promote the development of AI up to 2030. Regulation of the issues of ethical and legal support for the development of AI is accelerating, and policy ensures state control over Chinese companies and over valuable data, including storage of data on Chinese users within the country and the mandatory use of the People's Republic of China's national standards for AI, including standards for big data, cloud computing, and industrial software.124125126 In 2021, China published ethical guidelines for the use of AI which state that researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety.127 In 2023, China introduced Interim Measures for the Management of Generative AI Services.128

On August 15, 2023, China's first generative AI measures officially came into force, becoming one of the first comprehensive national regulatory frameworks for generative AI. The measures apply to all providers offering generative AI services to the Chinese public, including foreign entities, ultimately setting the rules related to data protection, transparency, and algorithmic accountability.129130

In parallel, earlier regulations such as the Chinese government's Deep Synthesis Provisions (effective January 2023) and the Algorithm Recommendation Provisions (effective March 2022) continue to shape China's governance of AI-driven systems, including requirements for watermarking and algorithm filing with the Cyberspace Administration of China (CAC).131 Additionally, in October 2023, China implemented a set of Ethics Review Measures for science and technology, mandating ethical assessments of AI projects deemed socially sensitive or capable of negatively influencing public opinion.132 As of mid-2024, over 1,400 AI algorithms had already been registered under the CAC's algorithm filing regime, which includes disclosure requirements and penalties for noncompliance.133 This layered approach reflects a broader policy process shaped not only by central directives but also by academic input, civil society concerns, and public discourse.134

Colombia

Colombia has not issued a specific artificial intelligence (AI) law; nonetheless, this does not mean there is no framework or initiative to govern it. In fact, numerous instruments have been issued with that purpose, including national policies, ethical frameworks, road maps, rulings, and guidelines. In addition, other existing regulations apply to AI systems, such as data protection, intellectual property, consumer laws, and civil liability rules.

One of the first specific instruments issued was the CONPES 3920 of 2019, the National Policy on Exploitation of Data (Big Data). The main purpose of this policy was to leverage data in Colombia by creating the conditions to handle it as an asset to generate social and economic value135.

Another milestone occurred in 2021, when the National Government published the AI Ethical Framework in Colombia. It was a soft law guide for public entities, offering recommendations and suggestions to consider in the management of projects that incorporate the use of AI136.

An additional framework for AI was adopted by Colombia in 2022: the Recommendation on the Ethics of Artificial Intelligence by UNESCO137. It includes values and principles applicable in the public and private sectors in all stages of the AI system life cycle138.

A regional political commitment on AI involving Latin American and Caribbean countries was made in 2023: the Santiago Declaration, whose main purpose is to promote ethical AI in the region139.

2024 was a prolific year for the governance of AI in Colombia. A road map for ethical and sustainable AI adoption was launched by the National Government140. The Superintendence of Industry and Commerce issued a guide on the treatment of personal data in AI systems141. The Judiciary Council published a guideline for the use of AI in the judicial sector142. In the global context, the OECD principles were updated143, the United Nations published the Global Digital Compact144, and the UN adopted Resolution A/78/L.49145 on safe, trustworthy, and reliable AI systems for sustainable development.

In 2025, a new national policy on AI was issued by the National Government, contained in the CONPES 4144146, and the ruling T-067/25147 by the Constitutional Court provided some rules for access to public information and transparency of algorithms.

Until the Congress issues AI regulations, these soft law documents can guide the design, development and use of AI systems in Colombia.

Council of Europe

The Council of Europe (CoE) is an international organization that promotes human rights, democracy and the rule of law. It comprises 46 member states, including all 29 Signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence. The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe's aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions". The large number of relevant documents identified by the CoE include guidelines, charters, papers, reports and strategies.148 The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states.149

In 2019, the Council of Europe initiated a process to assess the need for legally binding regulation of AI, focusing specifically on its implications for human rights and democratic values. Negotiations on a treaty began in September 2022, involving the 46 member states of the Council of Europe, as well as Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay, as well as the European Union. On 17 May 2024, the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" was adopted. It was opened for signature on 5 September 2024. Although developed by a European organisation, the treaty is open for accession by states from other parts of the world. The first ten signatories were: Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union.150151

Czech Republic

The Czech Republic adopted a National AI Strategy in 2019, and an updated National AI Strategy of the Czech Republic 2030 in 2024.152 The updated strategy includes a provision to ensure effective legislation, to create codes of ethics for developers and users, to establish supervisory bodies and to promote the ethical use of AI.153

European Union

Main article: Artificial Intelligence Act

The EU is one of the largest jurisdictions in the world and plays an active role in the global regulation of digital technology through the GDPR,154 the Digital Services Act, and the Digital Markets Act.155156 For AI in particular, the Artificial Intelligence Act was regarded in 2023 as the most far-reaching regulation of AI worldwide.157158

Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent.159 The European Union is guided by a European Strategy on Artificial Intelligence,160 supported by a High-Level Expert Group on Artificial Intelligence.161162 In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI),163 following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.164 The EU Commission's High-Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020, the EU Commission sought views on a proposal for AI-specific legislation, and that process is ongoing.165

On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence – A European approach to excellence and trust.166167 The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The 'ecosystem of trust' outlines the EU's approach for a regulatory framework for AI. In its proposed approach, the Commission distinguishes AI applications based on whether they are 'high-risk' or not. Only high-risk AI applications should be in the scope of a future EU regulatory framework. An AI application is considered high-risk if it operates in a risky sector (such as healthcare, transport or energy) and is "used in such a manner that significant risks are likely to arise". For high-risk AI applications, the requirements mainly concern "training data", "data and record-keeping", "information to be provided", "robustness and accuracy", and "human oversight". There are also requirements specific to certain usages such as remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework.168

A January 2021 draft was leaked online on April 14, 2021,169 before the Commission presented its official "Proposal for a Regulation laying down harmonised rules on artificial intelligence" a week later.170 Shortly after, the Artificial Intelligence Act (also known as the AI Act) was formally proposed on this basis.171 This proposal includes a refinement of the 2020 risk-based approach with, this time, four risk categories: "minimal", "limited", "high" and "unacceptable".172 The proposal has been severely critiqued in the public debate. Academics have expressed concerns about various unclear elements in the proposal – such as the broad definition of what constitutes AI – and feared unintended legal implications, especially for vulnerable groups such as patients and migrants.173174 The risk category "general-purpose AI" was added to the AI Act to account for versatile models like ChatGPT, which did not fit the application-based regulation framework.175 Unlike the other risk categories, general-purpose AI models can be regulated based on their capabilities, not just their uses. Weaker general-purpose AI models are subject to transparency requirements, while those considered to pose "systemic risks" (notably those trained using computational capabilities exceeding 10^25 FLOP) must also undergo a thorough evaluation process.176 A subsequent version of the AI Act was finally adopted in May 2024.177 The AI Act will be progressively enforced.178 Recognition of emotions and real-time remote biometric identification will be prohibited, with some exemptions, such as for law enforcement.179
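
To make the compute threshold concrete, the sketch below is a hypothetical illustration, not anything prescribed by the AI Act: it estimates training compute with the common "6 × parameters × training tokens" rule of thumb (an assumption introduced here) and compares the result with the 10^25 FLOP figure above. The model configurations are invented for illustration.

    # Hypothetical illustration: compare an estimated training-compute budget
    # with the AI Act's 10^25 FLOP presumption threshold for "systemic risk"
    # general-purpose models. The 6 * N * D estimate is a rule-of-thumb
    # assumption, not a formula from the Act.
    SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

    def estimated_training_flop(parameters: float, tokens: float) -> float:
        # Rough heuristic for dense transformer training compute: ~6 * N * D.
        return 6 * parameters * tokens

    # Invented example configurations (not figures for any real system).
    examples = {
        "7B parameters, 2T tokens": estimated_training_flop(7e9, 2e12),
        "70B parameters, 15T tokens": estimated_training_flop(70e9, 15e12),
        "1T parameters, 20T tokens": estimated_training_flop(1e12, 20e12),
    }
    for name, flop in examples.items():
        flagged = flop >= SYSTEMIC_RISK_THRESHOLD_FLOP
        print(f"{name}: ~{flop:.1e} FLOP -> above systemic-risk threshold: {flagged}")

Under these assumptions, only the largest configuration would exceed the threshold and trigger the additional evaluation obligations described above.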

The European Union's AI Act has created a regulatory framework with significant implications globally. This legislation introduces a risk-based approach to categorizing AI systems, focusing on high-risk applications like healthcare, education, and public safety.180 It requires organizations to ensure transparency, data governance, and human oversight in their AI solutions. While this aims to foster ethical AI use, the stringent requirements could increase compliance costs and delay technology deployment, impacting innovation-driven industries.

Observers have expressed concerns about the multiplication of legislative proposals under the von der Leyen Commission. The speed of the legislative initiatives is partly driven by the political ambitions of the EU and could put at risk the digital rights of European citizens, including rights to privacy,181 especially in the face of uncertain guarantees of data protection through cyber security.182 Among the stated guiding principles in the variety of legislative proposals in the area of AI under the von der Leyen Commission are the objectives of strategic autonomy183 and the concept of digital sovereignty.184 On May 29, 2024, the European Court of Auditors published a report stating that EU measures were not well coordinated with those of EU countries; that the monitoring of investments was not systematic; and that stronger governance was needed.185

Finland

Finland has appointed a working group to evaluate what national legislation is required by the EU Artificial intelligence Act, and to prepare a legislative proposal on its national implementation. The working group began its evaluation on April 29, 2024, and is expected to conclude by June 30, 2026.186

Germany

In November 2020,187 DIN, DKE and the German Federal Ministry for Economic Affairs and Energy published the first edition of the "German Standardization Roadmap for Artificial Intelligence" (NRM KI) and presented it to the public at the Digital Summit of the Federal Government of Germany.188 NRM KI describes requirements for future regulations and standards in the context of AI. The implementation of its recommendations for action is intended to help strengthen the German economy and science in international competition in the field of artificial intelligence and create innovation-friendly conditions for this emerging technology. The first edition is a 200-page document written by 300 experts. The second edition of the NRM KI was published to coincide with the German government's Digital Summit on December 9, 2022.189 DIN coordinated more than 570 participating experts from a wide range of fields from science, industry, civil society and the public sector. The second edition is a 450-page document.

On the one hand, NRM KI covers the focus topics in terms of applications (e.g. medicine, mobility, energy & environment, financial services, industrial automation) and fundamental issues (e.g. AI classification, security, certifiability, socio-technical systems, ethics).190 On the other hand, it provides an overview of the central terms in the field of AI and its environment across a wide range of interest groups and information sources. In total, the document covers 116 standardisation needs and provides six central recommendations for action.191

G7

On 30 October 2023, members of the G7 subscribed to eleven guiding principles for the design, production and implementation of advanced artificial intelligence systems, as well as a voluntary Code of Conduct for artificial intelligence developers, in the context of the Hiroshima Process.192

The agreement was welcomed by Ursula von der Leyen, who found in it the principles of the EU's proposed AI Act, which was then being finalized.

The new guidelines also aim to establish a coordinated global effort towards the responsible development and use of advanced AI systems. While the guidelines are non-binding, the G7 governments encourage organizations to adopt them voluntarily; they emphasize a risk-based approach across the AI lifecycle—from pre-deployment risk assessment to post-deployment incident reporting and mitigation.193

The AIP&CoC also highlight the importance of AI system security, internal adversarial testing ('red teaming'), public transparency about capabilities and limitations, and governance procedures that include privacy safeguards and content authentication tools. The guidelines additionally promote AI innovation directed at solving global challenges such as climate change and public health, and call for advancing international technical standards.194

Looking ahead, the G7 intends to further refine its principles and Code of Conduct in collaboration with other organizations like the OECD, GPAI, and broader stakeholders. Areas for further development include clearer AI terminology (e.g., “advanced AI systems”), the setting of risk benchmarks, and mechanisms for cross-border information sharing on potential AI risks. Despite general alignment on AI safety, analysts have noted that differing regulatory philosophies—such as the EU’s prescriptive AI Act versus the U.S.’s sector-specific approach—may challenge global regulatory harmonization.195

Israel

On October 30, 2022, pursuant to government resolution 212 of August 2021, the Israeli Ministry of Innovation, Science and Technology released its "Principles of Policy, Regulation and Ethics in AI" white paper for public consultation.196 By December 2023, the Ministry of Innovation and the Ministry of Justice published a joint AI regulation and ethics policy paper, outlining several AI ethical principles and a set of recommendations including opting for sector-based regulation, a risk-based approach, preference for "soft" regulatory tools and maintaining consistency with existing global regulatory approaches to AI.197

In December 2023, Israel unveiled its first comprehensive national AI policy, developed jointly through ministerial collaboration and stakeholder consultation. In general, the new policy outlines ethical principles aligned with current OECD guidelines and recommends a sector-based, risk-driven regulatory framework, which focuses on areas like transparency and accountability.198 The policy proposes the creation of a national AI Policy Coordination Center to support regulators and to further develop the tools necessary for responsible AI deployment. In addition to domestic policy development, Israel, alongside 56 other nations, signed the world’s first binding international treaty on artificial intelligence in 2024. The treaty, led by the Council of Europe, obliges signatories to ensure that AI systems uphold democratic values, human rights, and the rule of law.199

Italy

In October 2023, the Italian privacy authority approved a regulation that provides three principles for therapeutic decisions taken by automated systems: transparency of decision-making processes, human supervision of automated decisions and algorithmic non-discrimination.200

In March 2024, the President of the Italian Data Protection Authority reaffirmed their agency’s readiness to implement the European Union’s newly introduced Artificial Intelligence Act, praising the framework of institutional competence and independence.201 Italy has continued to develop guidance on AI applications through existing legal frameworks, including recent innovations in areas such as facial recognition for law enforcement, AI in healthcare, deepfakes, and smart assistants.202 The Italian government’s National AI Strategy (2022–2024) emphasizes responsible innovation and outlines goals for talent development, public and private sector adoption, and regulatory clarity, particularly in coordination with EU-level initiatives.203 While Italy has not enacted standalone AI legislation, courts and regulators have begun interpreting existing laws to address transparency, non-discrimination, and human oversight in algorithmic decision-making.

Morocco

In Morocco, a new legislative proposal has been put forward by a coalition of political parties in Parliament to establish the National Agency for Artificial Intelligence (AI). This agency is intended to regulate AI technologies, enhance collaboration with international entities in the field, and increase public awareness of both the possibilities and risks associated with AI.204

In recent years, Morocco has made efforts to advance its use of artificial intelligence in the legal sector, particularly through AI tools that assist with judicial prediction and document analysis, helping to streamline case law research and support legal practitioners with more complex tasks. Alongside these efforts to establish a national AI agency, AI is being gradually introduced into legislative and judicial processes in Morocco, with ongoing discussions emphasizing the benefits as well as the potential risks of these technologies.205

Generally speaking, Morocco's broader digital policy includes robust data governance measures, including the 2009 Personal Data Protection Law and the 2020 Cybersecurity Law, which establish requirements in areas such as privacy, breach notification, and data localization.206 As of 2024, additional decrees have expanded cybersecurity standards for cloud infrastructure and data audits within the nation. While general data localization is not mandated, sensitive government and critical infrastructure data must be stored domestically. Oversight is led by the National Commission for the Protection of Personal Data (CNDP) and the General Directorate of Information Systems Security (DGSSI), though public enforcement actions in the country remain limited.207

New Zealand

As of July 2023, no AI-specific legislation exists, but AI usage is regulated by existing laws, including the Privacy Act, the Human Rights Act, the Fair Trading Act and the Harmful Digital Communications Act.208

In 2020, the New Zealand Government sponsored a World Economic Forum pilot project titled "Reimagining Regulation for the Age of AI", aimed at creating regulatory frameworks around AI.209 The same year, the Privacy Act was updated to regulate the use of New Zealanders' personal information in AI.210 In 2023, the Privacy Commissioner released guidance on using AI in accordance with information privacy principles.211 In February 2024, the Attorney-General and Technology Minister announced the formation of a Parliamentary cross-party AI caucus, and that a framework for the Government's use of AI was being developed. She also announced that no extra regulation was planned at that stage.212

Philippines

In 2023, a bill was filed in the Philippine House of Representatives which proposed the establishment of the Artificial Intelligence Development Authority (AIDA) which would oversee the development and research of artificial intelligence. AIDA was also proposed to be a watchdog against crimes using AI.213

In 2024, the Commission on Elections also considered banning the use of AI and deepfakes for campaigning. It looks to implement regulations that would apply as early as the 2025 general elections.214

Spain

In 2018, the Spanish Ministry of Science, Innovation and Universities approved an R&D Strategy on Artificial Intelligence.215

This section is an excerpt from Spanish Agency for the Supervision of Artificial Intelligence § History.

With the formation of the second government of Pedro Sánchez in January 2020, the areas related to new technologies, which since 2018 had been under the Ministry of Economy, were strengthened. Thus, in 2020 the Secretariat of State for Digitalization and Artificial Intelligence (SEDIA) was created.216 From this higher body, following the recommendations made by the R&D Strategy on Artificial Intelligence of 2018,217 the National Artificial Intelligence Strategy (2020) was developed, which already provided for actions concerning the governance of artificial intelligence and the ethical standards that should govern its use. This project was also included within the Recovery, Transformation and Resilience Plan (2021).

During 2021,218 the Government revealed that these ideas would be developed through a new government agency, and the General State Budget for 2022 authorized its creation and allocated five million euros for its development.219

The Council of Ministers, at its meeting on 13 September 2022, began the process for the election of the AESIA headquarters.220221 16 Spanish provinces presented candidatures, with the Government opting for A Coruña, which proposed the La Terraza building.222

On 22 August 2023, the Government approved the internal regulations of the Agency.223 With this, Spain became the first European country with an agency dedicated to the supervision of AI, anticipating the entry into force of the future European Regulation on Artificial Intelligence,224 which establishes the need for Member States to have a supervisory authority in this matter.

Switzerland

Switzerland currently has no specific AI legislation, but on 12 February 2025, the Federal Council announced plans to ratify the Council of Europe’s AI Convention and incorporate it into Swiss law. A draft bill and implementation plan are to be prepared by the end of 2026. The approach includes sector-specific regulation, limited cross-sector rules, such as data protection, and non-binding measures such as industry agreements. The goals are to support innovation, protect fundamental rights, and build public trust in AI.225

United Kingdom

The UK supported the application and development of AI in business via the Digital Economy Strategy 2015–2018,226 introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy.227 In the public sector, the Department for Digital, Culture, Media and Sport advised on data ethics and the Alan Turing Institute provided guidance on responsible design and implementation of AI systems.228229 In terms of cyber security, in 2020 the National Cyber Security Centre issued guidance on 'Intelligent Security Tools'.230231 The following year, the UK published its 10-year National AI Strategy,232 which describes actions to assess long-term AI risks, including AGI-related catastrophic risks.233

In March 2023, the UK released the white paper A pro-innovation approach to AI regulation.234 This white paper presents general AI principles, but leaves significant flexibility to existing regulators in how they adapt these principles to specific areas such as transport or financial markets.235 In November 2023, the UK hosted the first AI safety summit, with the prime minister Rishi Sunak aiming to position the UK as a leader in AI safety regulation.236237 During the summit, the UK created an AI Safety Institute, as an evolution of the Frontier AI Taskforce led by Ian Hogarth. The institute was notably assigned the responsibility of advancing the safety evaluations of the world's most advanced AI models, also called frontier AI models.238

The UK government indicated its reluctance to legislate early, arguing that it may reduce the sector's growth and that laws might be rendered obsolete by further technological progress.239

United States

Main article: Regulation of AI in the United States

Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.240

Regulation of fully autonomous weapons

Main article: Lethal autonomous weapon

Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons.241 Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS, affirmed by the GGE, was adopted in 2018.242

In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue,243 and leading to proposals for global regulation.244 The possibility of a moratorium or preemptive ban of the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by the Campaign to Stop Killer Robots – a coalition of non-governmental organizations.245 The US government maintains that current international humanitarian law is capable of regulating the development or use of LAWS.246 The Congressional Research Service indicated in 2023 that the US does not have LAWS in its inventory, but that its policy does not prohibit their development and employment.247

See also

References

  1. Cath, Corinne (2018). "Governing artificial intelligence: ethical, legal and technical opportunities and challenges". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 376 (2133): 20180080. Bibcode:2018RSPTA.37680080C. doi:10.1098/rsta.2018.0080. PMC 6191666. PMID 30322996. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6191666

  2. Erdélyi, Olivia J.; Goldsmith, Judy (2020). "Regulating Artificial Intelligence: Proposal for a Global Solution". arXiv:2005.11072 [cs.CY].

  3. Tallberg, Jonas; Erman, Eva; Furendal, Markus; Geith, Johannes; Klamberg, Mark; Lundgren, Magnus (2023). "Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research". International Studies Review. 25 (3). arXiv:2305.11528. doi:10.1093/isr/viad040. https://doi.org/10.1093%2Fisr%2Fviad040

  4. Héder, M (2020). "A criticism of AI ethics guidelines". Információs Társadalom. 20 (4): 57–73. doi:10.22503/inftars.XX.2020.4.5. S2CID 233252939. https://doi.org/10.22503%2Finftars.XX.2020.4.5

  5. Curtis, Caitlin; Gillespie, Nicole; Lockey, Steven (2022-05-24). "AI-deploying organizations are key to addressing 'perfect storm' of AI risks". AI and Ethics. 3 (1): 145–153. doi:10.1007/s43681-022-00163-7. ISSN 2730-5961. PMC 9127285. PMID 35634256. Archived from the original on 2023-03-15. Retrieved 2022-05-30. https://doi.org/10.1007/s43681-022-00163-7

  6. "An Ethical Approach to AI is an Absolute Imperative, Andreas Kaplan". Archived from the original on 17 December 2019. Retrieved 26 April 2021. https://olbios.org/an-ethical-approach-to-ai-is-an-absolute-imperative/

  7. Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. Bibcode:2015PhyS...90a8001S. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949. https://doi.org/10.1088%2F0031-8949%2F90%2F1%2F018001

  8. Vincent, James (3 April 2023). "AI is entering an era of corporate control". The Verge. Archived from the original on 19 June 2023. Retrieved 19 June 2023. https://www.theverge.com/23667752/ai-progress-2023-report-stanford-corporate-control

  9. "Artificial Intelligence Index Report 2025". Artificial Intelligence Index Report 2025. Stanford University. 2025. Archived (PDF) from the original on 16 June 2025. Retrieved 16 June 2025. https://hai.stanford.edu/ai-index/2025-ai-index-report

  10. Varanasi, Lakshmi. "OpenAI's Sam Altman says an international agency should monitor the 'most powerful' AI to ensure 'reasonable safety'". Business Insider. Business Insider. Retrieved 16 June 2025. https://web.archive.org/web/20250409150654/https://www.businessinsider.com/sam-altman-openai-artificial-intelligence-regulation-international-agency-2024-5

  11. Aloisi, Silva. "Elon Musk repeats call for artificial intelligence regulation". Reuters. Reuters. Retrieved 16 June 2025. https://www.reuters.com/technology/elon-musk-repeats-call-artificial-intelligence-regulation-2023-06-16/#:~:text=PARIS%2C%20June%2016%20%28Reuters%29%20,the%20AI%20sector%20needed%20regulation

  12. Milmo, Dqan. "This article is more than 1 year old AI risk must be treated as seriously as climate crisis, says Google DeepMind chief". The Guardian. The Guardian. Retrieved 16 June 2025. https://www.theguardian.com/technology/2023/oct/24/ai-risk-climate-crisis-google-deepmind-chief-demis-hassabis-regulation

  13. Sherry, Ben. "Why Anthropic CEO Dario Amodei Is Asking for AI Regulation". Inc. Inc. Retrieved 16 June 2025. https://www.inc.com/ben-sherry/why-anthropic-ceo-dario-amodei-is-asking-for-ai-regulation/91198864

  14. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2025-06-16. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

  15. Goldman, Sharon. "How Mark Zuckerberg has fully rebuilt Meta around Llama". Fortune. Fortune. Retrieved 16 June 2025. https://fortune.com/2024/11/19/zuckerberg-meta-ai-openai-llama/

  16. Heath, Ryan. "Civilization depends on more AI, Marc Andreessen says". Axios. Axios. Retrieved 16 June 2025. https://www.axios.com/2023/10/17/marc-andreessen-ai-manifesto-techno-optimist

  17. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2025-06-16. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

  18. Vincent, James (3 April 2023). "AI is entering an era of corporate control". The Verge. Archived from the original on 19 June 2023. Retrieved 19 June 2023. https://www.theverge.com/23667752/ai-progress-2023-report-stanford-corporate-control

  19. Edwards, Benj (17 May 2023). "Poll: AI poses risk to humanity, according to majority of Americans". Ars Technica. Archived from the original on 19 June 2023. Retrieved 19 June 2023. https://arstechnica.com/information-technology/2023/05/poll-61-of-americans-say-ai-threatens-humanitys-future/

  20. Kasperowicz, Peter (1 May 2023). "Regulate AI? GOP much more skeptical than Dems that government can do it right: poll". Fox News. Archived from the original on 19 June 2023. Retrieved 19 June 2023. https://www.foxnews.com/politics/regulate-ai-gop-much-more-skeptical-than-dems-that-the-government-can-do-it-right-poll

  21. "Fox News Poll" (PDF). Fox News. 2023. Archived (PDF) from the original on 12 May 2023. Retrieved 19 June 2023. https://static.foxnews.com/foxnews.com/content/uploads/2023/05/Fox_April-21-24-2023_Complete_National_Topline_May-1-Release.pdf

  22. Barfield, Woodrow; Pagallo, Ugo (2018). Research handbook on the law of artificial intelligence. Cheltenham, UK: Edward Elgar Publishing. ISBN 978-1-78643-904-8. OCLC 1039480085.

  23. Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (2018-07-24). "Artificial Intelligence and the Public Sector—Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692. S2CID 158829602. Archived from the original on 2020-08-18. Retrieved 2020-08-17. https://zenodo.org/record/3569435

  24. Buiten, Miriam C. (2019). "Towards Intelligent Regulation of Artificial Intelligence". European Journal of Risk Regulation. 10 (1): 41–59. doi:10.1017/err.2019.8. https://doi.org/10.1017%2Ferr.2019.8

  25. Mantelero, Alessandro; Esposito, Maria Samantha (2021). "An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems". Computer Law & Security Review. 41: 105561. arXiv:2407.20951. doi:10.1016/j.clsr.2021.105561. ISSN 0267-3649. S2CID 237588123. https://doi.org/10.1016%2Fj.clsr.2021.105561

  26. Artificial intelligence in society. Paris: Organisation for Economic Co-operation and Development. 11 June 2019. ISBN 978-92-64-54519-9. OCLC 1105926611.

  27. Kamyshansky, Vladimir P.; Rudenko, Evgenia Y.; Kolomiets, Evgeniy A.; Kripakova, Dina R. (2020), "Revisiting the Place of Artificial Intelligence in Society and the State", Artificial Intelligence: Anthropogenic Nature vs. Social Origin, Advances in Intelligent Systems and Computing, vol. 1100, Cham: Springer International Publishing, pp. 359–364, doi:10.1007/978-3-030-39319-9_41, ISBN 978-3-030-39318-2, S2CID 213070224

  28. Buiten, Miriam C. (2019). "Towards Intelligent Regulation of Artificial Intelligence". European Journal of Risk Regulation. 10 (1): 41–59. doi:10.1017/err.2019.8. https://doi.org/10.1017%2Ferr.2019.8

  29. "Co-Governance and the Future of AI Regulation". Harvard Law Review. Retrieved 16 June 2025. https://harvardlawreview.org/print/vol-138/co-governance-and-the-future-of-ai-regulation/

  30. "Not all AI models should be freely available, argues a legal scholar". The Economist. Retrieved 16 June 2025. https://www.economist.com/by-invitation/2024/07/29/not-all-ai-models-should-be-freely-available-argues-a-legal-scholar

  31. "Keep the code behind AI open, say two entrepreneurs". The Economist. Retrieved 16 June 2025. https://www.economist.com/by-invitation/2024/07/29/keep-the-code-behind-ai-open-say-two-entrepreneurs

  32. "Special Issue on Soft Law Governance of Artificial Intelligence: IEEE Technology and Society Magazine publication information". IEEE Technology and Society Magazine. 40 (4): C2. December 2021. doi:10.1109/MTS.2021.3126194. https://doi.org/10.1109%2FMTS.2021.3126194

  33. Marchant, Gary. ""Soft Law" Governance of AI" (PDF). AI Pulse. AI PULSE Papers. Archived (PDF) from the original on 21 March 2023. Retrieved 28 February 2023. https://escholarship.org/content/qt0jq252ks/qt0jq252ks.pdf

  34. Johnson, Walter G.; Bowman, Diana M. (December 2021). "A Survey of Instruments and Institutions Available for the Global Governance of Artificial Intelligence". IEEE Technology and Society Magazine. 40 (4): 68–76. doi:10.1109/MTS.2021.3123745. S2CID 245053179.

  35. Marchant, Gary. ""Soft Law" Governance of AI" (PDF). AI Pulse. AI PULSE Papers. Archived (PDF) from the original on 21 March 2023. Retrieved 28 February 2023. https://escholarship.org/content/qt0jq252ks/qt0jq252ks.pdf

  36. Johnson, Walter G.; Bowman, Diana M. (December 2021). "A Survey of Instruments and Institutions Available for the Global Governance of Artificial Intelligence". IEEE Technology and Society Magazine. 40 (4): 68–76. doi:10.1109/MTS.2021.3123745. S2CID 245053179.

  37. Marchant, Gary. ""Soft Law" Governance of AI" (PDF). AI Pulse. AI PULSE Papers. Archived (PDF) from the original on 21 March 2023. Retrieved 28 February 2023. https://escholarship.org/content/qt0jq252ks/qt0jq252ks.pdf

  38. Marchant, Gary. ""Soft Law" Governance of AI" (PDF). AI Pulse. AI PULSE Papers. Archived (PDF) from the original on 21 March 2023. Retrieved 28 February 2023. https://escholarship.org/content/qt0jq252ks/qt0jq252ks.pdf

  39. Johnson, Walter G.; Bowman, Diana M. (December 2021). "A Survey of Instruments and Institutions Available for the Global Governance of Artificial Intelligence". IEEE Technology and Society Magazine. 40 (4): 68–76. doi:10.1109/MTS.2021.3123745. S2CID 245053179.

  40. Marchant, Gary. ""Soft Law" Governance of AI" (PDF). AI Pulse. AI PULSE Papers. Archived (PDF) from the original on 21 March 2023. Retrieved 28 February 2023. https://escholarship.org/content/qt0jq252ks/qt0jq252ks.pdf

  41. Sutcliffe, Hillary R.; Brown, Samantha (December 2021). "Trust and Soft Law for AI". IEEE Technology and Society Magazine. 40 (4): 14–24. doi:10.1109/MTS.2021.3123741. S2CID 244955938.

  42. Schmit, C. D.; Doerr, M. J.; Wagner, J. K. (17 February 2023). "Leveraging IP for AI governance". Science. 379 (6633): 646–648. Bibcode:2023Sci...379..646S. doi:10.1126/science.add2202. PMID 36795826. S2CID 256901479.

  43. Schmit, C. D.; Doerr, M. J.; Wagner, J. K. (17 February 2023). "Leveraging IP for AI governance". Science. 379 (6633): 646–648. Bibcode:2023Sci...379..646S. doi:10.1126/science.add2202. PMID 36795826. S2CID 256901479.

  44. Lima-Strong, Cristiano (16 May 2024). "Youth activists call on world leaders to set AI safeguards by 2030". Washington Post. Retrieved 24 June 2024. https://www.washingtonpost.com/politics/2024/05/16/youth-activists-call-world-leaders-set-ai-safeguards-by-2030/

  45. Haldane, Matt (21 May 2024). "Student AI activists at Encode Justice release 22 goals for 2030 ahead of global summit in Seoul". Archived from the original on 25 September 2024. Retrieved 24 June 2024. https://www.scmp.com/tech/policy/article/3263482/student-ai-activists-encode-justice-release-22-goals-2030-ahead-global-summit-seoul

  46. Fjeld, Jessica; Achten, Nele; Hilligoss, Hannah; Nagy, Adam; Srikumar, Madhu (2020-01-15). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI (Report). Berkman Klein Center for Internet & Society. Archived from the original on 2021-07-16. Retrieved 2021-07-04. https://dash.harvard.edu/handle/1/42160420

  47. Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (2018-07-24). "Artificial Intelligence and the Public Sector—Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692. S2CID 158829602. Archived from the original on 2020-08-18. Retrieved 2020-08-17. https://zenodo.org/record/3569435

  48. Wirtz, Bernd W.; Weyerer, Jan C.; Sturm, Benjamin J. (2020-04-15). "The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration". International Journal of Public Administration. 43 (9): 818–829. doi:10.1080/01900692.2020.1749851. ISSN 0190-0692. S2CID 218807452.

  49. Bredt, Stephan (2019-10-04). "Artificial Intelligence (AI) in the Financial Sector—Potential and Public Strategies". Frontiers in Artificial Intelligence. 2: 16. doi:10.3389/frai.2019.00016. ISSN 2624-8212. PMC 7861258. PMID 33733105. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861258

  50. White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1. https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf

  51. Wirtz, Bernd W.; Müller, Wilhelm M. (2018-12-03). "An integrated artificial intelligence framework for public management". Public Management Review. 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268. ISSN 1471-9037. S2CID 158267709.

  52. Reisman, Dillon; Schultz, Jason; Crawford, Kate; Whittaker, Meredith (2018). Algorithmic impact assessments: A practical framework for public agency accountability (PDF). New York: AI Now Institute. Archived from the original (PDF) on 2020-06-14. Retrieved 2020-04-28. https://web.archive.org/web/20200614205833/https://ainowinstitute.org/aiareport2018.pdf

  53. White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1. https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf

  54. "Towards Responsible Artificial Intelligence Innovation". UNICRI. July 2020. Archived from the original on 2022-07-05. Retrieved 2022-07-18. https://unicri.it/towards-responsible-artificial-intelligence-innovation

  55. Kohli, Ajay; Mahajan, Vidur; Seals, Kevin; Kohli, Ajit; Jha, Saurabh (2019). "Concepts in U.S. Food and Drug Administration Regulation of Artificial Intelligence for Medical Imaging". American Journal of Roentgenology. 213 (4): 886–888. doi:10.2214/ajr.18.20410. ISSN 0361-803X. PMID 31166758. S2CID 174813195. Archived from the original on 2024-09-25. Retrieved 2021-03-27. https://dx.doi.org/10.2214/ajr.18.20410

  56. Hwang, Thomas J.; Kesselheim, Aaron S.; Vokinger, Kerstin N. (2019-12-17). "Lifecycle Regulation of Artificial Intelligence– and Machine Learning–Based Software Devices in Medicine". JAMA. 322 (23): 2285–2286. doi:10.1001/jama.2019.16842. ISSN 0098-7484. PMID 31755907. S2CID 208230202. Archived from the original on 2024-09-25. Retrieved 2021-03-27. https://dx.doi.org/10.1001/jama.2019.16842

  57. Sharma, Kavita; Manchikanti, Padmavati (2020-10-01). "Regulation of Artificial Intelligence in Drug Discovery and Health Care". Biotechnology Law Report. 39 (5): 371–380. doi:10.1089/blr.2020.29183.ks. ISSN 0730-031X. S2CID 225540889. Archived from the original on 2024-09-25. Retrieved 2021-03-27. https://dx.doi.org/10.1089/blr.2020.29183.ks

  58. Petkus, Haroldas; Hoogewerf, Jan; Wyatt, Jeremy C (2020). "What do senior physicians think about AI and clinical decision support systems: Quantitative and qualitative analysis of data from specialty societies". Clinical Medicine. 20 (3): 324–328. doi:10.7861/clinmed.2019-0317. ISSN 1470-2118. PMC 7354034. PMID 32414724. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7354034

  59. Cheng, Jerome Y.; Abel, Jacob T.; Balis, Ulysses G.J.; McClintock, David S.; Pantanowitz, Liron (2021). "Challenges in the Development, Deployment, and Regulation of Artificial Intelligence in Anatomic Pathology". The American Journal of Pathology. 191 (10): 1684–1692. doi:10.1016/j.ajpath.2020.10.018. ISSN 0002-9440. PMID 33245914. S2CID 227191875. https://doi.org/10.1016%2Fj.ajpath.2020.10.018

  60. Bredt, Stephan (2019-10-04). "Artificial Intelligence (AI) in the Financial Sector—Potential and Public Strategies". Frontiers in Artificial Intelligence. 2: 16. doi:10.3389/frai.2019.00016. ISSN 2624-8212. PMC 7861258. PMID 33733105. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861258

  61. Gurkaynak, Gonenc; Yilmaz, Ilay; Haksever, Gunes (2016). "Stifling artificial intelligence: Human perils". Computer Law & Security Review. 32 (5): 749–758. doi:10.1016/j.clsr.2016.05.003. ISSN 0267-3649.

  62. Iphofen, Ron; Kritikos, Mihalis (2019-01-03). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science. 16 (2): 170–184. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041. S2CID 59298502.

  63. Gurkaynak, Gonenc; Yilmaz, Ilay; Haksever, Gunes (2016). "Stifling artificial intelligence: Human perils". Computer Law & Security Review. 32 (5): 749–758. doi:10.1016/j.clsr.2016.05.003. ISSN 0267-3649.

  64. AI principles: Recommendations on the ethical use of artificial intelligence by the Department of Defense (PDF). Washington, DC: United States Defense Innovation Board. 2019. OCLC 1126650738. Archived from the original (PDF) on 2020-01-14. Retrieved 2020-03-28. https://web.archive.org/web/20200114222649/https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF

  65. Babuta, Alexander; Oswald, Marion; Janjeva, Ardi (2020). Artificial Intelligence and UK National Security: Policy Considerations (PDF). London: Royal United Services Institute. Archived from the original (PDF) on 2020-05-02. Retrieved 2020-04-28. https://web.archive.org/web/20200502044604/https://rusi.org/sites/default/files/ai_national_security_final_web_version.pdf

  66. "Robots with Guns: The Rise of Autonomous Weapons Systems". Snopes.com. 21 April 2017. Archived from the original on 25 September 2024. Retrieved 24 December 2017. https://www.snopes.com/2017/04/21/robots-with-guns/

  67. Bento, Lucas (2017). "No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law". Harvard Scholarship Depository. Archived from the original on 2020-03-23. Retrieved 2019-09-14. https://dash.harvard.edu/handle/1/33813394

  68. Kissinger, Henry (1 November 2021). "The Challenge of Being Human in the Age of AI". The Wall Street Journal. Archived from the original on 4 November 2021. Retrieved 4 November 2021.

  69. Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. Bibcode:2015PhyS...90a8001S. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949. https://doi.org/10.1088%2F0031-8949%2F90%2F1%2F018001

  70. Barrett, Anthony M.; Baum, Seth D. (2016-05-23). "A model of pathways to artificial superintelligence catastrophe for risk and decision analysis". Journal of Experimental & Theoretical Artificial Intelligence. 29 (2): 397–414. arXiv:1607.07730. doi:10.1080/0952813x.2016.1186228. ISSN 0952-813X. S2CID 928824.

  71. Barrett, Anthony M.; Baum, Seth D. (2016-05-23). "A model of pathways to artificial superintelligence catastrophe for risk and decision analysis". Journal of Experimental & Theoretical Artificial Intelligence. 29 (2): 397–414. arXiv:1607.07730. doi:10.1080/0952813x.2016.1186228. ISSN 0952-813X. S2CID 928824.

  72. Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. Bibcode:2015PhyS...90a8001S. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949. https://doi.org/10.1088%2F0031-8949%2F90%2F1%2F018001

  73. Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. Bibcode:2015PhyS...90a8001S. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949. https://doi.org/10.1088%2F0031-8949%2F90%2F1%2F018001

  74. Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. Bibcode:2015PhyS...90a8001S. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949. https://doi.org/10.1088%2F0031-8949%2F90%2F1%2F018001

  75. Gurkaynak, Gonenc; Yilmaz, Ilay; Haksever, Gunes (2016). "Stifling artificial intelligence: Human perils". Computer Law & Security Review. 32 (5): 749–758. doi:10.1016/j.clsr.2016.05.003. ISSN 0267-3649.

  76. Boyd, Matthew; Wilson, Nick (2017-11-01). "Rapid developments in Artificial Intelligence: how might the New Zealand government respond?". Policy Quarterly. 13 (4). doi:10.26686/pq.v13i4.4619. ISSN 2324-1101. https://doi.org/10.26686%2Fpq.v13i4.4619

  77. Innovation, Science and Economic Development Canada (2019-05-16). "Declaration of the International Panel on Artificial Intelligence". gcnws. Archived from the original on 2020-03-29. Retrieved 2020-03-29. https://www.canada.ca/en/innovation-science-economic-development/news/2019/05/declaration-of-the-international-panel-on-artificial-intelligence.html

  78. Simonite, Tom (2020-01-08). "The world has a plan to rein in AI—but the US doesn't like it". Wired. Archived from the original on 2020-04-18. Retrieved 2020-03-29. https://www.wired.com/story/world-plan-rein-ai-us-doesnt-like/

  79. "AI Regulation: Has the Time Arrived?". InformationWeek. 24 February 2020. Archived from the original on 2020-05-23. Retrieved 2020-03-29. https://www.informationweek.com/big-data/ai-machine-learning/ai-regulation-has-the-time-arrived/a/d-id/1337099

  80. UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.

  81. "Community". GPAI. Archived from the original on March 30, 2023. https://gpai.ai/community/

  82. UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.

  83. "AI-Principles Overview". OECD.AI. Archived from the original on 2023-10-23. Retrieved 2023-10-20. https://oecd.ai/en/ai-principles

  84. "AI Regulation: Has the Time Arrived?". InformationWeek. 24 February 2020. Archived from the original on 2020-05-23. Retrieved 2020-03-29. https://www.informationweek.com/big-data/ai-machine-learning/ai-regulation-has-the-time-arrived/a/d-id/1337099

  85. G20 Ministerial Statement on Trade and Digital Economy (PDF). Tsukuba City, Japan: G20. 2019. https://www.mofa.go.jp/mofaj/files/000486596.pdf

  86. "International AI ethics panel must be independent". Nature. 572 (7770): 415. 2019-08-21. Bibcode:2019Natur.572R.415.. doi:10.1038/d41586-019-02491-x. PMID 31435065. https://doi.org/10.1038%2Fd41586-019-02491-x

  87. Guidelines for AI Procurement (PDF). Cologny/Geneva: World Economic Forum. 2019. Archived (PDF) from the original on 2020-07-17. Retrieved 2020-04-28. http://www3.weforum.org/docs/WEF_Guidelines_for_AI_Procurement.pdf

  88. White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1. https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf

  89. Babuta, Alexander; Oswald, Marion; Janjeva, Ardi (2020). Artificial Intelligence and UK National Security: Policy Considerations (PDF). London: Royal United Services Institute. Archived from the original (PDF) on 2020-05-02. Retrieved 2020-04-28. https://web.archive.org/web/20200502044604/https://rusi.org/sites/default/files/ai_national_security_final_web_version.pdf

  90. "High-Level Event: Artificial Intelligence and Robotics – Reshaping the Future of Crime, Terrorism and Security". UNICRI. Archived from the original on 2022-07-18. Retrieved 2022-07-18. https://unicri.it/news/article/AI_Robotics_Crime_Terrorism_Security

  91. "Towards Responsible Artificial Intelligence Innovation". UNICRI. July 2020. Archived from the original on 2022-07-05. Retrieved 2022-07-18. https://unicri.it/towards-responsible-artificial-intelligence-innovation

  92. NíFhaoláin, Labhaoise; Hines, Andrew; Nallur, Vivek (2020). Assessing the Appetite for Trustworthiness and the Regulation of Artificial Intelligence in Europe (PDF). Dublin: Technological University Dublin, School of Computer Science, Dublin. pp. 1–12. Archived (PDF) from the original on 2021-01-15. Retrieved 2021-03-27. This article incorporates text available under the CC BY 4.0 license. (The CC BY 4.0 license means that everyone has the right to reuse the text quoted here, or other parts of the original article itself, provided they credit the authors. More info: Creative Commons license) Changes were made as follows: citations removed and minor grammatical amendments. http://ceur-ws.org/Vol-2771/AICS2020_paper_53.pdf

  93. UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.

  94. "Recommendation on the ethics of artificial intelligence". UNESCO. 2020-02-27. Archived from the original on 2022-07-18. Retrieved 2022-07-18. https://en.unesco.org/artificial-intelligence/ethics

  95. Nindler, Reinmar (2019-03-11). "The United Nation's Capability to Manage Existential Risks with a Focus on Artificial Intelligence". International Community Law Review. 21 (1): 5–34. doi:10.1163/18719732-12341388. ISSN 1871-9740. S2CID 150911357. Archived from the original on 2022-08-30. Retrieved 2022-08-30. https://brill.com/view/journals/iclr/21/1/article-p5_3.xml

  96. Taddeo, Mariarosaria; Floridi, Luciano (April 2018). "Regulate artificial intelligence to avert cyber arms race". Nature. 556 (7701): 296–298. Bibcode:2018Natur.556..296T. doi:10.1038/d41586-018-04602-6. PMID 29662138. https://doi.org/10.1038%2Fd41586-018-04602-6

  97. "The Munich Convention on AI, Data and Human Rights". February 2025. https://www.researchgate.net/publication/380245681_The_Munich_Convention_on_AI_Data_and_Human_Rights

  98. Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body. Regulation of artificial intelligence in selected jurisdictions. LCCN 2019668143. OCLC 1110727808.

  99. Popova, Anna V.; Gorokhova, Svetlana S.; Abramova, Marianna G.; Balashkina, Irina V. (2021), The System of Law and Artificial Intelligence in Modern Russia: Goals and Instruments of Digital Modernization, Studies in Systems, Decision and Control, vol. 314, Cham: Springer International Publishing, pp. 89–96, doi:10.1007/978-3-030-56433-9_11, ISBN 978-3-030-56432-2, S2CID 234309883, archived from the original on 2024-09-25, retrieved 2021-03-27

  100. "OECD Observatory of Public Sector Innovation – Ai Strategies and Public Sector Components". 21 November 2019. Archived from the original on 2024-09-25. Retrieved 2020-05-04. https://oecd-opsi.org/projects/ai/strategies/

  101. Berryhill, Jamie; Heang, Kévin Kok; Clogher, Rob; McBride, Keegan (2019). Hello, World: Artificial Intelligence and its Use in the Public Sector (PDF). Paris: OECD Observatory of Public Sector Innovation. Archived (PDF) from the original on 2019-12-20. Retrieved 2020-05-05. https://oecd-opsi.org/wp-content/uploads/2019/11/AI-Report-Online.pdf

  102. Artificial intelligence in society. Paris: Organisation for Economic Co-operation and Development. 11 June 2019. ISBN 978-92-64-54519-9. OCLC 1105926611.

  103. Campbell, Thomas A. (2019). Artificial Intelligence: An Overview of State Initiatives (PDF). Evergreen, CO: FutureGrasp, LLC. Archived from the original (PDF) on March 31, 2020. https://web.archive.org/web/20200331140959/http://www.unicri.it/in_focus/files/Report_AI-An_Overview_of_State_Initiatives_FutureGrasp_7-23-19.pdf

  104. Bradford, Anu (2023-06-27). "The Race to Regulate Artificial Intelligence". Foreign Affairs. ISSN 0015-7120. Archived from the original on 2023-08-11. Retrieved 2023-08-11. https://www.foreignaffairs.com/united-states/race-regulate-artificial-intelligence

  105. "Australia needs a national approach to AI strategy". Information Age. Retrieved 2023-11-08. https://ia.acs.org.au/article/2023/australia-needs-a-national-approach-to-ai-strategy.html

  106. "Australia needs a national approach to AI strategy". Information Age. Retrieved 2023-11-08. https://ia.acs.org.au/article/2023/australia-needs-a-national-approach-to-ai-strategy.html

  107. "AI Watch: Global regulatory tracker - Australia". whitecase.com. 16 December 2024. Retrieved May 8, 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-australia

  108. "AI Watch: Global regulatory tracker - Australia". whitecase.com. 16 December 2024. Retrieved May 8, 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-australia

  109. "Câmara aprova marco legal da inteligência artificial no Brasil". Revista Globo Rural (in Brazilian Portuguese). 2022-08-24. Retrieved 2025-06-16. https://globorural.globo.com/Noticias/Politica/noticia/2021/09/camara-aprova-marco-legal-da-inteligencia-artificial-no-brasil.html

  110. Belli, Luca; Curzi, Yasmin; Gaspar, Walter B. (2023-04-01). "AI regulation in Brazil: Advancements, flows, and need to learn from the data protection experience". Computer Law & Security Review. 48: 105767. doi:10.1016/j.clsr.2022.105767. ISSN 2212-473X. https://www.sciencedirect.com/science/article/pii/S0267364922001108

  111. "Insufficiency of Ethical Principles for the Regulation of Artificial Intelligence: Antiracism and Antidiscrimination as Vectors for AI Regulation in Brazil". Data Privacy Brasil Research. Retrieved 2025-06-16. https://www.dataprivacybr.org/en/documentos/insufficiency-of-ethical-principles-for-the-regulation-of-artificial-intelligence-antiracism-and-antidiscrimination-as-vectors-for-ai-regulation-in-brazil/

  112. Belli, Luca; Curzi, Yasmin; Gaspar, Walter B. (2023-04-01). "AI regulation in Brazil: Advancements, flows, and need to learn from the data protection experience". Computer Law & Security Review. 48: 105767. doi:10.1016/j.clsr.2022.105767. ISSN 2212-473X. https://www.sciencedirect.com/science/article/pii/S0267364922001108

  113. "Brazil: Introduced Bill No. 2338 of 2023 regulating the use of Artificial Intelligence, including algorithm design and technical standards". digitalpolicyalert.org. 2023. Retrieved 16 June 2025. https://digitalpolicyalert.org/event/11237-introduced-bill-no-2338-of-2023-regulating-the-use-of-artificial-intelligence-including-algorithm-design-and-technical-standards

  114. UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.

  115. UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.

  116. UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.

  117. UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.

  118. Innovation, Science and Economic Development Canada (2022-06-22). "Government of Canada launches second phase of the Pan-Canadian Artificial Intelligence Strategy". www.canada.ca. Archived from the original on 2023-10-26. Retrieved 2023-10-24. https://www.canada.ca/en/innovation-science-economic-development/news/2022/06/government-of-canada-launches-second-phase-of-the-pan-canadian-artificial-intelligence-strategy.html

  119. Canada, Government of (2022-08-18). "Bill C-27 summary: Digital Charter Implementation Act, 2022". ised-isde.canada.ca. Archived from the original on 2023-12-20. Retrieved 2023-10-24. https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter/bill-summary-digital-charter-implementation-act-2020

  120. "Government Bill (House of Commons) C-27 (44–1) – First Reading – Digital Charter Implementation Act, 2022 – Parliament of Canada". www.parl.ca. Retrieved 2022-07-12. https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading

  121. "Artificial Intelligence and Data Act". Innovation, Science and Economic Development Canada. 2023-09-27. Retrieved May 4, 2025. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act

  122. "AI Watch: Global regulatory tracker – Canada". Whitecase.com. 2024-12-16. Retrieved May 8, 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-canada

  123. "AI Watch: Global regulatory tracker – Canada". Whitecase.com. 2024-12-16. Retrieved May 8, 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-canada

  124. State Council China. "New Generation of Artificial Intelligence Development Plan". www.unodc.org. Archived from the original on June 7, 2023. Retrieved 2022-07-18. https://web.archive.org/web/20230607094245/https://www.unodc.org/ji/en/resdb/data/chn/2017/new_generation_of_artificial_intelligence_development_plan.html

  125. Department of International Cooperation Ministry of Science and Technology (September 2017). "Next Generation Artificial Intelligence Development Plan Issued by State Council" (PDF). China Science & Technology Newsletter (17): 2–12. Archived from the original (PDF) on January 21, 2022 – via Ministry of Foreign Affairs of China. https://web.archive.org/web/20220121145209/https://www.mfa.gov.cn/ce/cefi/eng/kxjs/P020171025789108009001.pdf

  126. Wu, Fei; Lu, Cewu; Zhu, Mingjie; Chen, Hao; Zhu, Jun; Yu, Kai; Li, Lei; Li, Ming; Chen, Qianfeng; Li, Xi; Cao, Xudong (2020). "Towards a new generation of artificial intelligence in China". Nature Machine Intelligence. 2 (6): 312–316. doi:10.1038/s42256-020-0183-4. ISSN 2522-5839. S2CID 220507829. Archived from the original on 2022-07-18. Retrieved 2022-07-18. https://www.nature.com/articles/s42256-020-0183-4

  127. "Ethical Norms for New Generation Artificial Intelligence Released". Center for Security and Emerging Technology. Archived from the original on 2023-02-10. Retrieved 2022-07-18. https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/

  128. "China just gave the world a blueprint for reigning in generative A.I." Fortune. Archived from the original on 2023-07-24. Retrieved 2023-07-24. https://fortune.com/2023/07/14/china-ai-regulations-offer-blueprint/

  129. "Navigating the Complexities of AI Regulation in China". Reed Smith. August 2024. Retrieved 2025-05-08. https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china

  130. Sharma, Animesh Kumar; Sharma, Rahul (2024). "Comparative Analysis of Data Protection Laws and ai Privacy Risks in brics Nations: A Comprehensive Examination". Global Journal of Comparative Law. 13 (1): 56–85. doi:10.1163/2211906X-13010003. https://brill.com/view/journals/gjcl/13/1/article-p56_003.xml

  131. Sheehan, Matt (2024-02-27). "Tracing the Roots of China's AI Regulations". Carnegie Endowment for International Peace. Retrieved 2025-05-06. https://carnegieendowment.org/research/2024/02/tracing-the-roots-of-chinas-ai-regulations

  132. "Navigating the Complexities of AI Regulation in China". Reed Smith. August 2024. Retrieved 2025-05-08. https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china

  133. "Navigating the Complexities of AI Regulation in China". Reed Smith. August 2024. Retrieved 2025-05-08. https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china

  134. Sheehan, Matt (2024-02-27). "Tracing the Roots of China's AI Regulations". Carnegie Endowment for International Peace. Retrieved 2025-05-06. https://carnegieendowment.org/research/2024/02/tracing-the-roots-of-chinas-ai-regulations

  135. National Planning Department, Ministry of Information and Communications Technology, and Superintendency of Industry and Commerce (2018). CONPES 3920. Retrieved June 16, 2025. https://colaboracion.dnp.gov.co/CDT/Conpes/Econ%C3%B3micos/3920.pdf

  136. Ministry of Information and Communications Technology (2021). Ethical Framework in Colombia. Retrieved June 16, 2025. https://mintic.gov.co/portal/inicio/Sala-de-prensa/Noticias/208109:Colombia-adopta-de-forma-temprana-recomendaciones-de-etica-en-Inteligencia-Artificial-de-la-Unesco-para-la-region#:~:text=Colombia%20se%20convirti%C3%B3%20en%20uno,de%20las%20Naciones%20Unidas%20para

  137. Ibid

  138. Ibid

  139. Cumbre Ministerial y de Altas Autoridades de América Latina y el Caribe (2023). Santiago Declaration. Retrieved June 16, 2025. https://minciencia.gob.cl/uploads/filer_public/40/2a/402a35a0-1222-4dab-b090-5c81bbf34237/declaracion_de_santiago.pdf

  140. Ministry of Science, Technology and Innovation (2024). Road Map for an Ethical and Sustainable AI Adoption. Retrieved June 16, 2025. https://minciencias.gov.co/sites/default/files/upload/noticias/hoja_de_ruta_adopcion_etica_y_sostenible_de_inteligencia_artificial_colombia_0.pdf

  141. Superintendence of Industry and Commerce (2024). External Circular 2, 2024. Retrieved June 16, 2025. https://sedeelectronica.sic.gov.co/sites/default/files/normativa/Circular%20Externa%20No.%20002%20del%2021%20de%20agosto%20de%202024.pdf

  142. Judiciary Council (2024). Agreement PCSJA24-12243. Retrieved June 16, 2025. https://actosadministrativos.ramajudicial.gov.co/GetFile.ashx?url=%7E%2FApp_Data%2FUpload%2FPCSJA24-12243.pdf

  143. OECD (2024). Recommendation of the Council on Artificial Intelligence. Retrieved June 16, 2025. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

  144. UN (2024). Global Digital Compact. Retrieved June 16, 2025. https://www.un.org/global-digital-compact/en

  145. UN (2024). Resolution A/78/L.49. Retrieved June 16, 2025. https://docs.un.org/en/A/78/L.49

  146. National Planning Department (2025). CONPES 4144. Retrieved June 16, 2025. https://colaboracion.dnp.gov.co/CDT/Conpes/Econ%C3%B3micos/4144.pdf

  147. Constitutional Court of Colombia (2025). Sentencia T-067/25. Retrieved June 16, 2025. https://www.corteconstitucional.gov.co/Relatoria/2025/T-067-25.htm

  148. "Council of Europe and Artificial Intelligence". Artificial Intelligence. Archived from the original on 2024-01-19. Retrieved 2021-07-29. https://www.coe.int/en/web/artificial-intelligence/home

  149. NíFhaoláin, Labhaoise; Hines, Andrew; Nallur, Vivek (2020). Assessing the Appetite for Trustworthiness and the Regulation of Artificial Intelligence in Europe (PDF). Dublin: Technological University Dublin, School of Computer Science, Dublin. pp. 1–12. Archived (PDF) from the original on 2021-01-15. Retrieved 2021-03-27. This article incorporates text available under the CC BY 4.0 license. (The CC BY 4.0 license means that everyone has the right to reuse the text quoted here, or other parts of the original article itself, provided they credit the authors. More info: Creative Commons license) Changes were made as follows: citations removed and minor grammatical amendments. http://ceur-ws.org/Vol-2771/AICS2020_paper_53.pdf

  150. "The Framework Convention on Artificial Intelligence". Council of Europe. Archived from the original on 2024-09-05. Retrieved 2024-09-05. https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence

  151. "Council of Europe opens first ever global treaty on AI for signature". Council of Europe. 5 September 2024. Archived from the original on 2024-09-17. Retrieved 2024-09-17. https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature

  152. "Artificial Intelligence | MPO". mpo.gov.cz. Retrieved 2025-06-16. https://mpo.gov.cz/en/business/digital-economy/artificial-intelligence/

  153. "Czechia as a technological leader. Government approved the National Strategy for Artificial Intelligence of the Czech Republic 2030 | MPO". mpo.gov.cz. Retrieved 2025-06-16. https://mpo.gov.cz/en/guidepost/for-the-media/press-releases/czechia-as-a-technological-leader--government-approved-the-national-strategy-for-artificial-intelligence-of-the-czech-republic-2030--282278/

  154. Peukert, Christian; Bechtold, Stefan; Kretschmer, Tobias; Batikas, Michail (2020-09-30). "Regulatory export and spillovers: How GDPR affects global markets for data". CEPR. Archived from the original on 2023-10-26. Retrieved 2023-10-26. https://cepr.org/voxeu/columns/regulatory-export-and-spillovers-how-gdpr-affects-global-markets-data

  155. Coulter, Martin (2023-08-24). "Big Tech braces for EU Digital Services Act regulations". Reuters. Archived from the original on 2023-10-26. Retrieved 2023-10-26. https://www.reuters.com/technology/big-tech-braces-roll-out-eus-digital-services-act-2023-08-24/

  156. "Europe's new role in digital regulation". Le Monde.fr. 2023-08-28. Archived from the original on 2023-10-26. Retrieved 2023-10-26. https://www.lemonde.fr/en/opinion/article/2023/08/28/europe-s-new-role-in-digital-regulation_6112363_23.html

  157. Satariano, Adam (2023-06-14). "Europeans Take a Major Step Toward Regulating A.I." The New York Times. ISSN 0362-4331. Archived from the original on 2023-10-26. Retrieved 2023-10-25. https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html

  158. Browne, Ryan (2023-06-14). "EU lawmakers pass landmark artificial intelligence regulation". CNBC. Archived from the original on 2023-10-26. Retrieved 2023-10-25. https://www.cnbc.com/2023/06/14/eu-lawmakers-pass-landmark-artificial-intelligence-regulation.html

  159. NíFhaoláin, Labhaoise; Hines, Andrew; Nallur, Vivek (2020). Assessing the Appetite for Trustworthiness and the Regulation of Artificial Intelligence in Europe (PDF). Dublin: Technological University Dublin, School of Computer Science, Dublin. pp. 1–12. Archived (PDF) from the original on 2021-01-15. Retrieved 2021-03-27. This article incorporates text available under the CC BY 4.0 license. (The CC BY 4.0 license means that everyone has the right to reuse the text quoted here, or other parts of the original article itself, provided they credit the authors. More info: Creative Commons license) Changes were made as follows: citations removed and minor grammatical amendments. http://ceur-ws.org/Vol-2771/AICS2020_paper_53.pdf

  160. Anonymous (2018-04-25). "Communication Artificial Intelligence for Europe". Shaping Europe's digital future – European Commission. Archived from the original on 2020-05-13. Retrieved 2020-05-05. https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe

  161. smuhana (2018-06-14). "High-Level Expert Group on Artificial Intelligence". Shaping Europe's digital future – European Commission. Archived from the original on 2019-10-24. Retrieved 2020-05-05. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

  162. Andraško, Jozef; Mesarčík, Matúš; Hamuľák, Ondrej (2021-01-02). "The regulatory intersections between artificial intelligence, data protection and cyber security: challenges and opportunities for the EU legal framework". AI & Society. 36 (2): 623–636. doi:10.1007/s00146-020-01125-5. ISSN 0951-5666. S2CID 230109912. Archived from the original on 2024-09-25. Retrieved 2021-03-27. https://dx.doi.org/10.1007/s00146-020-01125-5

  163. "Ethics guidelines for trustworthy AI". European Commission. 2019. Archived from the original on 2023-03-29. Retrieved 2022-05-30. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

  164. "Policy and investment recommendations for trustworthy Artificial Intelligence". Shaping Europe's digital future – European Commission. 2019-06-26. Retrieved 2020-05-05. https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence

  165. NíFhaoláin, Labhaoise; Hines, Andrew; Nallur, Vivek (2020). Assessing the Appetite for Trustworthiness and the Regulation of Artificial Intelligence in Europe (PDF). Dublin: Technological University Dublin, School of Computer Science, Dublin. pp. 1–12. Archived (PDF) from the original on 2021-01-15. Retrieved 2021-03-27. This article incorporates text available under the CC BY 4.0 license. (The CC BY 4.0 license means that everyone has the right to reuse the text quoted here, or other parts of the original article itself, provided they credit the authors. More info: Creative Commons license) Changes were made as follows: citations removed and minor grammatical amendments. http://ceur-ws.org/Vol-2771/AICS2020_paper_53.pdf

  166. "White Paper on Artificial Intelligence – a European approach to excellence and trust". European Commission. 19 February 2020. Archived from the original on 2024-01-05. Retrieved 2021-06-07. https://digital-strategy.ec.europa.eu/en/consultations/white-paper-artificial-intelligence-european-approach-excellence-and-trust

  167. Broadbent, Meredith (17 March 2021). "What's Ahead for a Cooperative Regulatory Agenda on Artificial Intelligence?". www.csis.org. Archived from the original on 7 June 2021. Retrieved 2021-06-07. https://www.csis.org/analysis/whats-ahead-cooperative-regulatory-agenda-artificial-intelligence

  168. European Commission. (2020). White paper on artificial intelligence: a European approach to excellence and trust. OCLC 1141850140.

  169. Heikkilä, Melissa (2021-04-14). "POLITICO AI: Decoded: The EU's AI rules — Finland talks to machines — Facebook's fairness project" (newsletter). POLITICO. Retrieved 2021-05-14. https://www.politico.eu/newsletter/ai-decoded/politico-ai-decoded-transatlantic-schisms-finland-talks-to-machines-facebooks-fairness-project/

  170. European Commission (2021-04-21). Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence Archived 2021-05-14 at the Wayback Machine (press release). Retrieved 2021-05-14. https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682

  171. Pery, Andrew (2021-10-06). "Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities". DeepAI. Archived from the original on 2022-02-18. Retrieved 2022-02-27. https://deepai.org/publication/trustworthy-artificial-intelligence-and-process-mining-challenges-and-opportunities

  172. Browne, Ryan (2023-05-15). "Europe takes aim at ChatGPT with what might soon be the West's first A.I. law. Here's what it means". CNBC. Retrieved 2023-10-25. https://www.cnbc.com/2023/05/15/eu-ai-act-europe-takes-aim-at-chatgpt-with-landmark-regulation.html

  173. Veale, Michael; Borgesius, Frederik Zuiderveen (2021-08-01). "Demystifying the Draft EU Artificial Intelligence Act — Analysing the good, the bad, and the unclear elements of the proposed approach". Computer Law Review International. 22 (4): 97–112. arXiv:2107.03721. doi:10.9785/cri-2021-220402. ISSN 2194-4164. S2CID 235765823. Archived from the original on 2023-03-26. Retrieved 2023-01-12. https://www.degruyter.com/document/doi/10.9785/cri-2021-220402/html

  174. van Kolfschooten, Hannah (January 2022). "EU regulation of artificial intelligence: Challenges for patients' rights". Common Market Law Review. 59 (1): 81–112. doi:10.54648/COLA2022005. S2CID 248591427. Archived from the original on 2024-09-25. Retrieved 2023-12-10. https://dare.uva.nl/personal/pure/en/publications/eu-regulation-of-artificial-intelligence-challenges-for-patients-rights(7393eabd-82ef-4a92-9ea8-9d3c2a21eb1a).html

  175. Coulter, Martin (December 7, 2023). "What is the EU AI Act and when will regulation come into effect?". Reuters. Archived from the original on 2023-12-10. Retrieved 2024-06-01. https://www.reuters.com/technology/what-are-eus-landmark-ai-rules-2023-12-06/

  176. Bertuzzi, Luca (December 7, 2023). "AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement". euractiv. Archived from the original on January 8, 2024. Retrieved June 1, 2024. https://www.euractiv.com/section/artificial-intelligence/news/ai-act-eu-policymakers-nail-down-rules-on-ai-models-butt-heads-on-law-enforcement/

  177. Browne, Ryan (2024-05-21). "World's first major law for artificial intelligence gets final EU green light". CNBC. Archived from the original on 2024-05-21. Retrieved 2024-06-01. https://www.cnbc.com/2024/05/21/worlds-first-major-law-for-artificial-intelligence-gets-final-eu-green-light.html

  178. "Artificial Intelligence Act: MEPs adopt landmark law". European Parliament. 2024-03-13. Archived from the original on 2024-03-15. Retrieved 2024-06-01. https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law

  179. "Experts react: The EU made a deal on AI rules. But can regulators move at the speed of tech?". Atlantic Council. 11 December 2023. https://www.atlanticcouncil.org/blogs/new-atlanticist/experts-react/experts-react-the-eu-made-a-deal-on-ai-rules-but-can-regulators-move-at-the-speed-of-tech/

  180. "European approach to artificial intelligence | Shaping Europe's digital future". digital-strategy.ec.europa.eu. 2024-11-20. Retrieved 2024-12-09. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

  181. Natale, Lara (February 2022). "EU's digital ambitions beset with strategic dissonance". Encompass. Retrieved 25 February 2022. https://encompass-europe.com/comment/eus-digital-ambitions-beset-with-strategic-dissonance

  182. Andraško, Jozef; Mesarčík, Matúš; Hamuľák, Ondrej (2021-01-02). "The regulatory intersections between artificial intelligence, data protection and cyber security: challenges and opportunities for the EU legal framework". AI & Society. 36 (2): 623–636. doi:10.1007/s00146-020-01125-5. ISSN 0951-5666. S2CID 230109912. Archived from the original on 2024-09-25. Retrieved 2021-03-27. https://dx.doi.org/10.1007/s00146-020-01125-5

  183. Bertuzzi, Luca; Killeen, Molly (17 September 2021). "Digital Brief powered by Google: make it or break it, Chips Act, showing the path". Euractiv. Retrieved 25 February 2022. https://www.euractiv.com/section/digital/news/digital-brief-powered-by-google-make-it-or-break-it-showing-the-path/

  184. Propp, Kenneth (7 February 2022). "France's new mantra: liberty, equality, digital sovereignty". Atlantic Council. Archived from the original on 25 February 2022. Retrieved 25 February 2022. https://www.atlanticcouncil.org/blogs/new-atlanticist/frances-new-mantra-liberty-equality-digital-sovereignty/

  185. "Artificial intelligence: EU must pick up the pace". European Court of Auditors. 29 May 2024. Archived from the original on 25 September 2024. Retrieved 29 May 2024. https://www.eca.europa.eu/en/news/NEWS-SR-2024-08

  186. "AI Regulation - Työ- ja elinkeinoministeriö". Työ- ja elinkeinoministeriö. Retrieved 2025-06-16. https://tem.fi/en/ai-regulation

  187. Bundesministerium für Wirtschaft und Klimaschutz (BMWK). ""KI – Made in Germany" etablieren". www.bmwk.de (in German). Archived from the original on 12 June 2023. Retrieved 12 June 2023. https://web.archive.org/web/20230612114711/https://www.bmwk.de/Redaktion/DE/Pressemitteilungen/2020/11/20201130-ki-made-in-germany-etablieren.html

  188. "DIN, DKE und BMWi veröffentlichen Normungsroadmap für Künstliche Intelligenz". all-electronics (in German). Retrieved 12 June 2023. https://www.all-electronics.de/markt/din-dke-und-bmwi-veroeffentlichen-normungsroadmap-fuer-kuenstliche-intelligenz.html

  189. Runze, Gerhard; Haimerl, Martin; Hauer, Marc; Holoyad, Taras; Obert, Otto; Pöhls, Henrich; Tagiew, Rustam; Ziehn, Jens (2023). "Ein Werkzeug für eine gemeinsame KI-Terminologie – Das AI-Glossary als Weg aus Babylon". Java Spektrum (in German) (3): 42–46. Archived from the original on 2024-04-27. Retrieved 2023-06-12. https://webreader.javaspektrum.de/de/profiles/4967c6d5eae1-javaspektrum/editions/javaspektrum-03-2023

  190. Runze, Gerhard; Haimerl, Martin; Hauer, Marc; Holoyad, Taras; Obert, Otto; Pöhls, Henrich; Tagiew, Rustam; Ziehn, Jens (2023). "Ein Werkzeug für eine gemeinsame KI-Terminologie – Das AI-Glossary als Weg aus Babylon". Java Spektrum (in German) (3): 42–46. Archived from the original on 2024-04-27. Retrieved 2023-06-12. https://webreader.javaspektrum.de/de/profiles/4967c6d5eae1-javaspektrum/editions/javaspektrum-03-2023

  191. "Normungsroadmap Künstliche Intelligenz". www.dke.de (in German). Retrieved 12 June 2023. https://www.dke.de/de/arbeitsfelder/core-safety/normungsroadmap-ki

  192. "Hiroshima Process International Guiding Principles for Advanced AI system | Shaping Europe's digital future". digital-strategy.ec.europa.eu. 2023-10-30. Archived from the original on 2023-11-01. Retrieved 2023-11-01. https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system

  193. "G7 AI Principles and Code of Conduct". Ernst & Young. January 19, 2024. Retrieved May 7, 2025. https://www.ey.com/en_gl/insights/ai/g7-ai-principles-and-code-of-conduct

  194. "G7 AI Principles and Code of Conduct". Ernst & Young. January 19, 2024. Retrieved May 7, 2025. https://www.ey.com/en_gl/insights/ai/g7-ai-principles-and-code-of-conduct

  195. Schildkraut, Peter J. (January 19, 2024). "What the G7 Code of Conduct Means for Global AI Compliance Programs". Arnold & Porter. Retrieved May 8, 2025. https://www.arnoldporter.com/en/perspectives/publications/2024/01/what-the-g7-code-of-conduct-means-for-global-ai-compliance

  196. Cahane, Amir (November 13, 2022). "Israeli AI regulation and policy white paper: a first glance". RAILS Blog. https://blog.ai-laws.org/israeli-ai-regulation-and-policy-white-paper-a-first-glance/

  197. Ministry of Innovation, Science and Technology and the Ministry of Justice (December 12, 2023). "Israel's Policy on Artificial Intelligence Regulation and Ethics". https://www.gov.il/en/pages/ai_2023

  198. "Artificial Intelligence Regulation and Ethics Policy". gov.il. December 17, 2023. Retrieved 2025-05-07. https://www.gov.il/en/pages/ai_2023

  199. Wroble, Sharon (2024-05-03). "Israel Signs Global Treaty to Address Risks of Artificial Intelligence". Times of Israel. Retrieved 2025-05-08. https://www.timesofisrael.com/israel-signs-global-treaty-to-address-risks-of-artificial-intelligence/

  200. Bartoloni, Marzio (11 October 2023). "Cures and artificial intelligence: privacy and the risk of the algorithm that discriminates". https://amp24-ilsole24ore-com.translate.goog/pagina/AFfTkfCB?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=it&_x_tr_pto=wapp

  201. "AI Watch: Global regulatory tracker – Italy". whitecase.com. 2024-12-16. Retrieved 2025-05-09. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-italy

  202. Olivi, Bocchi, Cirotti (2024-05-07). "The road to the AI Act: The Italian approach – Part 3: The Italian national competent AI Authority". Dentons. Retrieved 2025-05-09. https://www.dentons.com/en/insights/articles/2024/may/7/the-road-to-the-ai-act-the-italian-approach-pt-3

  203. "AI Watch: Global regulatory tracker – Italy". whitecase.com. 2024-12-16. Retrieved 2025-05-09. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-italy

  204. The Moroccan Times (2024-04-24). "Morocco Proposes Legislation for National AI Agency". The Moroccan Times. Archived from the original on 2024-04-25. Retrieved 2024-04-25. https://themoroccantimes.com/2024/04/27375/morocco-proposes-legislation-for-national-ai-agency

  205. Buza, Taha, Maria, Sherif (2025-04-09). "DPA Digital Digest: Morocco [2025 Edition]". Digital Policy Alert. Retrieved May 8, 2025. https://digitalpolicyalert.org/digest/dpa-digital-digest-morocco

  206. Buza, Taha, Maria, Sherif (2025-04-09). "DPA Digital Digest: Morocco [2025 Edition]". Digital Policy Alert. Retrieved May 8, 2025. https://digitalpolicyalert.org/digest/dpa-digital-digest-morocco

  207. Buza, Taha, Maria, Sherif (2025-04-09). "DPA Digital Digest: Morocco [2025 Edition]". Digital Policy Alert. Retrieved May 8, 2025. https://digitalpolicyalert.org/digest/dpa-digital-digest-morocco

  208. Rebecca (2023-07-13). "Why is regulating AI such a challenge?". Prime Minister's Chief Science Advisor. Archived from the original on 2024-09-25. Retrieved 2024-08-20. https://www.pmcsa.ac.nz/2023/07/13/why-is-regulating-ai-such-a-challenge/

  209. "Reimagining Regulation for the Age of AI: New Zealand Pilot Project". World Economic Forum. 2020-06-29. https://www.weforum.org/publications/reimagining-regulation-for-the-age-of-ai-new-zealand-pilot-project/

  210. Cann, Geraden (2023-05-25). "Privacy Commission issues warning to companies and organisations using AI". Stuff. Archived from the original on 2024-09-25. Retrieved 2024-08-20. https://www.stuff.co.nz/business/132145529/privacy-commission-issues-warning-to-companies-and-organisations-using-ai

  211. "Artificial Intelligence and the IPPs". www.privacy.org.nz. 2023-09-21. Archived from the original on 2024-08-20. Retrieved 2024-08-20. https://www.privacy.org.nz/publications/guidance-resources/ai/

  212. "Survey finds most Kiwis spooked about malicious AI - minister responds". The New Zealand Herald. 2024-02-21. Archived from the original on 2024-08-20. Retrieved 2024-08-20. https://www.nzherald.co.nz/business/survey-finds-most-kiwis-worried-about-malicious-ai-technology-minister-judith-collins-responds/DJWKCXXSF5CCHPPNE47BTCQDJU/

  213. Arasa, Dale (13 March 2023). "Philippine AI Bill Proposes Agency for Artificial Intelligence". Philippine Daily Inquirer. Archived from the original on 25 September 2024. Retrieved 29 May 2024. https://technology.inquirer.net/122156/philippine-ai-bill-proposes-agency-for-artificial-intelligence

  214. Abarca, Charie (29 May 2024). "Comelec wants AI ban on campaign materials ahead of 2025 polls". Philippine Daily Inquirer. Archived from the original on 29 May 2024. Retrieved 29 May 2024. https://newsinfo.inquirer.net/1946107/comelec-mulls-the-use-of-artificial-intelligence-for-2025-poll-campaign

  215. Ministry of Science of Spain (2018). "Spanish RDI Strategy in Artificial Intelligence" (PDF). www.knowledge4policy.ec.europa.eu. Archived (PDF) from the original on 18 July 2023. Retrieved 9 December 2023. https://knowledge4policy.ec.europa.eu/sites/default/files/Spanish_RDI_strategy_in_AI.pdf

  216. "Chaining the chatbots: Spain closes in on AI Act". POLITICO. 2023-06-22. Retrieved 2023-09-03. https://www.politico.eu/article/spain-artificial-intelligence-ai-act-technology/

  217. Ministry of Science of Spain (2018). "Spanish RDI Strategy in Artificial Intelligence" (PDF). www.knowledge4policy.ec.europa.eu. Retrieved 9 December 2023. https://knowledge4policy.ec.europa.eu/sites/default/files/Spanish_RDI_strategy_in_AI.pdf

  218. "Chaining the chatbots: Spain closes in on AI Act". POLITICO. 2023-06-22. Retrieved 2023-09-03. https://www.politico.eu/article/spain-artificial-intelligence-ai-act-technology/

  219. Castillo, Carlos del (2021-12-28). "España vigilará la Inteligencia Artificial como a los fármacos o los alimentos". elDiario.es (in Spanish). Retrieved 2023-09-03. https://www.eldiario.es/tecnologia/espana-vigilara-inteligencia-artificial-farmacos-alimentos_1_8615818.html

  220. "España comienza el proceso para elegir la sede de la futura Agencia Española de Supervisión de la IA". El Español (in Spanish). 2022-09-13. Retrieved 2023-09-03. https://www.elespanol.com/invertia/disruptores-innovadores/politica-digital/espana/20220913/espana-comienza-proceso-agencia-espanola-supervision-ia/702929910_0.html

  221. Marcos, José (2022-09-12). "El Gobierno inicia con la Agencia de Inteligencia Artificial el traslado de sedes fuera de Madrid". El País (in Spanish). Retrieved 2023-09-03. https://elpais.com/espana/2022-09-12/el-gobierno-inicia-con-la-agencia-de-inteligencia-artificial-el-traslado-de-sedes-fuera-de-madrid.html

  222. "A Coruña acogerá la Agencia Española de Inteligencia Artificial". Europa Press. 2022-12-05. Retrieved 2023-09-03. https://www.europapress.es/economia/noticia-coruna-acogera-agencia-espanola-inteligencia-artificial-20221205132621.html

  223. "El Gobierno aprueba el estatuto de la Agencia Española de Supervisión de la Inteligencia Artificial". Europa Press. 2023-08-22. Retrieved 2023-09-03. https://www.europapress.es/economia/noticia-gobierno-aprueba-estatuto-agencia-espanola-supervision-inteligencia-artificial-20230822165127.html

  224. Guerrini, Federico. "European Countries Race To Set The AI Regulatory Pace". Forbes. Retrieved 2023-09-04. https://www.forbes.com/sites/federicoguerrini/2023/09/04/european-countries-race-to-set-the-ai-regulatory-pace/

  225. "Artificial Intelligence: Overview and Switzerland's regulatory approach". Swiss Federal Office of Communications (OFCOM). 12 February 2025. Retrieved 31 March 2025. https://www.bakom.admin.ch/bakom/en/homepage/digital-switzerland-and-internet/strategie-digitale-schweiz/ai.html

  226. "Digital economy strategy 2015 to 2018". www.ukri.org. 16 February 2015. Archived from the original on 2022-09-01. Retrieved 2022-07-18. https://www.ukri.org/publications/digital-economy-strategy-2015-to-2018

  227. "Digital economy strategy 2015 to 2018". www.ukri.org. 16 February 2015. Archived from the original on 2022-09-01. Retrieved 2022-07-18. https://www.ukri.org/publications/digital-economy-strategy-2015-to-2018

  228. "Data ethics and AI guidance landscape". GOV.UK. Archived from the original on 2023-10-26. Retrieved 2023-10-26. https://www.gov.uk/guidance/data-ethics-and-ai-guidance-landscape

  229. Leslie, David (2019-06-11). "Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector". Zenodo. arXiv:1906.05684. doi:10.5281/zenodo.3240529. S2CID 189762499. Archived from the original on 2020-04-16. Retrieved 2020-04-28. https://zenodo.org/record/3240529

  230. Babuta, Alexander; Oswald, Marion; Janjeva, Ardi (2020). Artificial Intelligence and UK National Security: Policy Considerations (PDF). London: Royal United Services Institute. Archived from the original (PDF) on 2020-05-02. Retrieved 2020-04-28. https://web.archive.org/web/20200502044604/https://rusi.org/sites/default/files/ai_national_security_final_web_version.pdf

  231. "Intelligent security tools". www.ncsc.gov.uk. Archived from the original on 2020-04-06. Retrieved 2020-04-28. https://www.ncsc.gov.uk/collection/intelligent-security-tools

  232. Richardson, Tim. "UK publishes National Artificial Intelligence Strategy". www.theregister.com. Archived from the original on 2023-02-10. Retrieved 2022-01-01. https://www.theregister.com/2021/09/22/uk_10_year_national_ai_strategy/

  233. The National AI Strategy of the UK, 2021 (actions 9 and 10 of the section "Pillar 3 – Governing AI Effectively"). GOV.UK. Archived from the original on 2023-02-10. https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version

  234. "A pro-innovation approach to AI regulation". GOV.UK. Archived from the original on 2023-10-27. Retrieved 2023-10-27. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

  235. Gikay, Asress Adimi (2023-06-08). "How the UK is getting AI regulation right". The Conversation. Archived from the original on 2023-10-27. Retrieved 2023-10-27. https://theconversation.com/how-the-uk-is-getting-ai-regulation-right-206701

  236. Browne, Ryan (2023-06-12). "British Prime Minister Rishi Sunak pitches UK as home of A.I. safety regulation as London bids to be next Silicon Valley". CNBC. Archived from the original on 2023-07-27. Retrieved 2023-10-27. https://www.cnbc.com/2023/06/12/pm-rishi-sunak-pitches-uk-as-geographical-home-of-ai-regulation.html

  237. "AI Safety Summit: introduction (HTML)". GOV.UK. Archived from the original on 2023-10-26. Retrieved 2023-10-27. https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html

  238. "Introducing the AI Safety Institute". GOV.UK. Archived from the original on 2024-07-07. Retrieved 2024-07-08. https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute

  239. Henshall, Will (2024-04-01). "U.S., U.K. Will Partner to Safety Test AI". TIME. Archived from the original on 2024-07-07. Retrieved 2024-07-08. https://time.com/6962503/ai-artificial-intelligence-uk-us-safety/

  240. Weaver, John Frank (2018-12-28). "Regulation of artificial intelligence in the United States". Research Handbook on the Law of Artificial Intelligence: 155–212. doi:10.4337/9781786439055.00018. ISBN 9781786439055. Archived from the original on 2020-06-30. Retrieved 2020-06-29.

  241. "Background on Lethal Autonomous Weapons Systems in the CCW". United Nations Geneva. Archived from the original on 2020-04-27. Retrieved 2020-05-05. https://unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument

  242. "Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System" (PDF). United Nations Geneva. Archived from the original (PDF) on 2020-12-01. Retrieved 2020-05-05. https://web.archive.org/web/20201201120810/https://unog.ch/80256EDD006B8954/(httpAssets)/815F8EE33B64DADDC12584B7004CF3A4/$file/CCW+MSP+2019+CRP.2+Rev+1.pdf

  243. "Robots with Guns: The Rise of Autonomous Weapons Systems". Snopes.com. 21 April 2017. Archived from the original on 25 September 2024. Retrieved 24 December 2017. https://www.snopes.com/2017/04/21/robots-with-guns/

  244. Baum, Seth (2018-09-30). "Countering Superintelligence Misinformation". Information. 9 (10): 244. doi:10.3390/info9100244. ISSN 2078-2489. https://doi.org/10.3390%2Finfo9100244

  245. "Country Views on Killer Robots" (PDF). The Campaign to Stop Killer Robots. Archived (PDF) from the original on 2019-12-22. Retrieved 2020-05-05. https://www.stopkillerrobots.org/wp-content/uploads/2019/10/KRC_CountryViews_25Oct2019rev.pdf

  246. Sayler, Kelley (2020). Artificial Intelligence and National Security: Updated November 10, 2020 (PDF). Washington, DC: Congressional Research Service. Archived (PDF) from the original on May 8, 2020. Retrieved May 27, 2021. https://fas.org/sgp/crs/natsec/R45178.pdf

  247. "Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems". Congressional Research Service. May 15, 2023. Archived from the original on November 1, 2023. Retrieved October 18, 2023. https://crsreports.congress.gov/product/pdf/IF/IF11150