Overview of AI policy in 15 jurisdictions
These approaches illustrate a global shift towards embracing AI’s transformative potential while weighing its ethical and societal implications, and a growing reliance on international cooperation to future-proof technological advancements.

1. CHINA
Summary
China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant data resources. Although no single, overarching AI law (comparable to the EU AI Act) is in place, the country has introduced a multilayered regulatory framework – combining data protection, copyright, AI-specific provisions, and ethical guidelines – to balance technological innovation with national security, content governance, and social stability.
AI landscape
- New Generation AI Development Plan (2017) outlines China’s roadmap to become a global AI leader by 2030, with major milestones set for 2020 and 2025.
- Global AI Governance Initiative (2023), announced by President Xi Jinping, promotes international collaboration and encourages developing nations to take part in shaping AI governance.
- China’s AI oversight is primarily led by the Cyberspace Administration of China (CAC), supported by the National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, with each agency focusing on its sector-specific responsibilities while jointly shaping and enforcing AI policies.
- Tools like OpenAI’s ChatGPT and Google’s Gemini remain officially unavailable in mainland China. However, Chinese tech firms – Alibaba, Baidu, Huawei, Tencent, and the private non-profit Beijing Academy of Artificial Intelligence – widely adopt models such as Meta’s Llama 2 and Llama 3, with Baidu’s Ernie Bot and Alibaba’s Tongyi Qianwen among the leading domestic examples. Notably, the current restrictions do not extend to open-source models, provided they are not used to offer generative AI services to the public.
- DeepSeek, a Chinese AI model launched on 20 January 2025, has rapidly climbed Apple’s App Store rankings due to its cost-effective yet powerful capabilities, prompting discussions about the resources needed to develop AI, the impact on data security, and China’s expanding role in advanced technology.
China’s regulatory landscape for AI is anchored by several core laws and a growing portfolio of AI-specific rules. At the core of this framework are data protection and copyright laws, which provide the legal baseline for AI deployments.
The Personal Information Protection Law (PIPL), enacted in 2021, serves as a direct parallel to the EU’s General Data Protection Regulation (GDPR) by placing strict obligations on how personal data is collected and handled. Significantly, and unlike the GDPR, it clarifies that personal information already in the public domain can be processed without explicit consent as long as such use does not unduly infringe on individuals’ rights or go against their explicit objections. The PIPL also addresses automated decision-making, explicitly barring discriminatory or exploitative algorithmic practices, such as charging different prices to different consumer groups without justification.
Copyright considerations further shape the development of AI. Under the Chinese Copyright Law, outputs generated entirely by AI, devoid of human originality, cannot be granted copyright protection. Yet courts have repeatedly recognised that when users meaningfully contribute creative elements through prompts, they can secure copyrights in the resulting works, as illustrated by rulings in cases like Shenzhen Tencent v Shanghai Yingxun. At the same time, developers of generative AI systems have faced legal liabilities when their algorithms inadvertently produce content that violates intellectual property or personality rights, exemplified by high-profile instances involving the unauthorised use of the Ultraman character and imitations of distinctive human voices.
Over the past few years, these broader legal anchors have been reinforced by regulations specifically tailored for algorithmic and generative AI systems. One of the most notable is the Provisions on the Management of Algorithmic Recommendations in Internet Information Services of 2021, which target services deploying recommendation algorithms for personalised news feeds, product rankings, or other user-facing suggestions. Providers deemed capable of shaping public opinion must register with authorities, disclose essential technical details, and implement robust security safeguards. These requirements extend to ensuring transparency in how content is recommended and offering users the option to disable personalisation altogether.
In 2022, China introduced the Provisions on the Administration of Deep Synthesis Internet Information Services to address AI-generated synthetic media. These requirements obligate service providers to clearly label media that has been artificially generated or manipulated, particularly when there is a risk of misleading the public. To facilitate accountability, users must undergo real-name verification, and any provider offering a service with a marked capacity to influence public opinion or mobilise society must conduct additional security assessments.
Interim Measures for the Management of Generative Artificial Intelligence Services, which came into effect on 15 August 2023, apply to a broad range of generative technologies, from large language models to advanced image and audio generators. Led by the Cyberspace Administration of China (CAC), these rules require compliance with existing data and intellectual property laws, including obtaining informed user consent for personal data usage and engaging in comprehensive data labelling. Providers must also detect and block illegal or harmful content, particularly anything that might jeopardise national security, contravene moral standards, or infringe upon IP rights, and are expected to maintain thorough complaint mechanisms and special protective measures for minors.
Where public opinion could be swayed, providers are required to file details of their algorithms for governmental review and may face additional scrutiny if they are deemed highly influential.
Building on these interim measures, the Basic Safety Requirements for Generative AI Services, which came into effect in 2024, take a more granular approach to technical controls. Issued by the National Information Security Standardization Technical Committee (TC260), these requirements outline 31 safety risks, grouped into five categories, ranging from content that undermines socialist core values to discriminatory or infringing materials.
Under these guidelines, training data must be vetted via random spot checks of at least 4,000 items drawn from the entire dataset, ensuring that at least 96 percent is free from illegal or unhealthy information – that is, content containing any of the 31 safety risks listed in the annex.
Providers are similarly obligated to secure explicit consent from individuals whose personal data might be used in model development. If a user prompt is suspected of eliciting unlawful or inappropriate outputs, AI systems must be capable of refusing to comply, and providers are expected to maintain logs of such refusals and accepted queries.
Alongside these binding regulations, the Chinese government and local authorities have published a range of ethical and governance guidelines. The Ethical Norms for New Generation AI, released in 2021 by the National New Generation AI Governance Specialist Committee, articulate six guiding principles, including respect for human welfare, fairness, privacy, and accountability.
While these norms do not themselves impose concrete penalties, they have guided subsequent legislative efforts. In a more formal measure, the 2023 Measures for Scientific and Technological Ethics Review stipulate that institutions engaging in ethically sensitive AI research, particularly those working on large language models with the potential to sway social attitudes, must establish ethics committees.
These committees are subject to national registration, and violations can result in administrative, civil, or even criminal penalties. Local governments, such as those in Shenzhen and Shanghai, have further set up municipal AI ethics committees to oversee particularly high-risk AI projects, often requiring providers to conduct ex-ante risk reviews before introducing new systems.
Under the binding frameworks, providers can be subject to financial penalties, service suspension, or even criminal proceedings if they fail to comply with content governance or user rights.
In 2023, China’s State Council announced that it would draft an AI law. However, since then, China has halted all efforts to unify its AI legislation, instead opting for a piecemeal, sector-focused regulatory strategy that continues to evolve in response to emerging technologies.
2. AUSTRALIA
Summary
Australia takes a principles-based approach to AI governance, blending existing laws, such as privacy and consumer protection, with voluntary standards and whole-of-government policies to encourage both innovation and public trust. There is currently no single, overarching AI law; rather, the government has proposed additional, risk-based mandatory guardrails – especially for ‘high-risk’ AI uses – and issued a policy to ensure responsible adoption of AI across all federal agencies.
AI landscape
- The Digital Transformation Agency (DTA) coordinates cross-government AI policies, while the Department of Industry, Science and Resources (DISR) directs national AI initiatives such as the Voluntary AI Safety Standard. The Office of the Australian Information Commissioner (OAIC) enforces privacy compliance under the Privacy Act, and the Australian Competition and Consumer Commission (ACCC) addresses competition and consumer issues arising from AI-driven practices. Meanwhile, the National Artificial Intelligence Centre propels innovation and fosters startup growth, and certain states – like New South Wales – have established dedicated AI Advisory Committees.
- The Voluntary AI Safety Standard (2024) introduces ten guardrails – such as accountability, transparency, and model testing – that guide organisations toward safe AI practices.
- The High-Risk AI Consultation (2024) is a proposals paper suggesting binding obligations for AI systems capable of large-scale societal impact.
- The Artificial Intelligence Action Plan (2021) and the Artificial Intelligence Technology Roadmap (2019) aim to position Australia as a leader in trusted AI, backed by dedicated funding through the AI Capability Fund and AI and Digital Capability Centres.
- The Policy for the Responsible Use of AI in Government (2024) took effect in September 2024, requiring federal agencies to designate an AI accountability official and publish transparent use statements.
There is no single all-encompassing AI law in Australia. The government has pursued a flexible approach that builds upon privacy protections, consumer safeguards, and voluntary principles while moving steadily towards risk-based regulation of high-impact AI applications.
At the core of Australia’s legal baseline is the Privacy Act 1988, which has been under review to address emerging challenges, including AI-driven data processing and automated decision-making. Updated regulatory guidance clarifies that any personal information handled by an AI system, including inferred or artificially generated data, falls under the Australian Privacy Principles, meaning organisations must collect it lawfully and fairly (with consent for sensitive data), maintain transparency about AI usage, ensure accuracy, and uphold stringent security and oversight measures. Alongside the Privacy Act, the Consumer Data Right facilitates secure data sharing in sectors such as finance and energy, allowing AI-driven products to leverage richer data sets under strict consent mechanisms.
From a consumer protection standpoint, the Australian Consumer Law, enforced by the Australian Competition and Consumer Commission (ACCC), prohibits misleading or unfair conduct. This has occasionally encompassed AI-driven pricing or recommendation algorithms, as exemplified in the ACCC v Trivago case involving deceptive hotel pricing displays.
Various sectors impose complementary rules. The Online Safety Act 2021 addresses harmful or exploitative content, which may include AI-generated deepfakes. The Copyright Act governs the permissible scope of AI training data, while the Corporations Act 2001 influences AI tools used in financial services, such as algorithmic trading and robo-advice.
The government has introduced several AI-specific guidelines and policies to add to these laws:
- Voluntary AI Ethics Principles (2019) promote human-centred values, fairness, security, reliability, transparency, contestability, accountability, and societal well-being.
- Voluntary AI Safety Standard (2024) was issued by DISR and covers accountability, data governance, model testing, and other risk management practices to help organisations innovate responsibly.
- The Policy for the Responsible Use of AI in Government (2024) applies to Australian Public Service (APS) agencies and introduces the ‘Enable, Engage, Evolve’ framework for safe AI adoption. Each agency must designate an accountable official within 90 days to oversee AI governance and, within six months, publish an AI Transparency Statement detailing the agency’s AI use cases, data handling, and risk mitigation steps.
- The government has floated mandatory obligations for AI systems that carry a high risk of societal harm in the Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (2024). Borrowing from models like the EU AI Act and Canada’s proposed AIDA, the paper distinguishes between:
- Category 1 AI: foreseeable uses of AI with known but manageable risks;
- Category 2 AI: more advanced or unpredictable AI systems with the potential for large-scale harm.
Proposed enforcement mechanisms include licensing, registration, and mandatory ex-ante approvals.
A variety of additional AI initiatives complement these policies: the Australian Framework for Generative Artificial Intelligence (AI) in Schools sets guidelines for safe generative AI adoption in education, covering transparency, user protection, and data security; the AFP Technology Strategy guides the use of AI-based tools in federal law enforcement; and the Medical Research Future Fund invests in AI-driven healthcare pilots, such as diagnostics for skin cancer and radiological screenings.
Internationally, Australia aligns with the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Principles, actively collaborating with global partners on AI policy and digital trade.
3. SWITZERLAND
Summary
Switzerland follows a sector-focused, technology-neutral approach to AI regulation, grounded in strong data protection and existing legal frameworks for finance, medtech, and other industries. Although the Federal Council’s 2020 Strategy on Artificial Intelligence sets ethical and societal priorities, there is no single, overarching AI law in force.
AI landscape
- Current AI uses fall under traditional Swiss laws. Switzerland has not enacted an overarching AI law, relying instead on sectoral oversight and cantonal initiatives.
- Oversight responsibilities are distributed among several federal entities and occasionally supplemented by cantonal authorities. For instance, the Federal Data Protection and Information Commissioner (FDPIC) addresses privacy concerns, while the Financial Market Supervisory Authority (FINMA) exercises administrative powers to regulate financial institutions, including the authority to revoke licenses for noncompliance. The Federal Council sets the AI policy agenda. Cantonal governments, for their part, may provide frameworks for local pilot programmes, fostering public-private partnerships and encouraging best practices in AI adoption.
- The Strategy on Artificial Intelligence (2020) emphasises human oversight, data governance, and collaborative R&D to position Switzerland as an innovation hub for finance, medtech, robotics, and precision engineering.
- The Digital Switzerland Strategy is a broad vision for the national digital transformation, including AI. The Interdepartmental Working Group on AI coordinates governance and advises on regulatory gaps.
- In February 2025, Switzerland announced its intention to ratify the Council of Europe Convention on Artificial Intelligence and to make the necessary amendments to Swiss law.
At the core of Swiss data protection is the Revised Federal Act on Data Protection (FADP), which took effect in 2023. It imposes strict obligations on entities that process personal data, extending to AI-driven activities. Under Article 21, the FADP places particular emphasis on automated decision-making, urging transparency when significant personal or economic consequences may result. The FDPIC enforces the law, carrying out investigations and offering guidance, though it lacks broad direct penalty powers.
Beyond data privacy, AI solutions must comply with existing sectoral regulations. In healthcare, the Therapeutic Products Act and the corresponding Medical Devices Ordinance govern AI-based diagnostic tools, with Swissmedic classifying such systems as medical devices when applicable.
In finance, FINMA oversees AI applications in robo-advisory, algorithmic trading, and risk analysis, regularly issuing circulars and risk monitors that highlight expectations for transparency, reliability, and robust risk controls. Other domains, like autonomous vehicles and drones, fall under the jurisdiction of the Federal Department of the Environment, Transport, Energy, and Communications (DETEC), which grants pilot licenses and operational approvals through agencies such as the Federal Office of Civil Aviation (FOCA).
Liability, intellectual property, and non-discrimination matters are similarly addressed through existing legislation. The Product Liability Act, the Civil Code, and the Code of Obligations govern contracts and liability for AI products and services, while the Copyright Act and the Patents Act regulate AI training data usage and software IP rights. The Gender Equality Act and the Disability Discrimination Act may apply if AI outputs result in systematic bias or exclusion.
At a local level, several cantonal innovation hubs, such as Zurich’s Innovation Sandbox for Artificial Intelligence, support pilot projects and produce policy feedback on emerging technologies. The Swiss Supercomputer Project – a collaboration among national labs, Hewlett Packard Enterprise, and NVIDIA – provides high-performance computing resources to bolster AI research in areas ranging from precision engineering to climate simulations. In the same vein, the Swiss AI Initiative is a national effort led by ETH Zurich and the Swiss Federal Institute of Technology Lausanne (EPFL), powered by one of the world’s most advanced GPU supercomputers, uniting experts across Switzerland to develop large-scale, domain-specific AI models.
The Digital Society Initiative at the University of Zurich focuses on interdisciplinary research and public engagement, exploring the ethical, social, and legal impacts of digital transformation.
Switzerland engages with the OECD AI Principles and participates in the Council of Europe Committee on Artificial Intelligence. In November 2023, the Federal Council instructed DETEC and the Federal Department of Foreign Affairs to produce an overview of potential regulatory approaches for artificial intelligence, emphasising transparency, traceability, and alignment with international standards such as the Council of Europe’s AI Convention and the EU’s AI Act.
In February 2025, they presented a plan proposing sector-specific legislative changes in areas like data protection and non-discrimination, along with non-binding measures such as self-declarations and industry-led solutions, to protect fundamental rights, bolster public trust, and support Swiss innovation.
4. TÜRKİYE
Summary
Türkiye strives to become a major regional AI hub by investing in industrial applications, defence innovation, and a rapidly growing tech workforce. While there is no single, overarching AI law at present, a Draft AI Bill introduced in June 2024 is under parliamentary review and, if enacted, will establish guiding principles on safety, transparency, accountability, and privacy, especially for high-risk AI like autonomous vehicles, medical diagnostics, and defence systems.
Existing sectoral legislation, privacy rules under the Law on the Protection of Personal Data, and the National Artificial Intelligence Strategy (2021–2025) shape responsible AI use across industries.
AI landscape
- The Ministry of Industry and Technology fosters industrial AI adoption and skill-building programmes. The Digital Transformation Office leads policy formulation for emerging technologies, including the Draft AI Bill. The Personal Data Protection Authority (KVKK) can investigate and fine organisations for data-related violations, while the Banking Regulation and Supervision Agency (BRSA) enforces financial compliance, particularly for AI-based consumer products. If the Draft AI Bill is enacted, it could either empower these agencies to oversee new AI obligations or create additional supervisory structures.
- The National Artificial Intelligence Strategy (2021–2025) is a roadmap for talent development, data infrastructure, ethical frameworks, and AI hubs to spark local innovation.
- The Draft AI Bill, proposed in June 2024, is pending parliamentary approval. The Draft proposes broad principles, such as safety, transparency, equality, accountability, and privacy, as well as a registration requirement for certain high-risk AI use cases.
- The Personal Data Protection Law, overseen by KVKK, underpins AI-driven processing of personal data and mandates informed consent and data minimisation.
There is no single, overarching AI law. Sectoral regulations play a key role. In banking and finance, the Banking Regulation and Supervision Agency (BRSA) supervises AI-driven credit scoring, risk analysis, and fraud detection, proposing rules mandating explicit consent and algorithmic fairness audits. The defence sector, led by state-owned enterprises such as TUSAŞ and ASELSAN, deploys autonomous drones and advanced targeting systems, although official details remain classified for national security reasons. The automotive industry invests in connected and self-driving vehicles – particularly through TOGG, Türkiye’s national electric car project – aligning with the National Artificial Intelligence Strategy’s push for advanced manufacturing.
The Law on Consumer Protection, the E-commerce Law, and the Turkish Criminal Code collectively impose transparency, fairness, and liability standards on AI-driven advertising, misinformation, and automated decision-making, while the Industrial Property Code governs the permissible use of copyrighted data for AI training and clarifies patentability criteria for AI-based innovations.
While not an EU member, Türkiye often harmonises regulations with EU norms to facilitate trade and ensure cross-border legal compatibility. It also engages in the Global Partnership on Artificial Intelligence (GPAI) and participates in the Council of Europe Committee on Artificial Intelligence.
5. MEXICO
Summary
Mexico does not have a single, overarching AI law or a fully institutionalised national strategy. The 2018 National AI Strategy – commissioned by the British Embassy in Mexico, developed by Oxford Insights and C Minds, and informed by government and expert input – has been influential in articulating principles for ethical AI adoption, open data, and talent development.
However, it has never been formally adopted as an official national plan. AI adoption in the private sector remains limited, although the public sector ranks relatively high in Latin America for AI integration. Data protection laws were previously enforced by the National Institute for Transparency, Access to Information, and Personal Data Protection (INAI), which was eliminated in December 2024 due to budgetary constraints; these responsibilities now fall under the Secretariat of Anti-Corruption and Good Governance (SABG).
AI landscape
- No single agency oversees AI across all industries. Instead, relevant ministries and authorities handle their respective domains, with the Secretariat of Economy promoting innovation and investment, and the Secretariat of Infrastructure, Communications and Transportation (SICT) expanding the digital infrastructure necessary for AI deployment. Until December 2024, the National Institute for Transparency, Access to Information, and Personal Data Protection (INAI) intervened when data protection issues arose, but lacked a broader AI oversight mandate. Since its elimination, these functions have been carried out by the SABG.
- The 2018 National AI Strategy outlined fairness, accountability, and a robust AI workforce, but remains unevenly implemented.
At the heart of Mexico’s data governance is the Federal Law on the Protection of Personal Data Held by Private Parties (2010). This law imposes consent, transparency, and security obligations on any entity handling personal data, including AI-driven projects. Until December 2024, the National Institute for Transparency, Access to Information, and Personal Data Protection enforced these rules; enforcement responsibilities have since been transferred to the Secretariat of Anti-Corruption and Good Governance. Although its powers were primarily focused on privacy, INAI periodically offered guidance on best practices for AI-based solutions, such as public chatbots and e-commerce platforms.
Beyond privacy, other laws – such as the Consumer Protection Law and the E-Commerce Law – can indirectly govern AI use, particularly when automated tools influence marketing, pricing, or other consumer-facing decisions.
Copyright and IP regulations apply to AI developers, especially regarding training data usage and patent filings. To avoid copyright infringement when training AI models, developers must obtain proper licences or rely on public domain material. Patents require a genuine technical solution and novelty, and AI cannot be named as an inventor. Mexico accounts for a significant share of AI patent applications in Latin America, alongside Brazil.
Mexico’s public sector ranks third in Latin America in terms of AI integration, with pilot projects in:
- Healthcare (AI-based triage and diagnostics);
- Agriculture (precision farming via drones);
- Municipal services (chatbots and data analytics tools).
Nonetheless, private-sector adoption remains modest, scoring below the regional average in the Latin American AI Index; critics attribute this to Mexico’s relatively low R&D spending, fragmented policy environment, and insufficient incentives for businesses.
6. INDONESIA
Summary
While no single, overarching AI law is in place, Indonesia’s Ministry of Communication and Digital Affairs has announced a forthcoming AI regulation. Currently, the Personal Data Protection Law (2022) provides an important legal foundation for AI-related personal data processing.
Key institutions, including the Ministry of Communication and Information Technology (Kominfo, since reorganised as the Ministry of Communication and Digital Affairs) and the National Research and Innovation Agency (BRIN), jointly shape AI policies and promote R&D initiatives, with further sector-specific guidelines emerging at both the national and provincial levels.
Indonesia envisions AI as a driver of national development, aiming to strengthen healthcare, education, food security, and public services through the 2020–2045 Masterplan for National Artificial Intelligence (Stranas KA).
AI landscape
- The Stranas KA (2020–2045) is a long-term roadmap dedicated to setting ethical AI goals, boosting data infrastructure, cultivating local capacity, and encouraging global partnerships.
- The Ministry of Communication and Information Technology (Kominfo) and the National Research and Innovation Agency (BRIN) share top-level responsibility for AI governance. Kominfo’s mandate covers digital infrastructure, online service regulation, and data privacy enforcement, whereas BRIN leads in AI research funding and development across ministries. Multistakeholder committees occasionally form to address overlaps, ensuring that relevant agencies coordinate efforts and share resources.
- The Personal Data Protection Law (2022) establishes consent, transparency, and data minimisation requirements, backed by Kominfo’s authority to impose administrative fines or suspend services.
In January 2025, Indonesia’s Ministry of Communication and Digital Affairs announced a forthcoming AI regulation that will build on guidelines emphasising transparency, accountability, human rights, and safety. Minister Meutya Hafid assigned Deputy Minister Nezar Patria to draft this regulation, as well as to gather stakeholder input across sectors such as education, health, infrastructure, and financial services.
Currently, Indonesia’s AI governance is anchored by strategic planning under the Stranas KA (2020–2045) and the Personal Data Protection Law (2022). Existing regulations, along with provincial-level guidelines and interministerial collaboration, guide the adoption of AI systems across multiple industries.
To bolster cybersecurity and data protection, the National Cyber and Encryption Agency sets additional security standards for AI implementations in critical sectors.
The Stranas KA (2020–2045) provides short-term milestones (2025) and long-term goals (2045) aimed at constructing a robust data infrastructure, prioritising ethical AI, and building a large talent pool. Five national priorities structure these efforts:
- AI solutions for telemedicine, remote diagnostics, and hospital administration;
- Automating public services with chatbots and data analytics;
- Upskilling and training for a domestic AI workforce;
- Precision agriculture, pest detection, and yield forecasting;
- AI for traffic management, urban planning, and public safety.
The Stranas KA offers broad principles covering areas such as data handling, model performance, and ethical compliance, rather than explicit, enforceable audit mandates, so formal compliance requirements remain relatively limited.
Certain provincial governments have issued draft guidelines for AI usage in local services, including chatbots for administrative tasks and agritech solutions that support smallholder farmers. These guidelines typically incorporate privacy measures and user consent requirements, aligning with the Personal Data Protection Law.
Indonesia cooperates with ASEAN partners on cross-border digital initiatives. During its 2022 G20 presidency, Indonesia spotlighted AI as a tool for inclusive growth, focusing on bridging the digital divide.
7. EGYPT
Summary
Although there is no single, overarching AI law, the Egypt National Artificial Intelligence Strategy (2020) provides a roadmap for research, capacity development, and foreign investment in AI, while the Personal Data Protection Law No. 151 of 2020 governs personal data used by AI systems. In January 2025, President Abdel Fattah El-Sisi launched the updated 2025–2030 National AI Strategy, aiming to grow the ICT sector’s contribution to GDP to 7.7% by 2030, establish 250+ AI startups, and develop a talent pool of 30,000 AI professionals. The new strategy also announces the development of a national foundational model, including a large-scale Arabic language model, as a key enabler for Egypt’s AI ecosystem.
With multiple pilot projects, ranging from AI-assisted disease screening to smart city solutions, Egypt is laying the groundwork for broader AI deployment, with the Data Protection Authority providing oversight of AI-driven data processing. The Ministry of Communications and Information Technology (MCIT) spearheads AI policy, focusing on AI applications in healthcare, finance, agriculture, and education.
AI landscape
- The Ministry of Communications and Information Technology (MCIT) leads Egypt’s AI efforts, coordinating with other ministries on digital transformation and legislative updates. The Data Protection Authority can levy fines or administrative measures against noncompliant entities, while the Central Bank of Egypt supervises AI-based credit scoring and fraud detection in financial services.
- The 2020 National Artificial Intelligence Strategy established strategic goals for AI research, workforce development, and partnerships with global tech players, aligning with the Vision 2030 framework. The AI Strategy acknowledged non-discrimination and responsible usage, though enforcement mostly fell under existing data protection measures.
- The newly introduced 2025–2030 National Artificial Intelligence Strategy builds on the first plan, with a focus on inclusive AI, domain-specific large language models, and stronger alignment with the Digital Egypt initiative.
- The Personal Data Protection Law No. 151 of 2020 requires consent, data security, and transparency in automated processing, enforced by the Data Protection Authority.
- Healthcare initiatives deploy AI-driven disease screening and telemedicine, expanded during public health emergencies. Agriculture pilots focus on yield prediction and irrigation optimisation. Smart cities apply AI in traffic management and public safety. Education reforms integrate AI curricula in universities, coordinated by MCIT and the Ministry of Higher Education.
As part of the 2025–2030 plan, Egypt is re-emphasising ethical AI, with additional guidelines under the Egyptian Charter for Responsible AI (2023) and plans for domain-specific AI regulations. The strategy also aims to strengthen AI infrastructure with next-generation data centres, robust 5G connectivity, and sustainable computing facilities.
AI adoption aligns closely with the overarching Egypt Vision 2030 framework, highlighting the role of AI in socio-economic reforms.
8. MALAYSIA
Summary
Malaysia aims to become a regional AI power through government-led initiatives such as the Malaysia Artificial Intelligence Roadmap (2021–2025) and the MyDIGITAL blueprint. While there is currently no single, overarching AI legislation, the National Guidelines on AI Governance and Ethics (2024) serve as a key reference point for responsible AI development. Established in December 2024, the National AI Office now centralises policy coordination and is expected to propose regulatory measures for high-stakes AI use cases.
AI landscape
- The Ministry of Digital and the Ministry of Science, Technology and Innovation (MOSTI) jointly oversee AI strategies, supported by agencies such as the Malaysia Digital Economy Corporation (MDEC) and CyberSecurity Malaysia.
- The Malaysia Artificial Intelligence Roadmap (2021–2025) outlines talent building, ethical guidelines, and R&D priorities spanning sectors like finance, healthcare, and manufacturing.
- The National Guidelines on AI Governance and Ethics (2024) promote seven key principles – fairness, reliability/safety, privacy/security, inclusiveness, transparency, accountability, and human well-being – and clarify stakeholder obligations for end users, policymakers, and developers.
- The Personal Data Protection Act (PDPA) 2010 governs personal data usage by private organisations, mandating consent, data minimisation, and transparency.
- The National AI Office (2024) serves as the central authority to champion Malaysia’s AI agenda.
In addition to these frameworks, sectoral bodies impose further requirements:
- Bank Negara Malaysia (BNM) oversees AI in finance, emphasising fairness and transparency for credit scoring and fraud detection tools.
- CyberSecurity Malaysia and the National Cyber Security Agency (NACSA) safeguard AI deployments in critical infrastructure.
- The Malaysian Communications and Multimedia Commission (MCMC) may expand its authority to cover AI-based recommendation systems and content moderation.
Major enterprises leverage AI for e-services, medical diagnostics, manufacturing optimisation, and real-time analytics.
9. NIGERIA
Summary
While no single, overarching AI law is in place, the Nigeria Data Protection Act (NDPA) provides an important legal foundation for AI-related personal data processing. Key institutions, including the Federal Ministry of Communications, Innovation and Digital Economy (FMCIDE) and the National Information Technology Development Agency (NITDA), shape Nigeria’s AI policy framework and encourage responsible adoption.
AI landscape
- AI is partially governed by existing regulations and laws, including the Cybercrimes (Prohibition, Prevention, etc.) Act, the Federal Competition and Consumer Protection Act (FCCPA, 2018), and the Copyright Act, 2022. Together, these frameworks address privacy, cybersecurity, consumer protection, and content ownership issues arising from AI applications.
- FMCIDE sets overall digital policy. NITDA develops AI guidelines and enforces data privacy rules, working with the Nigeria Data Protection Commission (NDPC). Meanwhile, the Central Bank of Nigeria (CBN) supervises AI in fintech, the Securities and Exchange Commission (SEC) regulates robo-advisory services, and the Nigerian Communications Commission (NCC) oversees telecom-based AI.
Nigeria’s NDPA applies to AI to some extent, as its provisions demand consent, data minimisation, and the possibility of human intervention for decisions with significant personal impact. The NDPC has the authority to impose penalties on violators, and the SEC requires robo-advisory firms to adopt safeguards against algorithmic errors or bias. The Nigerian Bar Association issued Guidelines for the Use of Artificial Intelligence in the Legal Profession in Nigeria in 2024, emphasising data privacy, human oversight, and transparency in AI-driven decisions.
In 2023, Nigeria joined the Bletchley Declaration on AI, pledging to cooperate internationally on responsible AI development.
10. KENYA
Summary
While no single, overarching AI law is in place, Kenya’s Data Protection Act (2019) offers a foundational framework for AI-related personal data usage, while existing ICT legislation, sector-specific guidelines, and taskforce reports further shape AI governance. The Ministry of Information, Communications, and the Digital Economy (MoIC) steers national digital transformation, supported by the Kenya ICT Authority’s oversight of ICT projects and the Office of the Data Protection Commissioner (ODPC) enforcing privacy provisions. The National Artificial Intelligence Strategy (2025–2030) aims to consolidate these diverse efforts, focusing on risk management, ethical standards, and broader AI-driven economic growth.
AI landscape
- Kenya’s National Artificial Intelligence Strategy for 2025–2030 aims to drive inclusive, ethical, and innovation-driven AI adoption across key sectors – agriculture, healthcare, education, and public services – by establishing robust infrastructure, governance, and talent development frameworks to address national challenges and foster sustainable growth.
- The Ministry of Information, Communications, and the Digital Economy (MoIC) sets high-level policy, reflecting AI priorities in Kenya’s broader digital agenda. The Kenya ICT Authority coordinates pilot projects, manages government ICT initiatives, and promotes AI adoption across sectors.
- The Data Protection Act (2019) mandates consent, data minimisation, and user rights in automated decision-making. The Office of the Data Protection Commissioner (ODPC) enforces these rules, investigating breaches and imposing sanctions – particularly relevant to AI-driven digital lending and fintech solutions.
- The Distributed Ledgers Technology and Artificial Intelligence Taskforce (2019) proposed ethics guidelines, innovation sandboxes, and specialised oversight for distributed ledger technology, AI, the internet of things, and 5G wireless technology. The taskforce aimed to balance consumer and human rights protection with promoting innovation and market competition.
- The Draft AI Code of Practice (2024) from the Kenya Bureau of Standards (KEBS) offers voluntary guidelines on transparency, accountability, and data security for AI deployers.
The Data Protection Act (2019) remains central, requiring accountability and consent for AI-driven profiling – particularly in high-impact domains like micro-lending, where machine-learning models analyse creditworthiness.
The MoIC has integrated AI objectives into national strategies for e-government, supporting pilot projects such as chatbot-based public services and resource allocation.
The National AI Strategy aims to harmonise Kenya’s diverse AI efforts, addressing potential algorithmic bias, auditing standards, and the practicalities of responsible AI, particularly in healthcare, agritech, and fintech. To achieve this, the strategy sets out a clear governance framework, establishes multi-stakeholder collaboration platforms, and develops robust guidelines that promote transparent, ethical, and inclusive AI development across these priority sectors.
The government collaborates with global organisations such as GIZ, the World Bank, and UNDP, and with regional partners such as Smart Africa, in its aspiration to become an AI hub in Africa.
11. ARGENTINA
Summary
While no single, overarching AI law is in place, Data Protection Law No. 25.326 (Habeas Data, 2000) provides an important baseline for AI-related personal data use, enforced by the Argentine Agency of Access to Public Information (AAIP). The government has developed a National AI Plan and has issued Recommendations for Trustworthy Artificial Intelligence (2023) to guide ethical AI adoption – especially within the public sector. Academic institutions, entrepreneurial tech clusters in Buenos Aires and Córdoba, and partnerships with multinational firms support Argentina’s growing AI ecosystem.
AI landscape
- The National Artificial Intelligence Plan outlines high-level goals for ethical, inclusive AI development aligned with the country’s economic and social priorities.
- The Data Protection Law No. 25.326 (Habeas Data, 2000) requires consent, transparency, and data minimisation in automated processing. The AAIP can sanction entities that misuse personal data, including performing AI-driven profiling.
- Recommendations for Trustworthy Artificial Intelligence (2023), approved by the Undersecretariat for Information Technologies, promote human-centred AI in public-sector projects, emphasising ethics, responsibility, and oversight.
- The Argentine Strategy for AI Development (2019) frames AI as a catalyst for sustainable growth, outlining objectives to foster AI research, talent, and regional leadership.
Argentina’s AI governance relies on existing data protection rules and emerging policy instruments rather than a single, dedicated AI law. Public institutions like the Ministry of Science, Technology, and Innovation (MINCyT) and the Ministry of Economy coordinate research and innovation, working with the AAIP to ensure privacy compliance. The government also supports pilot programmes testing practical AI solutions.
Argentina’s newly launched AI unit within the Ministry of Security, designed to predict and prevent future crimes, has sparked controversy over surveillance, data privacy, and ethical concerns, prompting calls for greater transparency and regulation.
12. QATAR
Summary
While no single, overarching AI law is in place, Law No. 13 of 2016 Concerning Privacy and Protection of Personal Data serves as a key legal framework for AI-related personal data processing. The Ministry of Communications and Information Technology (MCIT) leads Qatar’s AI agenda through the National Artificial Intelligence Strategy for Qatar (2019), focusing on local expertise development, ethical guidelines, and strategic infrastructure – aligned with Qatar National Vision 2030. Enforcement of data privacy obligations is handled by MCIT’s Compliance and Data Protection (CDP) Department, which can impose fines for noncompliance. Oversight in finance, Sharia-compliant credit scoring, and other sensitive domains is provided by the Qatar Financial Centre Regulatory Authority and the Central Bank.
AI landscape
- The National Artificial Intelligence Strategy for Qatar (2019) sets goals for talent development, research, ethics, and cross-sector collaboration, supporting the country’s economic diversification.
- The Artificial Intelligence Committee, established under Cabinet Decision No. (10) of 2021, implements the National AI Strategy, coordinates with other government agencies, and tracks global AI trends.
- Law No. 13 of 2016 Concerning Privacy and Protection of Personal Data enforces consent, transparency, and robust security for personal data usage in AI. MCIT’s Compliance and Data Protection (CDP) Department monitors data privacy compliance, imposing monetary penalties for violations.
- The Qatar Financial Centre Regulatory Authority and the Central Bank regulate AI-driven financial services, ensuring consumer protection and adherence to Sharia principles.
- Lusail City, which brands itself as the city of the future and one of the most technologically advanced cities in the world, leverages AI-based traffic management, energy optimisation, and advanced surveillance.
- The Arabic Large Language Model (LLM), supported by MCIT in partnership with QCRI, showcases the country’s push for homegrown AI development.
Although Qatar has not enacted a single, overarching AI law, its National AI Strategy and the work of the Artificial Intelligence Committee provide a structured blueprint, prioritising responsible, culturally aligned AI applications.
Qatar’s AI market is projected to reach USD 567 million by 2025, driven by strategic investments and digital infrastructure development that is expected to boost economic growth, attract global partnerships, and continue efforts to align national regulations with international standards.
13. PAKISTAN
Summary
While no single, overarching AI law is in place, the Ministry of Information Technology & Telecommunication (MoITT) spearheads AI policy through the Digital Pakistan Policy and the Draft National Artificial Intelligence Policy (2023), focusing on responsible AI adoption, skill-building, and risk management. Although the Personal Data Protection Bill is still pending, its adoption would introduce dedicated oversight for AI-driven personal data processing. In parallel, the proposed Regulation of Artificial Intelligence Act 2024 seeks to mandate human oversight of AI systems and impose substantial fines for violations.
AI landscape
- The Ministry of Information Technology & Telecommunication (MoITT) drives Pakistan’s AI policy under the Digital Pakistan Vision, integrating AI across e-government services, education, and agritech.
- The National Centre of Artificial Intelligence, under the Higher Education Commission, fosters research collaborations among universities.
- Digital Pakistan Policy (2018) underscores AI’s role in public-sector digitalisation and workforce development. The Draft National Artificial Intelligence Policy (2023) emphasises ethically guided AI growth, job creation, and specialised training initiatives.
- The Personal Data Protection Bill proposes establishing a data protection authority with enforcement powers over AI-related personal data misuse.
- The Regulation of Artificial Intelligence Act 2024 would fine violators up to PKR 2.5 billion (approximately USD 9 million), mandate transparent data collection, require human oversight in sensitive applications, and create a National AI Commission in Islamabad.
- Pakistan uses AI to expedite citizen inquiries through chatbots, streamline government operations with digital ID systems, and address food security by optimising crop monitoring and yields. AI-based credit scoring broadens microfinance access but raises questions of fairness and privacy.
Pakistan’s AI trajectory is propelled by the MoITT’s Digital Pakistan agenda, with the National Centre of Artificial Intelligence coordinating academic research in emerging fields like machine learning and robotics.
Legislative initiatives are rapidly evolving. The Regulation of Artificial Intelligence Act 2024, currently under review by the Senate Standing Committee on Information Technology, aims to ensure responsible AI deployment, penalising misuse and unethical practices with high-value fines. Once enacted, the law would establish the National Artificial Intelligence Commission to govern AI adoption and uphold social welfare goals, with commissioners prohibited from holding public or political office. Parallel to this, the Personal Data Protection Bill would further strengthen consumer data rights by regulating AI-driven profiling.
Ongoing debates centre on balancing innovation with privacy, transparency, and accountability. As Pakistan expands international collaborations, particularly through the China-Pakistan Economic Corridor and broader Islamic cooperation forums, more concrete regulations are expected to emerge by the end of 2025.
14. VIETNAM
Summary
While Vietnam has not enacted a single, overarching AI law, the Law on Cyberinformation Security (2015) provides a basic legal framework that partially governs AI-driven data handling. Two ministries – the Ministry of Science and Technology (MOST) and the Ministry of Information and Communications (MIC) – jointly drive AI initiatives under Vietnam’s National Strategy on Research, Development and Application of AI by 2030, with an emphasis on AI education, R&D, and responsible use in manufacturing, healthcare, and e-governance. Although the national strategy references ethics and bias prevention, there is no single oversight body or binding ethical code for AI, prompting growing calls from civil society for greater transparency and accountability.
AI landscape
- The Ministry of Science and Technology (MOST) allocates funds for AI research, supporting collaborations between universities, startups, and private enterprises.
- The Ministry of Information and Communications (MIC) oversees the broader digital transformation agenda, sets cybersecurity standards, and can impose fines for data misuse under existing regulations.
- The National Strategy on AI (2021–2030) aims to develop an AI-trained workforce (50,000 professionals), expand AI usage in public services through chatbots and digital government, and promote AI-based solutions in manufacturing, healthcare diagnostics, and city management. The strategy mentions ethical principles like bias mitigation and accountability but does not specify formal enforcement or an AI ethics board.
- The Law on Cyberinformation Security (2015) outlines baseline data security measures for organisations that partially apply to AI-related activities, as its general data protection and system security requirements extend to AI systems that process or store personal or sensitive information. The MIC can impose fines or restrict services for cybersecurity breaches and unauthorised data processing.
- The State Bank of Vietnam can issue additional rules for AI deployments in finance or consumer lending.
- Factories adopt AI for predictive maintenance, robotics, and supply-chain optimisation. AI-based diagnostics and imaging pilot projects are implemented in major hospitals, partially funded by MOST grants. AI chatbots reduce administrative backlogs. Ho Chi Minh City explores AI-driven traffic control and security systems. Tech hubs in Hanoi and Ho Chi Minh City foster AI-focused enterprises in fintech, retail analytics, and EdTech.
Vietnam’s push for AI is central to its ambition of enhancing economic competitiveness and digitising governance. However, comprehensive AI legislation remains absent. The National Strategy on AI acknowledges concerns around fairness, personal data rights, and possible algorithmic bias, but explicit regulatory mandates or ethics boards have yet to be instituted.
Vietnam collaborates with ASEAN on a regional digital masterplan and maintains partnerships with tech-leading countries such as Japan and South Korea for AI research and capacity development. The government is also formulating new regulations in the digital technology sector, including a draft Law on Digital Technology Industry, expected to be adopted in May 2025, which may introduce risk-based rules for AI and a sandbox approach for emerging technologies.
15. RUSSIA
Summary
Russia has adopted multiple AI-related policies – including an AI regulation framework, the National AI Development Strategy (2019–2030), the National Digital Economy Programme, and experimental legal regimes (ELRs) – to advance AI in tightly regulated environments. The recently enacted rules mandating liability insurance for AI developers in ELRs signal a shift toward stricter risk management.
AI landscape
- The National AI Development Strategy (2019–2030), adopted via presidential decree, sets ambitious goals for AI R&D, talent growth, and widespread adoption in the healthcare, finance, and defence sectors.
- The National Digital Economy Programme (2017–2024) positioned AI as a key element of Russia’s digital transformation, focusing on regulatory sandboxes, data infrastructure, and digital services.
- Effective in 2025, Russia’s updated AI regulation framework prohibits AI in education if it simply completes student assignments (to prevent cheating), clarifies legal liability for AI-generated content, mandates accountability for AI-related harm, promotes human oversight, and focuses on national security through industry-specific guidelines.
- Experimental Legal Regimes (ELRs) allow the testing of AI-driven solutions (e.g., autonomous vehicles in Moscow and Tatarstan). Federal Law 123-FZ, adopted in 2024, now requires developers in ELRs to insure civil liability for potential AI-related harm.
- Standard setting and ethics codes: Technical Committee 164 on AI under Rosstandart oversees AI-related GOST standards, while the voluntary 2021 AI Ethics Code guides transparency, accountability, and user consent.
Russia has not enacted a single, overarching AI law; instead, authorities have relied on diverse initiatives, laws, and financial incentives to direct AI governance. The cornerstone remains the National AI Development Strategy (2019), focusing on technological sovereignty, deeper investment in research, and attracting talent.
Alongside it, the digital economy framework has bankrolled significant projects, from data centres to connectivity enhancements, enabling the preliminary deployment of advanced AI solutions.
In 2020, policymakers introduced the Conceptual Framework for the Regulation of AI and Robotics, identifying gaps in liability allocation among AI developers, users, and operators. The updated regulatory framework that grew out of this work took effect in 2025, as noted above.
Technical Committee 164 under Rosstandart issues AI-related safety and interoperability guidelines. Personal data management is governed by Federal Law No. 152-FZ, complemented by updated biometric data regulations that organise the handling of facial and voice profiles. The voluntary AI Ethics Code, shaped in collaboration with governmental entities and technology companies, aims to curb risks such as algorithmic bias, discriminatory profiling, and the unchecked use of personal data.
AI adoption is especially visible in the following:
- Companies like Yandex are conducting trials of self-driving cars in designated zones. Under the new insurance requirements, liabilities for potential accidents must be covered.
- The Central Bank endorses AI-driven services for fraud prevention and credit analysis, ensuring providers remain responsible under established banking and consumer protection laws.
- AI-assisted diagnostic tools and telemedicine applications go through a registration process akin to medical device approval, overseen by Roszdravnadzor.
- Russian authorities use AI-driven facial recognition in public surveillance, managed by biometric data policies and overseen by security services. Advocacy groups have voiced concerns regarding privacy and data retention practices.