Artificial Intelligence Act and Global AI Regulation: Compliance Requirements and Key Legal Developments

9 January 2026

Corporate Clients Insights

Two recent developments in AI regulation, namely the entry into force of the European Union’s Artificial Intelligence Act (AI Act) and the acceleration of national AI regulatory initiatives in multiple jurisdictions, have clarified several key legal issues relevant to practitioners and organisations working with AI systems. In the EU, the AI Act creates a comprehensive horizontal legal framework governing the development, distribution, and use of AI systems. In parallel, other jurisdictions, including the United Kingdom and South Korea, have advanced national AI laws and regulatory frameworks that both echo and diverge from the EU approach. These developments carry important compliance obligations, risk allocation concerns, and enforcement dimensions for entities developing or using AI globally.

European Union: The Artificial Intelligence Act (Regulation (EU) 2024/1689)

The EU’s AI Act, published in the Official Journal on 12 July 2024, is the first comprehensive legal framework regulating AI systems based on risk, fundamental rights protection, and governance requirements. The Regulation applies not only to entities within the EU but also to providers outside the EU whose AI systems affect individuals or businesses in the EU.

Legal Framework and Scope

Under the AI Act, an AI system is defined broadly, covering software that uses machine learning, logic‑based, or statistical techniques to produce outputs for decision‑making or content generation. The legislation adopts an extraterritorial scope: AI systems developed, provided, or used outside the EU are subject to its obligations where they materially affect EU persons or markets. The legal definition of “provider” encompasses natural or legal persons placing an AI system on the market, irrespective of location.

The Act’s risk‑based categorisation divides AI systems into:

  • Unacceptable Risk systems, which are prohibited in all circumstances (e.g., social scoring by public authorities, AI that manipulates behaviour in ways materially harmful to rights).
  • High Risk systems, which trigger stringent legal requirements, including risk assessments, data governance measures, human oversight, and mandatory conformity assessments before entry into the EU market.
  • General‑Purpose AI (GPAI) systems, subject to graduated obligations such as enhanced transparency and documentation duties, particularly when marketed or adapted for high‑impact applications.
  • Limited/Minimal Risk systems, which are subject to proportionate transparency obligations.
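The tiered structure above can be illustrated with a minimal triage sketch. The attribute names below (`prohibited_practice`, `general_purpose`, `annex_iii_use_case`) are hypothetical labels chosen for illustration; the Act itself defines these tests in legal terms, through its Articles and Annexes, not as programmatic flags.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    GPAI = "general-purpose"
    HIGH = "high"
    LIMITED_MINIMAL = "limited/minimal"

def classify(system: dict) -> RiskTier:
    """Toy triage of an AI system description into the Act's tiers.

    `system` is a hypothetical dict of boolean flags standing in for
    the legal tests; real categorisation requires legal analysis of
    the Act's prohibitions and Annex III use cases.
    """
    if system.get("prohibited_practice"):   # e.g. social scoring
        return RiskTier.UNACCEPTABLE
    if system.get("general_purpose"):       # broad-capability model
        return RiskTier.GPAI
    if system.get("annex_iii_use_case"):    # e.g. employment screening
        return RiskTier.HIGH
    return RiskTier.LIMITED_MINIMAL

print(classify({"annex_iii_use_case": True}).value)  # high
```

The ordering of the checks mirrors the Act's logic only loosely: in practice a single system can attract cumulative obligations (for example, a GPAI model adapted for a high-risk use case), which a one-tier return value cannot capture.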

High‑Risk and Transparency Obligations

For high‑risk AI systems, the Act prescribes a detailed compliance regime that includes:

  • Risk Management Systems throughout the lifecycle of the AI system, including pre‑deployment assessment and continuous post‑market monitoring.
  • Technical Documentation enabling competent authorities to assess compliance, including records of design choices, datasets used, performance metrics, and governance protocols.
  • Human Oversight Mechanisms to prevent or mitigate risks to health, fundamental rights, safety, and other protected interests.

GPAI models, particularly large language models and other broad-capability systems, face distinct, system-wide transparency requirements, including publicly accessible documentation on training data sources, known limitations, and safety evaluations.

Data Governance and Intersection with GDPR

The AI Act explicitly integrates data protection obligations by requiring compliance with the General Data Protection Regulation (GDPR) where personal data is processed by an AI system. This includes the conduct of Data Protection Impact Assessments (DPIAs) where required, implementation of appropriate technical and organisational safeguards, and alignment with privacy by design and default principles. The Act’s data governance provisions complement, not displace, GDPR requirements, reinforcing the legal obligation for robust privacy protection mechanisms within AI lifecycles.

Enforcement and Penalties

National competent authorities (NCAs) and the European AI Office are empowered to enforce compliance, investigate potential breaches, and impose administrative penalties. The AI Act authorises fines of up to a significant percentage of global annual turnover for serious violations, reflecting the legislation’s deterrent intent. Enforcement powers include market surveillance, corrective measures, and orders suspending non‑compliant systems.

Key Legal Takeaways for Practitioners

  • Scope Assessment: Entities must analyse whether their AI systems qualify as “AI” under the Act and whether they are marketed or deployed in, or affect interests in, the EU.
  • Risk Categorisation: A thorough legal evaluation is required to determine whether a system is high‑risk, GPAI, or otherwise subject to specific obligations.
  • Conformity Assessment: Before placing high‑risk AI systems on the EU market, providers must complete conformity assessments and be able to demonstrate compliance with all substantive and procedural obligations.
  • Documentation and Transparency: Comprehensive records that support compliance, risk mitigation, and transparency are essential, both for internal governance and potential inspection by NCAs.
  • Prohibited Practices: Providers and deployers must ensure AI systems do not engage in practices categorised as unacceptable risk, given that such violations are subject to strict prohibition and significant penalties.

United Kingdom: AI Regulation and Guidance

Although the United Kingdom is no longer bound by EU law following Brexit, UK authorities have actively pursued AI governance frameworks that ‘mirror and modify’ EU concepts while preserving regulatory autonomy. The UK approach emphasises sector‑specific regulatory principles rather than a single horizontal statute.

Regulatory Architecture

In the UK, regulatory oversight is distributed across existing sectoral regulators with additional guidance issued by the Centre for Data Ethics and Innovation (CDEI) and policy statements from the Department for Science, Innovation and Technology (DSIT). Regulatory instruments and guidance focus on:

  • Proportionate Regulation calibrated to risk.
  • Innovation‑Friendly Approaches through regulatory sandboxes.
  • Transparency and Safety Principles consistent with public trust objectives.

Unlike the EU’s AI Act, the UK framework does not impose a single statutory regime with criminal or administrative penalties specific to AI systems across all sectors. Instead, enforcement typically arises through existing legal mechanisms—such as consumer protection law, data protection law (UK GDPR), and sector‑specific regulatory rules (e.g., financial services conduct requirements).

Confidentiality, Public Interest, and Judicial Oversight

UK jurisprudence has emphasised the interplay between contractual or regulatory confidentiality and public interest in transparency, particularly where regulatory scrutiny or judicial supervision is involved. This is relevant in AI contexts where proprietary training data, source code, or model architecture might be treated as confidential information. Courts may weigh confidentiality expectations against principles of open justice and regulatory oversight, particularly in cases where litigation or regulatory enforcement involves fundamental legal questions that affect wider public interests.

National AI Laws and Global Regulatory Convergence

Beyond the EU and the UK, several states and regions are advancing AI regulatory frameworks that influence the governance landscape for multinational organisations:

  • South Korea has advanced its AI Basic Act, establishing broad obligations for safety, transparency, and ethical AI use, with enforcement mechanisms slated to take effect in early 2026.
  • United States regulators at federal and state levels are promulgating guidelines and rules targeting specific AI applications, such as consumer protection, employment practices, and algorithmic fairness, drawing on but diverging from the EU’s risk categorisation.
  • Asia Pacific and Middle Eastern countries are issuing sector‑focused AI policies, often in areas such as autonomous vehicles, fintech applications, and healthcare AI.

These diverse regulatory models underscore a fragmented but increasingly convergent global approach to AI governance, with many jurisdictions incorporating risk‑based principles, fairness mandates, and requirements for documentation and human oversight.

Emerging Legal Issues and Compliance Trends

Ethical Safeguards and Anti‑Discrimination

Regulators are emphasising the mitigation of bias and discriminatory outcomes from AI systems. Legal practitioners must ensure that fairness audits and bias mitigation protocols are integrated into design and deployment workflows as matters of legal compliance and risk management.

Liability and Accountability

As AI systems grow in autonomy and complexity, legal frameworks may introduce clearer product liability standards and accountability mechanisms including potential requirements for mandatory insurance or financial guarantees for high‑risk AI deployments.

Adaptive and Responsive Regulation

Given the rapid pace of AI innovation, regulatory frameworks are increasingly incorporating mechanisms for iterative updates, adaptive standards, and structured feedback loops involving industry, civil society, and technical experts.

Conclusion

The EU’s Artificial Intelligence Act represents a landmark legal development and serves as a benchmark for AI governance globally. Compliance with the Act’s risk‑based obligations, transparency duties, and enforcement mechanisms is a legal imperative for entities operating in or affecting the EU market. At the same time, national AI regulatory frameworks in the UK, South Korea, and other jurisdictions demonstrate an emerging global pattern that integrates risk mitigation, ethical safeguards, and accountability within AI lifecycles.

For in‑house counsel, compliance officers, and AI practitioners, the current regulatory environment necessitates rigorous legal analysis, structured governance frameworks, and proactive engagement with evolving legal norms. The future of AI regulation will likely be defined by the interaction of horizontal rules, sectoral mandates, and cross‑border harmonisation efforts – all designed to balance innovation, competitiveness, and the protection of fundamental rights.

© New Balkans Law Office 2026

The Bulgarian and dual-qualified lawyers of New Balkans Law Office are regulated by the respective Bar of their registration. New Balkans Law Office is a brand name of Legal Services EOOD, a company registered under Bulgarian law. Reg’d No. 202331677.
