
AI Act - European AI regulation - what does it mean for you?

2 Mar 2026

The EU’s AI Act has now moved from political vision to concrete reality. As of March 2026, several central parts of the regulation are either already in force or close to full application. This means that companies across the EU – including those in the experience industry – must be able to document how their AI systems are used, governed, and controlled.

Status of the AI Act in March 2026

The AI Act formally entered into force on August 1, 2024, but the rules are being implemented gradually. Prohibited AI practices and the AI literacy obligations became applicable in February 2025, requirements for general-purpose AI models (GPAI) followed in August 2025, and the majority of requirements for high-risk systems become fully applicable from August 2026.

In practice, this means that 2026 is the year when companies must be ready with documentation, risk management, vendor management, and internal processes – particularly if they develop, market, or use high-risk AI.

What is the AI Act?

The AI Act is a directly applicable EU regulation that governs the development, marketing, and use of artificial intelligence. The purpose is to create a common European framework for the safe, transparent, and responsible use of AI.

The regulation is based on a risk-based approach. The greater the potential impact an AI system has on individuals and society, the stricter the requirements imposed.

Risk categories in the AI Act

AI systems are divided into four main categories:

  • Prohibited AI: Systems considered to pose an unacceptable risk, such as social scoring or certain forms of manipulative behavioral control.
  • High-risk AI: Systems used in critical areas such as employment, credit scoring, access to education, biometric identification, or as safety components in regulated products.
  • Limited risk: For example, chatbots and generative AI, where the main obligation is transparency.
  • Minimal risk: Internal productivity tools and simple recommendation systems.

Most companies will primarily fall within the minimal or limited risk categories, but the threshold for high risk may be crossed if AI affects rights, access, or significant decisions concerning individuals.

General-purpose AI models (GPAI) and systemic risks

An important new dimension of the AI Act is the regulation of general-purpose AI models – also referred to as GPAI (General Purpose AI). This includes large language models and foundation models that can be used broadly across sectors.

Providers of such models are subject to specific requirements regarding:

  • Technical documentation
  • Transparency about training data
  • Respect for copyright
  • Risk management
  • Additional requirements in cases of “systemic risk”

Systemic risk is assessed, among other things, based on model size, capabilities, and scale of deployment. The EU’s AI Office plays a central role in supervision and coordination.

Transparency and labeling of AI-generated content

The AI Act contains specific transparency requirements, particularly when companies use generative AI or make AI systems available to users. The objective is to ensure that citizens and customers can clearly understand when they are interacting with artificial intelligence and when content is artificially generated or manipulated. 

The AI Act entered into force on August 1, 2024, and most provisions – including the transparency requirements in Article 50 – apply 24 months after entry into force. The following transparency and labeling requirements therefore apply from August 2, 2026.

Disclosure in direct AI interaction

If a user interacts with an AI – for example via a chatbot, virtual assistant, or automated customer service – it must generally be clearly disclosed that the interaction involves AI. The information must be provided in a clear and understandable manner, and the user should also have the option to contact a human.

In practice, this typically means a short notice in the chat window or app, for example: “You are communicating with an AI assistant. It may make mistakes. If you need assistance from a staff member, you can contact us here.”
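
As a rough illustration of how such a notice could be wired into a chat service, the Python sketch below prepends the disclosure to the first reply of a session. The function generate_answer is a hypothetical stand-in for the actual model call, and the wording is the example above, not mandated text from the regulation.

```python
# Minimal sketch of an AI-interaction disclosure in a chat backend.
# `generate_answer` is a hypothetical stand-in for the real model call.

AI_DISCLOSURE = (
    "You are communicating with an AI assistant. It may make mistakes. "
    "If you need assistance from a staff member, you can contact us here."
)

def generate_answer(user_message: str) -> str:
    # Placeholder: a real service would call its language model here.
    return f"(model reply to: {user_message})"

def reply(user_message: str, is_first_message: bool) -> str:
    answer = generate_answer(user_message)
    if is_first_message:
        # Disclose once, at the start of the session, so the user knows
        # from the outset that they are interacting with AI.
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer

print(reply("What time do you open?", is_first_message=True))
```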

Labeling of AI-generated images, audio, and video

Companies that publish AI-generated or AI-manipulated images, audio, or video must ensure that the content can be identified as artificial. This particularly applies to material that can realistically be mistaken for authentic recordings – often referred to as deepfakes.

This can be done through a visible label associated with the content, such as “AI-generated image” or “AI-manipulated video.” At the same time, the AI Act requires providers of generative systems, as far as technically feasible, to implement technical solutions that make content identifiable in a machine-readable manner, for example through metadata or digital watermarks.
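
As one simplified illustration of machine-readable labeling, the Python sketch below writes an AI-generation marker into a PNG file's text metadata using the Pillow library. The key names ("AIGenerated", "Generator") are invented for the example; real deployments typically combine metadata with more robust mechanisms such as watermarking or C2PA content credentials, since plain metadata is easily stripped.

```python
# Simplified sketch: embed an AI-generation marker in PNG metadata with Pillow.
# Metadata can be removed, so production systems usually pair this with more
# robust approaches (e.g. watermarking or C2PA content credentials).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src: str, dst: str, generator: str) -> None:
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("AIGenerated", "true")   # illustrative key name
    meta.add_text("Generator", generator)  # illustrative key name
    img.save(dst, pnginfo=meta)

def is_labeled(path: str) -> bool:
    # PNG text chunks are exposed through the image's info dict.
    return Image.open(path).info.get("AIGenerated") == "true"

label_as_ai_generated("image.png", "image_labeled.png", "example-model-v1")
print(is_labeled("image_labeled.png"))  # True
```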

AI-generated text and public information

If AI is used to generate text that is published for the purpose of informing the public about matters of societal interest, it must generally be disclosed that the text is wholly or partially AI-generated.

If the text has been subject to genuine human editorial review, and a person or company bears clear editorial responsibility for the content, the disclosure obligation may not apply. The decisive factor is that the reader is not misled about the origin of the content.

Emotion recognition and biometric systems

If a company uses AI to analyze or infer emotions or for biometric categorization, the individuals concerned must be informed that the system is in use. This applies regardless of whether the solution is used physically at a location or digitally within an application.

Overall, the transparency requirements mean that companies should have clear internal guidelines for labeling, visible information, and technical traceability when using generative AI. This is an area where both legal compliance and trust among customers and partners play a central role.

New requirements for AI literacy

From February 2025, organizations have been required to ensure that employees working with AI have appropriate competencies. The requirement applies to both providers and deployers (users) of AI systems.

This means that companies must be able to document that relevant employees understand:

  • The basic functioning and limitations of AI
  • Risk factors and sources of error
  • Applicable internal policies
  • Relevant legal requirements

Requirements for high-risk AI

From August 2026, high-risk systems must meet extensive requirements, including:

  • An established risk management system
  • Documented data governance and data quality
  • Technical documentation
  • Logging and traceability (see the sketch after this list)
  • Human oversight
  • Requirements for robustness, accuracy, and cybersecurity
  • Post-market monitoring and incident reporting
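
The AI Act does not prescribe a particular logging format, but the logging and traceability requirement generally implies a timestamped, append-only record per automated decision. A minimal sketch using only Python's standard library might look as follows; the field names are illustrative, not mandated by the regulation.

```python
# Illustrative audit log for automated decisions (standard library only).
# Field names are examples; the AI Act does not prescribe a schema.
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("ai_decisions.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_ai_decision(system_id: str, model_version: str,
                    input_summary: str, output_summary: str,
                    human_reviewer: str | None = None) -> None:
    """Append one traceable, timestamped record per automated decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_summary": input_summary,
        "output_summary": output_summary,
        "human_reviewer": human_reviewer,  # supports the human-oversight duty
    }
    logger.info(json.dumps(record))

log_ai_decision("ticket-scoring", "v2.3", "application #1042", "approved")
```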

Before certain systems can be placed on the market, a conformity assessment must be carried out and the system must bear CE marking.

Enforcement and fines

The AI Act includes significant fine levels. Violations may result in fines of up to the following amounts, whichever is higher:

  • EUR 35 million or 7% of global annual turnover (prohibited practices)
  • EUR 15 million or 3% (violations of other requirements)
  • EUR 7.5 million or 1% (providing incorrect information)

The level of fines depends on the size of the company and the nature of the infringement. SMEs may in practice be subject to proportionate enforcement, but they are not exempt from the rules.

The relationship with the GDPR

The AI Act does not replace the GDPR. If an AI system processes personal data, data protection rules apply in parallel. In many cases, a data protection impact assessment (DPIA) will be relevant, particularly for high-risk systems.

The AI Act focuses on system safety and risk management, while the GDPR protects individuals’ personal data. The two regulatory frameworks complement each other.

What should companies do in 2026?

At a minimum, companies should:

  • Map all AI systems (a simple inventory sketch follows this list)
  • Classify them according to risk level
  • Engage in structured vendor dialogue
  • Establish an AI policy and governance framework
  • Implement logging and documentation
  • Ensure AI competencies among relevant employees
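
Mapping and classification can start with a simple structured inventory. The Python sketch below models one; the four risk tiers mirror the categories described earlier in this article, and the example systems and vendor names are invented.

```python
# Illustrative AI-system inventory; the example entries are invented.
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    risk_level: RiskLevel
    processes_personal_data: bool  # True also triggers parallel GDPR duties

inventory = [
    AISystem("Guest chatbot", "ExampleVendor A", "Customer service",
             RiskLevel.LIMITED, processes_personal_data=True),
    AISystem("Biometric entry gate", "ExampleVendor B", "Access control",
             RiskLevel.HIGH, processes_personal_data=True),
    AISystem("Demand forecasting", "in-house", "Ticket optimization",
             RiskLevel.MINIMAL, processes_personal_data=False),
]

# High-risk systems need the full August 2026 compliance package.
for system in (s for s in inventory if s.risk_level is RiskLevel.HIGH):
    print(f"High-risk: {system.name} ({system.vendor})")
```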

For many companies, modern SaaS solutions may be a practical path to compliance, as documentation, security, and updates are often built into the platform. This reduces internal complexity and makes it easier to meet ongoing requirements.

Implications for the experience industry

In the experience industry, AI is typically used for customer service, ticket optimization, personalization, and security. Most solutions will fall under limited risk, but the use of biometric access or automated decision-making may result in a high-risk classification.

With the AI Act, the industry gains a clear legal framework. This creates confidence among guests and partners, but also imposes requirements for documented responsibility. Companies that work systematically with AI compliance will be in a stronger position in both tenders and partnerships.

The AI Act towards 2027

Although many requirements apply in 2026, implementation continues. In August 2027, the extended transition period ends for certain high-risk AI embedded in products regulated under other EU legislation, and those systems must then comply as well.

The AI Act is therefore not a one-time task, but an ongoing compliance discipline. Companies that already work systematically with documentation, governance, and vendor management will be best prepared in the years ahead.

Questions and answers about the AI Act 2026

Is the AI Act fully applicable in 2026?

The AI Act is being implemented gradually. In 2026, the prohibitions on certain practices, the requirements for general-purpose AI models, and the AI literacy obligations already apply, while the main requirements for high-risk AI become fully applicable from August 2026.

What is a general-purpose AI model (GPAI)?

A general-purpose AI model is a broadly applicable model, such as a large language model, that can be used for many purposes. Providers are subject to specific documentation and transparency requirements.

When must high-risk AI be fully compliant?

Most requirements for high-risk AI apply from August 2026. Companies should have risk management, documentation, and governance in place before this date.

Does the AI Act also apply to small businesses?

Yes. The AI Act applies to all companies that develop, market, or use AI in the EU. Enforcement may be proportionate, but there are no general exemptions.

How does the AI Act relate to the GDPR?

The AI Act regulates the safety and risk management of the AI system itself, while the GDPR regulates the processing of personal data. If AI processes personal data, both regulatory frameworks apply in parallel.