Row of futuristic humanoid robots in front of an EU flag, symbolising the regulation and development of artificial intelligence in Europe.

EU AI Act – What’s coming in August 2025?

Further provisions of the European AI Regulation will come into force on 2 August 2025. Europe’s AI Act – the first of its kind worldwide – aims to protect citizens and fundamental rights from the negative effects of AI while promoting innovation and trustworthy AI. For companies – especially entrepreneurs and executives – the question now arises: What will change from August 2025 and what do I need to bear in mind in practice?

Background: The EU AI Act in brief

The EU AI Act is a comprehensive set of rules that regulates artificial intelligence uniformly across the EU. Its aim is to make AI safe and transparent without stifling innovation: to promote human-centric and trustworthy AI applications while protecting health, safety and fundamental rights. At its core, the AI Act pursues a risk-based approach: depending on the risk posed by an AI application, different obligations apply – ranging from barely regulated to strictly prohibited.

Important: The AI Act has already been adopted and formally entered into force on 1 August 2024. However, many provisions will only apply after transition periods. The deadline of 2 August 2025, when the first requirements come into effect, is particularly relevant for companies. Below is an overview of the timetable.

Roadmap: When will which rules come into force?

The AI Act will be implemented in stages over several years:

  • 1 August 2024: The regulation formally entered into force (start of transition periods).
  • After 6 months (from 2 February 2025): Prohibited AI practices (e.g. for intentional deception) must be removed from the market.
  • After 12 months (from 2 August 2025): Governance rules come into force and obligations for general-purpose AI models (so-called foundation models) take effect.
  • After 24 months (from 2 August 2026): Most of the regulations will take effect, especially for high-risk AI systems in accordance with Annex III (including transparency obligations for limited-risk AI).
  • After 36 months (from 2 August 2027): Final provisions will take effect, e.g. for certain high-risk applications in specific sectors.

For companies, this means that the first round of new obligations will apply from August 2025 – so now is the time to prepare. Below, we highlight the most important practical implications.

What does this mean for companies in practical terms?

Prohibited AI practices: What is off limits?

Since February 2025, certain AI applications with ‘unacceptable risk’ have been banned throughout the EU. Companies are not allowed to develop or use such systems. These include, for example:

  • Manipulative AI systems that influence people’s behaviour and undermine their free will (especially by exploiting vulnerable groups).
  • Social scoring systems along the lines of the Chinese model, which rate people according to their behaviour or personal characteristics.
  • The covert mass collection of biometric data (e.g. scraping faces from the internet or CCTV footage) for identification purposes.
  • Emotion recognition in sensitive areas such as the workplace or school.

In practice, this means: Stay away from such applications! If your company has experimented in this direction, these projects must be discontinued now at the latest. Violations can have serious consequences (see ‘Supervision & penalties’ below).

Transparency: Disclose when AI is involved

A central principle of the AI Act is transparency. Users have a right to know when they are dealing with AI. Specifically, companies must:

  • Communicate where AI is used. If a system interacts with people (e.g. a chatbot on your website), it must be made clear that it is AI. Customers, applicants or employees must not be confronted with AI without their knowledge.
  • Label AI-generated or manipulated content. For example, if you create product images using AI or generate advertising texts automatically, these should be marked as AI-generated. This prevents AI fakes from being mistaken for the real thing. (There are exceptions for clearly artistic or satirical content.)

These transparency requirements will become mandatory by August 2026 at the latest. But it’s worth starting early: transparency builds trust among customers and business partners. In practical terms, every company should keep track of which processes are AI-supported and prepare appropriate information for users. Practical examples include disclaimers under AI-generated images or a short sentence in chatbot dialogues (‘👋 I am an AI-supported assistant…’).
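As a purely illustrative sketch (in Python; the function names and wording below are invented for this example, not requirements from the AI Act), such disclosures could be wired into a website or shop system like this:

```python
# Illustrative sketch only: helper functions a website or shop system might use
# to disclose AI involvement. Names and wording are invented for this example.

AI_DISCLOSURE = "👋 I am an AI-supported assistant. A human colleague can take over at any time."

def chatbot_greeting(customer_name: str) -> str:
    """Open every chatbot conversation with a clear statement that the user is talking to AI."""
    return f"Hello {customer_name}! {AI_DISCLOSURE}"

def label_ai_generated(caption: str) -> str:
    """Attach a visible 'AI-generated' note to captions of AI-created images or texts."""
    return f"{caption} (image generated with AI)"

if __name__ == "__main__":
    print(chatbot_greeting("Ms. Example"))
    print(label_ai_generated("Our new summer collection"))
```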

High-risk AI: Strict requirements with transition period

The strictest requirements of the AI Act apply to so-called high-risk AI systems. These include AI applications used in safety-critical or fundamental rights-related areas. Examples (from Annex III of the Regulation) are AI in the following areas:

  • Biometrics: e.g. facial recognition systems for access control.
  • Critical infrastructure: traffic control, energy supply, etc.
  • Education and employment: software for applicant selection or employee monitoring.
  • Important services: AI for creditworthiness checks, insurance pricing or for official decisions (e.g. social benefits).
  • Law enforcement & border control: AI for police work, migration control.
  • Justice & democracy: systems that could suggest judgements or influence elections.

For such high-risk AI systems, the AI Act stipulates extensive compliance measures from 2026 onwards. Providers (manufacturers/distributors) of such systems must implement, among other things, a risk management system, data governance, technical documentation, logging, human oversight, and quality and cybersecurity measures. In addition, a conformity assessment, similar to certification, is required before the system can be placed on the market. Only after successful testing (and issuance of a declaration of conformity) may a high-risk AI system be used productively.

There are also obligations for users of such high-risk systems (i.e. companies that use an AI tool from a provider): they must strictly follow the operating instructions, monitor the use appropriately and react to problems or increasing risks – for example, by shutting down the system or reporting it to the manufacturer/authority. In other words, simply using it blindly is not an option – critical AI requires human oversight.

Most of these high-risk requirements will not take effect until August 2026, but companies should start planning now. Practical tips: conduct an AI inventory in your organisation – which AI systems do we already use, and which will we be using soon? Classify them according to risk (minimal/limited/high). For identified high-risk applications, either obtain a certificate of conformity from the provider by 2026 or, if you develop the system yourself, set up the necessary processes in good time. If in doubt, seek legal advice early to ensure correct classification. This will help you avoid unpleasant surprises when the deadline arrives.
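To make the inventory idea concrete, here is a minimal sketch in Python (class names, risk categories and example entries are invented for illustration and simplify the AI Act's categories):

```python
# Minimal sketch of an AI inventory with a simplified risk classification.
# Class names, categories and example entries are invented for illustration only.
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AISystem:
    name: str
    purpose: str
    provider: str                   # external vendor or "in-house"
    risk: RiskClass
    conformity_evidence: str = ""   # e.g. vendor's declaration of conformity (high-risk only)

inventory = [
    AISystem("CV screening tool", "applicant pre-selection", "Vendor X", RiskClass.HIGH),
    AISystem("Website chatbot", "customer support", "in-house", RiskClass.LIMITED),
    AISystem("Spam filter", "email triage", "Vendor Y", RiskClass.MINIMAL),
]

# Flag high-risk systems that still lack conformity evidence before the 2026 deadline.
for system in inventory:
    if system.risk is RiskClass.HIGH and not system.conformity_evidence:
        print(f"Action needed: obtain conformity evidence for '{system.name}'")
```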

Generative AI & foundation models: New requirements from 2025

The rapid rise of generative AI (such as ChatGPT, DALL-E, etc.) has led to specific rules for general-purpose AI (GPAI) in the AI Act. From August 2025, providers of such foundation models must meet certain requirements. What does this mean?

  • Transparency and documentation: Providers of generative AI models must provide technical documentation and comprehensive information packages before market launch. Among other things, they must summarise the data used to train the model (including whether and how copyrights were observed). This is to ensure that downstream developers and users understand what is behind the model and can comply with their own obligations.
  • Addressing fundamental rights and risks: Large models with potentially systemic effects are subject to additional requirements, such as risk assessments, cybersecurity tests and reporting obligations in the event of serious incidents. In short, very large AI systems will be subject to a kind of TÜV inspection before they can be used in EU products.

For ordinary companies that only use such AI models, this means that the tools (e.g. AI platforms or APIs) you obtain from third-party providers should become more transparent from 2025 onwards. You can, for example, request information from the provider about how the model was trained. If you develop a product yourself that integrates an external foundation model, the provider must give you enough information so that you can meet your own AI Act obligations. The EU is setting up a European AI Office for this purpose – a central AI supervisory authority that specifically oversees GPAI/foundation models. For entrepreneurs, this means that generative AI will remain usable, but the Wild West days are coming to an end. Expect more transparency requirements along the supply chain for AI models.

Supervision and fines: compliance is mandatory

As with the GDPR, the AI Act comes with supervisory authorities and sanctions. By August 2025 at the latest, every EU country must designate a competent supervisory authority. In Germany, as of late 2024, it is still unclear which authority will take on this role – the Federal Network Agency and the Federal Office for Information Security are among those under discussion. These bodies will monitor compliance with the AI Regulation and can intervene in the event of violations.

The penalties for violations are severe. Depending on the severity, fines of up to 35 million euros or 7% of the company’s global annual turnover – whichever is higher – may be imposed. In addition, non-compliant AI systems can be forcibly removed from the market. For companies, this means a considerable financial and reputational risk if they ignore the regulations.
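To illustrate the scale of that formula, a short worked example (the turnover figures are invented):

```python
# Worked example of the fine ceiling described above:
# up to EUR 35 million or 7% of global annual turnover, whichever is higher.
def max_fine(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

print(f"{max_fine(200_000_000):,.0f}")    # 35,000,000 – the fixed ceiling applies
print(f"{max_fine(1_000_000_000):,.0f}")  # 70,000,000 – 7% of turnover is higher
```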

The good news is that the AI Act is not intended to stifle innovation. On the contrary, the aim is to create legal certainty so that start-ups and SMEs in particular can work with AI. There are also plans for regulatory sandboxes – protected test environments where companies can try out AI solutions under supervision.

Conclusion for executives: Take AI regulation seriously, build up expertise early on and integrate AI compliance into your processes. Those who act proactively can use AI responsibly and thus secure competitive advantages – without fear of regulation.

Checklist: 5 most important obligations for SMEs from August 2025

  1. Do not use prohibited AI practices. Ensure that your company does not use prohibited AI applications (e.g. no social scoring, no manipulative influence on customers or employees).
  2. Make AI use transparent. Disclose where AI is used in your business – especially in customer- or employee-oriented systems. Also, label AI-generated content to avoid deception.
  3. Train and raise awareness among employees. Ensure that your team has a sufficient understanding of AI. Employees should know how AI tools work, where their limitations lie and how to use them correctly.
  4. Use high-risk AI in a controlled manner. If you use a high-risk AI system (e.g. in personnel selection or credit checks), use it only as instructed, monitor the results regularly and be prepared to intervene or shut down the system if problems arise. Document its use and inform the provider of any anomalies.
  5. Ensure compliance (for in-house development). Do you develop AI systems yourself or make significant modifications to AI from your suppliers? If so, you assume the role of the provider and must meet compliance requirements from 2025/26 onwards – from risk management and testing to registration. Plan certification and documentation processes at an early stage.

Further information: Use available resources to familiarise yourself with the topic in greater depth. For example, KI.NRW offers a detailed information paper, and tools such as the EU AI Act Compliance Checker can help you assign your applications to a risk class. Stay on the ball even after August 2025 – regulations are evolving, and regular reviews of your AI strategy for compliance will become a new component of good corporate governance.

This article was created with the support of OpenAI’s ChatGPT 4.5 DeepResearch.