5 May 2026

The AI Act Omnibus creates uncertainty. Here's what companies can do

Image credit: Guillaume Périgois

A key pledge of Ursula von der Leyen's second term as President of the European Commission was to reduce bureaucracy and promote the implementation of a Digital and AI Decade for Europe. The diagnostic case for that pledge had been set out by Mario Draghi in his September 2024 report "The Future of European Competitiveness". The report described a "mid-technology trap": the EU locked into a cycle of slow, incremental innovation and low industrial dynamism, held back by insufficient access to capital and a growing regulatory burden. The Digital Omnibus Package was a first legislative response to that diagnosis.

The AI Act is back on the negotiation table

In November 2025, the European Commission published COM/2025/836, the AI Omnibus, a package of proposed amendments to the AI Act framed as a simplification and implementation-readiness measure. Trilogue negotiations between the Commission, the Council of the EU (position ST 7322/26) and the European Parliament (position A10-0073/2026) have been stalled since the end of April. After what was expected to be a decisive session, negotiators parted ways without agreement, and no date has been set to resume discussions. The central sticking point: whether products under Annex I.A should comply with the AI Act directly or should fall under existing sectoral law instead. With high-risk AI obligations due to apply on 2 August 2026, the breakdown tightens the window for any amendment to take effect in time.

The urgency of that timeline has not prevented a predictable market response: many companies are waiting. Some do not know what to do. Others have decided that the political uncertainty justifies pausing compliance work. Understandable as both positions are, they carry underpriced risk. This article explains how to create certainty in uncertain times and answers the question: what to do next?

The Commission's proposal in a nutshell

The Commission's November 2025 proposal is primarily concerned with implementation readiness. Its core measures include:

  • A new Article 113 mechanism that would delay high-risk obligations by linking their entry into application to a Commission decision confirming that compliance support – harmonised standards, guidelines, and enforcement infrastructure – is sufficiently developed;
  • Simplified conformity procedures for startups, SMEs, and small mid-cap companies;
  • A weakened Article 4 AI literacy obligation (from a binding "shall ensure" to a non-binding "shall encourage");
  • A new Article 4a establishing a direct legal basis for processing special category personal data for bias detection, extended to all AI systems rather than high-risk systems only.

What the Commission's proposal does not include: any structural change to Annex I or to the requirements for high-risk AI systems laid out in Articles 9 through 15.

Where the institutions agree

Since the Omnibus publication, the two other EU institutions have formulated their positions on what should change in the AI Act and what should remain the same.

Backstop dates: shared destination but contested mechanism

Arguably the most important point of agreement between Commission, Parliament and Council is the principle of delayed application for high-risk AI systems. The backstop dates are agreed across all institutions: 2 December 2027 for Annex III systems and 2 August 2028 for Annex I systems. The divergence, however, is on mechanism. The Commission proposes a discretionary trigger: obligations apply only after the Commission issues a decision confirming that standards and guidelines are in place, with the backstop dates as a hard ceiling. The Council retains fixed dates as the primary timeline and accepts the Commission trigger only as a potential accelerator if support becomes available earlier. The Parliament rejects the trigger mechanism entirely and mandates fixed dates only.

Stable high-risk requirements: Articles 9 through 15 remain untouched

A second, less-discussed point of convergence is more substantive: none of the three institutional positions proposes modifying the core compliance requirements for high-risk AI. The requirements for high-risk AI systems laid out in Articles 9 through 15 are untouched in every text. The Omnibus debates what gets classified as high-risk and when obligations apply. It does not debate what those obligations require. For companies in scope, this is the more important agreement: the substance of compliance is stable even as its timeline and perimeter remain in flux.

Simplification limits: co-legislators pull back from the initial proposal

A third pattern is worth noting. The Commission's simplification agenda has clear institutional limits. On database registration, for example, the Commission proposes deleting the obligation for systems that self-assess as non-high-risk under Article 6(3). Both the Parliament and the Council reject deletion and align on simplified registration instead. This is not an isolated case. Across the Omnibus, the co-legislators are consistently pulling the Commission's deregulatory proposals back toward the original framework, or softening them, as with Article 4. The result is a negotiation that will simplify at the edges, not at the core.

Expansion of the AI Office's jurisdiction

A fourth point of agreement concerns enforcement. All three institutions support expanding the AI Office's jurisdiction to cover market surveillance of GPAI-based systems as well as AI systems embedded in very large online platforms and search engines. As with the application timeline, the divergence is on mechanism rather than direction: the Commission would define enforcement procedures via implementing act; the Council legislates a full enforcement regime directly in the Regulation; the Parliament accepts the expanded scope while favouring lighter centralisation and retained Member State jurisdiction in certain cases. The principle is shared; the procedural detail is an open question for the trilogue to clarify.

New prohibitions on non-consensual intimate imagery

Finally, one further development sits outside the main institutional fault lines. Both the Council and the Parliament have introduced new prohibited practices targeting AI-generated non-consensual intimate imagery and child sexual abuse material. These provisions are absent from the Commission's original proposal but reflect the political momentum that has built around the issue since the Omnibus publication. The Commission has since launched investigations into platforms in this area, and the additions appear to be welcomed across all three institutions. This is a reminder that the Omnibus is not purely a simplification exercise: the co-legislators can equally expand the Act's scope.

What this means for companies

For companies watching the negotiations, a word of caution first: the trilogue is not a mechanical aggregation of positions. Each institution enters with a mandate, and final texts regularly reflect compromises that no single party fully anticipated. What the current positions do allow is a reading of probabilities. Where all three institutions converge, the outcome is likely to hold; where one stands alone, its position is unlikely to survive intact. On that reading, the picture that emerges is more stable than the political noise suggests. The backstop dates are likely to hold. The core compliance requirements of Articles 9 through 15 are unlikely to be modified. And on enforcement and scope, the direction of travel is toward expansion, not reduction.

Where the institutions diverge

Annex I.A: the Parliament's "sector exit" proposal

That being said, the most structurally significant divergence concerns Annex I. In its position, the Parliament proposes deleting Annex I Section A in its entirety. This section covers twelve product sectors: medical devices, IVD medical devices, machinery, radio equipment, toys, recreational craft and personal watercraft, lifts, ATEX equipment, pressure equipment, cableway installations, personal protective equipment, and gas appliances. Under the Parliament's replacement mechanism, the AI Act would become subsidiary to sectoral law for embedded AI in these categories. The Commission would then be empowered, but not obligated, to integrate AI Act Chapter III requirements into each of the twelve sectoral instruments via delegated acts.

The Commission and the Council both reject this restructuring, or at least do not propose it in their initial positions. They retain the current Section A/B structure, making the "sector exit" a Parliament-only position rather than a shared institutional direction. The Parliament's proposal is an attempt to simplify what it characterises as an unclear overlap between AI Act obligations and existing sectoral legislation: which regulation prevails for an Annex I.A device such as an AI-based medical device? Critics argue that this could have been resolved through guidance rather than structural reform. For a company developing an AI-based medical device, the compliance picture was, in theory, clear: satisfy the requirements of the Medical Devices Regulation, ensure the AI system meets Articles 9 through 15, and incorporate Article 17 of the AI Act into the quality management system. The Parliament's answer to this debate is a structural change that raises harder questions of its own.

The harmonised standards problem

The deeper structural problem concerns how compliance is demonstrated in practice. Under the EU's regulatory framework, companies prove conformity with a regulation primarily through harmonised standards: technical documents that, once met, create a legal presumption of compliance without requiring article-by-article proof. CEN/CENELEC JTC 21 is currently developing exactly such standards for the AI Act: horizontal documents covering all sectors, giving providers of high-risk AI systems a shared basis for demonstrating compliance with Articles 9 through 15. If Annex I.A sectors leave the AI Act's scope, those horizontal standards no longer apply to them by default. The presumption of conformity mechanism only works for regulations they are actually subject to.

Many of the twelve affected sectors currently contain no AI-specific requirements in their existing legislation. For that gap to be filled, each sector would first need to be updated via delegated acts under the Parliament's proposed mechanism, and new harmonised standards would then need to be developed for each sector separately. Whether and when that happens is an open question.

What seems likely is that the single horizontal standard, which requires one development effort and produces consistent requirements applicable across all sectors, gives way to up to twelve separate processes, run by different bodies, on different timelines, with potentially divergent requirements. The net result of the Parliament's proposal, if adopted, would not be simplification. It would be at best a delay and at worst a fragmentation of compliance frameworks precisely during the period of fastest AI development and deployment growth in these sectors.

Conformity assessment: procedural decisions on hold

A second divergence is directly linked to the first, and has concrete consequences for companies in Annex I.A sectors. The Commission and Council both accept a unified assessment procedure for conformity bodies seeking designation under both the AI Act and existing product safety legislation, simplifying the path for bodies that already operate across both frameworks. The Parliament rejects this entirely, a position that follows logically from its sector exit proposal. If Annex I.A systems are removed from AI Act scope, a unified procedure serves no purpose. For companies in Annex I.A sectors, this divergence has a concrete cost. Which conformity assessment route to follow and which notified body to engage are procedural decisions that cannot be fully resolved until the trilogue outcome is clear. By placing the AI Act within the Omnibus package, the Commission has made these decisions contingent on a political negotiation – a form of uncertainty that would not exist had the framework remained stable.

What this means in practice

For Annex III providers: no reason to wait

For companies developing systems classified as high-risk under Annex III – AI used in recruitment, credit scoring, education or law enforcement, for example – no institutional proposal in the current Omnibus process touches the substance or scope of existing obligations. The Annex I debate is structurally irrelevant to them. Articles 9 through 15 apply. On the application timeline, a fixed date no later than the agreed backstop of 2 December 2027 is the most probable outcome. Either way, the interval between now and application is not a reason to wait. It is the window in which the work of building adequate company structures, incorporating the QMS requirements of Article 17 and the technical requirements of Articles 9 through 15, needs to happen.

For Annex I.A providers: scope is live, requirements are not

For companies with systems covered under Annex I.A (such as medical devices or machinery), the scope question is live and should not be dismissed. At the same time, the requirements are not in question. Whatever the outcome of the trilogue negotiations, the substantive obligations of Articles 9 through 15 will govern high-risk AI systems in these categories in some form, and the content of those requirements is equally stable. For example, European notified bodies have developed structured AI audit frameworks for medical devices under MDR/IVDR, and the current joint Team-NB/IG-NB questionnaire adopted in November 2024 explicitly acknowledges "considerable overlap" between its seven-domain framework and what the AI Act requires. The questionnaire is voluntary and applies to one sector only; the AI Act is what makes these requirements legally mandatory and horizontally applicable across all twelve Annex I.A categories.

For companies in scope, the core building blocks are knowable now: a QMS, a documented risk management system, structured data governance and technical documentation, for example. For systems under Annex I.A, this mostly means expanding existing documentation rather than starting from scratch. Doing so is not a bet on one institutional position prevailing; it is investment in the layer that every position agrees on. The procedural decisions that cannot be resolved before the trilogue concludes are genuinely outcome-dependent: which notified body to engage, which conformity assessment route to follow, and how sector-specific overlaps between the AI Act and existing product safety law will ultimately be settled. The uncertainty is real, but it points to a genuine path forward: build the substantive compliance framework now, and make the procedural decisions once the trilogue outcome is clear.

Conclusion

What to do now?

The Omnibus negotiations are running against a hard deadline: the institutions must reach agreement before the existing obligations start to apply. That urgency is theirs to manage. For companies providing or deploying high-risk AI, the right response to the uncertainty is not to wait for it to resolve. It is to start building.

The analysis above points to one finding that holds across both Annex III and Annex I.A: no institutional proposal changes what high-risk AI compliance substantively requires. For Annex III systems, the picture is fully stable. What gets classified as high-risk may shift at the edges, and when obligations formally apply may shift by months, but what those obligations require does not shift at all. For Annex I.A systems, the scope is a live question, but the content of what compliance demands is likely to be the same regardless of how it resolves. As a result, the work of building a quality management system, structuring data governance and producing technical documentation is a sound investment whatever the outcome: required directly if the AI Act applies, and recognised best practice if it does not. The companies best positioned when the trilogue concludes will not be those that waited for certainty. They will be those that created it.

Where KvJ can help

At KvJ Consulting, we help companies developing and deploying high-risk AI systems to meet the requirements of the EU AI Act. Our work is focused on implementing quality management systems, drafting technical documentation, and developing enterprise governance structures that are both effective and built to withstand audits. If you want to understand how your organisation stands against the requirements and what to do next, please get in touch.
