AI Governance in 2025: What you need to know

Highlights

  • What is the current sentiment around AI governance? 
  • An overview of the EU AI Act
  • An explanation of the 4 categories that make up the risk-based approach 
  • A quick look at the consequences of non-compliance 
  • A sample framework for compliance


Increasingly, any conversation about AI touches on governance. And, like everything else associated with AI, it’s a topic that's incredibly nuanced. There’s a lot to consider, and some of the viewpoints are in opposition to one another, with the needs of big business and tech companies somewhat at odds with those of politicians and lawmakers. 

Here in the EU, we have the EU AI Act, which aims to establish guardrails and ensure that organisations use AI responsibly. Its phased-in approach offers companies time to prepare and comply, but I think there is still a lack of awareness around it and its implications. 

And not everyone thinks it’s a good idea. In July 2025, several industry leaders, including the CEOs of Airbus and BNP Paribas, called for a two-year pause on the Act, arguing that it threatens the EU’s position in the global AI race. Pushback of this kind could further complicate and potentially delay enforcement, and it raises questions about the legislation’s efficacy.

As this story continues to unfold, here are some interesting insights worth highlighting:

  • Just a fifth (21%) of respondents in a PwC report on AI in the workplace said that their organisation has an AI and/or GenAI governance structure in place. This is an increase from 7% in June 2024. 
  • The survey also indicated that there will be a significant uplift over the next 12 months in the level of activity needed to improve governance of AI systems and controls (Jan 2025: 61%; June 2024: 46%). 
  • This is also reflected in their finding that a third (33%) of respondents now have a dedicated AI leader across their business (up from only 7% in June 2024). 
  • Only a third (31%) of organisations have a formal, comprehensive AI policy in place, highlighting a disparity between how often AI is used versus how closely it’s regulated in workplaces.
  • According to a McKinsey report, a CEO’s oversight of AI governance (that is, the policies, processes, and technologies necessary to develop and deploy AI systems responsibly) is one of the elements most correlated with higher self-reported bottom-line impact from an organisation’s GenAI use.

What I take from this and other research is that businesses are taking action, particularly larger ones, but there is still much to be done. And for SMBs, where the budget to hire an AI Governance expert may simply not be there, it is yet another obstacle, another piece of legislation that draws on their resources. 

However, as a society, we have a responsibility, a duty, to protect each other, especially those who are more vulnerable. We do need guardrails, we need due diligence, we need a voice questioning the effects AI is having on the human race, society, and our future.

As we debate these very real concerns, staying informed and educated is paramount. 

So, let’s start with a reminder of the legislation.


The EU Artificial Intelligence (AI) Act

As with any piece of legislation, the devil is in the details. I’ll provide links, and I strongly encourage you to discuss the implications of the Act for your company, but here’s a general overview: 

Here’s what the Department of Enterprise, Tourism and Employment says:

  • The EU AI Act entered into force on 1 August 2024, and is directly applicable across the EU.
  • The Act applies equally to both private and public sectors. 
  • There are exceptions relating to national defence and national security, scientific R&D, R&D on AI systems and models, certain open-source models, and personal use.
  • The Act is not a blanket regulation applying to all AI systems. It adopts a risk-based approach based on 4 categories: unacceptable risk, high risk, limited risk, and minimal or no risk.
The EU AI Act’s 4 risk categories

#1 Unacceptable risk

All AI systems considered a clear threat to the safety, livelihoods and rights of people are banned. The AI Act prohibits eight practices, namely:

  1. harmful AI-based manipulation and deception
  2. harmful AI-based exploitation of vulnerabilities
  3. social scoring
  4. individual criminal offence risk assessment or prediction
  5. untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
  6. emotion recognition in workplaces and education institutions
  7. biometric categorisation to deduce certain protected characteristics
  8. real-time remote biometric identification for law enforcement purposes in publicly accessible spaces

#2 High risk

AI use cases that can pose serious risks to health, safety or fundamental rights are classified as high-risk. These high-risk use-cases include:

  • AI safety components in critical infrastructures (e.g. transport), the failure of which could put the life and health of citizens at risk
  • AI solutions used in educational institutions that may determine access to education and the course of someone’s professional life (e.g. scoring of exams)
  • AI-based safety components of products (e.g. AI application in robot-assisted surgery)
  • AI tools for employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment)
  • Certain AI use-cases utilised to give access to essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)
  • AI systems used for remote biometric identification, emotion recognition and biometric categorisation (e.g. an AI system used to retroactively identify a shoplifter)
  • AI use-cases in law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
  • AI use-cases in migration, asylum and border control management (e.g. automated examination of visa applications)
  • AI solutions used in the administration of justice and democratic processes (e.g. AI solutions to prepare court rulings)

#3 Limited risk

This refers to the risks associated with a need for transparency around the use of AI. The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary to preserve trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision.

Moreover, providers of generative AI have to ensure that AI-generated content is identifiable. On top of that, certain AI-generated content should be clearly and visibly labelled, namely deep fakes and text published to inform the public on matters of public interest.
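
To make this concrete, here’s a minimal Python sketch of what a chatbot disclosure and a machine-readable content label might look like in practice. Every name in it (LabelledContent, label_generated_content, the disclosure string) is my own illustration, not anything specified by the Act:

    # Minimal sketch of the Act's transparency ideas: tell the user they are
    # talking to a machine, and attach machine-readable provenance metadata
    # to generated content. All names are illustrative, not official.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

    @dataclass
    class LabelledContent:
        text: str
        ai_generated: bool
        model_id: str
        generated_at: str

    def label_generated_content(text: str, model_id: str) -> LabelledContent:
        """Wrap model output with provenance metadata so downstream systems
        (and readers) can identify it as AI-generated."""
        return LabelledContent(
            text=text,
            ai_generated=True,
            model_id=model_id,
            generated_at=datetime.now(timezone.utc).isoformat(),
        )

    def chatbot_reply(raw_model_output: str) -> dict:
        # Surface the disclosure alongside the labelled answer.
        return {
            "disclosure": AI_DISCLOSURE,
            "reply": label_generated_content(raw_model_output, model_id="example-model"),
        }

Both obligations reduce to surfacing provenance: one message for the human, one metadata record for the machines.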

#4 Minimal or no risk

The AI Act does not introduce rules for AI that is deemed minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category. This includes applications such as AI-enabled video games or spam filters.

Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
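
One way to internalise the four tiers is as a triage table for your own use cases. Here’s a toy Python sketch; the mapping simply paraphrases the examples above and is illustrative only, not a legal classification tool:

    # Toy triage of AI use cases against the Act's four risk tiers.
    # The mapping paraphrases the examples in this article; it is not
    # a legal classification tool.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "strict obligations apply"
        LIMITED = "transparency/disclosure obligations apply"
        MINIMAL = "no new obligations under the Act"

    EXAMPLE_USE_CASES = {
        "social scoring": RiskTier.UNACCEPTABLE,
        "emotion recognition at work": RiskTier.UNACCEPTABLE,
        "CV-sorting software for recruitment": RiskTier.HIGH,
        "exam scoring": RiskTier.HIGH,
        "credit scoring": RiskTier.HIGH,
        "customer-facing chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
        "AI-enabled video game": RiskTier.MINIMAL,
    }

    def triage(use_case: str) -> RiskTier:
        """Default unknown cases to HIGH so they get human review."""
        return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)

    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.name} ({tier.value})")

Defaulting unknown cases to high risk is deliberately conservative: anything you can’t confidently place deserves human review.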

Guidelines for large AI models

Again, according to the European Commission website:

General-purpose AI (GPAI) models can perform a wide range of tasks and are becoming the basis for many AI systems in the EU. Some of these models could carry systemic risks if they are very capable or widely used. To ensure safe and trustworthy AI, the AI Act puts in place rules for providers of such models, including transparency and copyright-related rules. For models that may carry systemic risks, providers must assess and mitigate those risks. The AI Act’s rules on GPAI took effect on 2 August 2025.

What is the timeframe for the phased approach to implementation?

The AI Act entered into force on 1 August 2024 and will be fully applicable 2 years later on 2 August 2026, with some exceptions:

  • prohibitions and AI literacy obligations entered into application from 2 February 2025
  • the governance rules and the obligations for general-purpose AI models became applicable on 2 August 2025
  • the rules for high-risk AI systems embedded into regulated products have an extended transition period until 2 August 2027

Source: European Commission 
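
If you’re planning compliance work, it can help to treat these milestones as data. A small sketch, using only the dates listed above:

    # The Act's phased application dates, expressed as data so a compliance
    # plan can flag which obligations already apply on a given day.
    from datetime import date

    MILESTONES = {
        date(2024, 8, 1): "Entry into force",
        date(2025, 2, 2): "Prohibitions and AI literacy obligations apply",
        date(2025, 8, 2): "Governance rules and GPAI obligations apply",
        date(2026, 8, 2): "Act fully applicable (general rule)",
        date(2027, 8, 2): "End of extended transition for high-risk AI in regulated products",
    }

    def obligations_in_force(today: date) -> list[str]:
        return [label for d, label in sorted(MILESTONES.items()) if d <= today]

    print(obligations_in_force(date(2025, 9, 1)))
    # ['Entry into force', 'Prohibitions and AI literacy obligations apply',
    #  'Governance rules and GPAI obligations apply']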

What are the consequences for non-compliance? 

The EU AI Act sets out the maximum applicable penalties as follows:

  • Up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements on prohibited practices or non-compliance related to requirements on data
  • Up to €15 million or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Act
  • Up to €7.5 million or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request from a competent authority

For each category of infringement, the threshold would be the lower of the two amounts for SMEs, and the higher for other companies. 
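
Expressed as arithmetic, the “whichever is higher” ceiling, and its inversion for SMEs, looks something like this sketch (figures from the list above; illustrative only, not legal advice):

    # Rough sketch of the maximum-penalty arithmetic described above.
    # For each tier the ceiling is the HIGHER of a fixed amount and a
    # percentage of worldwide annual turnover; for SMEs it is the LOWER.

    PENALTY_TIERS = {
        "prohibited_practices":   (35_000_000, 0.07),   # €35m or 7% of turnover
        "other_obligations":      (15_000_000, 0.03),   # €15m or 3%
        "misleading_information": (7_500_000, 0.015),   # €7.5m or 1.5%
    }

    def max_penalty(tier: str, annual_turnover_eur: float, is_sme: bool) -> float:
        fixed, pct = PENALTY_TIERS[tier]
        turnover_based = pct * annual_turnover_eur
        return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

    # A company with €1bn turnover infringing a prohibited practice:
    print(max_penalty("prohibited_practices", 1_000_000_000, is_sme=False))  # 70000000.0
    print(max_penalty("prohibited_practices", 1_000_000_000, is_sme=True))   # 35000000.0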

For a list of the authorities in Ireland responsible for implementing and enforcing the Act within their respective sectors, see the Department of Enterprise, Tourism and Employment’s EU AI Act page (linked below).

Source: Department of Enterprise, Tourism and Employment 

AI literacy

There is an expectation that everyone will become AI literate.

Article 4 of the AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy of their staff and other persons dealing with AI systems on their behalf. This recently published Q&A from the AI Office provides guidance on matters relating to AI literacy.

Source: https://enterprise.gov.ie/en/what-we-do/innovation-research-development/artificial-intelligence/eu-ai-act/

Additional reading: 

The EU Artificial Intelligence Act

EU Artificial Intelligence Act 

EU AI Act (the Act itself)

What is the General-Purpose AI Code of Practice?

To facilitate the implementation of the Act, the European Commission collaborated with independent experts to develop the General-Purpose AI Code of Practice. It’s a voluntary tool that provides organisations within the legislation’s scope with guidance and assistance, enabling them to comply with the EU AI Act. 

As stated on the Commission’s website, AI model providers who voluntarily sign it can demonstrate compliance with the AI Act by adhering to the code. This will reduce their administrative burden and provide them with more legal certainty than if they were to prove compliance through other methods. 

Steps to take to ensure EU AI Act compliance

I think this sample roadmap from IBM provides an excellent checklist of things to consider (a small monitoring sketch follows the list):

  • Visual dashboard: Use a dashboard that provides real-time updates on the health and status of AI systems, offering a clear overview for quick assessments.
     
  • Health score metrics: Implement an overall health score for AI models by using intuitive and easy-to-understand metrics to simplify monitoring.
     
  • Automated monitoring: Employ automatic detection systems for bias, drift, performance and anomalies to help ensure models function correctly and ethically.
     
  • Performance alerts: Set up alerts for when a model deviates from its predefined performance parameters, enabling timely interventions.
     
  • Custom metrics: Define custom metrics that align with the organisation's key performance indicators (KPIs) and thresholds to help ensure AI outcomes contribute to business objectives.
     
  • Audit trails: Maintain easily accessible logs and audit trails for accountability and to facilitate reviews of AI systems' decisions and behaviours.
     
  • Open source tools compatibility: Choose open source tools compatible with various machine learning development platforms to benefit from the flexibility and community support.
     
  • Seamless integration: Ensure that the AI governance platform integrates seamlessly with the existing infrastructure, including databases and software ecosystems, to avoid silos and enable efficient workflows.
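
As promised above, here’s a toy illustration of the automated-monitoring and performance-alert items. The metric names and thresholds are invented for the example; a real deployment would wire this to your evaluation pipeline:

    # Toy illustration of "automated monitoring" and "performance alerts"
    # from the checklist above. Metric names and thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class Thresholds:
        min_accuracy: float = 0.90  # performance floor
        max_drift: float = 0.15     # e.g. a population stability index
        max_bias_gap: float = 0.05  # accuracy gap between demographic groups

    def health_check(metrics: dict[str, float], t: Thresholds = Thresholds()) -> list[str]:
        """Return alert messages for any metric outside its predefined bounds."""
        alerts = []
        if metrics["accuracy"] < t.min_accuracy:
            alerts.append(f"accuracy {metrics['accuracy']:.2f} below floor {t.min_accuracy}")
        if metrics["drift"] > t.max_drift:
            alerts.append(f"drift {metrics['drift']:.2f} above limit {t.max_drift}")
        if metrics["bias_gap"] > t.max_bias_gap:
            alerts.append(f"bias gap {metrics['bias_gap']:.2f} above limit {t.max_bias_gap}")
        return alerts

    # Example reading from a nightly evaluation job:
    print(health_check({"accuracy": 0.87, "drift": 0.21, "bias_gap": 0.03}))
    # ['accuracy 0.87 below floor 0.9', 'drift 0.21 above limit 0.15']
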
Challenges to implementation

Ownership of compliance oversight

I’m reminded of the conversations surrounding the EU Accessibility Act (2025) when discussing who is responsible for overseeing compliance. Again, the larger enterprises will be better positioned, having the resources to allocate headcount. It’s early days, though. A recent Arthur Cox report shows that, despite growing adoption, 38% of organisations have not yet assigned responsibility for AI implementation to a specific individual. Interestingly, where responsibility had been assigned, there was no clear consensus on where it should sit: roles ranged from CTOs and Heads of Data to General Counsel and COOs.

As Ciaran Flynn, Head of Governance and Consulting Services, notes, “Establishing defined accountability structures will help organisations manage risk effectively and meet evolving regulatory expectations.”

Confusion around the legislation 

I think we can safely say that the expression “as clear as muddy water” applies to this piece of legislation. Earlier this year, the Swedish Prime Minister, Ulf Kristersson, weighed in, arguing that the EU’s artificial intelligence rules should be paused and criticising them as “confusing”. 

It’s a view reflected in the aforementioned Arthur Cox survey, with 25% of respondents acknowledging they are still unsure of their classification, that is, whether they are deployers or providers. 

And, of course, globally, other forces are weighing in on the legislation and whether it’s needed at all. 

Watch this space.

What now? 

As political and tech leaders debate the merits of imposing legislation on AI, I think this subject will gain momentum as voices grow louder. 

For now, the issue is compliance: working within the current regulations and making sure that you are following the law.

And I’d add that it’s also about common sense. Is there anything we’d want to implement unchecked?