
Navigating the EU AI Act: Essential Insights for CEOs on Talent Management

A Deep Dive

The EU’s AI Act came into force on August 1st, 2024, marking a watershed moment in the world of technological regulation. A first-of-its-kind framework, the AI Act is built on a risk-based approach to governance, applying to various touchpoints throughout the AI lifecycle.

Praised by some for its comprehensiveness, and denounced by others for its administrative burden, the AI Act has sparked a lively debate on the relationship between innovation and regulation. Whatever your stance, the Act is not going anywhere, and it will continue to evolve.

Business leaders must recognise that, in a tech-first world, a robust compliance model is a critical advantage. CEOs are under the spotlight in a market increasingly interested in corporate responsibility and ethical decision-making.

Our digital transformation consultants and recruitment specialists have put together this guide to help you get a better understanding of the AI Act’s key details, how it will shape the future, and what that means for your business.

Risk

Understanding the differences between the AI Act’s risk classifications is essential in helping you develop an effective, compliant AI strategy. The categorisations are as follows:

Minimal Risk: The majority of AI systems are classified as minimal risk (spam filters, recommender systems, etc.). Because they don’t represent a threat, they carry no regulatory obligations beyond voluntary self-regulation.

  • Example: Basic AI-powered video games, posing no risk to people

Specific Transparency Risk: AI systems that require transparency obligations fall into this category. Chatbots, for example, need to let users know they’re interacting with an AI. Providers must also design their systems so that any synthetic content is marked and machine-detectable.

  • Example: A customer service chatbot or AI-enabled marketing system, representing a limited scope of risk.

High Risk: High-risk AI systems fall under much tighter regulatory requirements. Systems that pose a substantial risk to health, safety, or fundamental rights – including those used in critical infrastructure – sit in this classification.

  • Example: AI systems used in law enforcement, energy distribution, transport, or healthcare.

Unacceptable Risk: Systems in this category are banned under the EU AI Act. This classification is reserved for systems that pose a clear threat to the livelihoods and rights of individuals.

  • Example: Social-scoring systems, real-time biometric identification systems (with narrow exceptions), purposely deceptive systems, and systems designed to exploit the vulnerabilities of specific demographics.

It’s vital to remember that under the risk-based approach, even unassuming technologies can fall into the high-risk category: a system’s risk level is determined by its application, rather than by the technology itself.
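To make that concrete, here’s a minimal, purely illustrative Python sketch. The tier labels mirror the categories above, but the use-case mapping is our own assumption rather than a definition from the Act; the point is simply that the same underlying model lands in different tiers depending on how it’s deployed.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    TRANSPARENCY = "specific transparency"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping from application context to risk tier.
# The tier follows the use case, not the underlying technology.
USE_CASE_TIERS = {
    "in-game NPC dialogue": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.TRANSPARENCY,
    "medical triage assistant": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a given application context.

    Unknown use cases default to HIGH so a human reviews them
    rather than letting a new deployment slip through unclassified.
    """
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

# The same underlying language model lands in different tiers:
print(classify("in-game NPC dialogue"))      # RiskTier.MINIMAL
print(classify("medical triage assistant"))  # RiskTier.HIGH
```

Defaulting unknown use cases to high risk is a deliberately conservative choice: it forces a human review rather than silently waving a new deployment through.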

Most of these categorisations represent an additional compliance dynamic for businesses. Developing a comprehensive, adaptable, and proactive compliance function is key to reducing the operational burden.

Alongside strengthening your security posture, good compliance is a sales enablement tool, particularly in an era of increasing regulatory scrutiny and heightened consumer awareness – customers are calling out for trustworthiness. Plus, businesses with strong governance signal greater longevity, which is typically more attractive to investors.

Compliance

While minimal-risk systems are largely self-regulated, those that fall into the high-risk classification demand significantly more governance. As detailed in the AI Act, high-risk systems require:

  • A Risk Management System – the risk management system must be equipped to mitigate risk across the entire AI lifecycle. This includes carrying out regular risk assessments, testing and monitoring (which may include testing in real-world conditions), and evaluating risk in accordance with the system’s intended purpose.

  • Data and Data Governance – data is the foundation of AI, and it must be relevant, representative, and free from bias. Organisations have a responsibility to store and process data safely and securely, in line with data protection laws. Steps must be taken to ensure the integrity of the data.

  • Technical Documentation – technical documentation must be finalised before the system reaches the market (or is put into service). The documentation needs to possess all of the information necessary to evaluate the compliance of the system.

  • Record Keeping – record keeping is crucial for ensuring the transparency and explainability of AI systems. This can include logging the data used to train the AI, its decision-making process, errors, and updates (a minimal logging sketch follows this list).

  • Human Oversight – systems must be developed in a way that allows for human oversight (or, as the Act rather frighteningly puts it, oversight by ‘natural persons’). This includes implementing a method that enables intervention. For certain high-risk systems, such as remote biometric identification, decisions must be verified by at least two people – with an exception for law enforcement.

  • Transparency and Provision of Information to Deployers – users and regulators must be aware of how the system works and its impact on the stakeholders involved. Providers must supply transparent information about their system’s capabilities (and limitations).

  • Accuracy, Robustness, and Cybersecurity – cyber resilience is an integral part of the journey toward true tech inclusivity and enablement. Systems must be developed to consistently maintain an appropriate level of security throughout their lifecycle. This includes the implementation of fail-safes and resiliency measures against third-party vulnerabilities.
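On the record-keeping point specifically, here’s a hedged sketch of what a minimal decision log might look like in practice. The class and field names are our own illustration (the Act’s actual logging requirements are more detailed), but it captures the spirit: every decision is timestamped, attributable to a model version, and reconstructable later.

```python
import json
import hashlib
from datetime import datetime, timezone

class DecisionLog:
    """Minimal append-only audit trail for an AI system's decisions."""

    def __init__(self, path: str):
        self.path = path

    def record(self, model_version: str, raw_input: str, decision: str) -> None:
        # Hash the input so the record stays useful for audits
        # without storing raw personal data in the log itself.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_sha256": hashlib.sha256(raw_input.encode()).hexdigest(),
            "decision": decision,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

log = DecisionLog("decisions.jsonl")
log.record("credit-model-v1.2", "applicant payload", "referred to human review")
```

The append-only JSON-lines format is a pragmatic choice here: it’s trivial to write, easy to query, and straightforward to hand to an auditor.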

What This Means for You:

Whether you’re on the verge of integrating new AI systems, in the middle of a digital transformation, or you’ve not yet even considered using AI, it’s important to recognise that the impact of the EU’s AI Act will be wide-reaching.

The Act will apply to businesses operating outside the EU. For example, if a non-EU provider puts products on the EU market, or the output of their systems is used within the EU, they will be in scope of the regulations. Note that businesses that handle data from EU citizens will also need to comply.

What might this heightened regulatory setting mean for your business?

Increased Costs – while the AI Act sets out to reduce costs in the long term, the changes represent an imminent spike in compliance overheads, primarily for those developing or deploying high-risk AI systems. Risk-based approaches tend to require more investment given the increased oversight involved. That could include regular testing and monitoring, record keeping, and conformity assessments.

  • How to manage it: Develop a comprehensive understanding of which regulations apply to your organisation. Prioritise efforts in critical business functions and high-risk areas where the compliance burden is at its heaviest. Leverage existing compliance frameworks (GDPR, Consumer Duty, etc.) to help you streamline your process under the new regulations.

Operational Disruption – Broadly speaking, any major regulatory update represents a certain level of operational disruption, especially if you’re forced to adapt your tech and compliance infrastructure. For example, this might involve building a new team, secondments, data migration, or developing a new documentation system.

  • How to manage it: Take inventory of all your current systems and ensure they’re classified. You can find helpful resources on the EU Commission’s website alongside the legislation itself, but if you’re unsure or lack the internal expertise, the services of compliance contractors are worth exploring. Again, prioritising your most critical functions – those whose downtime has the biggest impact on your business – can help you reduce disruption.

Legal Risk – Given the operational complexity of the modern organisation, increased liability (responsibility for AI-related harm) presents a unique challenge for today’s business leaders. Failure to comply with the AI Act could lead to fines of up to €35 million, or 7% of the offending company’s worldwide annual turnover for the preceding financial year, whichever is higher (the short calculation after the next bullet shows how quickly this scales).

  • How to manage it: Responsibility mapping and communication is key. Having the right people with the right level of access can prevent you from mishandling sensitive data or AI systems. Contractual protections and insurance coverage can be essential here. You may want to use dedicated AI consultants, and if so, it’s vital to ensure that any subject matter expertise is kept inside your business.
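As promised, a quick back-of-the-envelope illustration of that ceiling. The €35 million and 7% figures come from the Act’s prohibited-practice penalty tier; the turnover figure below is an invented example.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for prohibited-practice breaches:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
    """
    return max(35_000_000, 0.07 * annual_turnover_eur)

# For a hypothetical company turning over EUR 2 billion, the 7% cap dominates:
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```

The 7% figure overtakes the €35 million floor once turnover exceeds €500 million, so for large enterprises the percentage is the number to watch.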

Ethical Considerations – Ethics in AI is shaping up to be one of the most important conversations of our era. Businesses that can leverage advanced technology fairly, inclusively, and transparently can develop a distinct competitive advantage. Consumers are actively aware of the AI influx; a recent study from Forbes Advisor claimed that 75% of consumers are concerned about misinformation from AI.

  • How to manage it: Transparency and accountability are key. The lack of visibility around AI development and decision-making is firmly under the spotlight (reflected in the narrative of the AI Act). Building trust with stakeholders will demand transparent data usage and clear explanations of decision-making processes; the regulation emphasises the fundamental importance of safe data handling, democratic control, and AI literacy.

Scope

The scope of the AI Act is incredibly broad, and since it’s not sector-specific, many organisations will find themselves subject to its regulatory obligations. Specifically, the AI Act applies to:

  • Providers – those who develop AI systems and place them on the market in the EU.

  • Deployers – anyone (individual or entity) that uses AI in a professional capacity

  • Distributors – anyone in the supply chain who makes an AI system available in the EU.

  • Importers – anyone based in the EU who places a system from an outside provider onto the EU market.

Given the wide-ranging scope of the Act, there’s a good chance you’ll encounter its requirements at different touchpoints of your business, whether directly or through your partnerships. It’s worth familiarising yourself with the key dates of the phased implementation to ensure you’re prepared.

The AI Act became law on August 1st, 2024, with the implementation dates as follows:

  • 6 Months After the Act, Prohibited AI Systems: AI systems that fall under the prohibited (banned) category will be subject to regulatory obligations in February 2025, only 6 months after the Act became law. Businesses need to work quickly to assess their risk in this area (especially considering the extent of the fines for breaching compliance).

  • 12 Months After the Act, General Purpose AI: The rules for General Purpose AI (like Gemini and ChatGPT) will apply in August 2025.

  • 24 Months After the Act, High-Risk Systems: High-risk systems will fall in scope of the rules in August 2026. It’s important to note that some AI systems that don’t fall under the high-risk classification will still be in the scope of the transparency obligations (such as systems that create synthetic images).

  • 36 Months After the Act, High-Risk Systems as Safety Components: AI systems that are integral to the safe operation of a product will be subject to the rules in August 2027. Note that all high-risk classifications sit under the same general regulations – this milestone specifically covers AI embedded as a safety component within a regulated product.
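As a quick sanity check on those dates, here’s a short sketch deriving each milestone from the entry-into-force date (the milestone labels are our shorthand for the list above):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole months; the anchor is the 1st,
    # so there are no end-of-month edge cases to worry about.
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

MILESTONES = {
    "Prohibited AI systems": 6,
    "General-purpose AI": 12,
    "High-risk systems": 24,
    "High-risk safety components": 36,
}

for label, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months):%B %Y}: {label}")
# February 2025: Prohibited AI systems
# August 2025: General-purpose AI
# August 2026: High-risk systems
# August 2027: High-risk safety components
```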

Getting Prepared

As with most unprecedented regulatory changes, implementation dates sneak up a lot faster than people think. We saw a similar situation with the introduction of the Consumer Duty – the initial shockwaves were followed by a scramble to understand its implications.

To avoid breaching compliance and falling foul of the EU regulator, early planning is non-negotiable. Building up your internal capabilities and developing in-house expertise doesn’t happen overnight; it takes a carefully measured, proactive approach that continues to develop over time.

In a similar vein to preparing for a digital transformation, we recommend you:

1. Take stock of all your existing systems and all of the new systems you plan on implementing. After this, you can conduct a risk assessment to identify which classifications your AI falls under. This should involve engaging the relevant stakeholders both internally and externally. We’ve found that one of the biggest challenges organisations face when working with AI is a lack of understanding at the senior leadership level – strong stakeholder management can help you avoid this.

2. Develop a compliance timeline to help guide your project in the right direction. This should include timeframes for completing risk assessments, changes to your tech systems, and recruitment commitments (if you need to bring on dedicated compliance personnel).

3. Review your supply chain and your company’s vulnerability to third-party risk. Staying compliant depends on your ability to meet obligations at each touchpoint of the AI value chain, so make sure all of your functions are equipped to understand and mitigate their risk exposure.

4. Develop AI-specific policies and procedures. This will help you create a culture of compliance and transparency, providing proof of your due diligence efforts.

5. Keep an eye on updates from relevant regulatory bodies. The EU AI Act is likely to change over time, and hopefully, this will include greater detail around risk classification (the Act has been met with some criticism for ambiguity).

6. Build an incident response team to help you minimise disruption and reputational damage should you find yourself over-exposed to risk. A crisis management plan can fortify your lines of defence by outlining clear procedures for threat detection, risk assessment, and response.

Speaking of building teams, the AI Act is poised to add a new dynamic to the recruitment process, and employers must be prepared to respond to the growing complexity.

Hiring Implications

Compliance isn’t all about technology and processes – it’s also about people. The stringent requirements of the Act, especially for high-risk AI systems, underscore the need for a workforce that’s not only skilled in AI and digital transformation but also well-versed in regulatory compliance, data ethics, and risk management.

For CEOs, Team Leaders, hiring managers, decision-makers, and recruiters alike, this means that recruitment strategies should evolve to prioritise candidates with expertise in AI governance, legal compliance, and cybersecurity.

The increasing demand for such niche roles looks set to intensify competition in the talent market, making it essential for companies to refine their employer value propositions and invest in training their existing staff to close skills gaps.

Upskilling leaders will help develop a culture of compliance throughout the organisation, an essential piece of the talent attraction puzzle and a central pillar of ethical AI deployment, ensuring that human oversight comes first.

Strategic Recruitment to Build a Future-Ready Workforce

Given the AI Act's focus on transparency, accountability, and human oversight, businesses should consider how their hiring practices can support these objectives.

This also applies to the AI tools used throughout the recruitment process itself – as agencies increasingly leverage AI to streamline processes across the full talent lifecycle, these tools (and any suppliers using them) must be carefully selected.

CEOs should consider the strategic placement of AI compliance officers and data governance experts within their teams. These roles are not just about meeting regulatory demands but are also critical in safeguarding your company’s reputation and cultivating trust with stakeholders.

Here at Trinnovo Group, we have the privilege of working with the world’s most exciting tech-enabled businesses, and we’re already seeing the AI Act make its impact across a range of sectors. From executive search to digital transformation consulting, every aspect of our service is aligned to help organisations scale through an era of intensifying regulatory scrutiny.

If you want to discuss the implications of the AI Act on your business in more detail or explore strategies for effective compliance, contact the team directly here: https://www.trinnovogroup.com/contact-us.

Trinnovo Group is a community-led staffing and advisory business on a mission to build diversity, create inclusion, and encourage workplace innovation. We are Trust in SODA (digital tech), Broadgate (Finance and Regulatory), and DeepRec.ai (AI and Blockchain).