Revolutionising AI rules for global employers

Author: Rocio Carracedo Lopez

On 1 August 2024, the EU's first comprehensive Regulation on Artificial Intelligence (AI) (known as the "EU AI Act") entered into force. Once fully implemented, it is expected to have far-reaching implications for global employers using AI systems. Here we set out what HR departments need to know, whether or not their organisations are based in the EU.

How does the EU define AI systems? 

For the purposes of the Act, an AI system is defined as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

How are HR teams using AI systems? 

In a recent survey, we found that the top three goals for HR departments were:

  • improving the employee experience (22.7%);
  • saving time (18.4%); and
  • reducing HR's workload (18.4%).

AI has the potential to help HR achieve these aims. We heard in the research that HR professionals are making use of various AI systems, including those that help them to draft job descriptions and communications, generate policies and assist with data analysis.

The use of AI systems offers many advantages in the workplace, but it has the potential to generate legal, commercial and reputational risks, for example:

  • the use of inaccurate or incomplete data to make recommendations;
  • data protection and privacy issues, for example where information that users enter into an AI-powered chatbot is used to train its model and then resurfaces in responses to other users' questions; and
  • bias and discrimination in HR processes.

How is the EU addressing the risks generated by AI?

The EU has concerns about the potential harm these systems can cause to people's rights and is introducing a risk-based approach to regulating AI systems. The four risk levels are:

  • unacceptable;
  • high;
  • limited; and
  • low/minimal.

Each risk classification is based on the AI system's effect on its users: the higher the risk, the more rigorous the rules regulating the system.

The EU treats general-purpose AI (GPAI) models (also known as "foundation models") as their own category of risk. For the purposes of the Act, a GPAI model is an AI model that is "trained with a large amount of data using self-supervision at scale" and that "displays significant generality and is capable of competently performing a wide range of distinct tasks". Popular examples of GPAI models include OpenAI's GPT-4 and Google's BERT.

What does the EU AI Act mean for employers? 

The Act applies to providers, deployers, distributors, importers, product manufacturers and authorised representatives. The distinction between the different roles is important because each role will be subject to different requirements.

The most stringent obligations under the Act apply to "providers" (ie those that develop an AI system or GPAI model and place it on the market, or put it into service, under their own name or trademark). Employers could be considered a provider where they, for example, launch a high-risk AI system under their own name or trademark, make a substantial modification to a high-risk AI system that has already been placed on the market or put into service, or alter the intended purpose of an AI system.

The Act also applies to "deployers" (ie those using an AI system except where the AI system is used in a personal, non-professional activity). When using HR management systems, employers may fall into this category. However, deployers face fewer requirements than those demanded of providers. 

Unacceptable risk

Any systems that create unacceptable risks, such as emotion recognition systems in the workplace (eg an AI system that monitors employees' emotions through their facial expressions and tone of voice), will be prohibited. In practice, it is expected that there will be few circumstances in which employers will find themselves in this category.

High risk

AI systems that affect employment decisions may fall into the "high-risk" category, specifically technology that is used for employment, the management of workers and access to self-employment. In the employment context, the Act specifically lists high-risk AI systems intended for:

  • the recruitment or selection process (eg where AI is used to place targeted job advertisements, screen and filter job applications, and evaluate candidates); and
  • making decisions that impact the terms of work-related relationships, promotion and termination of contractual relationships; task allocation based on individual behaviours, personal traits or characteristics; and monitoring or evaluating performance. 

The reasoning behind the additional safeguards is that these types of high-risk AI systems may:

  • influence future career prospects, livelihoods and workers' rights;
  • perpetuate patterns of discrimination (eg against people with disabilities or women); or
  • undermine fundamental rights to data privacy and protection.

Due to the nature of the risks, providers will be subject to numerous obligations, including undergoing conformity assessment procedures, maintaining extensive technical documentation and ensuring the system is developed in a way that enables a high level of accuracy, robustness and cybersecurity.

Deployers will be required to, among other things:

  • follow the "instructions for use" provided by those behind the high-risk system;
  • assign appropriate human oversight to individuals in the organisation who are involved in deploying the AI system (see the sketch after this list); and
  • notify employees and their representatives before implementing a high-risk AI system.
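
By way of illustration only, the sketch below shows one way a deployer might route adverse or low-confidence AI outputs to a human reviewer before any employment decision is taken. The class, function and threshold here are our own assumptions for the example, not anything prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    """Hypothetical output from a high-risk HR screening system."""
    candidate_id: str
    action: str        # eg "reject" or "shortlist"
    confidence: float  # the model's own confidence score, 0.0-1.0

def requires_human_review(rec: AIRecommendation) -> bool:
    # Assumption for this sketch: adverse or low-confidence outputs
    # always go to a person rather than being actioned automatically.
    return rec.action == "reject" or rec.confidence < 0.9

def process(rec: AIRecommendation) -> str:
    if requires_human_review(rec):
        # Queue for a trained HR reviewer instead of acting automatically.
        return f"Escalated to HR reviewer: candidate {rec.candidate_id}"
    return f"Auto-processed: candidate {rec.candidate_id}"
```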

Limited risk

AI systems that present limited risks include chatbots that support HR or employee relations staff. In these cases, transparency obligations apply: users must be made aware that they are interacting with a machine rather than a human.
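
As a simple illustration of that transparency duty, a deployer might surface an explicit AI disclosure at the start of every chat session. This is a minimal sketch; neither the wording nor the function names are mandated by the Act.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "You can ask to speak to a member of the HR team at any time."
)

def start_hr_chat_session(user_name: str) -> list[str]:
    # Show the disclosure before any other content in the conversation.
    return [AI_DISCLOSURE, f"Hello {user_name}, how can I help you today?"]
```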

Low risk

Systems presenting only low or minimal risks, such as spam filters, face no specific obligations under the Act and can be used freely.

GPAI systems

Although most of the requirements regarding general-purpose AI are imposed on providers, employers should be aware that GPAI systems and the models on which they are based will be subject to certain transparency obligations. Models that could pose "systemic risks" will face more stringent requirements.

When are these changes coming into effect?

Although the Act entered into force on 1 August 2024, it will take effect in stages: prohibitions on unacceptable-risk systems apply six months after entry into force (February 2025), the rules on GPAI apply after 12 months (August 2025), and most of the remaining provisions apply 24 months after entry into force (August 2026).

What are the consequences for infringing EU rules? 

Employers found to be in violation of the rules may face fines of up to €35 million or 7% of global annual turnover, whichever is higher. The European Artificial Intelligence Office will oversee the implementation of the new rules.

What does the EU AI Act mean for UK employers? 

Although the UK is no longer part of the EU, the AI Act will have extraterritorial scope: UK-based companies that deploy AI systems in the EU, and UK employers that use AI systems to make employment-related decisions about employees in the EU, will be affected by the Act's provisions. This could include, for example, a UK employer using AI to sift applications from candidates in the EU, or a UK-based team using AI systems to make promotion decisions about employees spread across multiple EU countries.

The UK has no legislation specifically regulating the use of AI systems in the workplace. In February 2023, the UK Government said that it was following a "pro-innovation" approach and opted against introducing legislation. However, certain AI practices may fall foul of existing legislation, eg the Equality Act 2010. As the UK's current stance is in stark contrast to the EU's approach to AI, it will be interesting to see whether the UK adapts its strategy in the long term.

What should employers do now?  

Employers that are developing, using or implementing AI systems, or hoping to do so in the future, need to start preparing now.

Employers will need to consider whether the Act applies to them; what their role is (eg a provider or a deployer); what risk category applies to the system in question; and how to meet all their obligations. This may involve seeking additional information from third-party providers.

It will be key to create AI governance programmes that ensure systems are safe, robust and trustworthy and that address data privacy issues. AI systems will need to be checked for bias and discrimination that may influence decisions about who is hired, promoted or dismissed, and employers will need procedures that ensure appropriate human oversight to prevent or reduce risks.
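
As a very rough illustration of what a first-pass bias check could look like, the snippet below compares selection rates across a protected characteristic and flags the result using the "four-fifths" rule of thumb borrowed from US selection guidance. The data layout, column names and threshold are assumptions for the example, and a check like this is no substitute for a full audit or legal advice.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of applicants selected (outcome == 1) within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def adverse_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest."""
    return rates.min() / rates.max()

# Hypothetical hiring data: one row per applicant.
applicants = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [0,    1,   0,   1,   1,   1,   0,   1],
})

rates = selection_rates(applicants, "gender", "selected")
ratio = adverse_impact_ratio(rates)
if ratio < 0.8:  # common rule-of-thumb threshold, not an EU legal test
    print(f"Possible adverse impact (ratio {ratio:.2f}); investigate further.")
```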

Finally, employers should track developments on how EU countries are implementing the Act, monitoring any national regulatory guidance.
