Artificial Intelligence and Employment Law

The recent Artificial Intelligence Action Summit, which brought together leaders from various sectors to discuss the future of artificial intelligence (‘AI’), underscored the significant transformation AI is driving in businesses and their workplace practices. Whilst AI offers numerous benefits such as efficiency and productivity, it also presents businesses with potential risks and challenges that need careful management.

Artificial intelligence is an umbrella term referring to different technologies that enable machines to simulate human intelligence and to perform tasks which would typically require human intervention. However, with these advanced technologies also come legal complexities.

Discrimination and bias

Businesses are increasingly reliant on AI tools to perform certain human resources and employee management functions.

AI hiring, promotion and performance evaluation tools can carry unintended biases, raising significant employment law concerns. Indeed, AI systems are only as objective as the data they analyse. If these tools are trained on historical data that reflects past bias, an employer may unintentionally discriminate against protected groups (eg, on the basis of race, gender, age or disability). For instance, if a recruitment tool is trained on the CVs of previously successful candidates who were predominantly men, it may learn to rank CVs from women as less desirable.

When dismissing an employee, an employer must be able to demonstrate that a fair process was followed and that the dismissal was for a fair reason. The difficulty for employers is that the algorithms used by AI technologies are often so complex that it can be hard to identify which factors were taken into account or to justify how the outcome was reached. Where AI is used to inform workplace decisions, such as scoring in a selection or performance process, it should be used in conjunction with appropriate human oversight so that employers can ensure, and demonstrate, a fair decision-making process.

Employees using AI at work

Employees are increasingly using their own AI tools at work as productivity aids without permission from their employer. Generative AI tools, such as ChatGPT, can be used by the workforce to carry out a wide range of tasks. However, there are inherent limitations to their use, including the possibility of inaccuracies in the work produced. Generative AI tools draw on the information available to them and present it as a very plausible answer, but they cannot look beyond the data they have been given; where the right information is unavailable, they may produce incorrect or inaccurate results.

The use of AI by employees can also create intellectual property risks, particularly when AI tools generate content or ideas that could be considered the property of the employer. For example, if an AI system creates a marketing campaign, a new product design, or a piece of software code, it can be unclear who owns the rights to that intellectual property.

One of the primary concerns about employees using AI at work is the potential for breaches of data privacy and confidentiality. Many AI tools rely on the vast amounts of data that individual users input, and employees may inadvertently disclose sensitive information or trade secrets. Once that data has been submitted, the system may retain it and use it to respond to future requests, creating a risk that confidential company information is revealed to other users.

Employees may also inadvertently breach data protection regulations if they use these tools to process personal information, such as customer data, which could lead to legal claims and reputational damage for the organisation.

Whether the use of AI is permitted within an organisation is ultimately a business decision for each organisation, taking into account its risk profile, its sector and the regulations that apply to it.

Managing risks

Understanding and managing the potential risks associated with AI is crucial for businesses to protect themselves from potential legal and reputational damage.

The first step for organisations is to carry out a risk assessment to gain a clear understanding of how AI technology is being used within the workplace, and to identify and mitigate potential areas of concern.

It is recommended that employers implement an AI policy setting out what is and is not permitted use of AI, the safeguards and processes to follow, and the implications of non-compliance.

Organisations should provide adequate training to their employees on the risks involved in the use of AI technology and how those risks should be managed.

Whilst there are benefits associated with the use of AI technology in the workplace, the effectiveness of these tools relies heavily on the quality of the human input working alongside them to mitigate the risks associated with their use.

Please contact us on 029 2034 5511 or at employment@berrysmith.com if you would like more information about the topics raised in this article or any other aspect of employment law.