Jul 14, 2025

AI in the workplace: What employers need to know


Introduction – What is AI?

Artificial intelligence (AI) – broadly, computer systems that perform tasks which typically require human intelligence – is rapidly transforming the modern workplace, offering new efficiencies and capabilities across recruitment, investigations, and everyday decision-making. However, as organisations embrace these tools, they must also navigate the complex legal, ethical, and practical challenges that come with them. This bulletin explores the key risks and responsibilities associated with AI in employment settings, and offers guidance on how employers can use these technologies responsibly and lawfully.

How Employers Are Using AI

Recruitment

A growing number of employers are turning to AI to increase efficiency in their hiring processes. AI can help analyse CVs, evaluate candidates’ profiles and even conduct video interviews. However, because algorithms are trained on historic data, they may repeat or even amplify existing biases. As long ago as 2018, it was reported that Amazon had developed a recruitment algorithm which – because it had been trained on CVs from predominantly male applicants – effectively “learned” that male candidates were preferable and penalised CVs that included the word ‘women’.

AI is increasingly used to make complex judgments about applicants’ personalities and appearance. Journalists in Germany tested AI software that promised to build a behavioural personality profile of applicants from recordings of their answers to video interview questions. The software judged applicants significantly differently based on whether they were, for example, wearing glasses or a headscarf, or on what was visible in the background. Although AI may promise to make hiring processes objective, the reality is that it perpetuates the preferences and biases of the humans who created it.

So it would seem that AI is rather more human than we might have hoped. But, just as humans can keep their own biases in check, this technology can still be used provided there is appropriate oversight. The Department for Science, Innovation and Technology (DSIT) released guidance in March 2024 recommending that employers carry out regular audits of any AI used in their recruitment processes to understand any biases in its outcomes. If a bias against a particular group is identified, it may not be necessary to stop using the software altogether; instead, employers can use it with caution and consider alternatives, such as allowing manual review pathways for the identified group.
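
By way of illustration, a bias audit of this kind often begins by comparing the tool’s selection rates across groups. The short Python sketch below is a hypothetical example of such a check, using the “four-fifths rule” familiar from US employment practice; the group labels, sample data and 0.8 threshold are assumptions for the sketch, not part of the DSIT guidance.

    # Hypothetical sketch: compare an AI tool's selection rates by group
    # and flag any group falling below a "four-fifths rule" threshold.
    from collections import Counter

    def selection_rates(outcomes):
        # outcomes: list of (group, shortlisted) pairs
        totals, selected = Counter(), Counter()
        for group, shortlisted in outcomes:
            totals[group] += 1
            if shortlisted:
                selected[group] += 1
        return {g: selected[g] / totals[g] for g in totals}

    def flag_adverse_impact(outcomes, threshold=0.8):
        # Flag groups whose selection rate is below `threshold` times
        # the best-performing group's rate.
        rates = selection_rates(outcomes)
        best = max(rates.values())
        return {g: r for g, r in rates.items() if r < threshold * best}

    # Assumed sample data: (group, shortlisted by the AI tool)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(flag_adverse_impact(sample))  # {'B': 0.333...}

A flagged group would not, by itself, prove discrimination, but it is the kind of signal that should prompt further investigation and, as noted above, consideration of manual review pathways.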

In addition, Article 22 of the UK GDPR imposes further safeguards. Automated decision-making – such as using AI to sift CVs – which has the potential to significantly affect individuals is only permitted where the decision is necessary for entering into or performing a contract, is authorised by law that applies to you, or is based on the individual’s explicit consent. If the hiring decision falls under Article 22, employers must give individuals information about the processing, introduce simple ways for them to request human intervention or challenge a decision, and carry out regular checks to make sure their systems are working as intended.

Investigations

Disciplinary and grievance investigations should be concluded in a timely manner, and certainly no one wants a stressful investigation hanging over them (or their workforce) for longer than absolutely necessary. But that is easier said than done when it comes to many document-heavy investigations, where one investigator is often responsible for analysing and collating a large volume of information in a limited timeframe. AI can be a useful tool here, helping humans to sift through documents, extract relevant information quickly and identify key trends.

Care must be taken when using AI in this way, however, to avoid the pitfalls. AI tools can miss important nuance or context, or even hallucinate information, so they cannot fully substitute for human review. An investigator who relies solely on AI to tell them what they need to know is likely to miss key information, and this may render an investigation unreasonable and unfair. Similarly, any decisions should be made by humans: an investigator must be able to demonstrate why they came to a reasoned decision, supported by the evidence.

Data protection is a further concern. Employers should not process special category personal data (such as information about racial or ethnic origin, health, or political, religious or philosophical beliefs) unless it is strictly necessary to do so, and inputting documents containing personal data or confidential information into publicly available AI tools, such as ChatGPT, could lead to their exposure into the public domain. Ultimately, AI should be used to support, not replace, human judgment. Make sure you understand the evidence and can justify any findings based on your own manual review.

Research

A recent study conducted by the University of Southampton found that people were more willing to rely on legal advice prepared by ChatGPT than by a qualified lawyer. Generative AI tools like ChatGPT can be useful, but relying on them carries all manner of risks. AI is well known to “hallucinate”, or make up answers. Where a human lawyer might tell you that an area of law is uncertain or that there are relative risks and benefits, AI wants to give you a definitive answer – and it will hallucinate one if it has to. AI’s advice may also be outdated or jurisdictionally incorrect. Moreover, ChatGPT is not subject to legal privilege, so anything you input to get an answer may be disclosable in evidence. While AI can help to provide a general steer or organise your thoughts into a document, it is no substitute for a human opinion or qualified legal advice.

Top Tips for Employers

In February 2025, a survey reported that about half of all knowledge workers use ‘Shadow AI’ – AI tools that have not been sanctioned by their employers – at work. Even if your business has not rolled out AI tools to your employees, it is likely that at least some of them are using such tools anyway. It is best practice to put in place a workplace policy setting out rules around the use of AI. You may also need to update existing policies, such as those covering IT and communications, bring-your-own-device arrangements, data protection and confidentiality. In addition, employees should be trained on the proper use of AI, including in respect of their data protection obligations. Everyone should be reminded that AI is not perfect and that any outputs should be carefully reviewed by a human. Having a policy in place can reinforce these concepts.

Conclusion

While AI offers significant potential to streamline processes and enhance productivity, it is no substitute for human judgment. Employers must remain vigilant about the risks of bias, data protection breaches, and over-reliance on automated systems. By implementing clear policies, providing training, and ensuring robust oversight, organisations can harness the benefits of AI while safeguarding fairness, accountability, and compliance in the workplace.
