February 14, 2020

Should you use AI to tackle workplace harassment?

Post #MeToo, employers have become increasingly focused on tackling harassment. In fact, it was recently reported that some companies are implementing monitoring software that uses AI to detect online harassment at work. The technology is said to use algorithms that recognise harassing or bullying language in internal worker emails and online chats, flagging offending messages to HR for investigation.

But is this really the correct tool to combat workplace harassment? And are there better ways for employers to protect workers? 

Why should employers tackle harassment?

Harassment in the workplace can result in low productivity, high turnover and, in some cases, irreparable damage to an employer’s reputation. Employers also risk costly legal action if they fail to demonstrate they have taken reasonable steps to protect workers from harassment, such as:

  • providing meaningful training;
  • actively enforcing zero-tolerance policies; and
  • improving internal reporting procedures.

Although AI monitoring may be a useful tool to deter workplace harassment, its usefulness is limited: it cannot detect verbal or physical harassment and is unlikely to be capable of identifying more subtle behaviours. Employers should therefore avoid relying on AI as their sole means of demonstrating they have taken reasonable steps. If it is used, it should supplement other, more robust measures such as those listed above.

Workers are entitled to a reasonable expectation of privacy in the workplace. This means employers need to make clear the extent of any monitoring of communications – for example, by updating and circulating revised policies – and ensure all monitoring is justified as a proportionate means of achieving a legitimate aim.

Employers may find it difficult to establish that using AI to monitor private conversations between workers is proportionate, given that there are other less intrusive and potentially more effective ways to prevent harassment at work. Businesses must ensure that all workers are made fully aware that their conversations may be monitored, and must provide detailed justifications for doing so.

Are there risks surrounding data protection?

Monitoring digital communications at work involves processing personal data, which is governed by the General Data Protection Regulation 2016/679 (GDPR). Under article 5 of the GDPR, all monitoring must be carried out for a specified, legitimate purpose, and no more information should be gathered than is necessary to achieve that purpose (the principles of purpose limitation and data minimisation).

Private conversations may include highly sensitive personal information, so any large-scale monitoring operation – such as using this technology – may not be proportionate when weighed against the need to protect the rights of the data subjects. To assess these risks, employers should carry out a thorough data protection impact assessment (DPIA) before implementing such programmes, and should perhaps limit their use to an ad hoc basis that is regularly reviewed by management.

Given the gaps in the technology’s reliability and the risks surrounding privacy, employers should consider strengthening existing anti-harassment measures and policies before implementing AI. If it is used, employers need to ensure:

  • workers are made fully aware of the proposed monitoring and how the data will be used during their employment, with relevant policies updated accordingly;
  • DPIAs are carried out beforehand to assess the potential risks, and any monitoring programmes are used in accordance with applicable data protection law; and
  • the monitoring continues for no longer than is necessary to achieve the aim (to be assessed by regular reviews).

This article was also published on People Management.