
What boards need to know about artificial intelligence risks

By Jessica Tasman-Jones

This article is brought to you by Agenda, an FT Specialist publication that focuses on corporate boards.

The use of artificial intelligence is improving the way people work, from helping pathologists better detect cancer to removing workers from hazardous jobs.

Last year companies' use of AI increased by 7 per cent compared with 2020, according to a PwC survey of more than 1,000 executives.

Regulation is catching up with innovation, however, and boards must understand the pitfalls already apparent in AI, say experts.

Knowingly or not, most organisations will already be using AI of some sort, from security log monitoring to driver tracking to thermostatic building control, says Lee Howells, head of artificial intelligence at PA Consulting.

“No organisation that holds large amounts of data can expect to compete without some form of advanced analytics. The arrival of easy-to-use AI tools on cloud platforms will only accelerate its use,” he says.

Boards often focus on whether the use of AI is technically or financially feasible, but they should probe deeper into whether each programme is necessary.

David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute in London, says algorithmic labour management technologies could undermine trust in the workplace.

“Just think about the ways the automation of ad tech has created this attention-based economy, where nudging and behavioural control has become very common,” he says.

Those mechanisms could easily be applied to incentivising productivity in the workplace or attention in a Zoom meeting, Leslie says.

The UK is on the cusp of a more robust regulatory environment, says Leslie. “It’s up to boards to be ahead of that.”

In November, the All-Party Parliamentary Group on the Future of Work published a report on AI in the workplace, while an algorithmic transparency bill has been introduced in the House of Lords.

Politicians in the US and EU are preparing similar legislation.

“Companies will be more resilient to the changing legal environment if they’re already adopting the kind of best practices that will reduce liability eventually and that will anticipate harmful potential impacts,” Leslie says.

In the field of human resources, workplaces now use AI to shortlist and interview candidates, identify employees for internal roles and to recommend relevant courses to staff, says Hayfa Mohdzaini, senior research adviser for the CIPD, the professional body for HR and people development.

AI should not compromise on employment rights, she says.

“Where AI is used to make decisions that are consequential to people’s careers, there should be a clear process for people to appeal for a decision to be reviewed by a person. Not addressing this properly could negatively impact the company’s reputation,” she says.

Four years ago, Amazon abandoned an AI tool for reading and ranking job candidates’ résumés after the software was found to show bias against women.

Boards need to ensure that the procurement and implementation of AI programmes involves not only technical experts but also specialists in diversity and inclusion, HR and data privacy, says Anne Sammon, a partner in the employment practice at law firm Pinsent Masons.

Appropriate safeguards need to be in place. “By way of example, if an AI program will be carrying out interviews on behalf of the employer, how will the employer ensure that those with a disability that might affect the way in which the AI interprets their facial expressions are treated fairly?” Sammon says.

“This might include allowing these employees to opt out of the AI process and to have an interview with a person.”

Leslie describes tools that claim to detect human emotions, attitudes and engagement as “pseudo-scientific and not properly validated at a scientific level”.

Employers need to do due diligence on the science underpinning AI programmes, he says. That could involve upskilling procurement teams or introducing specialist AI roles, including social scientists focused on the ethics of the technology.

In future, organisations may need to consider the effect of using AI to automate decisions as a matter of compliance, says Mohdzaini.

Third-party audits will become mandatory, according to a UK government roadmap for AI, published in December.

Boards must understand the data protection risks highlighted by the Information Commissioner's Office and the risks of bias and discrimination highlighted by the Equality and Human Rights Commission in recent guidance, says Sammon.

The ICO says corporate boards will often sign off the use of AI in the workplace.

“It is important that they are aware of the risks and how they will be mitigated so they can sign off in confidence, as they will be accountable if harm occurs.”

The EHRC says helping companies to avoid biased automated decision-making is part of its strategic plan. For now, companies should consider how the Equality Act 2010 applies to automated processes, make sure they fully understand how their systems operate and be transparent about how those systems are used, says Jackie Killeen, director of regulation at the EHRC.

This article is based on a piece written by Neanda Salvaterra for Agenda.
