How will AI shape the Future of Work?

UCL Public Policy
6 min read · Feb 8, 2022


By Helena Hollis and Cornelia Evers

On a bright morning in November, people interested in the intersection of AI and work began filtering into the lecture room of the British Academy. For most of us, it was the first in-person event since the pandemic changed our working practices so radically. As a testament to the lasting power of that change, the event was also live streamed online, merging the plush late-Georgian London setting with living rooms, home offices and kitchen tables everywhere and anywhere. We had speakers from Silicon Valley and Australia joining our in-person panels via video. All of this seemed fitting to the topic at hand, questioning the direction of AI development in the world of work, and exploring future possibilities, with Professor Jack Stilgoe chairing conversations encompassing social and technological change. In this blog post, we share some highlights from the conversations that arose during the day.

AI, work, and people

Panel: Anna Thomas, Institute for the Future of Work; Tim Cook, Government Office for AI; Dr Ekaterina Hertog, Oxford University; James Farrar, ADCU. Video from Dr Seth Lazar, Australian National University.

Our first panel of the day discussed the impacts of AI on people’s work, considering what makes work good, as well as issues around equality and fairness.

Anna and James framed the conversation by laying out some principles of good work, with Anna pointing to the IFOW’s Good Work Charter and James making the case for “fair pay and fair play” — essential principles for what we all want in work. James argued these are not changed by the various technologies that can come and go in workplaces, regardless of the disruptive hype their creators might claim.

The panel discussed issues of equality, which Tim emphasised as an imperative for the UK AI strategy, drawing our attention to an Ipsos MORI report that showed a wide gender gap in the UK AI labour market. Beyond AI development itself, the panel also discussed how AI use intersects with this gender imbalance. Ekaterina noted ways in which the pandemic has taught us a lot about the gendered breakdown of domestic labour. The automation of this kind of work (think future iterations of the Roomba replacing human cleaning) could impact both paid and unpaid domestic work. There is emancipatory potential to free women from their domestic work, but also a high risk of predominantly female cleaners being left without employment.

In addition to gender, racial inequality is another key issue with which AI technologies in work intersect. The high failure rate of facial recognition systems on non-white faces makes this stark. James discussed companies such as Uber continuing to use these AI applications, leading to discriminatory outcomes. Damningly, when such systems are used there is often a lack of accountability, with human managers unable to understand or override the algorithmic decisions. However, just as in the case of domestic robots, there could be positive outcomes too: an AI manager taking algorithmic decisions could have the potential to avoid the biases, stereotypical thinking, and downright pettiness that humans all too often exhibit.

But of course, it is not a question of either AI or human management. Seth noted the many ways that employers can observe and seek to control their employees through AI-enabled technologies, taking an already power-imbalanced relationship (“it’s just people and people”) and giving it new parameters that will re-shape our working behaviours. Both he and Ekaterina pointed out the ways these technologies can permeate our everyday lives, blurring work and home. This makes it especially problematic that the collection of our data is wrapped in incomprehensible terms and conditions. Going forwards, we clearly need more open discussion of how our data is collected and how it feeds algorithmic decision-making.

All the issues raised point to a need for a radically increased understanding of how AI operates, among everyone from workers and managers to developers. Talking about the UK AI skills agenda, Tim conceded that in the past the focus has been very much on the ‘top minds’ developing the technology. Now, however, it is recognised that investment is needed in a broader set of skills around AI procurement and implementation, and, critically, around ethical questions. This was echoed by Anna, who made the case for addressing both long-term and short-term skills needs, as AI is developing rapidly and already impacting our ways of working.

AI, work, and place

Panel: Dr Zeynep Engin, UCL Computer Science; Martin McIvor, Prospect Union; Maria Luciana Axente, PwC. Video from Dr Angèle Christin, Stanford University.

The second panel, on AI, work and place, focused more closely on how AI can be integrated into work settings while ensuring high levels of accountability and responsibility. How can we make sure that AI is used in social contexts in safe and transparent ways?

The debate concentrated mainly on corporate responsibility and what employers can do to ensure the good use of AI at work. Martin pointed out that employers have a responsibility to ensure AI is used in ways that empower employees. This sentiment was echoed in Angèle’s video, in which she stressed the importance of not alienating employees through the use of AI, but rather including them in decision-making about AI at work. Taking into account the concerns and preferences employees might have could make the use of AI more inclusive and better adapted to their needs. As these needs may differ across geographical boundaries, co-design approaches to developing AI at work could be a tool for responding better to specific work contexts.

However, embedding AI into diverse social and cultural contexts raises other challenges. The technical limits of AI can make translating the technology into social contexts difficult. Zeynep explained that when AI is developed, programmers need to make a range of simplifying mathematical assumptions in order to formulate a social situation in mathematical terms. This makes it difficult to account for all the potential biases or problems an AI might cause once it is deployed. Of course, once a problem is identified, such as a hiring algorithm that discriminates against a group of people, it can be fixed; but who is accountable for the damage caused in the meantime?
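
To make Zeynep’s point concrete, here is a deliberately minimal, entirely hypothetical sketch in Python of how reducing a hiring decision to a formula forces simplifying assumptions. The Candidate fields, the hiring_score function, and its weights are all invented for illustration; they are not drawn from any real system discussed at the event.

```python
# Hypothetical illustration: a toy hiring score showing how
# formalising a social decision forces simplifying assumptions.
# All feature names and weights here are invented for the example.

from dataclasses import dataclass

@dataclass
class Candidate:
    years_experience: float  # assumption: experience is linear and comparable
    gap_years: float         # assumption: career gaps are a negative signal

def hiring_score(c: Candidate) -> float:
    # Each weight is a modelling choice, not a neutral fact.
    # Penalising gap_years, for instance, can proxy for caring
    # responsibilities and so disadvantage women.
    return 1.0 * c.years_experience - 0.5 * c.gap_years

# Two candidates with equivalent skills but different life paths
# receive different scores purely because of the assumptions above.
print(hiring_score(Candidate(years_experience=8, gap_years=0)))  # 8.0
print(hiring_score(Candidate(years_experience=8, gap_years=2)))  # 7.0
```

The point is not the arithmetic but the translation step: each variable and weight quietly encodes a judgement about what matters, and those judgements are where bias can enter long before deployment.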

Maria focused on this question and pointed to the need for corporate responses. Developing accountability mechanisms for the decisions companies take regarding the use of AI requires thinking outside the box, and is a first step in a slow process towards accountability. She urged companies not to shy away from addressing the inequalities that AI at work might cause.

Despite the complexity of attaining greater accountability when AI and algorithms are used at work, our panel agreed on the need for more regulatory supervision and transparency. Martin brought up the importance of setting standards for retaining human control in algorithmic decision-making, and Maria pointed to the need for round-the-clock monitoring of self-learning algorithms. In light of potential regulation in this area, pre-emptive capacity building might help develop best practices for making AI at work as safe and inclusive as possible.
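
As one illustration of the kind of human-control standard Martin described, here is a minimal, hypothetical Python sketch of a human-in-the-loop pattern, in which low-confidence algorithmic recommendations are routed to a human reviewer. The HumanInTheLoopDecision class, the confidence threshold, and the stand-in model and reviewer functions are all assumptions made for this example, not any real system or API.

```python
# Hypothetical sketch of a human-in-the-loop standard: a model's
# recommendation is never applied automatically when its confidence
# is low; such cases are routed to a human whose decision is final.

from typing import Callable, Tuple

class HumanInTheLoopDecision:
    def __init__(self, model: Callable[[dict], Tuple[str, float]],
                 confidence_threshold: float = 0.9):
        self.model = model
        self.confidence_threshold = confidence_threshold

    def decide(self, case: dict,
               human_review: Callable[[dict, str], str]) -> str:
        recommendation, confidence = self.model(case)
        # Route uncertain cases to a person rather than acting on them.
        if confidence < self.confidence_threshold:
            return human_review(case, recommendation)
        return recommendation

# Example usage with stand-in functions.
def toy_model(case: dict) -> Tuple[str, float]:
    return ("approve", 0.6)  # a low-confidence recommendation

def reviewer(case: dict, recommendation: str) -> str:
    return "escalate"  # the human overrides the suggestion

gate = HumanInTheLoopDecision(toy_model)
print(gate.decide({"id": 1}, reviewer))  # prints "escalate"
```

A real standard would need much more than this (audit logs, the ability to contest high-confidence decisions, and clear lines of accountability), but the basic design choice, that the algorithm recommends while a person decides, is the one the panel argued for.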

Through the course of the day, we heard how AI both is, and is not, a radical driver of change in the world of work. Contrary to many grandiose claims of complete disruption, our core understandings of what it means for work to be good, and how workers should be treated, remain stable in the face of rapid technological change. Working with AI requires us to learn new skills, build a strong collective voice, advocate for crucial rights, and develop new policies to ensure a positive future. Issues such as invasive data collection, opaque algorithmic management, and downright discriminatory uses of AI require rapid increases in awareness and action now; looking to the future, we need strong ethical grounding principles and collective involvement to shape the trajectory of these technologies.

More about the authors

Helena Hollis is a UCL PhD researcher in Information Studies

Cornelia Evers is a UCL undergraduate in European & International Social & Political Studies

More about AI and the Future of Work: A UCL and British Academy Collaboration

This collaboration between UCL and the British Academy seeks to ask critical questions for policy, business, practitioners and society on the ways in which AI could and should impact on the future quality and equity of work in the UK.

Find out more on the UCL website.
