The Possibilities of AI and Good Work

UCL Public Policy
Nov 27, 2023

What does a future in which we “got it right” look like?

Written by Dr Sinéad Murphy, UCL Policy Engagement Coordinator

In early November, world leaders, CEOs, civil society representatives, and AI experts came together for the UK Government’s AI Safety Summit, an event that for some underlined the urgency of addressing AI risks. “The future of AI is safe AI,” Prime Minister Rishi Sunak stated ahead of the event, echoing concerns about the dangers of unregulated AI raised by US President Joe Biden and Elon Musk.

Amid this speculation, AI’s potential impact on employment, with predictions of job displacement alongside gains in productivity, surfaced as another critical area. Claims range from wholesale job replacement to the transformative possibility that AI will make us smarter and more efficient, as argued by Mustafa Suleyman (CEO and Co-Founder of Inflection AI). The debate, however, extends beyond safety and employment to broader questions about the societal changes AI may usher in. To that end, our focus was neither on the hype nor on the risks; instead, we asked: what does a future of work in which we “got it right” with AI look like?

In advance of the summit, the British Academy and UCL Public Policy hosted a fringe event on ‘The Possibilities of AI and Good Work’. With a focus on ‘AI for public good’, the event sought to extend and challenge the conversation beyond the risks of frontier AI. Our panel debate and audience dialogue were prompted by three key questions: What are the possible futures of ‘good work’? How is AI changing the nature of work? What support systems are still needed to ensure good work?

1. AI is already changing the nature of work

Compared to the relative lack of public interest in AI and work just a few years ago, ‘ChatGPT’ is now firmly in the popular lexicon and AI is quickly becoming an everyday feature of work in many sectors. This rapid change has given rise to a range of concerns about the effects of AI on work: as Professor Helen Margetts laid out, these include hiring algorithms, workplace surveillance, anxieties about the future of jobs, the gig economy, the commercialisation of large language models, and intellectual property rights, among others.

Dan Conway (CEO, Publishers Association) pointed out that although publishing is a continuously modernising industry in which AI is already used throughout the value chain, the scale of change wrought by generative AI has polarised the sector. Rob McCargow (Technology Impact Leader, PwC UK) reported that PwC’s CEO survey found that 40% of UK CEOs are concerned about the social and economic viability of their businesses over the next ten years, while one-third of employees think that the skills required to do their jobs will change over the next five years. In a climate where the majority of workers are either ambivalent towards or fearful of AI, how can business leaders create a culture of optimism?

According to our panel, this is an opportunity to reset industrial relations. As Anna Thomas (Co-Founder and Director, Institute for the Future of Work) stated, a ‘good work’ impact assessment can help to focus on maximising benefits, building trust, furthering our capabilities, and shifting our focus from job losses to the qualitative impacts of AI on work.

2. Alongside regulation, how do we take control?

With 25 years of experience in regulation, Sophia Adams Bhatti (Head of Purpose and Impact, Simmons & Simmons LLP) remarked that the siloed nature of the policy debate on AI is a known barrier to solving long-term societal issues in this area. Margetts concurred, observing that both the policy world and the academic community are fragmented on the issue of AI.

Examining the possible impacts on ‘good work’ is a way of bringing together policy and academic landscapes that are disconnected at a macro level, Thomas argued. Policy actors can play an important role as arbitrators, helping people have necessary conversations about ‘good work’, especially where workers lack agency within their own work settings. Thomas also proposed that a work automation strategy, alongside an AI regulation strategy, would be a useful step forward. It is crucial to learn from previous waves of digital technology (e.g., the internet, social media) in which workers felt a lack of control.

Considering the support systems needed to ensure good work, Dan Conway pointed out that any regulatory action needs to be cascaded globally if it is to be functional. Issues are already arising, for example, around accreditation and copyright relating to large language models, as evidenced by the series of lawsuits brought by prominent authors in the US over infringement of their works. There are significant economic ramifications ahead if copyright protections differ across national borders, borders that AI does not respect.

3. Recognising inequities around access to AI is key

A ‘good work’ framework forces an acknowledgment of digital poverty and of global inequalities in resource distribution. As Adams Bhatti and audience members noted, AI and work is a global issue, but one experienced and perceived differently according to the equity of resourcing and access across the global north and global south. It is vital to consider the ethics of disparity, and of responsibility, between areas that experience poverty differently.

While the net impact of AI need not be the displacement of work, Thomas stated, there is significant variation in how the adoption of AI is experienced. The Institute for the Future of Work’s study of AI adoption in the UK found that, at firm level, organisation size is not a significant factor, with SMEs automating cognitive tasks at the same rate as larger businesses. Regional innovation readiness, on the other hand, has a considerable bearing: the study observed more sectoral differences around this factor than around any other.

As Adams Bhatti asked, what experience do we want people to have with AI, and how can people have a role in shaping that experience? Conway and Thomas suggested that unions and other representative bodies have a strong social role to play, potentially acting to safeguard against relational harms, develop worker-centred benefits, and co-create solutions with workers. Disruptions presented by AI may not necessarily be positive, McCargow noted, but they do offer an opportunity for businesses to take up an ESG (environmental, social, and governance) agenda and reassess standards, ethics, and responsibilities.

Following on from issues around regulation, Adams Bhatti pointed out that connecting regulatory power at a macro level, across nations, requires agreeing a basic set of values. There is a need to establish a rights-based regulatory mechanism, with clarity on who and what is best served by these protections. To achieve that clarity, it is vital to pluralise and diversify the debate beyond the potential risks and scalability of frontier AI, and to work beyond a typically narrow range of voices. As was acknowledged, albeit cursorily, during Roundtable 8 at the AI Safety Summit, we need to hear from the public, and there are many voices that need to be heard and amplified.

Looking ahead: ‘Good work’ is a multi-actor, multi-level, and cross-disciplinary subject

Our panel agreed that while AI is a fragmented area of policy and research, issues around AI and work are cross-cutting and cross-disciplinary. Consultative processes and joined-up thinking are key to effective policy and regulatory interventions. ‘Good work’ will not emerge by default, McCargow stated, and it is in businesses’ competitive interests to ensure labour relations are handled well. Issues tend to emerge, he noted, where poor practices have been followed, such as a lack of workforce engagement, the use of biased algorithms, or policies imposed on people rather than developed in consultation.

A ‘good work’ framework can enable systems-level thinking that meaningfully considers the various impacts of AI on work and aligns work with fundamental values. We must consider the drivers and imperatives at play. As Adams Bhatti asked, why do we want to deploy AI in the workplace? Are bottom-line costs the right metric, or can AI give us new capabilities to solve problems we all care about, such as health and disease? In what ways might technology companies and AI developers think through questions of purpose, seeking to produce profitable solutions to the problems of people and planet rather than profiting from creating problems?

Addressing what ‘good work’ looks like requires multi-sector policy actors, allied with a values-based charter and stakeholder conversations around solving pressing national problems. As our panel discussed, facilitating diverse consultation on the current and projected effects of AI on work can equip policymakers with a more holistic and nuanced understanding of what is needed to achieve ‘good work’. Ultimately, to harness the disruptions presented by AI as opportunities rather than threats, and to establish what the future of ‘good work’ looks like, clarity is needed on what we want to solve, for whom, when, and how.

The panel was made up of representatives from industry, civil society, and academia: Sophia Adams Bhatti (Head of Purpose and Impact, Simmons & Simmons LLP); Dan Conway (CEO, Publishers Association); Rob McCargow (Technology Impact Leader, PwC UK) and Anna Thomas (Co-Founder and Director, Institute for the Future of Work). It was chaired by Professor Jack Stilgoe (Professor of Science and Technology Policy, UCL), with opening remarks by Professor Helen Margetts, FBA (Director of the Public Policy Programme at the Alan Turing Institute and Professor of Society and the Internet at the University of Oxford). Find out more about the event here, and visit the British Academy’s website to watch the video of the debate.

