Empowering humans in an AI world

Andrew Law of the Open University talks to UNISON about the issues and opportunities posed by the AI revolution

“Suddenly, in the last five years, we have a peer that isn’t a human, that is also apparently thinking and learning and developing, and we’re interacting with it. For the first time in the entirety of human history.”

Andrew Law is explaining the magnitude of the change we’re facing with artificial intelligence (AI). It now seems almost guaranteed that AI will cause major disruption to society and to the world of work. In short, we are living through the early stages of another industrial revolution.


Andrew is, actually, quite optimistic. “I imagine, 150 years ago, people thought: ‘What will I be doing if I’m not using my body to do this manual work?’ Now, ‘knowledge workers’ are going to be asking: ‘What will I be doing if I’m not using my brain to do this knowledge processing work?’

“Some are talking about a life of leisure and no work. I suspect not. I don’t know what it will be, but I’m not terrified of it. I have a three-year-old and a 13-year-old and I don’t worry for their futures. I’m quite excited in the sense of what technology might bring for them.

“But I am keenly aware that they don’t want to be subject to AI; they need to be aware of it and be an agent of it.”

This last thought was the crux of a presentation which Andrew gave to a recent ‘strategic summit’ of senior UNISON staff, which he called ‘Empowering Humans in an AI world’.

AI right now

Andrew works for The Open University (OU) as its director of business innovation. In his words, his job is “to look at what the OU is not doing but could be doing to enhance its mission.”

In part, this has taken the form of ensuring that the university is keeping up to date with, and making best use of, new technologies – whether it be different styles of online learning or, in more recent years, AI.

The latter has brought Andrew’s career almost full circle. He completed a master’s in AI in the 1980s, but left the field because “I was thinking, this is very philosophically interesting but, at the end of the day, it’s going to take forever.” In fact, it’s taken just 40 years.

“AI is doing mind-boggling things,” he says. “At the edges of science, it’s able to predict chemical drug compounds, and detect breast cancer better than any human can. It can spot stars in the sky that we’ve never seen before and categorise them. It can predict how proteins will fold into new structures inside our bodies.

“These things will produce new discoveries in science that simply could not be done by humans – truly transformative moments.”

Andrew notes that it’s not just cutting-edge science where AI is being used; most of us use AI daily. One example: “If you use Netflix, or Google, or Amazon, there is a bit of algorithm in there that’s going ‘You like this, other people who like this also like this...’ That’s AI working in the background.”
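
As a toy illustration – not Netflix’s or Amazon’s actual system, and with invented names throughout – here is a minimal ‘people who liked this also liked…’ recommender in Python, built on nothing more than counting overlaps in viewing histories:

    from collections import Counter

    # Hypothetical viewing histories – who has watched what.
    histories = {
        "ana":  {"Drama A", "Thriller B"},
        "ben":  {"Drama A", "Thriller B", "Comedy C"},
        "carl": {"Drama A", "Comedy C"},
    }

    def recommend(title):
        """Count what else was watched by everyone who watched `title`."""
        also_watched = Counter()
        for watched in histories.values():
            if title in watched:
                also_watched.update(watched - {title})
        return also_watched.most_common()

    print(recommend("Drama A"))  # -> Thriller B and Comedy C top the list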

AI is advancing in a host of sectors at an increasingly rapid pace. But the conversation with Andrew revolves mostly around a particular strand which has announced itself on the world stage recently: large language models (LLMs), the most famous, or infamous, of which is ChatGPT.

“I think the ground shifted fundamentally in about 2020 when this thing called GPT-3 came out,” he notes. “I saw it do things like answer undergraduate questions pretty well, at least with the ability to pass the degree. And then I saw reports of people who said it had given good enough answers to MBA questions to get a master’s degree.”

ChatGPT, which runs on GPT-3.5, is not the sentient AI being of fiction, and Andrew is very clear that he still classes it as ‘a tool’. Nevertheless, he says:

“We've never had a tool that appeared to think.”

“They’ve been able to do lots of little things that we can do – hammer, pull, lift, process data. Now, we have a tool that sits alongside us that appears to think.”

Large language models: how they work

So, how do these machines work and what can they do? Understanding the answer to those questions, argues Andrew, is vital to how people can avoid becoming the victims of AI.

ChatGPT is a free-to-use, web-based LLM: you type in a prompt and out comes a response. In simple terms, LLMs work like the auto-complete function on a smartphone – they predict what will come next, based on a set of data.

On your phone, that dataset is likely to be the principles of English grammar plus your previous text messages – for example, how you’ve finished off similar sentences in the past. For LLMs, the principle is the same, but the dataset and the complexity of the prediction mechanism are on a completely different level.
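
To make the auto-complete analogy concrete, here is a minimal sketch in Python of frequency-based next-word prediction. It is a toy – real LLMs use neural networks trained on vast datasets, not simple word counts – but the core idea of predicting the likeliest continuation is the same:

    from collections import Counter, defaultdict

    # A toy 'dataset', standing in for billions of words of training text.
    training_text = "the cat sat on the mat and the cat slept on the sofa"
    words = training_text.split()

    # Count which word follows each word in the dataset.
    next_word_counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the data."""
        counts = next_word_counts[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> 'cat', the most common follower of 'the'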

ChatGPT is built on a network of interconnected artificial ‘neurons’, wired together by 175 billion adjustable connections, known as parameters. Each of these neurons receives signals, does simple calculations and, based on those, sends out its own signals to a number of other neurons – an arrangement loosely inspired by the way the human brain works.
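
What one of those ‘neurons’ actually does is very simple, as this illustrative Python sketch (with made-up example weights) shows – the power comes from wiring billions of them together and tuning the connections during training:

    import math

    def neuron(inputs, weights, bias):
        """Weight each incoming signal, add them up, then squash the
        total into a 0-1 'firing strength' (a sigmoid activation)."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-total))

    # Three incoming signals, with example weights learned in training.
    print(neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=0.1))  # ~0.67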

In terms of data, while OpenAI (the company behind ChatGPT) hasn’t disclosed the exact extent and content, it is reported that ChatGPT’s dataset contains around 570GB of text, equating to almost 300 billion words.

To put that figure in context: if you read one average-length novel (100,000 words) every day for 80 years, you would have read just under three billion words – about 1% of ChatGPT’s dataset. The scale is impossible to comprehend – for humans, anyway.
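
The arithmetic behind that comparison is simple enough to check:

    words_per_novel = 100_000
    reading_days = 80 * 365                      # a novel a day for 80 years
    words_read = words_per_novel * reading_days  # 2,920,000,000 words
    dataset_words = 300_000_000_000              # ChatGPT's reported dataset

    print(words_read / dataset_words)  # ~0.0097, i.e. just under 1%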

However, an LLM doesn’t use that data in the way a human would. When you ask it a question, it does not try to give you a ‘true’ answer. It has no built-in sense of ‘truth’ and it is not trying to establish facts. It simply gives you the combination of words it deems most likely to follow your prompt. As such, Andrew argues:

“There's nothing intelligent, planful, insightful or curious about these AI machines.”

“They are just doing data processing on a scale that we could never have imagined back in the ’80s, because the data wasn’t around, and the computing power wasn’t around. But it is now.”

Unsurprisingly, this causes several issues.

The fallibility of human data

There is an old computing term – ‘garbage in, garbage out’ – which means the quality of the output from software depends on the quality of the data input. The same is true of LLMs.

Andrew explains: “Nick Shackleton-Jones is a fairly well-known educator, and he asked an AI system: ‘Show me some pictures of the perfect employee.’ What does it do?

“It showed more men than women. All the men were wearing ties. There were no women in ‘executive positions’, none of them were wearing ties, they all seemed to be in retail. Nobody was much over 30. Nobody was disabled. Everybody was white.”

“So, it’s really worrying if AI is sitting behind job search algorithms or short-listing processes that large companies might be using and asking the question, ‘What’s my perfect employee?’”

“It would say, well, they’re white, they’re male, they’re executive, they’re not disabled. And it is working from our data – what we’ve said, what we’ve done, what we’ve written.”

AI is not racist, or sexist, or ableist; those are very human traits requiring thought and intention. It makes no value judgements. It has not been deliberately taught that white, male workers with ties are better than any other type of worker. Instead, it has predicted, based on its dataset and calculations, what it ‘thinks’ the most likely answer is.

And through its vast trawl for data (usually of the internet), the system has found ‘bad data’ and produced ‘bad answers’. Garbage in, garbage out.
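
A toy Python sketch shows how mechanically this happens – feed a purely statistical predictor a skewed dataset and it faithfully echoes the skew back, with no intent involved (the figures below are invented for illustration):

    from collections import Counter

    # An invented, deliberately skewed dataset – imagine descriptions of
    # 'executives' scraped from decades of reports and news articles.
    dataset = ["white male"] * 80 + ["white female"] * 15 + ["black female"] * 5

    # The 'model' makes no value judgement: it simply returns the answer
    # that appears most often in its data.
    most_likely = Counter(dataset).most_common(1)[0][0]
    print(most_likely)  # -> 'white male', because that is what the data says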

What jobs will AI affect?

In previous industrial revolutions, machines attacked the world of manual labour; for the most part, machines now dig the earth, rather than humans. This, Andrew says, caused a knowledge revolution whereby much of the human workforce moved into ‘knowledge jobs’.

In this new revolution, Andrew says, “AI is attacking the world of knowledge. I think 10 years ago it was assumed that AI would touch lots of simple data-processing, data-entry jobs.

“But now it’s looking to threaten jobs that people assumed would never be touched by AI – lawyers, accountants, strategists, copywriters, designers, videographers, video editors.”

Certain public service roles, especially traditional, office-based ones, may also be in the crosshairs of AI. That could mean a lot of jobs. According to the Office for National Statistics, local and central government administrative roles alone make up 4% of the total workforce.

And while AI may be used to great effect as a tool in health and education settings, in his presentation Andrew showed a very disturbing image of an older person in a care home talking to a chatbot housed in a physical robot.

“The suggestion [of the image] was: isn’t this great? But care requires a genuine sense of empathy, humanity, warmth and connection with another human being, which you know is going to be entirely absent with a robot.”

Empowering humans in an AI world

Andrew says there are two dimensions of empowerment in relation to AI.

The first is purely positive: “I think lots of people could be empowered by it, helping them be faster and more effective in their jobs – I can get papers summarised and data summarised, I can get ideas from ChatGPT in seconds that might have taken me an hour’s worth of thinking and scribbling.”

The second form of empowerment is more existential. “We are certainly going to be served by AI. We’re potentially the victims of AI. We may be supplanted by AI. And, in that sense, there’s an empowerment that’s really important, which is to understand what’s coming, what it can do, what it can’t do and what your role is in challenging it.”

He argues this empowerment should extend to rights. “You should have a right to know that an AI system is working with you, or alongside you. You should have the right to know whether your data is being taken and used by it. And you should have the right to challenge its outputs if you think there could be bias or problems.”

The role of unions in guiding AI progress

All these consequences, from displacing jobs, to manipulating recruitment, to exploiting personal data, are clearly of concern to trade unions. So, what should they be doing about it?

Andrew separates his answer into two elements: education and influence. “I would say this, because I’m a member of a university, but it’s firstly about education. It’s giving people an understanding of the limits, as well as the possibilities of AI.”

“In crude terms, you’re more likely to become a victim of AI if you don’t fully understand what it is and how it works.”

“You’re less likely to be replaced or supplanted if you know something about it, and how to use it.

“So, I think unions have to make sure that their members are broadly aware of what AI is, the jobs that it’s likely to impact and the limits of the data, and are able to challenge the conclusions of the systems.”

In terms of influencing the development of AI, Andrew says:

“I think UNISON is in a uniquely powerful position to influence the future here. You’re the largest union in the UK for public sector workers – health, education, local government. They’re all going to be dramatically impacted by this.”

“There are two or three AI companies within a quarter of a mile of your head office [in London] who are deeply worried about the ethics. They want people to talk to them about what they should be doing. I think you should reach out to them and say, ‘We expect to be consulted about what’s happening and how it’s being used’. And I think you would be in a strong position.

“There are also academic centres of AI, in Essex, Edinburgh and Oxford, which I think would be very pleased to have the largest union in the UK contacting them and saying, ‘We want to work with you and understand what’s going on here.’

“UNISON could play a really critical role in influencing some of those groups.”

UNISON

UNISON recently signed an open letter to the Prime Minister about the Global Summit on AI Safety. It called out the marginalisation of the communities and workers most affected by AI and called for the inclusion of a wider range of voices.

The union also has an in-depth guide to bargaining around new technologies in the workplace, which explains why branches should be negotiating on the issue and provides detailed resources, checklists and model policies.

Words and design: Simon Jackson

Images: Andrew Law/Adobe Stock/Adobe Firefly