A race to the bottom? AI, content, and the creativity question

From “slop” to weak oversight, Foresight Live experts express concerns about quality and accountability as AI scales

Thierry Heles

“Authors are humans. Artists are humans,” said Sue Turner OBE from the University of Bristol, talking at a recent Foresight Live panel Content With Conscience: AI’s Next Chapter. Machines can generate text, images, and music. They can replicate tone, structure, and style – but that does not make them authors or artists, Turner argued. Creativity carries ownership and responsibility as well as output.

A headshot of Sue Turner OBE.
Sue Turner OBE, University of Bristol

The panel’s concern was not simply that AI can produce content. It was the speed at which organisations are embedding generative systems into everyday workflows. Marketing copy, internal reports, product descriptions, and scripts: tasks once handled by people are increasingly automated. As reliance grows, questions of quality, differentiation, and oversight move to the fore.

Turner, a Professor in Practice (AI and Digital Technologies), referenced a 2025 EY survey of large enterprises during the discussion, noting that around three-quarters are using generative AI tools, while only a third have formal governance structures in place. Adoption is advancing rapidly, but the systems are scaling faster than the structures designed to oversee them.

The effects are already visible. Turner pointed to Merriam-Webster’s 2025 word of the year, “slop”, shorthand for the surge of low-quality AI-generated material across digital platforms. YouTube’s chief executive has publicly pledged to reduce it. Yet the economics of the content ecosystem continue to reward speed, volume, and engagement. When attention becomes the dominant metric, scale can crowd out craft.

Headshot of Richard Cole.
Richard Cole, University of Bristol and Bristol Digital Game Lab

For Richard Cole, lecturer in Digital Futures at the University of Bristol and co-director of the Bristol Digital Game Lab, the issue runs deeper than platform incentives.

“AI is fundamentally a recompilation engine of what we have already done,” he said, adding that it is “not the creative partner” but an engine that enables certain kinds of experiences to be designed.

Large language models and generative systems draw on vast quantities of existing material, reorganising and synthesising what already exists. They can produce plausible novelty. Whether they produce originality is another question.

Cole traced how ideas of authorship have shifted historically, from oral storytelling traditions to individual ownership under modern copyright law. Each transition altered how creative labour was valued and protected. Generative AI introduces a further shift. When output is assembled from patterns in existing work, attribution, ownership, and creative identity become harder to define.

That ambiguity is not confined to theory. Gaming is one of the sectors experimenting most actively with AI, but audiences have pushed back when AI appears to replace human expertise. Games using AI-generated art or voice work without clear disclosure, for example, have faced backlash and review-bombing. The issue is not the tool itself, but whether it has been used to cut corners.

A headshot of Ben Ackland.
Ben Ackland, Meaning Machine

For Ben Ackland, co-founder of Meaning Machine, the technology should not be judged solely by its weakest applications.

“Humans create with AI as a tool,” he said. “It is not AI creating independently.”

In gaming and immersive storytelling, generative systems can extend narrative scope, allowing characters to respond dynamically and environments to feel less scripted. Used deliberately, AI can increase freedom rather than reduce quality.

The difference, he suggested, lies in how it is deployed. AI can compress production time. It can also expand possibility. The outcome depends on intent and oversight.

“We’re definitely in a hype cycle right now,” said Megan Marie Butler, UK AI workforce lead at KPMG.

Companies, she suggested, are moving quickly to deploy generative tools – often before fully understanding how those systems affect workflows, risk exposure, and data governance.

A headshot of Megan Marie Butler.
Megan Marie Butler, KPMG

“Only about 1% of jobs are fully automatable,” she added. “The real question is what it’s doing to work.”

For Butler, the shift is happening at task level rather than role level. Processes are being redesigned around AI-assisted outputs. That changes accountability, as much as efficiency. The risks are not confined to creative quality either. They extend into law, data, and corporate reputation.

“Businesses don’t want to be the court case where those ethical decisions get made,” she said.

Regulatory approaches differ sharply across jurisdictions too, and global standards remain unsettled, with Butler calling AI governance a “big bag of worms.”

In that environment, commercial incentives tend to set the pace, suggested Turner.

“AI is about power,” she said, noting that control over infrastructure and training data sits with a small number of companies.

Cole framed the issue as socio-technical rather than purely legal. The systems influence behaviour as much as they process information. Without clear boundaries and institutional guardrails, errors scale as quickly as innovation.

A further concern raised during the discussion is homogenisation. When organisations rely on the same foundation models trained on similar datasets, outputs begin to converge. Tone flattens. Language patterns repeat. Distinctiveness becomes harder to sustain.

Cole’s description of AI as a “recompilation engine” reinforces that risk. Systems reorganise existing material at scale. They can produce novelty in form. Whether they produce genuine originality is more difficult to establish.

For brands and creative industries, this presents a commercial challenge as much as a cultural one. If competitors are using the same tools to generate marketing copy, scripts, or design concepts, differentiation depends less on the model and more on direction. Human judgement becomes an essential element, whether corporations like it or not.

Ackland’s emphasis on intent returns here. AI can extend narrative range. It does not decide what is worth saying. That remains a human responsibility. And whether AI content becomes a race to the bottom will depend less on the technology and more on the choices made around it.

Thierry Heles / Guest writer

Thierry is a freelance journalist specialising in university research commercialisation. He has over a decade’s experience covering spinouts and university venture funds globally, with his research cited in publications including the UK government's Spinout Review, the Financial Times, and The Wall Street Journal.
