AI ethics: trust, impact, and the age of agents

A brief recap of the AI Ethics, Risks and Safety Conference in Bristol’s Watershed last week

Marc Ambasna-Jones

You can’t keep a good AI down. Or a dangerous one, a biased one, or a misunderstood one. And that’s the point. AI, we are consistently told, is moving fast, and while that may be true in some industries, in others it has yet to find a reliable purpose. But as we contemplate the near and longer-term future of AI development, how do we build trust and keep safety and ethics front of mind?

The AI Ethics, Risks and Safety Conference in Bristol recently featured voices from across the UK’s AI landscape, from government and academia to start-ups and big tech. Discussions moved well beyond technical milestones, focusing instead on the growing responsibility to embed ethical thinking into design, deployment, and governance from the outset.

Several key themes emerged. Trust was a recurring concern – not just trust in the systems, but in the motives of those who develop and deploy them. As AI becomes more embedded in everyday decisions, the question of who is accountable and how transparency can be maintained becomes increasingly urgent.

The idea of agentic AI, where systems plan and act on behalf of users, added further complexity. These systems may enhance productivity and convenience, but they also carry new risks – the potential for misuse, manipulation, emotional dependency, and unclear chains of responsibility when things go wrong.

Energy consumption was another pressing issue. While AI could help solve climate challenges, it also contributes to rising energy demand. The conference explored whether we are making the right choices about model size, deployment, and infrastructure so that AI supports decarbonisation efforts rather than undermining them.

And there was a clear call to ensure AI development considers society as a whole, with discussions about the future of computer systems and the importance of democratising AI technologies. There was also a nod to Isambard-AI, the UK’s fastest supercomputer, based in Bristol. The second iteration of Isambard-AI is about to go live and will open its doors to businesses as well as academics and government.

The day concluded with a panel discussion that brought all these threads together, a reminder that the cultural and political implications of AI are inseparable from the technical ones. As one panellist put it: “We can’t regulate our way out of poor design. We have to build better systems, from the start.”

If there was one takeaway from the whole event, it was that responsible AI isn’t just about what we build, but how we govern, question, and live with it. And that demands collaboration, and action to match.

Marc Ambasna-Jones / Editor-in-chief

Working as a technology journalist and writer since 1989, Marc has written for a wide range of titles on technology, business, education, politics and sustainability, with work appearing in The Guardian, The Register, New Statesman, Computer Weekly and many more.
