There’s appetite for AI but no real roadmap. We need to change that

We speak to Karin Rudolph about why AI ethics is a business imperative, one that encourages investment and boosts innovation

Marc Ambasna-Jones

As the founder of Collective Intelligence and the force behind the upcoming AI Ethics, Risks and Safety Conference in Bristol, Karin Rudolph is calling time on the “move fast and break things” approach to artificial intelligence (AI). She says that ethics applied to AI isn’t a box-ticking exercise; it’s a blueprint for better innovation.

Rudolph knows first-hand that businesses are under pressure to adopt AI, fast. But with little guidance and few case studies to lean on, many start-ups and SMEs are flying blind.

“There’s almost no support on how to do this responsibly,” she says. “AI adoption requires an organisational shift that’s cross-functional and multidisciplinary, and it must account for both the risks and the rewards.”

That’s the thinking behind the forthcoming AI Ethics, Risks and Safety event in Bristol. Backed by experts from DSIT, The Alan Turing Institute, Google, Mind Foundry and the University of Bristol, it aims to give businesses something sorely lacking in the current hype cycle: practical, actionable insight.

Forget the slogans. Ethics is strategy

Rudolph doesn’t pull punches on how AI ethics has been hijacked by vague promises. But she’s clear that if done right, it’s a serious competitive advantage.

[Headshot: Karin Rudolph, Collective Intelligence]

“Currently, the term ‘AI ethics’ has become a buzzword,” says Rudolph. “It’s often seen as a form of activism or reduced to slogans about ‘doing good’ or empty statements like ‘making the world a better place’ that tend to be completely meaningless. As a discipline, ethics requires knowledge, expertise, and an understanding of complex trade-offs.”

Ethics, she adds, is “an analytical tool” that “helps organisations make better decisions and build more robust, trusted business practices.”

By building in ethical thinking from the start and maintaining it throughout the whole lifecycle of an AI project (not just bolting it on after deployment), companies can avoid future pitfalls, ensure better outcomes, and even accelerate innovation.

So how exactly does ethics fuel innovation?

“One of the great strengths of ethical analysis is its ability to anticipate potential scenarios and pitfalls early and build strategies to avoid them,” says Rudolph. “For innovators, this kind of foresight is essential as it can help them to see the bigger picture and identify areas of development.”

It gives innovators the chance to ask difficult but necessary questions early on, such as: How will this system work when scaled? Who will benefit? Who will be negatively impacted? How can we design a better implementation?

“Most ethics frameworks include five to seven key principles, but their value only becomes clear when applied to real-case scenarios,” she explains. “It’s not abstract. It’s about designing better systems from the ground up. Ultimately building a better type of innovation that people will actually want to use.”

In short, ethics is not about erecting roadblocks; it is about creating guide rails for tech that people trust. And that trust, argues Rudolph, is what drives adoption at scale.

The UK’s ‘pro-innovation’ stance: helpful or hype?

On regulation, Rudolph is measured. She sees the UK’s light-touch approach as well-meaning but potentially misleading.

“‘Pro-innovation’ is often used as a slogan, but it implies regulation and innovation are at odds. They’re not. Clear, well-defined rules build trust, encourage investment, and support mass adoption, which is exactly what many governments, including the UK, want.”

Rudolph adds that we absolutely need clear, well-defined rules that are part of the public discourse, as this will reassure people that their rights are protected and that AI is being used responsibly. What we don’t need, she says, is “over-complicated rules and bureaucratic burdens, as this could have a detrimental effect on entrepreneurship and innovation.”

She’s watching the EU’s AI Act closely. The legislation is ambitious, but she believes its real impact will come down to how it’s implemented on the ground.

Bias, agency, and the risks we’re still underestimating

While bias and misinformation dominate headlines, Rudolph says some of the deeper, more systemic risks are still being underestimated.

“Bias is important,” she says, “but it’s also subjective and can be politicised to fit specific agendas. What concerns me more is our increasing overreliance on systems we don’t fully understand.”

Rudolph warns that when we delegate complex decisions to black-box systems, we risk weakening human agency, and that loss is not easily reversed.

“We want to believe we’re in control of our choices,” she says, “but if we don’t understand how decisions are made, we’re sleepwalking into situations where explanations become even harder and that raises serious legal and ethical challenges.”

What next for ethical AI?

“I don’t see ‘ethical AI’ as a statement or a product,” says Rudolph. “It’s a process: ongoing, iterative, and embedded throughout the lifecycle of a project.”

She’s optimistic about the potential of AI assistants and agents to expand access to health and education, and to accelerate breakthroughs in medicine. But she warns that without clearer regulation, better transparency, and more attention to public trust, we may lose control of where this technology takes us.

Marc Ambasna-Jones / Editor-in-chief

Working as a technology journalist and writer since 1989, Marc has written for a wide range of titles on technology, business, education, politics and sustainability, with work appearing in The Guardian, The Register, New Statesman, Computer Weekly and many more.
