Artificial Intelligence
Reading time: 3 mins
AI development is not a linear process, so a responsible approach is needed to ensure businesses achieve objectives
For anyone building AI systems today, it’s easy to get swept up in the promise: faster delivery, frictionless automation, market differentiation. But for Dr Simon Fothergill, AI consultant and founder of Lucent AI, there’s a more grounded truth: “People say, ‘humans need to be in the loop’, and they forget the fact that humans have never been out of the loop. Because where do you think the model came from in the first place?”
Fothergill, who will be speaking at the forthcoming AI Ethics, Risks and Safety conference in Bristol, has spent years designing applied AI systems that work in the real world. In his view, some businesses treat responsibility as an afterthought, and that can have consequences.
“Go slow to go fast,” Fothergill says, invoking an idea familiar to agile developers. “If you build it more carefully, it takes a bit longer. But a year later, you’re still delivering things at speed, rather than having ground to a halt because it’s too messy.”
A key principle, according to Fothergill, is that responsible AI should be rooted in practical values like transparency, authenticity, and fairness. These are not just aspirational. They guide the design of systems that can be trusted, maintained, and scaled. He outlines his own expanded definition of responsibility through a neat acronym, FACETS, which stands for Fairness, Authenticity, Climate, Education, Transparency, and Safety.
Transparency, he explains, goes beyond knowing where training data came from. It also involves understanding how a model was built, how it makes predictions, and how uncertainty and bias are communicated.
“All data sets are biased in some way,” he says. “The interesting question is: how? That’s where transparency really plays a key role.”
Fothergill is sceptical of AI development that charges forward without clear intent. He uses an agile metaphor to frame the point: “The task isn’t building a car, it’s getting from A to B. A car might seem obvious, but it could be unnecessarily complex. You might start with a scooter, then a bike, and iterate. When you get to B, you might even realise you need a boat instead of a car.”
What matters, he says, is understanding your metrics from the outset: “Agile only works if you know what you’re optimising for.” While commercial metrics like speed or efficiency are common, start-ups must also consider values like transparency, sustainability, and long-term trustworthiness.
Fothergill argues that strong technical leadership is essential to understanding the trade-offs. Optimising for transparency or authenticity, he says, affects everything, from how you collect data to how models are structured and teams are organised.
That clarity can be difficult to maintain under certain types of investor pressure, especially where business models depend on rapid growth or commoditised outputs. But he’s quick to point out that not all investors are alike. “Some are prioritising climate change, renewable energy, or trustworthy systems,” he says, adding that it really depends on who’s at the table.
Another challenge is what Fothergill calls the “glasses problem”. Engineers can’t build responsible AI if they can’t see clearly what they’re trying to model. “An artist can’t paint a picture unless they can see what they are painting,” he says. In this analogy, engineers need help seeing, and that’s where domain experts come in. But domain experts also need help articulating what they see. Collaboration, he argues, must be mutual.
“You have to have engineers who are willing to listen to the domain experts and who can ask the right questions,” he says. “And you have to have domain experts who are willing to answer and who can point out things the engineers might have missed.”
Leadership in these relationships is fluid. While engineers often begin the process, domain experts become increasingly central as models are deployed. “The model might show empirically, objectively, something about the domain that the experts hadn’t realised,” Fothergill notes.
He also challenges the idea that interdisciplinary work is inherently fragile or unsuitable for modern AI development. He draws on CP Snow’s The Two Cultures to illustrate the persistent divide between scientific and non-scientific disciplines, and the discomfort some may feel when navigating across those boundaries.
One of Fothergill’s sharper critiques is aimed at the increasing reliance on what he calls ultra-processed foundation models (UPFs) – a deliberate nod to ultra-processed foods. “If you go to the supermarket and all you see is OpenAI milk, what are you really choosing?” he asks.
He stresses that many systems built around foundation models never train or adapt the underlying model further, essentially plugging it in as-is, without scrutiny. That might seem convenient, but it disconnects developers and users from the model’s assumptions, uncertainties, and limitations.
Fothergill does not argue that everyone needs to build their own model from scratch. “The point is to get an accurate enough model for your problem. If one exists already, then great,” he says. But he notes that relying entirely on external models can limit intellectual property generation and weaken long-term understanding and trust.
“You can’t trust it as much as you might need to,” he adds. “And it’s harder to connect with the humans behind it.”
Ultimately, Fothergill believes responsible approaches to AI lead to more sustainable businesses. “If you keep hacking things together, it is hard to scale,” he says. Over time, systems built without clear intent or structure become brittle and harder to evolve.
That fragility isn’t just a technical risk; it has reputational and regulatory implications too. In a world increasingly concerned with explainability, copyright, and auditability, early shortcuts can turn into costly liabilities.
But responsibility, Fothergill argues, also presents a commercial opportunity. “You can make responsibility part of your value proposition,” he says, pointing to companies that integrate ethics and care into how they operate, and thrive because of it.
This isn’t about being perfect, he adds, but about being deliberate. “It’s about being honest, transparent, and willing to keep learning.”
Dr Fothergill will explore these ideas further at the AI Ethics, Risks and Safety conference, where he hopes to spark more realistic and honest conversations about what it takes to build trustworthy AI.
“There’s so much interest, so much resource going into AI,” he says. “If we could just get everyone on the same page, from the engineers to the boardroom, we could really fly.”
Working as a technology journalist and writer since 1989, Marc has written for a wide range of titles on technology, business, education, politics and sustainability, with work appearing in The Guardian, The Register, New Statesman, Computer Weekly and many more.