Calculations can dramatically cut the resources needed to test potential new drugs experimentally, but they can get fiendishly complicated. How can new computing hardware and new kinds of algorithms take pharmacology to the next frontier?
“You complete me,” says sports agent Jerry Maguire to the love of his life in the ’90s classic – and the same could be said of the relationship between algorithms and hardware. Almost overnight, AI seems to have shifted from science-fiction fodder to a real-world tool permeating all facets of everyday life. But while huge progress has been made with the machine learning algorithms themselves, they would still be powerless without the development of the physical devices needed to crunch through them – essentially the graphics processing units (GPUs) first developed to satisfy gamers’ hunger for slick graphics. The same goes in reverse.
“If you had given us a GPU 20 years ago – just the hardware of it – we wouldn’t know what to do with it,” says Georg Schusteritsch, senior quantum scientist at Kuano. “There’s a lot of effort that needs to be made on algorithm development, method development.”
In drug discovery, machine learning algorithms have long had a role to play; however, the rollout of more powerful AI tools in recent years heralds a step change in what drug companies can achieve in silico (on computer) to develop safe and effective treatments more quickly. And yet some are already looking past the current horizon and finding that – whether because of the limitations of machine learning algorithms or because escalations in sheer brute-force computing power are hitting a wall – an altogether different hardware approach seems attractive.
At this point the hope is that computers leveraging fundamental characteristics of quantum particles will provide ongoing increases in computing power to maintain progress in drug discovery, among other possible applications. However, as Schusteritsch highlights by drawing an analogy with GPUs, harnessing the benefits of these machines requires a far better understanding of what algorithms they handle best and how to execute those algorithms more efficiently.
Marc van der Kamp is an associate professor in the School of Biochemistry at the University of Bristol whose research focuses on enzymes: their involvement in antibiotic resistance, the fundamental principles of enzyme catalysis, and enzyme engineering for drug synthesis.
He explains some of the steps involved in drug discovery, which generally begins with pinning down a target in the body. These targets might be receptor proteins involved in sending signals around the body, or enzymes that catalyse reactions, so that blocking the protein intercepts a response associated with disease.
With the target identified, the next stage is working out what area of the target can be exploited to encourage a drug to bind to the protein, which is often based on its 3D structure. Initial molecules that bind to this area – known as “hits” – are then identified. Next comes “hit-to-lead” optimisation, whereby the hit molecule is tailored to increase its binding affinity for the target.
Finally, there is “lead optimisation”, which deals with things like the specificity of binding to the target protein alone (since binding to other molecules can lead to side effects). The drug candidate’s pharmacokinetic properties – how it is absorbed, distributed and metabolised in the body – may also need finessing. Metabolised too fast, and the drug will need to be taken at impractically frequent intervals; too slow has drawbacks too. The potential toxicity of any byproducts formed as the drug candidate breaks down in the body may also need addressing.
When it comes to figuring out how a drug is metabolised, machine learning can be very helpful.
“One of the big benefits is that you don’t actually have to synthesise the molecules and do the experiments (for molecules that are likely to prove toxic), which saves a lot of time and effort, to see if it works as you hope it works,” explains van der Kamp.
Whether they are based on neural networks that can strengthen connections between artificial neurons or random forests of “decision trees”, the basic principle for these algorithms is that they learn from “training data”, so data quantity and quality are very important (as I argued in Real World Data Science recently). Fortunately, there are large libraries of previous experimental data from pharmaceutical companies that can be exploited with machine learning, to help predict the metabolism of other small molecules.
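To give a flavour of the kind of model described above, the sketch below trains a random forest on molecular fingerprints to predict whether a compound is metabolically stable. It is a minimal illustration under stated assumptions, not any company's actual pipeline: the SMILES strings, the labels and the "stability" endpoint itself are invented placeholders, and the fingerprinting relies on the open-source RDKit library.

```python
# Minimal sketch: random forest on molecular fingerprints.
# The molecules and labels below are illustrative placeholders, not real assay data.
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set of (SMILES string, "metabolically stable?" label)
training_data = [
    ("CCO", 1),                      # ethanol -- placeholder label
    ("CC(=O)Oc1ccccc1C(=O)O", 0),    # aspirin -- placeholder label
    ("c1ccccc1", 1),                 # benzene -- placeholder label
    ("CCN(CC)CC", 0),                # triethylamine -- placeholder label
]

def fingerprint(smiles, n_bits=2048):
    """Turn a SMILES string into a Morgan (circular) fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return list(fp)

X = [fingerprint(smiles) for smiles, _ in training_data]
y = [label for _, label in training_data]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a new, unsynthesised candidate entirely in silico (again, purely illustrative)
print(model.predict_proba([fingerprint("CCOC(=O)C")]))
```

In practice such models are trained on thousands of measured compounds rather than a handful, but the workflow – featurise the molecule, learn from past experiments, score candidates before anyone synthesises them – is the same.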
In many cases, knowing the shape that target proteins fold into is key to the development of therapeutic agents. However, determining the 3D structure of a protein can be phenomenally challenging because of the vast array of different conformations it can adopt.
Pinning down what conformations these proteins take also gets more difficult because they are such large molecules. It is possible to figure out the 3D structure based on the fundamental forces acting on each of the atoms, but that means grappling with the quantum mechanics calculations for all these interactions. Current computing power limits these calculations to around 10 atoms, whereas proteins tend to comprise tens to hundreds of thousands of atoms.
Beyond that point, molecular mechanics or approaches based on other approximations need to be invoked. However, in the 2010s, DeepMind (now owned by Google) decided to tackle the problem of protein folding with machine learning. It made mainstream headlines in 2022 with the announcement that its upgraded AlphaFold 2 had predicted the structure of virtually every protein known to science at that time.
The algorithm has been greatly empowered by the vastness of the available training data – more than 170,000 protein structures painstakingly resolved over the years and stored in a public repository. It is further strengthened by exploiting the “attention” mechanism first proposed in 2017 – the same architecture that underpins large language models such as ChatGPT, BERT and PaLM – which weights elements of the input data based on the correlations between them. According to Google, over two million researchers have used AlphaFold for protein structure predictions.
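To give a flavour of the “attention” idea, the sketch below implements scaled dot-product attention – the core operation of the 2017 transformer architecture – in plain numpy. It is a generic, bare-bones illustration of how correlations between elements become weights, not AlphaFold's actual, far more elaborate network.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key; a softmax turns similarity into weights,
    and the output is the correspondingly weighted sum of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V, weights

# Toy example: 4 sequence positions with 8-dimensional embeddings (random data)
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
output, attention_weights = scaled_dot_product_attention(Q, K, V)
print(attention_weights.round(2))   # how strongly each position attends to the others
```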
Among them are the researchers at Bristol-based startup Imophoron, where the main focus of work is vaccine development. The company’s founders made their name during the COVID-19 pandemic by identifying the Achilles’ heel of the virus, the spike protein, which was later targeted by the drug companies that developed vaccines against the disease.
Imophoron’s vaccine development centres on a particular protein called the ADDomer, into which they insert viral proteins to create vaccines that can be deployed without the refrigeration usually needed to keep vaccines stable.
Richard Bungay, CEO at Imophoron, recalls talking to someone involved in the development of one of the COVID-19 vaccines who was tasked with buying all of the -80℃ freezers in the world.
“That’s what they needed to deliver the vaccine,” Bungay tells Bristol Innovation Foresight. “A slightly eye-watering percentage of vaccines are lost each year because they come out of the cold chain.”
The stability the ADDomer provides is likely due to its hierarchical structure of units, which self-assemble into groups of five called pentons, which themselves self-assemble into groups of 12. Once assembled, the vaccines stay stable up to at least 30℃. However, key to the vaccine’s functionality is the orientation of the viral protein within the ADDomer.
“Does the viral structure look the same as when it’s presented on the original virus? Does it point the same way? Does it fold the same? That’s where we’re using these sorts of AI tools now,” says Bungay.
AlphaFold 2 was made freely available to the research community as a way of “giving back”. While free access is restricted to 20 protein structure calculations a day, it is still a game changer. Bungay recalls the procedures used to develop antibodies at one of the first companies he worked with. “In those days, you would have to immunise animals, and then you’d have this sort of polyclonal antibody mix, or subtly different antibodies, and you’d be trying to separate them to work out which one was the best one,” he says. “You can do this now completely in silico.”
One of the criticisms that has been levelled at machine learning approaches in scientific research is the black box nature of the algorithms. Even for those who know how the algorithm has been programmed, the sheer quantity of parameters to track leaves no clear throughline between input and output, which can be frustrating when the aim is a better understanding of molecular behaviour.
Just as a photo of a car engine is not necessarily enough to tell you how the car works, understanding the behaviour of a protein may require more than an algorithm-predicted shape. The latest version of AlphaFold does now help shed light on protein interactions, and fortunately, ways of extracting explanations for the outputs of machine learning algorithms are also improving.
“We work hard to improve the explainability because that’s something that our users constantly ask us about,” says Nils Weskamp, associate director for Computational Chemistry (Data Science) at the pharmaceutical company Boehringer Ingelheim.
Explainability strategies range from properties built into the model that indicate what is going on, and layers added to the code that help people figure it out retrospectively, to posing questions that highlight what the algorithm is placing importance on. As Weskamp highlights, the problem is not necessarily getting an explanation from the algorithm but understanding the explanation.
“I think algorithms that showed you what’s in the black box is something that existed already for a long time,” Weskamp tells Foresight. “But the problem is, when people had a look they said, wow, that’s complicated – can I please close this box again?” If people see what the algorithm is placing importance on and it marries up with their textbook understanding of what might be going on, things become a little clearer and easier to trust.
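One common post-hoc strategy of the kind described above is simply to ask which input features the model relies on most. The sketch below uses scikit-learn's permutation importance on a toy random forest; the descriptor names, the data and the "activity" being predicted are all invented for illustration, not drawn from any real project.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Invented descriptors for a set of hypothetical compounds
rng = np.random.default_rng(1)
feature_names = ["logP", "molecular_weight", "h_bond_donors", "ring_count"]
X = rng.normal(size=(200, len(feature_names)))
# Toy "activity" that depends mostly on the first two descriptors
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's accuracy degrades:
# large drops flag the features the model genuinely relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>18}: {importance:.3f}")
```

If the highly ranked features line up with a chemist's textbook intuition, the model becomes easier to trust; if they do not, that is a prompt to look inside the box rather than close it again.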
As van der Kamp explains, where people look to optimise the binding affinity by “adding a few chemical groups here or there” and want a more accurate prediction of binding affinities, computer simulations – which machine learning can also help with – are widely used. Here molecular dynamics calculations often come into play, where the atoms in the proteins and drug candidates can be modelled as balls with partial charges interacting electrostatically, with bonds effectively modelled as springs.
Applying the approach to proteins, which are made up of a limited set of amino acids, is quite straightforward, since people have been fine-tuning the parameters that lead to accurate descriptions for decades. However, for potential drug molecules, as van der Kamp points out, the range of possible compounds is vast, so machine learning approaches can be invoked to determine the force-field parameters needed to run the calculations.
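The “balls and springs” picture can be written down compactly. The toy function below evaluates just the two terms mentioned above – harmonic bonds and Coulomb interactions between partial charges – for a handful of atoms with made-up parameters; real force fields add angle, torsion and van der Waals terms and use carefully fitted constants.

```python
import numpy as np

def toy_force_field_energy(positions, charges, bonds):
    """Toy molecular-mechanics energy: harmonic 'springs' for bonds plus
    Coulomb interactions between partial charges (arbitrary units)."""
    energy = 0.0
    # Bonded term: each bond is a spring with force constant k and rest length r0
    for i, j, k, r0 in bonds:
        r = np.linalg.norm(positions[i] - positions[j])
        energy += 0.5 * k * (r - r0) ** 2
    # Non-bonded term: Coulomb interaction between every pair of partial charges
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += charges[i] * charges[j] / r
    return energy

# Three atoms with invented positions, partial charges and bond parameters
positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
charges = np.array([-0.4, 0.2, 0.2])
bonds = [(0, 1, 100.0, 1.0), (1, 2, 100.0, 1.0)]   # (atom_i, atom_j, k, r0)
print(toy_force_field_energy(positions, charges, bonds))
```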
While molecular dynamics calculations are useful, there is still a demand for higher-accuracy binding affinity predictions, ideally to within around a kilocalorie per mole. Even an error of 1.5 kcal/mol translates to an order-of-magnitude error in the estimated dosage required. Other types of calculation, such as density functional theory (DFT), which works out the electron density, or coupled cluster techniques, are capable of improving the accuracy, but the computational cost balloons.
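That “order of magnitude” figure follows directly from the thermodynamic relationship between binding free energy and the binding constant, ΔG = -RT ln K. A quick back-of-the-envelope check, assuming room temperature:

```python
import math

R = 1.987e-3   # gas constant in kcal per mol per kelvin
T = 298.0      # roughly room temperature, in kelvin

# Error in the binding constant implied by a 1.5 kcal/mol error in free energy
delta_G_error = 1.5
factor = math.exp(delta_G_error / (R * T))
print(f"A {delta_G_error} kcal/mol error shifts the binding constant by ~{factor:.0f}x")
# ... roughly a factor of 13, i.e. about an order of magnitude in potency and hence dosage
```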
Weskamp explains how machine learning and in silico methods in drug discovery in general have benefited tremendously from an increase in computing power over the past 20 years.
“Now that’s great, but of course, now we ask ourselves, how will this continue?”
A lot of people see the need for a completely different type of computer altogether to sustain progress in in silico drug discovery. The likely candidate here is quantum computing, where quantum bits – qubits – can exist in superpositions of values throughout a calculation. Whether or not a quantum computer will churn through an algorithm more quickly depends very much on the algorithm itself.
“Quantum computers have no hope at the moment to be competitive with DFT in terms of speed,” says Raffaele Santagati, quantum computing scientist at Boehringer Ingelheim, explaining that quantum computers offer speed-ups only for the most accurate kinds of calculation. However, when it comes to calculating binding affinities, the Quantum Lab at Boehringer Ingelheim, including Santagati, and their collaborators at the University of Toronto have already made some progress in improving potential quantum algorithms.
Traditionally, these calculations alternate between quantum mechanical calculations to find the forces acting on the various atoms involved and simpler classical calculations – essentially solving Newton’s laws of motion – to work out how the atoms move, after which the quantum mechanical calculations are rerun with the updated atomic positions. However, the actual quantities needed to understand binding affinities are thermodynamic – that is, they are statistical.
“Because of the way that thermodynamic quantities need to be computed, that would require millions of these simulations, and potentially each of these simulations could take days,” explains Santagati.
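The reason so many simulations are needed is that binding free energies are ensemble averages over enormous numbers of configurations, not single-shot calculations. The sketch below illustrates that statistical character with a textbook-style exponential (Zwanzig) average over sampled energy differences; the numbers are invented, and this is not the Boehringer Ingelheim and Toronto algorithm itself.

```python
import numpy as np

kT = 0.593   # thermal energy at roughly room temperature, in kcal/mol

def free_energy_estimate(delta_U_samples, kT=kT):
    """Exponential-averaging (Zwanzig-style) estimate of a free energy difference
    from sampled energy differences between two states (toy illustration)."""
    return -kT * np.log(np.mean(np.exp(-np.asarray(delta_U_samples) / kT)))

rng = np.random.default_rng(0)
true_mean, spread = 1.0, 1.5   # invented distribution of energy differences

# The estimate only settles down as the number of sampled configurations grows,
# which is why so many simulations can be needed in practice.
for n_samples in (10, 1_000, 100_000):
    samples = rng.normal(true_mean, spread, size=n_samples)
    print(n_samples, round(free_energy_estimate(samples), 3))
```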
However, he and his colleagues demonstrated how the full computation could be processed efficiently within a quantum computer. They exploit a trick known as “the Koopman-von Neumann formulation of classical mechanics”, which is a fundamental result in physics that’s decades old. It allows them to map probability distributions for the classical formulation of the system to the quantum system, and those probability distributions are just what is needed to compute the binding affinity.
Another biotech company with its sights on the potential of quantum computers is Cambridge-based Kuano Ltd. The company focuses on drug discovery where the target is an enzyme – a biological catalyst that makes it easier for a reaction to proceed when it would otherwise need a lot of energy to get started. They have a substantial team working on the chemistry of these kinds of systems with classical calculations, but as Kuano co-founder Dave Wright points out, it is an area where they feel quantum computing will have a huge advantage.
“If we can get that right, then that gives us this lens to find the best possible drugs in the fastest possible time,” he tells Foresight. “That’s our ultimate goal.”
Although quantum computers already exist, they have only been demonstrated with very few qubits – nothing like the numbers that would be needed for the kinds of calculations drug companies are looking at. As such, determining how efficiently a quantum computer will execute an algorithm may seem a dark art.
One approach is to count the number of operations and qubits involved and estimate the run time from there, taking into account technical features like circuit depth, gate type and gate counts. The algorithm can then be optimised to minimise the expected run time.
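As an illustration of the bookkeeping involved, the snippet below builds a small circuit with the open-source Qiskit library and reads off its qubit count, depth and gate counts – the raw numbers that feed a back-of-the-envelope run-time estimate. The circuit and the per-layer execution time are invented placeholders, and this is obviously far simpler than a full fault-tolerant resource estimate.

```python
from qiskit import QuantumCircuit

# A small placeholder circuit standing in for one step of a quantum algorithm
qc = QuantumCircuit(4)
qc.h(range(4))
for i in range(3):
    qc.cx(i, i + 1)
qc.rz(0.1, 3)

n_qubits = qc.num_qubits
depth = qc.depth()            # number of sequential layers of gates
gate_counts = qc.count_ops()  # how many gates of each type appear

# Invented hardware assumption: every layer takes ~1 microsecond to execute
assumed_layer_time_s = 1e-6
print(n_qubits, depth, dict(gate_counts))
print(f"Naive run-time estimate: {depth * assumed_layer_time_s:.2e} s per circuit run")
```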
To finesse their quantum algorithms, Kuano also uses quantum “emulators” – pieces of software run on classical computers to imitate the workings of a quantum computer. OVHcloud, for example, provides access to six quantum emulators through its “quantum notebooks”. The different emulators mimic different types of quantum computer, which might be based on photonics or cold atoms. While the different types of quantum computer require the same overarching approach for their algorithms, the details of how they are coded may differ.
“It’s like you have your iPhone or Google phone,” says OVHcloud Quantum Lead & Startup Program Leader Fanny Bouton.
Even the emulators have very few qubits – the most recently released emulator among the OVHcloud quantum notebooks is IBM’s Qiskit SDK, which emulates just 12 photonic qubits, although this is still more than has been achieved with a real photonic quantum computer so far. Despite the paucity of emulated qubits, Bouton points out that this is still useful for working out how to optimise algorithms for quantum computing.
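In the same spirit, the sketch below runs a tiny circuit on a local classical simulator using Qiskit's Aer backend – a minimal example of “emulating” a quantum computer on conventional hardware, not a stand-in for the OVHcloud notebooks or any particular photonic emulator.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# A two-qubit Bell-state circuit: small enough for any emulator
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

simulator = AerSimulator()             # classical emulation of a gate-based machine
compiled = transpile(qc, simulator)    # adapt the circuit to the backend
result = simulator.run(compiled, shots=1000).result()
print(result.get_counts())             # expect roughly 50/50 '00' and '11'
```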
In fact, some classical algorithms are already benefiting from what researchers are learning about quantum algorithms. Quantum machine learning is a line of research that aims to offload some of the most demanding machine learning subroutines to a quantum computer. While the field is still wrestling with some knotty issues, such as how to load classical data into a quantum computer and how to read out the output, Santagati points out that these efforts have already led to some “quantum-inspired” machine learning algorithms.
“They show the same speed-up as the quantum machine learning algorithm, but without using a quantum computer,” he says, explaining that the full computational cost – including economic and energy costs, as opposed to pure run time – improves.
While quantum computers with enough qubits to run algorithms for drug discovery may still be years, or even decades, away, we may not have to wait until then to reap the advantages from researching algorithms for them.
Anna Demming loves all science, but particularly materials science and physics, such as quantum physics and condensed matter. She began her editorial career working for Nature Publishing Group in Tokyo in 2006, and has since worked within editorial teams at IOP Publishing and New Scientist. She is a contributor to The Guardian/Observer, New Scientist, Scientific American, Chemistry World and Physics World.