Lyndon Drake navigates between three worlds that appear to exist at a distance from each other: the ethics of capital markets, artificial intelligence, and theology. He calls himself a theologian-scientist, and attempts to build bridges between the cold logic of the algorithm, the dynamics of money, and classic questions of good and evil. He has been both a high-ranking banker at Barclays Capital — where he worked with the United Kingdom’s Debt Management Office, central banks, pension funds and hedge funds during the Lehman Brothers crisis — and an ecclesiastical leader, as an archdeacon of the Māori Anglican Church.
Currently, he works as a researcher at the University of Oxford where, along with a group of experts, he has developed the Oxford Oath for AI Practitioners, which seeks to be a kind of Hippocratic oath for AI engineers and other professionals. Inspired by ancestral ethical codes, the voluntary pledge attempts to prioritize common good over mere technical efficiency, placing limits on applications that amplify inequality or erode human dignity. Drake calls it an antidote to the algorithmic savagery of Silicon Valley. “It’s not a drag on innovation, but rather a compass for AI to generate sustainable wealth without sacrificing values,” he says.
Question. What does theology have to do with artificial intelligence?
Answer. I would say there are two main factors. The first is that a majority of the global population has some kind of religious affiliation, and that number is growing. AI is going to transform society. Because of that, the way AI is integrated into society must be justified in terms that make sense to religious people, too. Otherwise, there is a risk of social disruption: there would be a purely secular justification and, at the same time, communities that feel excluded or resistant. But there is a deeper reason: some of the problems posed by artificial intelligence today have been the subject of theological reflection for hundreds of years, while computer scientists have been thinking about them for a relatively short time.
Q. For example?
A. Language. Until recently, AI was good at mathematics but bad with words. Now, that has changed. Theologians and philosophers have been reflecting on the relationship between language, meaning, and identity for centuries.
Q. It has been said that we are on the brink of solving all our problems thanks to AI. Do you share that optimism?
A. I have been at this for 30 years, and we have always had the hope that the solution is right around the corner. Though it always seems to elude us somewhat, I think we are very close to solving the majority of categories of problems, if not all concrete cases.
Q. Is that good news?
A. It is positive in fields like medicine, where detecting illnesses before they develop will save lives. But the risk is that we will lose the capacity to make those diagnoses ourselves. Or we can consider autonomous weapons: they can save lives in a just war, but in the hands of those who start an unjust war, the damage is devastating. And the problem is that we will almost never agree about which war is just.
Q. We also use AI to communicate, almost as if we were speaking with a peer.
A. That is a dangerous conceptual frame. We have assumed that a person’s value is tied to their ability, particularly in the area of language. The fact that a computer is good at mathematics does not make it human, nor a god. With conversational systems, we tend to bestow upon them the status of a person, but we shouldn’t treat things that are simply powerful as gods.
Q. Will these chatbots be able to challenge the status of religious leaders?
A. Without a doubt. Chatbots have something positive: they are always patient and friendly, something that is hard for us as humans. But it’s a complex friendliness, the kind where they’ll always say you are good, when sometimes we need to hear the opposite. This forces us to ask ourselves what it means to be human. We are not only our capabilities. A person with a severe disability has immense dignity even if their capabilities are limited. If we equate value with ability, we will have misunderstood our own nature.
Q. You are promoting the Oxford Oath for AI Practitioners. What is your goal with it?
A. We want something similar to the Hippocratic oath for doctors. There are thousands of practitioners working behind the scenes who want to do good, but don’t know how. The goal is to incorporate moral deliberation into their daily practice and preserve the space for human judgment where it belongs.
Q. What commitments does someone who signs this oath make?
A. There’s one that is fundamental and controversial: to affirm that human beings have a moral value superior to that of any artificial entity. AI must be at the service of human flourishing. We also want designers to reflect on how their creations transform users. It does not prescribe specific actions, but rather guides moral reflection, regardless of how technology changes.
Q. Do you have support from the large tech companies?
A. We are in the comment-gathering phase. We have published an open letter that has attracted the interest of workers at the large companies. The oath itself is still being perfected, because we want it to last for centuries. The true test will come when it is made available: if it connects with the sector’s concerns, people will sign on.
Q. Some executives see ethics as a barrier to innovation. What would you say to them?
A. That we have proposed this in order to encourage innovation, but through redefining its ultimate goal. We want a way to develop AI that is, above all, good and useful.
Q. Without a law behind it, how will you avoid this becoming something that is purely symbolic?
A. An oath is made through individual and community conscience. It does not dictate laws, but it does create an ethos. If everyone signs on, we will be able to mutually question whether our actions contradict the oath. Society has an equal need for laws (like the EU Artificial Intelligence Act) and a shared moral framework that bestows social legitimacy on the industry.
Q. In your opinion, what is the biggest risk of uncontrolled AI?
A. Mass unemployment, due to its probability and impact. Not just because of money, but also because we tie our identity to work. Integrating this change into our conception of personal value will be very hard. My biggest worry is that we are creating systems designed — implicitly or explicitly — solely to get our attention rather than serving a greater human purpose. If their only goal is to hijack our time, they will wind up degrading the most valuable thing we have as humanity.
