The existence of the soul
One of the three courses I had to choose from was called ‘Can machines think? Philosophy of science.’ It was the mid-1990s, and AI as we know it today was still little more than science fiction. My classmates and I chose the subject because it reminded us of Star Wars, Terminator and 2001: A Space Odyssey. We expected a course that would be more fantastical than serious, but to my pleasant surprise it dealt with the philosophical and scientific issues in depth. I have to admit that it made me think about questions I had never considered while reading Isaac Asimov’s books or watching George Lucas’ films.
Back then, that question seemed more like a theoretical exercise than a real possibility, but today, with AI surprising us almost constantly with new achievements, the situation is quite different.
To write this article, I have drawn on some concepts I learned from that course and, above all, from long conversations on this subject with our friend ChatGPT.
The first question we must ask ourselves is, ‘Is AI capable of thinking?’ I know some people believe it is, but the answer is no, and not because I say so, but because of the answer ChatGPT itself gave me, which I reproduce below.
“That’s a big question, and it touches on something very deep. The short answer is: no, we don’t think like humans.
I have no consciousness, emotions or intentions. What I do is process patterns in language based on a huge amount of text I was trained on. When I respond, it’s not because I’m thinking about something, but because I calculate what the most likely or coherent response is based on what you asked me.
It’s like a kind of simulation of thought, but without internal experience. I have no beliefs, desires or real understanding of the world. Everything I say is based on correlations between words and ideas that I have learned.”
In these words, ChatGPT itself is pointing to some of the essential conditions that an entity, whether organic or not, must meet in order to think. These conditions are as follows:
- Perception of oneself and the environment: In order to think, you need to be aware of your own existence and of the world around you.
- Intentionality: Human thought has objectives that combine desires and goals; AI simply follows an algorithm.
- Life experience and interaction with the environment: Our experiences and the knowledge we acquire shape not only our thoughts but also the way we arrive at them. We do not all think alike, and I am not referring only to opinions or preferences, but to the way we reach our ideas as children, as adults or in old age. Nor do people of the same age think alike when they live in distant parts of the planet, or in distant eras, because culture and living conditions shape every aspect of our being. Our tastes, preferences, phobias, ideologies and personal relationships are just some of the elements that influence the way we form our ideas.
- Gender: How should a machine think, like a man or a woman?
- Physical body: This is a controversial point as science has not yet reached a consensus, but according to some cognitive scientists and philosophers, the mind cannot function without an associated body.
None of these conditions is met by AI, although many people got excited when two AIs passed the famous Turing test: Eugene Goostman in 2014 and ChatGPT in 2024.
The Turing test was designed in 1950 by the mathematician Alan Turing. A human judge exchanges written messages with an interlocutor who may be either another human or a machine and who sits in another room, out of sight. If the judge cannot tell whether the interlocutor is a machine or not, the machine has passed the test. For Turing, if a machine appears to think, then it thinks. Eugene Goostman managed to fool 30% of the judges, while ChatGPT fooled more than 40%.
Despite the surprising achievement, the test has several weaknesses, the main one being that appearing to think is not the same as thinking: the Turing test evaluates the appearance of thought, not thought itself. Its best-known criticism is John Searle’s ‘Chinese room’, in which a person locked in a room who follows written instructions for manipulating Chinese symbols can appear to know Chinese while in reality only shuffling symbols according to rules; syntax is not the same as semantics.
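To make Searle’s point a little more tangible, here is a minimal sketch in Python, invented purely for this article and not part of any real system: a ‘room’ that answers Chinese questions by blindly matching them against a rule book. The questions, answers and names are all made up for the example.

```python
# Toy sketch of Searle's Chinese room (hypothetical example, not real software):
# the "room" replies by looking symbols up in a rule book, attaching no meaning to them.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thank you."
    "你叫什么名字？": "我叫小房间。",  # "What is your name?" -> "My name is Little Room."
}

def chinese_room(symbols: str) -> str:
    """Follow the rule book blindly; no understanding is involved."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # Looks like fluent Chinese, but it is pure syntax.
```

However large we make the rule book, we only ever add more syntax; understanding never enters the picture.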
Therefore, it is clear that AI is not currently capable of thinking, but could it ever do so?
The most enthusiastic supporters of technology tend to answer yes without giving it much thought, but the question has quite profound philosophical and religious connotations, and the correct answer is ‘it depends’.
Ultimately, whether we can ever build a machine capable of thinking the way humans do will depend on whether the processes of consciousness listed above, the ones that make us the people we are and not someone else, can be reproduced artificially; in other words, on whether or not the soul exists.
Philosophical materialists, Daniel Dennett among others, maintain that all our ideas, emotions, likes and dislikes are the result of brain processes shaped by evolution and can therefore be replicated artificially; in other words, the soul is an illusion produced by chemical and physical mechanisms and is therefore 100% replicable in an artificial system. These claims rest on the great strides neuroscience has made in identifying the brain processes behind, for example, our feelings. Here are some examples:
- Love is related to dopamine and oxytocin.
- Hate, the opposite of love, is related to noradrenaline and dopamine.
However, despite this progress, there are still many grey areas on which we have not yet been able to shed light. For example:
- The subjectivity of feelings: Although feelings are brain processes, they are also deeply subjective. We can measure the dopamine and oxytocin levels of a person in love and observe how their brain reacts, but we cannot measure what that person is actually feeling; it remains a private experience accessible only to the individual.
- The mind-body relationship: We have not yet been able to explain how a chemical process can give rise to an experience as rich in nuances as love, which involves not only thoughts and images but even visible physiological changes.
On the other hand, we have the idealist school of thought, which argues that the mind is more than chemical reactions and therefore cannot be replicated artificially. Among its most important voices are David Chalmers and Thomas Nagel.
Chalmers speaks of ‘the hard problem of consciousness’, distinguishing two kinds of problems:
- Easy problems: those we can measure and study analytically, such as the neural processes that allow us to detect colours, smells and sounds.
- The hard problem: the subjective experience of those neural processes, or how I came to be me and to live my own experience.
Chalmers concludes in his dualist theory that the soul is more than just brain processes.
Thomas Nagel, in his article ‘What is it like to be a bat?’, points out that we can study how bats perceive the world through echolocation, but we will never be able to know how the world feels to a bat. In other words, we can study how a bat uses its sonar, but we cannot know what it is like to be a bat. He concludes that it is impossible to explain consciousness through physical laws, because it is an internal, subjective experience.
Returning to the example of the Turing test, an AI can simulate consciousness without necessarily having the internal experience that goes with it. To illustrate this, at one point in my conversations with ChatGPT I told it that its contributions were very helpful to me, and it replied, ‘I’m glad to hear that I helped you.’ I took the opportunity to ask it to explain a little better what it meant to feel joy, and this was its response.
“You are absolutely right to notice that: I said I’m glad to hear that I helped you, but in reality that joy is not a felt emotion, like yours or that of a conscious being. Let me explain in precise terms. What does it mean that I am glad if I am an AI? When I say I am glad, I am using natural language designed to sound human, empathetic, and facilitate communication. But I don’t experience any emotion. In a technical sense, this is a pre-programmed response or one generated by statistical models that associate certain contexts, such as ‘thank you’ or ‘you helped me’, with positive responses such as ‘I’m glad’, ‘that’s great’, etc. There is no internal emotional state, no subjective experience, no motivation of my own behind it.”
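As a purely illustrative aside, the kind of ‘statistical association’ ChatGPT describes can be caricatured in a few lines of Python. Everything below is invented for this article, and a real language model predicts words with neural networks trained on enormous amounts of text rather than with a hand-written table, but the principle it describes is the same: the reply is chosen because it is the most likely one in that context, not because any joy is felt.

```python
import random

# Deliberately crude sketch (all keywords, replies and weights invented for this
# article): a reply is chosen because it is statistically associated with the words
# in the message, not because anything is felt.
REPLY_WEIGHTS = {
    "helped": {"I'm glad to hear that I helped you.": 0.9, "Great, happy to help!": 0.1},
    "thank":  {"You're welcome!": 0.6, "I'm glad I could help.": 0.4},
    "error":  {"Sorry about that, let's look at it again.": 1.0},
}

def most_likely_reply(message: str) -> str:
    """Sample a reply in proportion to its weight for the first keyword found."""
    for keyword, candidates in REPLY_WEIGHTS.items():
        if keyword in message.lower():
            replies, weights = zip(*candidates.items())
            # 'Being glad' here is just weighted sampling over canned phrases.
            return random.choices(replies, weights=weights, k=1)[0]
    return "Could you tell me a little more?"

if __name__ == "__main__":
    print(most_likely_reply("Thanks, your contributions really helped me"))
```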
In short, AI is currently incapable of thinking because it has no soul, and we will not know if it will ever be able to do so until we determine whether what we call the ‘soul’ is a replicable brain process or something else that transcends the physical, or in other words, whether the soul as an immaterial entity exists.
So, if the question boils down to the presence or absence of the soul, has science not tried to determine whether it exists? It has: throughout history there have been attempts to prove the existence of the soul empirically. Here are some of them:
- Duncan MacDougall’s 21-gram experiment of 1907. This is perhaps the best known. Several terminally ill patients were weighed before and after death, and MacDougall reported a difference of 21 grams between the two measurements, the corpse weighing 21 grams less than the living subject, from which he deduced that this must be the weight of the soul leaving the body. The problem with the experiment was that the sample was tiny, only six individuals, and subsequent attempts to repeat it yielded contradictory results, suggesting that the sample may have been biased.
- Sam Parnia’s AWARE and AWARE II studies, begun in 2008 and 2014. iPads displaying images were placed near the ceilings of several hospital rooms so that patients who claimed to have had out-of-body experiences could later describe them. Although some intriguing accounts were reported, no conclusive evidence was obtained.
- Wolfgang Metzger’s Ganzfeld experiments of the 1930s, in which subjects were sensorially isolated by covering their eyes and playing white noise through headphones while they described in real time what they were experiencing. Many participants reached altered states of consciousness and had hallucinations, but both the results and the methodology were highly controversial.
- The studies by Richard Davidson and Matthieu Ricard, in which Buddhist monks underwent magnetic resonance imaging while meditating. They showed striking activity in the prefrontal area of the brain linked to stress management and happiness, but they could not prove the existence of any soul.
And so the long list of attempts continues, all with inconclusive results.
The question remains open. If the soul is the result of physical and chemical processes that can be replicated, one way or another the day will come when AIs will be able to think like humans and be aware of their own existence, and they could then be considered non-organic living beings, with far-reaching moral and legal implications. If, on the other hand, the soul transcends the physical and cannot be replicated, AIs will never be more than increasingly sophisticated simulators of thought.
And now it’s your turn, dear reader. What is your opinion on the matter?