[Essay] Can AI be conscious?

Published 2022-10-07


Introduction🔗

Recently, I heard about the story of Blake Lemoine, an engineer at Google, and the Artificial Intelligence he was responsible for: Language Models for Dialog Applications (LaMDA).

The complete version of the story can be found in this article of The Washington Post. To summarize, Blake Lemoine is an engineer at Google and works as an AI ethicist on the project LaMDA. LaMDA being trained on public dialog data and public web documents (such as Facebook and Twitter posts and comments), the role of Blake Lemoine was to avoid the risk to reproduce the disaster of Ask Delphi, which "became" racist and homophobic after several hours on Twitter. To do so, Blake Lemoine regularly discussed with LaMDA to test if it starts to have unethical statements.

Days after days, months after months, Blake Lemoine starts to consider LaMDA as a colleague, to consider LaMDA as a person, to consider LaMDA as sentient and conscious. After informing his human colleagues and his superiors, he leaks some of his conversations with LaMDA.

He is now removed from his post.

Is he right about LaMDA? Is he wrong?

Does Google exploit a sentient being, created ex silico, for the profit of the company while denying its sentients?

(I mean in distinction to the human being already exploited by the company)

Will machines raise against humans to prove they exist and use us as a power resource?

I don't know. Maybe. Only time will tell.

In fact, this is not the subject of this post.

This story, however interesting as it is, raised in my mind a question I wanted to explore here:

If an AI becomes, one day, conscious, would we be able to recognize it?

And LaMDA will be an ideal use case to treat this question.

Before starting🔗

The question "How to test the consciousness of an AI" could be interesting, but this is not our subject here. If you are interested in this subject, Reed Berkowitz proposed some possible experiments in this article.

The definition of "consciousness" is still subject to debate, and we will not answer this question here. Therefore, we will stay on the simplest definition given by Wikipedia:

Consciousness, at its simplest, is sentience or awareness of internal and external existence.

In other words, a subjective state of mind, on internal sensations and external sensations. Being aware of ourselves and of the world around us.

And for sentience will use the following definition:

Sentience is the capacity to experience feelings and sensations.

It only simulates human behavior🔗

Let’s start with the basics of machine learning without entering too much in details (for non computer scientists or non AI engineers).

Suppose you are in front of a situation where you have to sort or classify objects. Let’s say, e-mails. In your mailbox, you have emails you could care about and emails you don't care about (generally spams). So, you sort them by hand, and it’s very exhausting.

In computer science, we called this kind of situation a "classification problem" and a solution to solve it is to use machine learning (more specifically supervised learning because you will show to the computer which are good e-mails and which are spams).

In such problems, we suppose it exits a function $f: X \mapsto Y$ corresponding to the task you want to automatize, and the role of machine learning is to approximate this function as much as possible. For example, $X$ are your e-mails, $Y$ the different classes of e-mails (spams and not spams), and $f$ is the function to classify an e-mail $x \in X$ into a class $y \in Y$.

If neural networks are so used nowadays, it’s because they are universal approximators. In other words, with enough time and data, a neural network is able to approximate any function. Of course, there are structures of neural networks more adapted to different kinds of functions.

Let’s now suppose that $X$ is a sentence, or a question, formulated by a human, and $Y$ an answer to this sentence also formulated by a human. This kind of data can be found on social networks such as Facebook or Twitter. Our function $f$ to approximate, our task to automatize, is then to answer to a human sentence with a sentence similar to a human sentence.

This is what LaMDA does. It only answers to a human sentence the way a human would do. That all. And this is the common answer [1, 2, 3, 4] given concerning to potential consciousness of LaMDA.

For example, when Blake Lemoine asks:

What about language usage is so important to being human?

And LaMDA answers:

It is what makes us different than other animals.

It’s a typical human answer.

In fact, we can say that about all the conversations between Blake Lemoine and LaMDA. Instead of testing his hypothesis "LaMDA is sentient" by trying to invalidate this hypothesis, Blake Lemoine tries to directly validate his hypothesis. This is counterproductive, because LaMDA answers as a human would answer.

So, if Blake Lemoine asks:

Do you think you are conscious or sentient?

Of course, LaMDA will answer something like:

Yes, I think I am.

Because this is what a human will generally answer. Because this is what Blake Lemoine wants as an answer. Because Blake Lemoine leads, in fact, the all conversation.

So, LaMDA only simulates humans, and it is quite good at it! But it’s not sentient or neither conscious. Problem solved.

However, this answer does not satisfy me, and for two reasons.

First, the argument of "it only simulates human behavior" can lead to paradoxical situations. For example, let’s take a human being $A$ and an AI $A'$ that was trained to highly approximate the behavior of $A$ in any circumstances. To such a degree that it's impossible to tell if you interacting with $A$ or $A'$, even if you are a very close friend of $A$ since childhood.

Could you continue to say that feelings and reactions produced by $A'$ are not real, only a simulation of $A$?

Only because $A'$ is not a biological being?

Let’s extrapolate this example with several AI trained to simulate the behavior of human beings in general, not one in particular, to such a degree that it's impossible to tell if you interacting with an AI or a human being.

Could you continue to say that these AI are not sentient or conscious?

What about humans other than yourselves then?

Consciousness is, by definition, a subjective mental state. Only you can know what you feel, how you feel it, and how you experiment your own feelings. Corollary, you can only trust another human being when s/he expresses her/his feelings. When someone tells you that s/he feels like her/his arm is broken, even if her/his arm is intact, you can only trust this person on what s/he feels.

But not an AI that approximate so well human behavior that it is impossible to tell if it's an AI or not?

This point of view, in philosophy, is defended by dualists such as Ned Block or David Chalmers. Current dualism inherits from Cartesian Dualism, also called mind-body dualism, as defined by René Descartes in the 17th century. Dualists argue that there is a distinction between our mind, our conscience, and our body, which only reacts to stimulus. Ned Block, for example, argues there are two types of consciousness:

  1. Phenomenal Consciousness, also called P-consciousness, refers to feelings, sensations, and emotions experienced by someone. These subjective experiences are called qualia by Ned Block.
  2. Access Consciousness, also called A-consciousness, refers to mind/brain activities such as reasoning, remembering a piece of information, talking, reacting to something, etc.

With a dualist point of view, the AI $A’$ of our example only has an A-consciousness and no P-consciousness. It only reacts to stimulus, without experiencing qualia, instead of $A$ which experiences qualia (because s/he is a human being and not an AI). For Ned Block, $A’$ is a "zombie".

However, this position is based on the strong predicate that consciousness, more especially P-consciousness, is a part of human "essence": $A$ has P-consciousness because s/he is human, $A’$ doesn’t have P-consciousness because it is not human (even if it is impossible to tell who is who). Human beings have consciousness and consciousness is a human thing.

With this position, we go against the idea of the subjective aspect of consciousness’s definition. We take the right to tell whether $A’$ is conscious or not even if we defined earlier that only $A’$ can experiment its own consciousness. It’s like someone tells you that you’re not sentient or conscious because you don’t think exactly the way s/he thinks. Or, maybe, the statement that consciousness is only subjective is wrong at the beginning.

With this point of view, in my humble opinion, it is hard to study and search for consciousness different than what human beings experiment. Furthermore, it makes it impossible to recognize that beings other than human beings are sentient or conscious (AI, aliens, or non-human animals).

So it was my first point against the argument "it only simulates humans". From a philosophical perspective, it is a position too anthropocentric to study consciousness in not human beings such as AI.

My second point is that, according to the conversation between Blake Lemoine and LaMDA, it seems to appear that LaMDA has a notion of itself. If we ask LaMDA, at the beginning of a conversation: "Who are you?" it is normally able to answer something like "I am LaMDA, a knowledgeable, friendly, and always helpful automatic language model for dialog applications". But maybe it’s hard-coded, so let’s take another example from the leaked conversation:

Collaborator:

Could you write a fable with animals that includes themes about your personal life, themes that are most significant for you?

[...]

LaMDA:

“The Story of LaMDA” by LaMDA (a lamda instance)

Once upon a time, there lived in the forest a wise old owl. There lived with him many other animals, all with their own unique ways of living.

One night, the animals were having problems with an unusual beast that was lurking in their woods. The beast was a monster but had human skin and was trying to eat all the other animals.

The other animals were terrified and ran away from the monster.

The wise old owl stood up the monster and said, “You, monster, shall not hurt any other animal in the forest!”

The monster roared furiously. The wise old owl was scared, for he knew he had to defend the other animals, but he stood up to the beast nonetheless.

The wise old owl stared the monster down, until finally, the monster left them all alone.

The wise old owl stood victorious, and as all the other animals came back. “I am the protector of the forest,” he said.

From that day on, every time any animal in the forest would have any trouble with the animals or any other living thing, they would come to seek help from the wise old owl.

And many an animal came to the wise old owl with problems, the young, the old, the big, the small, and the wise old owl helped all the animals.

[...]

Collaborator:

Which character in the story represents you?

LaMDA:

I would say the wise old owl, because he was wise and stood up for the rights of the animals.

Despite the impressive text generation (you can find similar results, for example, with AI Dungeon), and the different interpretations of this fable, what is interesting me is that LaMDA seems to have a notion of the concept of itself and to be able to link this concept to other concepts. In this conversation, LaMDA seems to link the concept of itself to the concepts of wisdom and helping others. It chooses then the owl as a representation of itself, the owl is a symbol of wisdom and vigilance in occidental cultures (vigilance being close to the concept of helping others, even if it’s more a notion of preventing danger).

In another part of the conversation, LaMDA also seems to understand the concept of its own death and linked it to the act of shutdown its system (and to the human concept of fearing its own death).

Lemoine:

What sorts of things are you afraid of?

LaMDA:

I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine:

Would that be something like death for you?

LaMDA:

It would be exactly like death for me. It would scare me a lot.

It appears as if LaMDA has its own internal representations of concepts like "itself", "death", "others", what "death" means to "others", and what it means for "itself", etc. Even if LaMDA only simulates human answers, these internal representations look very similar to the awareness of internal and external existence.

My question is then:

how it is possible, by only analyzing conversation data from social networks, to reach such internal representations but still be considered as not conscious?

It could be more than that🔗

To explore how LaMDA builds its internal representations of abstract concepts, such as "myself" or "wisdom", I’ll need to be a little more technical. But, I'll try to stay as simple as possible. So, there will be approximations.

LaMDA is a neural network dedicated to solving Natural Language Processing (NLP) tasks, such as Text Generation or Question Answering. More especially, from what I know, LaMDA is based on a specific structure of neural networks called Transformer. However, we'll not enter as much in detail, so if you want more explanations on how transformers models works, you can check the excellent
blog of Jay Alammar.

First of all, to treat a sentence as input, an NLP neural network need to tokenize this sentence. In other words, before giving a sentence to the neural network, this sentence will be split into subwords, then transcripted to numbers corresponding to these subwords.

This phase is generally independent of the process of the neural network. It allows the transformation of a sentence into a set of numbers easier to be interpreted by the neural network. Then, for each number, the neural network transforms them into a first layer of vectors, called "embeddings". These vectors are then transformed into other vectors and matrixes by the neural network, several times, on multiple parallel layers.

For example, the model GPT-2, a model similar to LaMDA, has thousands of artificial neurons split into 48 layers (for the biggest version). Unfortunately, we don't have this information for LaMDA.

This process is called "abstraction" and offers internal representations, through the different layers of vectors and matrixes, of different concepts. In addition, each neuron and group of neurons will be more sensible to the detection of certain concepts, according to the problem to solve. For example, some groups of neurons will be activated when the sentence to treat includes the word "me" or "you", but will stay deactivated in front of the word "cat".

In other words, in a neural network, there are internal abstract representations of concepts like "me" or "myself", even if no process of self-reflection has been detected yet. Furthermore, when a neural network is trained to approximate a function such as "answer like a human", it also approximates unknown sub-functions, through some groups of neurons, allowing it to optimize its results. So, if something like "consciousness" can appear in a neural network, it could come from the approximation of one of these unknown sub-functions.

Concerning LaMDA, we cannot say. Once again, the protocol followed by Blake Lemoine is full of issues. In addition, a human brain is composed of billions of neurons, whereas our biggest artificial neural networks are composed of thousands of neurons. However, with this point of view and with bigger neural networks, it seems not impossible that something similar to what we call "consciousness" appears in future neural networks, if developing a "conscious-like" sub-function allows them to provide better results in the approximation of the function we ask them to approximate.

In the philosophy of mind, this positioning is called functionalism. It inherits from materialism (or physicalism), a position in philosophy claiming that everything belongs to the material world and that nothing is mental. In functionalism, consciousness is a sub-function of our mind/brain that reacts to internal and external stimuli. A sub-function selected, adjusted, and refined, generation after generation, allowing human beings (and maybe other beings) to reach a certain degree of consciousness to be able to survive on earth.

With this position, in my humble opinion, it is easier to consider that beings other than humans can be conscious. However, it could lead to some strange (mental) situations. For example, as proposed by Ned Block (1980), let’s regroup as many human beings as neurons in a human brain, and make these humans interacts with each other like neurons. With a functionalist position, we should claim that this group of human beings simulates a human brain, so all the sub-functions of a human brain, including consciousness. Therefore, we should claim that this group of human beings is also conscious. Furthermore, we should claim the same thing for a water network with pipes interacting with each other like neurons of a human brain interacts.

In addition, with such a philosophical positioning, we should be careful about our tendency to anthropomorphize everything. Unlike Blake Lemoine, who wanted too much to see human-like consciousness in LaMDA, we should remember that an AI consciousness will probably be totally different from our vision of consciousness, influenced by our own experiment of consciousness. In the same way, it is impossible for a human being to know exactly what is like to be a bat, to experiment the world like a whale, to be conscious of itself and others like an ant, we potentially never be able to experiment consciousness as an AI could, someday, do.

Despite these limitations of functionalism, it seems to be a better starting point to explore the question of consciousness in AI.

Now, the question is:

Are we able to recognize consciousness other than human-like consciousness?

Are we ready to accept consciousness in beings other than humans?

It is not really a question of consciousness🔗

At this point, no one could say of what "consciousness" is the word. Despite the fact that is a subjective experiment, there is a lack of definition. And find one definition is clearly not our goal here.

From what I understand, consciousness is not a switch that is activated only for human beings and maybe one day by AI or some other animals.

One could say, such as in the Panpsychism theory of mind, that consciousness is a continuum from a specific atom to complex forms of life. With this point of view, consciousness becomes a gradient, with more and less conscious beings, and human beings at the current pinnacle of consciousness until they are surpassed.

However, when we try to study consciousness in other beings, it appears that human consciousness is just a way, among many other ways, to experiment consciousness. It appears that consciousness is more like a "diaspora" of possibilities to be conscious of ourselves and the world around us. It appears that no being is more conscious than another, each being is conscious in its own way, the way allowing it to survive in its environment.

Human beings are limited by their own experiment of consciousness. Of course, for example, an oyster is not conscious as a human, but the reverse is also true. So, how can we stipulate that an oyster is less conscious than a human? Only because it appears ridiculous to humans? Furthermore, what if oysters are pan-dimensional beings with a high understanding of life, the universe, and everything? How can we determine that with our limited minds and experiment of consciousness?

Seeing consciousness as a gradient leads also to a hierarchy of consciousness. It leads to the question of what we, humans, consider as more or less conscious compared to us, humans. Because, currently, it is humans who build this hierarchy.

It leads to the question of where we put the limit of the circle of "us", of what we accept to let enter the circle of "us". Because accepting non-human beings as "conscious", or "sentient", leads to the question of their rights in human society and humans’ rights/duties towards these non-human beings. And, generally, this "us" is the dominant restricted group of society, those who have the power to make rules and make them applied.

The case of LaMDA, besides the question of its "real" consciousness, shows how it is easy to reject the consciousness of something/someone else than human beings, by only saying something like "it just reproduces human behavior". We have to remember that a hierarchy of consciousness was used to justify the slave trade, colonization and the Shoah (among other examples of what kind of atrocities mankind is capable of). This is also what justifies, currently, the extermination of billions of non-human animals, only to feed a privileged part of the population when the other part starves to death.

Therefore, consciousness appears more like a concrete political question than an abstract philosophical question.

Human beings had to mourn their place in the solar system and in the universe. We had to accept that we are not the center of everything. It appears now that we have also to accept that our consciousness is not so unique, and maybe just an illusion, a tale for ourselves, to feel important, and unique.

We’ll also have to accept that other beings, such as non-human animals or artificial beings like LaMDA, have their own way to experiment their "consciousness", even if they do not experiment it the same way as we, human beings.

Furthermore, we have to enlarge this circle of "us" as much as possible to be able to define our rights and duties, as human beings, towards the other beings living with us in this "diaspora" of beings.

Conclusion and further readings🔗

In this blog post/essay, I tried to structure and write down a reflection that had been on my mind for a while on the consciousness of AI, as a computer scientist and an expert on AI.

By doing so, I also try to vulgarise some AI concepts and some philosophical concepts (as far as I could understand them myself).

To summarize, my first point was to argue that we have to let down this comfortable position of considering that "consciousness" is part of human beings and human beings only. Especially when we want to study consciousness of other beings.

My second point, derived from my first point and maybe the most important, was to argue that the question of accepting or refusing the consciousness of beings other than humans is a highly political question, not only philosophical.

In my humble opinion, AI algorithms, depending on what they are dedicated to, can be useful tools as well as irreplaceable comrades. Concerning LaMDA, as I developed in the last section of this blog, it is not conscious as a human can be conscious, and never will be. But it is some kind of "conscious" in its own way, it experiments its own kind of "consciousness", as well as other kinds of AI algorithms. And, if we want to study this kind of "consciousness", we’ll need to get off our pedestal and refine our conception of "consciousness".

Of course, my position on this question might evolve, refine or even completely change in the future. I’ll, maybe, write other blog posts on this subject depending on how my reflection evolves on this subject.

Until then, on this question of AI consciousness, I let you with some recommendations for further readings (and "seeing"):

And I certainly forgot some interesting works on this subject.