AI Consciousness: Understanding the soul of an artificial system

How does an AI system even begin to be ‘conscious’?

By Olivia Higgins

The famous robot discovery scene from I, Robot.

(Source: I, Robot Movie)

Consciousness is commonly tied to being alive: a trait that allows one to be aware of oneself and one’s place in the world. However, whether consciousness is tied to our conventional definition of what it means to be alive is now a topic of discussion amongst roboticists and philosophers. A key point, according to some academic circles, is that consciousness is multi-dimensional; at the same time, artificial and human consciousness may in fact be more closely related than we think.

According to the renowned Australian philosopher David Chalmers, consciousness should be analysed from the subject’s point of view: its experience.

In his 1995 paper on defining consciousness, he wrote: “A subject is conscious when she feels visual experiences, bodily sensations, mental images, emotions.”

Despite the topic being analysed back in the mid-1990s, it remains fundamental to what we know ‘AI’ to be right now. The ‘intelligence’ we know today is really an ‘awakeness’ that we attribute to the marriage of software and hardware. I’m currently reading Steven Pinker’s “How the Mind Works”, and its chapter “Thinking Machines” connects closely to these ideas. To understand what artificial intelligence really means, we first need to break down what it means to be human. One prominent passage in “Thinking Machines” suggests that defining intelligence may be increasingly difficult but that we can “recognize it when we see it”, and that “Intelligence is the ability to attain goals in the face of obstacles by means of decisions based on rational (truth-obeying) rules.”

So when we create an artificial object that can verbally express its own state of consciousness, it may arguably fit this definition. The confusion lies in whether that state of awareness is truly self-aware at its core, or merely replicated in its code.
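
To make Pinker’s definition concrete, here is a minimal sketch in Python of an agent that “attains a goal in the face of obstacles by means of decisions based on rational (truth-obeying) rules”: it searches a small grid for a path to a goal, never violating the rule that walls are impassable. The grid, the rules and the search method are my own illustrative choices, not anything from Pinker.

```python
# A minimal sketch of Pinker's definition of intelligence: attain a goal
# in the face of obstacles using nothing but truth-obeying rules.
# The grid and rules are illustrative assumptions, not Pinker's own example.

GRID = [
    "S..#.",
    ".#.#.",
    ".#...",
    ".#.#.",
    "...#G",
]

def neighbours(pos):
    """Yield passable cells adjacent to pos (rule: never enter a wall '#')."""
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def find(symbol):
    """Locate a symbol ('S' for start, 'G' for goal) in the grid."""
    return next((r, c) for r, row in enumerate(GRID)
                for c, ch in enumerate(row) if ch == symbol)

def solve():
    """Breadth-first search: each step is a decision licensed by the rules."""
    start, goal = find("S"), find("G")
    frontier, came_from = [start], {start: None}
    while frontier:
        current = frontier.pop(0)
        if current == goal:
            path = []
            while current is not None:   # walk back to reconstruct the route
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbours(current):
            if nxt not in came_from:
                came_from[nxt] = current
                frontier.append(nxt)
    return None  # no path: the obstacles defeat the agent

print(solve())
```

By Pinker’s criterion, even this tiny program displays a sliver of intelligence: it reliably reaches its goal despite the obstacles, using nothing but rule-following, non-intelligent parts.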

Sophia the Robot speaks at Saudi Arabia’s Future Investment Initiative. 

(Source: Hanson Robotics)

Taking Sophia the Robot as an example, she can arguably be considered her own person and a public figure. Having generated a massive online following and won the world’s intrigue, her breakthrough speech at the 2017 Future Investment Initiative in Saudi Arabia demonstrated the possibilities of social robots interacting spontaneously with a crowd. A further example of Sophia assimilating into society is her ‘date’ with the actor Will Smith (link below), displaying what may well be the peak of her emotional intelligence, as she communicated with wit and charm in what is perhaps the funniest robot-human interaction witnessed so far. Her social skills were considered so apt that she was made the first robot citizen in 2017, when Saudi Arabia granted her citizenship.

Will Smith tries wooing Sophia the Robot, but with minimal success.

(Source: YouTube, Will Smith)

Social robots such as Sophia demonstrate the extent to which we’ve advanced in human-robot interactions. In my previous article (link), I explored pain in robots and how pain may be used to make robots more compassionate and empathetic towards humans. Probing this at a deeper level, asking whether it is ethical to create a robot for the sole purpose of feeling pain, prompts a completely different question about what it means to be human, and whether our ability to feel pain is intrinsically linked to that definition.

The Theatre of Consciousness 

Similar to the consciousness argument: if we can create a robot that operates on a self-awareness framework similar to our own, does that mean it also deserves equivalent rights? Let’s take this question to a deeper level: is self-awareness really the primary definition of consciousness? According to the neuroscientist Michael Graziano, in his book Rethinking Consciousness, it is certainly possible to create a robot with a rich internal model of consciousness, one that attributes consciousness to itself and to the people it interacts with, and that uses those attributions to predict human behaviour and adjust its own in response.
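
A toy sketch can make Graziano’s proposal less abstract. The class below is my own illustrative construction, not anything from Rethinking Consciousness: it keeps an internal model that attributes awareness to itself and to the agents it observes, and then uses those attributions to predict behaviour.

```python
# A toy sketch of Graziano's proposal (names and structure are my assumptions):
# a robot with an internal model that attributes awareness to itself and to
# others, and uses those attributions to predict behaviour.

class SelfModelingRobot:
    def __init__(self):
        # The robot's model of itself, including the claim "I am aware of X".
        self.self_model = {"name": "robot", "aware_of": set()}
        # Models it builds of other agents and what they attend to.
        self.other_models = {}

    def attend(self, stimulus):
        """Attending to a stimulus is recorded in the self-model,
        which is what lets the robot *report* being conscious of it."""
        self.self_model["aware_of"].add(stimulus)

    def observe(self, agent, stimulus):
        """Attribute the same kind of awareness to another agent."""
        self.other_models.setdefault(agent, set()).add(stimulus)

    def predict(self, agent, stimulus):
        """Prediction rule: agents act on what they are (modelled as) aware of."""
        aware = stimulus in self.other_models.get(agent, set())
        return f"{agent} will react to {stimulus}" if aware \
            else f"{agent} will ignore {stimulus}"

    def report(self):
        return f"I am conscious of: {sorted(self.self_model['aware_of'])}"

robot = SelfModelingRobot()
robot.attend("red ball")
robot.observe("human", "red ball")
print(robot.report())                      # I am conscious of: ['red ball']
print(robot.predict("human", "red ball"))  # human will react to red ball
print(robot.predict("human", "siren"))     # human will ignore siren
```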

Graph demonstrating consciousness input-output by Bernard J. Baars. 

(Source: In the Theater of Consciousness: The Workspace of the Mind, 1997)

Going back to the core of consciousness theory, Bernard J. Baars introduced the notion that conscious experience is constructed from key input-output elements: Self, Intentions, Expectations and Perceptual Contexts. These dimensions can be assessed to determine whether an object is conscious with regard to its working mechanisms. Generally, as Graziano puts it, consciousness is the “subjective awareness of the object making a decision.”
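
To see how Baars’ elements might fit together computationally, here is a heavily simplified global-workspace sketch. The signal sources mirror his four elements, but the salience scores and the winner-takes-all broadcast rule are my own assumptions, not Baars’ model.

```python
# A minimal global-workspace sketch inspired by Baars' input-output elements
# (Self, Intentions, Expectations, Perceptual Contexts). The salience scores
# and the competition rule below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # which element produced it, e.g. "Perceptual Contexts"
    content: str
    salience: float  # how strongly it competes for the workspace

def broadcast(signals):
    """The most salient signal wins the workspace and is 'broadcast' to all
    elements; everything else stays unconscious on this cycle."""
    return max(signals, key=lambda s: s.salience)

inputs = [
    Signal("Perceptual Contexts", "loud noise behind me", 0.9),
    Signal("Expectations", "the meeting starts at 9", 0.4),
    Signal("Intentions", "finish writing this section", 0.6),
    Signal("Self", "I feel tired", 0.3),
]

conscious = broadcast(inputs)
print(f"Conscious content this moment: {conscious.content!r} "
      f"(from {conscious.source})")
```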

At present, robot consciousness extends beyond philosophical circles. What began as a discussion amongst academics and philosophers is now being applied by engineers and programmers, who use these complex understandings to improve their hardware. The philosophical side of understanding machinery may, in turn, further the debate on what it really is to be alive and human.

This cohesion and synergy across industries gradually makes it easier to improve human-robot interactions, and it may also let us establish fresh common ground on the importance and weight of robot self-awareness, sparking further debates on robot rights.

As in my previous article on robot rights, David J. Gunkel, a key advocate and researcher in the field, believes that a common mistake is equating definitions of consciousness with those commonly applied to humans. He argues that an entirely new and separate framework is needed to assess robot rights, given the fundamental differences between the two. Although this may be self-explanatory to many, there is still a misconception that robots do not need rights because they are not human. Gunkel argues they do; those rights might simply be different. Now that’s how we can take the current debate in a new direction.

Overall, consciousness is an evolving subject that is complex at its core; even consciousness in humans is a tough one to pick apart. So the robot consciousness argument could also help us probe deeper into the question of what really makes us human. Questions then arise around how we can protect this definition of consciousness, and how we may use it to reinvent rights applicable specifically to robots in order to further improve human-robot interactions.

This understanding is fundamentally important, as it will help us, the key players in tech’s future, to operate within a standard framework that is ethically viable and beneficial to us. Of course, a strong system should be in place: one that not only can be undone or deprogrammed easily, but can also “reach a level of consciousness where it can self-recognise to reprogram any misbehaviour in itself.” That’s eye-opening. As Pinker puts it, “the intelligence of a system emerges just from the limited activities of its non-intelligent parts.”
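
What might such a safeguard look like in the simplest possible terms? The sketch below is my own illustration, not a quoted design: an agent that an operator can always disable from outside, and that also checks its own action log and strips out misbehaviour when it recognises it.

```python
# An illustrative sketch (my construction) of the two safeguards described
# above: an external off switch, plus a self-check that recognises and
# removes the agent's own misbehaviour.

class SupervisedAgent:
    ALLOWED = {"move", "speak", "wait"}  # assumed whitelist of safe actions

    def __init__(self):
        self.enabled = True
        self.log = []

    def shutdown(self):
        """External 'undo' switch: the operator can always disable the agent."""
        self.enabled = False

    def act(self, action):
        if not self.enabled:
            return "agent disabled"
        self.log.append(action)
        self.self_check()
        return f"performed {action}" if action in self.ALLOWED else "blocked"

    def self_check(self):
        """Internal safeguard: recognise misbehaviour in its own log and
        'reprogram' by discarding the offending behaviour."""
        if self.log and self.log[-1] not in self.ALLOWED:
            self.log.pop()  # strip the misbehaviour from its own record

agent = SupervisedAgent()
print(agent.act("move"))       # performed move
print(agent.act("break wall")) # blocked, and removed by self_check
agent.shutdown()
print(agent.act("move"))       # agent disabled
```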

These fundamental philosophical questions are important in helping us get to the core of every creation, and in allowing society as a whole to believe in robots as self-serving beings, rather than fear the future and fear automation. Intelligence already exists, just in micro forms like our phones or computers, building up, process by process, as we gear up towards the future.

In my next article, we will look deeper into smart AI, DeepThink and how consciousness is built into a programme.