If we can program robots to develop intelligence, does this mean that we have finally conquered the understanding of what the mind is?
This is the question that goes through my head whenever I listen to, watch or read stories about advances in artificial intelligence.
Now we are not only creating simple robots for industrial use; we are building robots with the capacity to learn from past experience, pick up new behaviours from scratch, learn human languages unassisted and even develop languages of their own.
We are even on the precipice of endowing robots with self-awareness – an extremely daunting yet intriguing possibility. Imagine what it would be like to meet and communicate with another cognisant organism for the first time.
This is one of the possibilities of a new field of robotic engineering – cognitive robotics.
Cognitive robotics is breaking plenty of new ground and challenging many of our ethical assumptions.
It takes all of our understanding of our own minds, and of the minds of the organisms around us, and embeds it in a material architecture for machinery, allowing that machinery to develop intelligence – or perhaps even become cognisant.
Though still in its infancy, it is taking all of the acquired knowledge in neuroscience and psychology and, using the assistance of the computer sciences and robotics, attempting to reconstruct the very way humans have become so intelligent.
In a sense, programming robots to think for themselves and become intelligent is the contemporary equivalent of exhuming or borrowing dead bodies to study them and learn more about human anatomy.
With robots, we are furthering our study of the anatomy of the mind.
Once we develop the capacity to build robots and computers that can think, and think for themselves, we will have taken one major leap forward towards understanding who we are and what the mind is.
Accurate knowledge of the mind has eluded philosophers, scientists and thinkers for millennia now. But today, science is inching ever closer towards an understanding of the mind.
Psychology has granted us the keys to the mind. It has essentially drawn a comprehensive picture of what the mind looks like externally – it has captured the manifestation of the mind as it appears on the surface.
But this is still just the tip of the iceberg that is the mind. We are only now diving beneath the surface to glimpse the grand structure of the whole iceberg.
Now scientists are piecing together more of the puzzle of the mind, and realising that what we perceive it to be – what we observe on this surface level – is the result of a vast and complex interconnected web of nerve cells which are powered by electrical impulses and chemicals. (I understand that this is an extremely oversimplified explanation, but it intends to highlight the idea that the abstract concept of the mind is the embodiment of these physical and chemical interactions.)
This mirrors the period when scientists first delved deeper into biology and noticed that what the body looks like to the observer is the manifestation of phenomena occurring at a number of deeper levels in our physiological make-up.
The study of our anatomy was the illumination of these deeper levels and the mapping out of how our bodies work and function.
It isn’t so straightforward to do the same with the question of how brains turn physical impulses into thoughts because it involves a much more complex process. Brain imaging is allowing us to witness this process, but it can only go so far.
We have a fair understanding of the brain processes that generate cognitive tasks, but there’s still a long way to go to map out all the physiological interactions going on in the brain during these processes. For example, we may understand which areas of the brain light up during a mental activity, and what these particular areas do for us, but we are only now beginning to link the activity of individual nerve cells and neurotransmitters with specific thoughts and experiences.
But we need a way to replicate our theories of how these cognitive processes work.
Humans learn not just by observing nature but by reconstructing it.
We need to prove linkages between the causes and effects we observe when we deconstruct nature. When we successfully reconstruct the processes we observe, we prove these cause and effect relationships because we have systematically tested them out.
Any form of engineering, or construction writ large, is proof of our understanding of the laws of nature, as we are using them to our advantage.
If we have two competing explanations or models for how humans perform certain cognitive tasks, robot engineers could use both models in a robot and see which technique more closely mimics human performance. Robots can therefore be used as testbeds to experiment on different cognitive science theories.
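To make the testbed idea concrete, here is a toy sketch in Python. Everything in it is invented for illustration – the "human" practice data, the power-law and linear learning models, and the error measure are hypothetical stand-ins for the kind of comparison a robot experiment might perform.

```python
# Hypothetical sketch: pitting two candidate models of human skill
# learning against observed practice data, and keeping whichever
# fits better. All data and models here are invented.

def power_law_model(trial, a=1.0, b=0.5):
    # Model A: response time shrinks as a power law of practice.
    return a * trial ** -b

def linear_model(trial, a=1.0, b=0.05):
    # Model B: response time shrinks linearly, floored at zero.
    return max(a - b * trial, 0.0)

def fit_error(model, observations):
    # Sum of squared differences between model predictions
    # and the observed response times.
    return sum((model(t) - obs) ** 2 for t, obs in observations)

# Invented "human" reaction times over 10 practice trials,
# following a power-law speed-up.
human_data = [(t, 1.0 * t ** -0.5) for t in range(1, 11)]

errors = {
    "power_law": fit_error(power_law_model, human_data),
    "linear": fit_error(linear_model, human_data),
}
best = min(errors, key=errors.get)
print(best)
```

In a real cognitive-robotics experiment the "observations" would come from human participants and the "models" would drive a robot's behaviour, but the logic is the same: the model whose behaviour more closely mimics human performance survives the comparison.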
In this way, building robots to function like humans, and creating computers to act like brains, are attempts to see how close we are to understanding how we ourselves are programmed. We are trying to inch closer to grasping the anatomy of our minds.