One of the biggest ethical, philosophical, and sci-fi scares is the day machines become self-aware. So far it has been the stuff of stories, movies, and debates, but will it ever actually happen? And if it does, will we get the dreaded singularity, or something else entirely? Let's talk about conscious AI.
Technicalities
First of all, how do we define consciousness? If we want to be philosophical about it, we could say it is a unique sense of self and of the things around that self of ours. On the other hand, we could analyze a person's brain activity and compare it to that of a sleeping or unconscious individual. But how do you measure any of that in an AI?
This is uncharted territory, because computer calculations are not entirely unlike our own brain activity. Any machine learning expert will tell you that when a machine learns to solve a problem, it does so by building on previous experience; its mistakes are the tools of its progress. We learn by making mistakes, too.
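To make that concrete, here is a minimal sketch of what "learning from mistakes" looks like in code: a toy model measures its error on a handful of examples and nudges its single parameter to shrink that error. Everything here, the data, the learning rate, the parameter, is invented purely for illustration and not taken from any real system.

```python
# A toy example of error-driven learning: fit y = w * x to a few data points
# by repeatedly measuring the mistake (squared error) and nudging w to shrink it.

# Training data: inputs and the "correct" answers (here, y = 3 * x).
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0               # the model's single parameter, starting from a blind guess
learning_rate = 0.01  # how strongly each mistake adjusts the parameter

for epoch in range(200):
    total_error = 0.0
    for x, target in data:
        prediction = w * x
        error = prediction - target        # the "mistake" on this example
        total_error += error ** 2
        w -= learning_rate * error * x     # adjust w in the direction that reduces the error

print(f"learned w = {w:.3f}, final squared error = {total_error:.6f}")
```

Run it and the weight drifts toward the right answer purely by correcting its own errors. There is no awareness involved, just arithmetic.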
The Levels of Consciousness
Since we need to put a definition on everything in order to understand it, scientists mapped out three levels of consciousness in a 2017 paper in the journal Science. The first is level zero, usually labeled C0: unconscious processing. In humans, that is processing language, light, or memory without being aware of it, like overhearing a conversation on the bus without really listening.
The second is C1, global consciousness, also known as external consciousness: being aware of the world, analyzing the data coming in from outside in order to make decisions. The third is C2, inner consciousness: this is where we catch our own mistakes and learn on our own, the information about ourselves that we can introspect on. The paper concluded that, in theory, machines could be made to achieve both C1 and C2, though that would still leave some people unsatisfied.
The Singularity
In novels and movies, the singularity is usually the inevitable outcome of AI developing a sense of self. The technological singularity is the hypothesis that an artificial superintelligence will emerge and send technological progress leaping forward. Sounds good, so what's the problem?
The grim version usually follows from AI becoming conscious: it sees humans as a limitation, it starts improving itself, and before you know it humanity is obsolete and we are harvested for resources or simply done away with. Others believe that, if the event ever occurs, it will lead to prosperity instead. No comment.
Should We Be Worried?
In short, no. AI keeps advancing, but all we are really doing is making it capable of decisions based on parameters we implement. Even if AI reaches the levels of consciousness discussed above, there is still no guarantee it would truly be a machine that knows what it is, rather than one copying learned behavior. We'll definitely be fine, at least for a while.
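As a caricature of that point, here is the kind of "decision-making" we actually ship today: behavior fully determined by parameters a human chose. The weights, threshold, and feature names below are hypothetical, made up just to illustrate the idea.

```python
# A caricature of present-day "AI decisions": the outcome is fully determined
# by parameters a human picked, not by any awareness on the machine's part.

# Hypothetical, hand-chosen parameters for a toy loan-approval rule.
WEIGHTS = {"income": 0.6, "credit_score": 0.4}
APPROVAL_THRESHOLD = 0.7

def decide(applicant: dict) -> str:
    """Score an applicant and approve only if the weighted score clears the threshold."""
    score = sum(WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS)
    return "approve" if score >= APPROVAL_THRESHOLD else "decline"

print(decide({"income": 0.9, "credit_score": 0.8}))  # approve
print(decide({"income": 0.3, "credit_score": 0.5}))  # decline
```

Every number that shapes the outcome comes from us; nothing in there resembles a sense of self.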