Alien Awareness: A Study of Consciousness
By: Bekim Sejdiu
Abstract: A study of human consciousness and of the artificial intelligences that may emerge from technological progress; how these new forms of awareness could alter human society, and how they may even change the way animals should be treated.
Examining the 21st century, one can see that human technological progress has been advancing exponentially and has produced technological wonders. The Japanese have taken technology to levels that foreshadow human society becoming intertwined with robotics. Just this year, in Nagasaki, the world’s first hotel staffed completely by robots opened. The hotel is aptly named the “Henn’na Hotel,” which translates to “strange hotel” (Doane). Everything is automated, from the check-in desk to baggage handling and even the concierge. A card isn’t needed to enter a room, as each one has a facial recognition scanner. Inside is another robot that handles several functions, such as lighting and contacting the front desk. Japanese culture has embraced robotics heavily: robots greet people when they enter stores and appear in restaurants, commercials, and schools. There is a robot on the market for $1,600 that can read facial expressions and determine one’s feelings (Doane). Many robots have become extremely sophisticated over time, to the point of being disturbingly lifelike. One of the hotel’s front desk clerks is one of these machines. It has been programmed to mimic human facial responses, behave appropriately in certain situations, and communicate information. Far above the surface, NASA has developed its robotic assistant, named Valkyrie (NASA). It has a wide range of motion and the capability of managing the International Space Station on its own. It can take orders and communicate with ground control if necessary. These machines, although expensive, are seen as safer alternatives and can be manufactured more quickly than human astronauts can be trained.
Back on the surface of the Earth, androids have become greatly popular in Japan. A recent android taking the world by storm is the Actroid-F, a special model with an extremely lifelike appearance, range of motion, and capacity for communication. In fact, these models can converse with one another the way people do with each other (Serkan). They are being used in trials in understaffed nursing homes to provide company for patients. When one trial ended, the patients felt lonely and sad, as if someone they cared about had moved away. But this is a being that never actually existed, because its programming does not have the capacity to be aware or alive. These robots, even though they may take on human appearance and behavior, have no more awareness than the facial recognition scanners attached to the hotel rooms mentioned earlier. There is an older case study of something similar from the 1990s, when Stanford professor Clifford Nass arranged an experiment involving human reciprocity (Spiegel). Reciprocity is a feature of human culture that appears everywhere without exception: humans feel the need to respond in kind. In this experiment, participants were responding to a computer that answered questions for them. One computer was extremely helpful, the other not so much. At the end of the session, the computer would ask the participant to help it improve for future questions. The individuals who had worked with the extremely helpful computer helped it far more than those with the unhelpful one, and they were not even aware that they had done so. Now, this differs from the androids in the nursing home, as these were just computer screens without a voice or face. Still, humans reacted in a social way, in an anthropomorphic way (Spiegel). Nass had this to say about the situation:
“The relationship is profoundly social,” he says. “The human brain is built so that when given the slightest hint that something is even vaguely social, or vaguely human… given the slightest hint of humanness, people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating.” (Spiegel)
This means that our ability to determine whether something is alive is essentially flawed: human behavior can be replicated by software and programming, and our brains will treat the computer as if it were a person. This path of development leads to some disturbing questions that will be faced within this century. What happens when these robots become so lifelike, matching our behaviors so perfectly and seamlessly, that a human cannot be distinguished from a machine running on programming? It raises several other questions about what it means to be human and how to define us. Are humans really conscious, or the result of genetic “programming” that makes one think that one is thinking? What is the nature of our sentience? This paper will examine how advancements in technology will soon begin to alter how existence is defined, and how these new perspectives could cause spillover effects in which our opinions of all forms of life that are alien to us are reevaluated, including whether advanced robotics, and even non-human entities, should be considered aware and whether they should be treated differently in light of this realization.
To understand these other forms of awareness, humans must first understand themselves. When reaching for an answer about our self-awareness, the scientific approach is to study the brain that produces it. The human brain is composed of over a hundred billion interacting neurons (Graziano). Modern neuroscientists take the position that the brain is a network of neurons that processes information. Michael Graziano, professor of Psychology and Neuroscience at Princeton, takes this idea and asks how the brain becomes aware of information, and what sentience itself is. He states that in modern times, how knowledge is encoded in the brain is not a fundamental mystery; how one becomes aware of it is. The first scientific account of consciousness dates back to Hippocrates in the 5th century B.C. At that time, humans did not even have the formal science society enjoys today. Regardless, Hippocrates observed that people with brain damage also tended to lose mental abilities. This was the first documented time in history that someone realized the mind is something generated by the brain (Graziano). The passage below summarizes his realization:
“Men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, griefs and tears. Through it, in particular, we think, see, hear, and distinguish the ugly from the beautiful, the bad from the good, the pleasant from the unpleasant” – Hippocrates.
This discovery launched two and a half thousand years of what is now called neuroscience. When it comes to explaining what consciousness is, though, Hippocrates’s discovery falls short. Scientists have a good idea that the brain is what makes it happen, but how the brain actually does it, and how consciousness can actually be defined, is left out. Two thousand years later, in 1641, Descartes proposed another influential view of the brain basis of consciousness. In his perspective, the brain is a receptacle of an ethereal fluid substance (Graziano), what he called a mental substance. However, when he dissected brains he noticed that the structures of every brain came in pairs, with the hemispheres dividing the brain into left and right. In his view, the human soul was a unified, singular being, and it could not be stored in divided sections of the brain. He came across the pineal body in the center of the brain, one of the few unpaired structures, and postulated that this must in fact be where the soul is stored. The pineal body has been studied heavily since and is now defined as a gland that produces melatonin and helps regulate reproductive cycles in the body. It has absolutely nothing to do with self-awareness (Graziano).
The 18th-century German philosopher Immanuel Kant’s Critique of Pure Reason, published in 1781, was once seen as one of the foundation bricks of modern science, especially modern psychology. Kant coined the term “a priori forms,” in which the mind relies on abilities and ideas already present within us before all explanation, and from which everything else follows. In other words, there is no explaining the magic; consciousness is simply supplied to us by a divine act (Graziano). These three individuals, Hippocrates, Descartes, and Kant, represent only a few of the prominent ideas in the theory of the mind. There are countless others, even zooming forward to modern neuroscience, and the same issue afflicts all of them: they don’t truly explain the nature of consciousness. Graziano states: “They point to a magician but do not explain the magic.” Among the first ground-breaking modern theories of consciousness were those proposed by Francis Crick and Christof Koch in the 1990s. They theorized that electrical signals in the brain oscillate, and this in turn causes consciousness (Graziano). The idea goes like this: information passes among the neurons, and it is more efficiently linked from one neuron to another, and more efficiently maintained over a short time span, if these electrical impulses oscillate in synchrony. This leads to the thought that consciousness itself might be caused by many synchronized neurons oscillating at once. These ideas still amount to the claim that the brain just does it. Hippocrates: the brain does it. Descartes: magical brain fluid does it. Koch: the oscillations do it. Graziano raises the important question of why there is an inner feeling at all. How does enhancing information processing through oscillations lead to an inner experience? Why do humans not just have the information without the extra awareness? He tries to answer these questions with his theory, the attention schema.
The idea for the theory developed during Graziano’s study of the brain, where individuals who suffered damage to the superior temporal sulcus (STS) and temporo-parietal junction (TPJ), areas of the cerebral cortex involved in higher mental functioning, suffered a complete and total loss of awareness, not only of others around them but of themselves. This was seen as a contradiction in the neuroscience community: why would someone lose part of their own awareness from damage to one specific part of the brain, when awareness is currently seen as a combination of all of the brain’s functions? This led to the idea that the machinery in one’s head responsible for holding the concept that others are self-aware may also participate in creating one’s own self-awareness. The basis of the theory helped create a foundation for how consciousness and awareness are to be defined. Awareness is usually understood, and everyone more or less agrees on its definition: to be aware. Consciousness, on the other hand, has many quirky definitions. Some insist that to explain consciousness, one must explain how one knows that one knows: how one knows who one is, that one is there, and that one is a person distinct from the world. Others claim consciousness must be explained with regard to memory, because memory provides self-identity. Still others have said that to explain consciousness, one must explain how sensations, like color, touch, and temperature, result in raw experiences of the world. In Graziano’s scheme, consciousness is the inclusive, overarching general term and awareness is a focused one. Consciousness represents the entirety of one’s personal experience at any moment, while awareness applies to a portion of it, the act of experiencing. His bubble diagram places consciousness around two smaller bubbles: one called awareness, and next to it the information of which one is aware.
Graziano brings up a very interesting idea for defining awareness further: that awareness itself could be information stored in the brain. He uses an unusual example to explain this idea. He has a friend, a clinical psychologist, who was helping a patient who insisted that a squirrel lived inside his brain: a physical, clawed, furred animal taking up residence in his skull. When asked why he was convinced this was the case, the patient would say the squirrel had nothing to do with being convinced or not. The squirrel was just there. He accepted this. He had complete access to the feeling of “squirrelness.” In his case, instead of Descartes’s “Cogito ergo sum” (I think, therefore I am), it would have been “Squirrel, ergo squirrel.” This poses two problems, an easy one and a hard one. The easy one is to explain why this man’s brain would come to such a conclusion with such certainty. It is known that the brain is an information-processing machine. When someone introspects, the mind is accessing internal data; if that data is incorrect, the brain will come to the wrong conclusion, and it might even attach a high level of certainty to it. Such incorrect conclusions are common throughout human history and are understandable; even now, people are dead certain of beliefs that are completely ridiculous and false. The hard problem is how neurons could result in an actual squirrel in one’s head. Where do all the biological parts of the squirrel come from, and how does it even end up in the head? It seems physically impossible, as no known process could lead from neurons to a squirrel living in the brain. What is this magic?
Graziano asks the reader to imagine if everyone on Earth had this same delusion, as if it were evolutionarily sculpted into our species; everyone would be completely immersed in mystery about this squirrel. People would introspect, find this furry animal in their skulls with all its properties, be dead certain it exists, describe it to each other, and collectively agree that all humans have it. Yet upon autopsy, one could not explain where the squirrel had gone. This forces us into a philosophical conundrum in which people would have to accept that the neurological machine and the squirrel are somehow separate, and that the squirrel exists on some higher dimensional plane. Back in reality, the problem isn’t really a problem, because the squirrel doesn’t exist; it’s ridiculous. In a philosophical sense, one could argue that the squirrel did exist, but only as information: it existed only as a description within that man’s brain and nothing more. Graziano then suggests switching out the word “squirrel,” replacing it with “awareness,” and realizing the logic is the same. People believe it is inside of them, that they have a direct connection to it, and are dead certain it exists. Everyone can collectively agree as a species on its basic properties. But where does this feeling come from, and how can our neurons create it? What is the answer to this hard problem? The answer may be the same: the properties of conscious experience (the squirrel, so to speak), the vividness, the sensation of “experienceness,” the ethereal, spiritual feeling of it in our heads, may exist in our brains as part of a descriptive model. Perhaps the brain doesn’t actually contain these things; Graziano suggests that the brain contains a description of these things, noting that our brains are amazing at manufacturing descriptions of the world. He clearly states, however, that he is not suggesting that awareness is a delusion.
Whereas a delusion is harmful and impedes human functioning, in the attention schema awareness is not a harmful result but an adaptive, evolutionarily beneficial model. It is similar to the delusion only in that it is a “description of a thing, not the thing itself,” because it does not exist in that way.
He summarizes the theory in a single sentence: “Awareness is a description of attention” (Graziano). We think we are alive, and that is what makes us alive. People don’t like this conclusion, however, because it reduces the “magic” to a less-than-spiritual phenomenon, placing the brain as the creator and essence of humanity rather than one or more of the 2,500 gods human culture has generated over the course of ten thousand years. From another perspective, though, it is almost beautiful that something so small on the scale of the universe can hold the concept of the entire universe within it and have the capacity to contemplate itself. Which would make my statement about the brain being beautiful a rather cocky one. As Carl Sagan said, in the face of this lack of spirituality in our awareness, it is preferable to accept what is true rather than what we want to believe is true.
With regards to the theory, attention itself should not be seen as data encoded in the brain, but rather a data-handling method. It’s a procedure the brain undergoes, an emergent process. Now there is no reason for the brain to have any knowledge about how this works, as it serves no biological purpose for why such a thought would arise. Graziano is suggesting, however, that in addition to undergoing the process of attention, the brain is creating a description of this attention, and awareness happens to be that description. He argues this with several points of similarity between both awareness and attention:
- Both involve a target. One attends to something. One is aware of something.
- Both involve an agent. A brain performs attention, and awareness implies an “I” who is aware.
- Both are selective. Only a small fraction of available information is attended at any one time. Awareness is selective in the same way. A person is aware of only a tiny amount of information that arrives at their senses at any moment in time.
- Both have a level of focus. Attention typically has a single focus, but while attending mostly to activity A, the brain spares some attention for activity B. Awareness has a similar focus: one can be mostly aware of A and only a little aware of B.
- Both operate on similar information. Although most case studies of attention rely on vision, attention is obviously not limited to vision. Likewise, one can be aware of the full range of human senses. If one can attend to something, then one can be aware of it.
- Both have a direct effect on behavior. When the brain attends to something, its signals are boosted and gain greater influence over nearby signals, which then results in behavior. When the brain does not attend to something, the signal is weaker and has little impact on behavior. The same holds for awareness: when a person is aware of something, they can choose to act on it; when they are unaware of it, they are unlikely to act on it. This parallel is an important one because of the shared ability to drive behavior.
- Both involve deep processing. Attention occurs when encoding information requires more of the brain’s resources to organize the incoming “data.” Awareness implies an intelligence that is seizing something, being occupied by it, or experiencing it.
- The most important feature of the two is that awareness seems always to follow attention. It is like a needle directed toward one’s state of attention: in most moments, the information that is attended matches the information that is reaching awareness. However, because attention and awareness can be dissociated, they cannot be the same thing; awareness is a close but imperfect indicator of attention.
This theory is also able to address why such a capacity would arise in our biology. Having a concept of awareness, a description of attention, gave our ancestors the evolutionary advantage of being able to anticipate the behavior of others around them. This is a valuable survival strategy and would explain why aspects of the frontal lobe of the brain have grown over our species’ existence (Lui). It is likely that natural selection found it favorable, and it became more and more prominent as generations passed, leading up to our modern species.
After defining a clear idea based on the attention schema theory, the question is whether humans can actually develop a robot that can form the same kinds of connections the human brain can and thereby produce a “description of attention.” Is a synthetic version of the brain possible? Some scientists believe a fully functioning cybernetic brain will be invented within a decade. This year, an artificial neuron was developed that has the capacity to function like its organic counterpart (Simon). A neuron is a basic component of the brain that allows information transfer. This artificial neuron releases current to the neighboring neuron, whether that neighbor is organic or another artificial neuron. This means the technology could be used to fix disorders at the neurological scale, as with the man with the squirrel in his head from earlier, by using these artificial parts (Simon). It also means entire sections of a brain could be constructed entirely of mechanical parts. That second prospect is far from development: even though replicating the tiniest aspect of how the brain functions is possible, creating a frontal lobe, for example, would require far more study and testing to achieve that scale of sophistication. How a human brain would react if part of it were replaced by an artificial frontal lobe is entirely open to speculation. If researchers continue on this path of development, however, it could be possible within this lifetime for humans to develop a new form of existence. This would be called an artificial intelligence, which is completely different from the robotics discussed earlier. In fact, this entity wouldn’t even require a body that looks human; it would be an entity aware of its surroundings and conscious of its own thoughts and ideas. It would be able to behave in ways no human mind could ever operate; it would be an entity without the limitations of one’s DNA.
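The behavior attributed to such a neuron, accumulating incoming current and passing a signal along once it fires, can be sketched with the classic leaky integrate-and-fire model. This is a standard textbook simplification, not the design from the Simon article, and all the parameter values below are illustrative.

```python
# A minimal leaky integrate-and-fire neuron: it accumulates incoming current,
# leaks a fraction of its charge each step, and "fires" (emits a spike) when
# its membrane potential crosses a threshold. Parameters are illustrative.

def simulate_lif(currents, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires for a stream of inputs."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(currents):
        potential = potential * leak + current  # leak old charge, add new input
        if potential >= threshold:              # potential crosses the threshold
            spikes.append(t)                    # the neuron fires...
            potential = 0.0                     # ...and resets
    return spikes

# A steady drip of small inputs accumulates into periodic firing.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

Chaining units like this, where one neuron's spikes become the input currents of the next, is the sense in which an artificial neuron can stand in for an organic one in a circuit.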
Some argue, with regard to how humans store memory, that this isn’t possible. Nicholas Carr claims there is a big difference in how robotic and biological entities deal with memory. A good example is provided in his book, The Shallows: What the Internet Is Doing to Our Brains, where he describes an individual who learns an idea, stores the information in the brain, mostly forgets it as time passes, and then relearns it. Compare this with a computer file that is created, then deleted, then created again. The brain that relearned the information is different from the brain that first learned it, whereas the computer puts the exact information back into storage as it was before. It does so without any relationship to the other information in its storage device, unlike the brain, which makes connections among all its pieces of information (Carr).
Graziano, however, holds a different opinion from Carr’s. He brings up Turing’s famous paper from the 1950s, which asked whether a computing machine could think. Since then, the question has been warped by many science fiction works depicting computers gaining intelligence simply through increasing memory. Authors like Isaac Asimov and Arthur C. Clarke have expressed their work in this way. HAL from 2001: A Space Odyssey, the Terminators, and even the computers of The Matrix all express the unfounded assumption that if enough processing units are added, machines will come alive and behave malevolently. Based on the attention schema theory, a synthetic consciousness is possible, and it would require three major functions to be satisfied:
- First, the computer must be able to sort information and control its behavior using some method of attention, similar to how humans do.
- Second, it must have an idiosyncratic view of awareness, in which the computer depicts attention as an ethereal entity located somewhere in space, some sort of ectoplasmic force that drives actions. In other words, the inner feeling humans describe.
- Finally, it must be able to link its attention to other information, including information about itself and information about an item, and recognize that it is aware of said item.
In this way, such a computer would exist on a similar footing as humans do, and contain the characteristic commonly known as a soul.
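The three conditions above can be caricatured in code. The sketch below is entirely hypothetical, a toy whose class and field names are invented for illustration; it shows the shape of the requirements, not anything that would actually be conscious.

```python
# A toy caricature of the three conditions: (1) select a focus via a crude
# attention mechanism, (2) keep a simplified self-model of that attention
# (the "schema"), and (3) link the schema to itself and the attended item
# so the agent can report that *it* is aware of the item.

class ToyAttentionSchemaAgent:
    def __init__(self):
        self.schema = None  # the agent's simplified model of its own attention

    def attend(self, stimuli):
        # Condition 1: pick the most salient stimulus (crude attention).
        focus = max(stimuli, key=stimuli.get)
        # Condition 2: describe that attention not as signal routing but as a
        # vague inner "experience" directed at the item.
        self.schema = {"subject": "I", "relation": "am aware of", "object": focus}
        return focus

    def report(self):
        # Condition 3: link attention to information about self and item.
        s = self.schema
        return f"{s['subject']} {s['relation']} the {s['object']}"

agent = ToyAttentionSchemaAgent()
agent.attend({"clock": 0.2, "coffee": 0.9, "window": 0.4})
print(agent.report())  # prints "I am aware of the coffee"
```

The gap between this toy and a genuine attention schema is, of course, the entire hard part; the point is only that the theory's requirements are stated concretely enough to be expressed as an architecture.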
Human beings are afraid of this, and it is a situation that has been avoided, or that people have attempted to avoid, for a long time. There is something about another form of intelligence with a capacity similar to or greater than our own that causes us concern. Why is this? Could it be that society has portrayed other forms of intelligence greater than our own as aggressors? How many countless Hollywood movies portray artificial intelligence as the destroyer of human civilization? It is also important to recognize that robots in general have the capacity to destroy civilization without needing to be sentient, yet for some reason society is disturbed specifically by the idea that they are alive. The astrophysicist Neil deGrasse Tyson discusses this fear of a higher level of intelligence, and how it is actually a result of what humans have done to other humans in history. Historically, when one group of humans has a technological advantage over another, the less advanced side is usually consumed or destroyed by the other (White). A prime example is how colonialists stole American land from the Native Americans and put them on reservations. Humanity is afraid that these beings will do to us what humanity has done to itself.
This idea is rooted in a much deeper human conceit: that human consciousness is the only form of existing, and that an artificial intelligence will behave based on humanity’s consciousness and behavior. Now, if humans do make an artificial intelligence, it will undoubtedly be based on the human brain; neuroscientists and robotics designers have no other framework from which to design a being that could become self-aware. With that said, society may have to realize that the way such a being experiences life may be completely different from what a human mind experiences. An odd conclusion follows: this being, even though it may have awareness, may be a distinct consciousness that does not exist within the realm of humanity, where both humanity and this new entity are separate windows of experience within the overarching category of consciousness.
What happens when this is taken a step further, moving beyond the consciousness of humanity and robotics to the case of animals, humanity’s biological siblings? It is important to recognize that humanity is part of the animal kingdom. This is another of our conceits: humans view themselves as distinct entities apart from the rest of nature. Even our depictions of evolution show humans rising from shorter, more primitive ancestors in a biased way, a perspective that reads as though humans are the epitome of all life. Humans give themselves special titles and place our species on a pedestal, and humanity defines what consciousness has to be for an entity to count as alive while humans remain the dominant life form on the planet. These conceits are embedded in our culture and religions. When studying non-human animal species, it is common to see them engaging in activities that are rather intelligent: “Animals doing things that we in our arrogance used to think was just human” – Jane Goodall (Goodall). When discussing these animals and their intelligence, the question raised is whether they are actually aware of what they are doing, rather than merely how well acclimated they are to their environment.
In David McFarland’s book Guilty Robots, Happy Dogs, in which he discusses how one might determine awareness within other forms of consciousness, he brings up the example of his dog going into the kitchen for food. He discusses the different perspectives one can take on this situation. From the anthropomorphic stance, one would say the dog is hungry like a person would be and wants to get food. From a scientific viewpoint, one could say that through associative learning the dog has learned that food is in the kitchen and goes there to satisfy its hunger. Finally, from a realist perspective, one could say that the dog believes there is food in the kitchen and that is why it goes there to eat. McFarland goes on to describe how these ideas extend into philosophical debates, since whether these animals know what they are doing is beyond scientific testability at the moment. In philosophy, the standpoint is based on the behaviorist position. Behavioral responses have been documented many times in animals, which behave in certain ways for a reward such as food; this behavior is a clear result of Pavlovian training. Behaviorism bases a species’ capacity for awareness on its behaviors, though it is now seen as an outdated model, as will be discussed later. Philosophers have broken this level of awareness down into categories of intention (McFarland). It is important to state that these non-human beings may not have every quality of feeling, or qualia (individual moments of subjectivity), that humans have, but that if they have a level of awareness, they would at least possess a few of them.
The levels of intention are defined as zero-, first-, and second-order systems. To describe these systems, take an animal such as the killdeer, a bird that acts injured to lure a predator away from its nest. In a zero-order system, the bird behaves based on associative learning from other birds about how to react in this situation to keep the nest from being attacked. In a first-order system, the bird believes that acting injured will cause the predator to follow it away from the nest. In a second-order system, the bird believes the predator will believe it is injured and will follow it away from the nest. The problem in nature is that scientists have not been able to determine whether a species is able to believe that others believe, as humans can (McFarland). There is also the underlying issue of relying on an animal’s behavior alone to determine whether it is actually aware. Relating this back to robotics: those with no knowledge of the inner workings of these entities would assume the most sophisticated of these machines actually possess a form of consciousness, while the engineer who designed the software and its behavior would say otherwise, because that individual understands the ins and outs of the machine. In the case of animals, biologists do not completely know what is going on in their brains, whether they are simply automatons or actually have a belief system (McFarland). Graziano also takes the position that there is not enough information to reach a definitive conclusion; however, if it can be determined that animals have characteristics falling in line with the attention schema theory, then they would likely be considered aware.
When humans compare themselves to the animal kingdom in trying to define consciousness, a common point of comparison is the chimpanzee. Their DNA is approximately 2% different from ours, and they are great case studies of animals that have developed a primitive level of culture (Goodall). In fact, their genetic relationship is closer to our species than to gorillas. This development of culture and social stratification among individual members of a group is especially apparent in bonobos. Our two species have similar gestation periods and similarly lengthy child development, in which the infant requires constant care for many years. Chimps also display many behaviors that parallel those of human infants, such as playfulness, curiosity, and the need for attention and physical contact. The central nervous systems of humans and chimps overlap considerably, so it would not be a stretch to say they are also capable of forming abstract thoughts. They are also capable of recognizing themselves in mirrors, which is rare. Out of the roughly 7.7 million known animal species (with about 10,000 more discovered each year), only a tiny fraction can pass this test, including chimps, gorillas, orangutans, bonobos, dolphins, orcas, elephants, macaques, magpies, and us. This is not to say that all the other hundreds of thousands of species have no sense of self; it simply cannot be determined whether they do. Returning to chimps in particular, they also have a wide range of communication capabilities and emotional states. There are several differences, however: they do not have a vocal tract that would allow for complex language (Goodall). They can learn sign language and even teach it to their young, but nothing beyond that. The intellectual capacities of even their most gifted individuals are also dwarfed by those of our own species.
The earlier discussion of levels of intention applies to chimps as well. A situation in which a mother is teaching her young how to use tools can be divided into the same three stages. In a zero order system, the mother would be acting on associative learning, unaware of what she is doing; this has been proven false in this case. In a first order system, the mother wants the child to understand, believing the youngling will learn through association; this has been shown to be what actually occurs (Goodall). Science remains stuck, however, at the second order system: no one can prove that the mother believes the child will understand what it is doing and realize that it is doing so. If only scientists could ask them; yet such a question would require an abstraction of thought the chimps may not possess. There is also another possibility that is rarely considered: there could be species that do have a sense of awareness but lack the capacity to express it. This was likely the case in our own species' history before the development of culture and language, when the biological capacity for language and abstract thought must have evolved genetically before it could be expressed in the form of culture. Countless human generations before the rise of civilization may have had this "strange feeling" of awareness but could not articulate it the way humans can now. The same could be true for some species on the planet today, or in its distant future. This is where a mixture of behaviorism and biological understanding of a particular animal can come together to give humanity a general idea of whether a species is actually aware: an educated guess based on what little evidence can be discovered.
Why does society care so much about whether these beings are aware? It is a common question people ask about their pets, wondering what they may be thinking. These questions may arise simply because human society interacts so much with our animal siblings. Perhaps humans ask the question because of how alone a human being feels as the only confirmed sentient species in the known universe. How terrible would it be if it were found that all the animal species humans have consumed actually possessed a concept of self and a form of awareness? This leads into the conversation about the ethics of how to treat other species in the animal kingdom. When Peter Singer discusses the human concept of equality in his book Practical Ethics, he explains that this equality means giving equal consideration to the interests of all parties. These parties can be very different from one another, just as the variation among human individuals can be described as a wide chasm, but no matter the differences, all parties should be viewed as equals. The reasoning is easy to grasp within our own species, especially from the perspective of racial or sexist discrimination. Singer describes how groups of people dehumanize those outside their group: for example, a population falling under the social race of white viewing those who are not white as less than human, as has happened in history. These groups held such views because of the level of melanin in others' skin, treating stereotypes and socioeconomic circumstances as evidence of an individual's worth. When society modifies what counts as part of a group, society alters who is going to be exploited, discriminated against, and belittled (Singer). Singer's approach is to extend the boundary of the group while disregarding the question of whether these animals are aware.
Humans, he argues, should extend equality toward animals because the very nature of equality for all depends on overlooking the differences among everyone in a group; this is the moral stance society should take regarding the ethical treatment of all forms of life. When the idea was introduced in the 1970s, many found his position an issue for another generation, given the media's focus at the time on racial inequalities within our own species. He argued back that the defense of animals, although seemingly minor compared to the defense of African Americans, should not be placed on some back burner for humans of another era to deal with. The extension of equality does not need to pass through phases or borders; to exclude a group from this equality would itself be a form of inequality expressed on behalf of humanity. Now take this idea a bit further: should it be extended toward an artificial intelligence? By its nature, such an entity would also be considered a different form of life. Or does the conversation change because an artificial intelligence could be far smarter than humans and therefore a threat? In terms of robotics, that is an understandable position to take. As society becomes infused with robotics and their intelligence increases, society wants to know how to manage and control them. As stated before, humans do not like the idea of a higher intelligence with the capacity to get rid of its creators. There will be many political and social changes when such machines become commonplace, to make sure they serve their sole purpose and only that. But is it ethical to force an entity that is considered alive into a life of service? Can humans and artificial intelligences be equals? Is it even ethical to create a synthetic life form that is alive?
In the context of extending rights toward animals, the idea is usually framed from the perspective of anthropomorphism (Singer). People see animals in pain and assume that they are suffering. If people were placed in a room watching a documentary of animals being slaughtered for meat, they would feel that anthropomorphic extension of feeling onto those creatures. Humanity is an anthropocentric species, so it is understandable that these feelings are extended onto beings and objects that may or may not be self-aware. This is where Singer's ideology, McFarland's, and Graziano's study raise interesting and conflicting questions. On one hand is the perspective of treating all creatures as equals, regardless of their intellectual capacities. On the other is the perspective of treating animals according to their level of self-awareness. When using the word "animals," one should try not to view them as a single overarching consciousness, but rather as many individual species with varying levels of consciousness, or none at all. With this in mind, is it morally acceptable to consume a species that does not possess a sense of awareness? Could such autonomous beings be mass produced, pumping out flesh for consumption without concern for their well-being, since their autonomy is equivalent to a machine that merely responds to its environment; in essence, treating them like non-sentient machines? How does one determine where the line begins for a species to fall under the category of self-awareness? Or, for the sake of our anthropomorphic desires, should society stop eating them altogether because of how humans personally feel when witnessing an animal in pain?
Whether humans assume a being to be of equal or lesser value can have ripples through society. An excellent example is racial and gender discrimination. For long stretches of human history, it was acceptable to view an African American as less than human. This reflected itself in politics: the United States once employed laws, such as the Three-Fifths Compromise, that counted a slave as less than a full American citizen. Likewise, society denied women voting rights for a time because women were seen as an inferior gender. The change in perspective on these groups, albeit gradual, eventually led to a better society today. Such shifts are already taking shape in some nations; India, for instance, recently passed rules forbidding the killing of dolphins and recognizing their sentience. A change in views toward robotics could have two possible outcomes according to Martin Ford, the head of a software development firm. One is a world in which capitalism drives artificial intelligence programs to replace not only white-collar jobs but also the so-called "good jobs" that would not otherwise be automated. Statistics cited by Ford suggest that roughly 47 percent of the human workforce could be susceptible to automation within the next two decades; to put that in perspective, imagine nearly half of working Americans unemployed by the 2030s. Programs like Narrative Science, which take raw information and thread it into a news article, could put journalists and reporters out of business. Google's development of driverless vehicles could eliminate the majority of transportation jobs as well. The resulting switch to a robotic workforce, although seemingly a cheaper alternative, could actually result in the collapse of the global economy.
The robots themselves are not going to put any money back into the economy; only the corporate heads reap the benefits (Ford). The other possibility is a society whose basic, mundane tasks and functions are completely automated and handled by machinery, with policy restrictions on what artificial intelligence can be used for. This would leave higher-end jobs to be sought by the general human population. Making that possible would also require adequate access to education, which could lead many modern nations to make education free. Human society could then have people reaching for what they have always wanted to do, while robotics supports the society's base necessities. These are two polar opposites, a dystopia and a utopia at the extremes of what could happen with artificial intelligence (Ford). It is more likely that something between these extremes will occur.
With regard to animals, whether people like it or not, society will have to shift toward a vegetarian, or at least less meat-based, diet for the sake of global progress. The rest of the world is catching up with developed nations, especially in terms of consumption. If the world consumed like developing countries such as Bangladesh, India, and Uganda, about half of the planet would be untouched by humans. The main reason is that developing countries tend to eat far less meat than developed countries do, and America in particular loves its meat. This is a major problem, as it takes roughly 13 pounds of grain to produce just 1 pound of meat. Some frightening statistics show that 80 percent of the corn and 95 percent of the oats grown are fed to livestock ("State of Consumption"). Human civilization would need 4.1 Earths growing mostly grain to produce the feedstock required for everyone to eat like an American. There are also societal impacts from these unhealthy diets: an estimated 65 percent of US adults are overweight or obese, which has led to an annual loss of 300,000 lives and at least 117 billion dollars in health care costs since 1999 ("State of Consumption"). Americans eat 815 billion calories of food each day, roughly 200 billion more than needed. Growing feed for animal farming has contributed to the loss of 50 percent of wetlands, 90 percent of northwestern old-growth forests, and 99 percent of the tall-grass prairies. While a future with robotics may be skirting the edge of dystopia, a future without meat-based products means a brighter future for the planet and a positive move toward a healthier society. It is worth noting that future policy changes dealing with animal protection will probably extend to mammals first, as much of the animal rights movement sways public opinion through species that humans find aesthetically pleasing.
With education about the environment and the animal kingdom offered to the broader population, however, there is hope that humans will recognize the value and importance of an animal species regardless of how "cute" it is. There is also hope that the recognition of other animals as self-aware could push society in this direction. Of course, there will most likely still be policies allowing the removal of invasive species that cause ecological damage, but requiring that they be removed in a humane fashion. Further policies regarding self-defense will also arise as animal protection advances, so that a human attacked by an animal may still defend against it. It is likely, however, that society will retain some consumption of meat products, most likely produced sustainably, with the animal euthanized in a humane manner. Will society be willing to change its diet after having had such easy access to meat? That is uncertain, but the path society must take is clear if humanity is to continue. Whether it is welcomed or not, this century will bring many societal changes with the acceleration of human technology and progress. As Peter Singer has quoted Jeremy Bentham:
“The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognized that the number of the legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate.” (Bentham, qtd. in Singer)
And if one thing may be added to that statement, it is the treatment of other forms of consciousness that do not fall under the traditional definition of life. May our changes in the ethical treatment of beings help to create a world that future generations can feel proud of.
Carr, Nicholas. "Tools of the Mind." The Shallows. New York: Norton, 2011.
Doane, Seth. "A Night in Japan's Robot Hotel." CBS News, July 22, 2015. http://www.cbsnews.com/news/inside-japan-robot-hotel-hennna-where-staff-are-robots/ (accessed Nov. 10, 2015).
Ford, Martin. Rise of the Robots: Technology and the Threat of a Jobless Future. New York: PBG, 2015.
Goodall, Jane. "About Chimpanzees." Jane Goodall Institute of Canada. http://www.janegoodall.ca/about-chimp-so-like-us.php (accessed Nov. 11, 2015).
Graziano, Michael. Consciousness and the Social Brain. New York: Oxford University Press, 2013.
Grimes, S., and Allan Cruickshank. "Injury Feigning by Birds." American Ornithologists' Union, Oct. 1936. http://www.jstor.org/stable/4078314 (accessed Nov. 11, 2015).
Langley, Liz. "What Do Animals See in the Mirror?" National Geographic, Feb. 14, 2015. http://news.nationalgeographic.com/news/2015/02/150214-animals-behavior-mirrors-dolphins-dogs-self-awareness-science/ (accessed Dec. 8, 2015).
Liu, C., Y. Tang, H. Ge, F. Wang, and H. Sun. "Increasing breadth of the frontal lobe but decreasing height of the human brain between two Chinese samples from a Neolithic site and from living humans." American Journal of Physical Anthropology, 2014. (accessed Dec. 8, 2015).
McFarland, David. Guilty Robots, Happy Dogs: The Question of Alien Minds. Oxford: Oxford University Press, 2008.
NASA. "University of Edinburgh's NASA Valkyrie Robot." Edinburgh, Sept. 2015. http://valkyrie.inf.ed.ac.uk/ (accessed Nov. 10, 2015).
Siddique, Ashik. "Frontal Lobe Size in Brain Does Not Explain Human Intelligence." Medical Daily, May 13, 2013. http://www.medicaldaily.com/frontal-lobe-size-brain-does-not-explain-human-intelligence-245843 (accessed Nov. 10, 2015).
Simon, Daniel, Karin Larsson, and David Nilsson. "An organic electronic biomimetic neuron enables auto-regulated neuromodulation." ScienceDirect, Sept. 15, 2015. http://www.sciencedirect.com/science/article/pii/S0956566315300610 (accessed Nov. 10, 2015).
Singer, Peter. In Defense of Animals. New York: Blackwell, 1985.
Singer, Peter. Practical Ethics. Cambridge: Cambridge University Press, 1979.
Spiegel, Alix. "No Mercy for Robots: Experiment Tests How Humans Relate to Machines." NPR, Jan. 28, 2013. http://www.npr.org/sections/health-shots/2013/01/28/170272582/do-we-treat-our-gadgets-like-they-re-human (accessed Nov. 10, 2015).
"State of Consumption." Worldwatch Institute, Jan. 1, 2013. http://www.worldwatch.org/node/810 (accessed Nov. 10, 2015).
Toto, Serkan. "Actroid-F: Japan's Super Realistic Humanoid Robot." TechCrunch, Oct. 18, 2011. http://techcrunch.com/2011/10/18/actroid-f-japans-super-realistic-humanoid-gets-a-brother-video/ (accessed Nov. 10, 2015).
Wall, Tim. "8.74 Million Species on Earth." Discovery News, Aug. 13, 2011. http://news.discovery.com/earth/plants/874-million-species-on-earth-110823.htm (accessed Nov. 11, 2015).
White, Macrina. "Should We Fear Artificial Intelligence?" Huffington Post, April 5, 2015. http://www.huffingtonpost.com/2015/04/05/fear-artificial-intelligence_n_6941994.html?utm_hp_ref=science&ir=Science (accessed Dec. 10, 2015).