This draft examines how we define consciousness within ourselves and in non-human entities such as other animal species and robots, and discusses the ethics of how these beings are viewed and how they should be treated. Near the end, it considers a hypothetical future: how past shifts in perspective on the value of a person or entity altered human society, and how these new perspectives could alter it again.
After writing this draft, I would like to devote more attention to the ethics and social implications of a society with robotics, and likewise to racial and sexist discrimination and its relationship to this new scenario.
The title was also changed, as the original, "How Robots May Make Us Vegetarians," severely limited how I could structure the paper to make its meaning clear to readers.
*Warning: This is a fairly lengthy draft.*
Alien Awareness: A Study of Consciousness
Human technological progress has been advancing exponentially and has produced technological wonders. Japan in particular has taken technology to levels that foreshadow a human society intertwined with robotics. Just this year, in Nagasaki, the world's first hotel staffed completely by robots opened. It is aptly named the "Henn'na Hotel," which translates to "strange hotel" (Doane). Everything is automated, from the check-in desk to baggage claim and even the concierge. You won't even need a card to get into your room, as each one has a facial recognition scanner. Inside you will find another robot that handles functions like lighting and contacting the front desk. Japanese culture has embraced robotics heavily: robots greet people as they enter stores and appear in restaurants, commercials, and schools. For about $1,600, you can own a robot that can read your facial expressions and determine your feelings (Doane). Many robots have become extremely sophisticated over time, to the point of being disturbingly lifelike, and one of the hotel's front desk clerks is of this type. The machine has been programmed to mimic human facial responses, behave appropriately in certain situations, and communicate information. Farther from home, NASA has developed a robotic assistant named Valkyrie (NASA). It has a wide range of motion and is intended to manage tasks on the International Space Station on its own, taking orders and communicating with ground control when necessary. These machines, although expensive, are seen as safer alternatives that can be manufactured more quickly than human astronauts can be trained.
Back on the surface of the earth, androids have become greatly popular in Japan. One recent android taking the world by storm is Actroid-F, a special model with an extremely lifelike appearance, range of motion, and manner of communication. In fact, these models can converse with one another the way people do (Serkan). They are being used in trials in understaffed nursing homes to provide company for patients. When one trial ended, the patients felt lonely and sad, as if someone they cared about had moved away. But that someone never actually existed: we know exactly how the android's programming functions, and we know this model has no ability to know that it exists. These robots, even though they take on human appearances and behavior, have no more awareness than the facial recognition scanners attached to the hotel rooms described earlier. An older case study from the 1990s describes something similar: Stanford professor Clifford Nass arranged an experiment involving human reciprocity (Spiegel). Reciprocity is a feature of human culture that appears everywhere without exception; humans feel the need to respond in kind. In this experiment, participants worked with a computer that would answer questions for them. One computer was extremely helpful, the other much less so. At the end of the session, the computer would ask the participant to help it improve for future questions. Those who had used the helpful computer assisted it far more than those who had used the unhelpful one, and they were not aware that they had done so. This differs from the androids in the nursing home, as these were just computer screens without a voice or face. Still, humans reacted in a social, anthropomorphic way (Spiegel). Nass had this to say about the situation:
“The relationship is profoundly social,” he says. “The human brain is built so that when given the slightest hint that something is even vaguely social, or vaguely human – given the slightest hint of humanness, people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating.” (Spiegel)
This means that our ability to determine whether something is alive is essentially flawed: these behaviors can be replicated by software, and our brains will treat the entity as if it were a person. This path of development leads to some disturbing questions that we will probably have to face within this century. What happens when these robots become so lifelike, matching our behaviors so perfectly and seamlessly, that we cannot distinguish a machine running a program from a biological entity that is conscious? It raises several other questions about what it means to be human and how we define ourselves. Are we really conscious, or are we the result of genetic "programming" that makes us think that we think? What is the nature of our sentience?
When we try to describe what makes us the way we are, the scientific approach is to examine the brain. The brain and our central nervous system can be used to explain our behaviors. So we turn to the aspect of humanity that we view as especially unique to our species: our emotions and feelings. The frontal lobe deals specifically with these functions and has been heralded in the anthropology community as a possible reason modern humans were able to create abstract thought and ideas after the split from our relatives around 2 million years ago. However, more recent studies reviewing this information show it to be false (Siddique). When you scale the frontal lobes of other species in relation to their brain size, they are almost equivalent to ours. The only real difference between us and our relatives is the number of neurons within the cerebellum. This is strange, because the cerebellum is responsible for primitive controls and was the earliest part of the brain to develop in evolution (Siddique). This does not downplay the importance of the frontal lobe, but it raises interesting questions about what causes our brains to function the way they do. Another aspect of human consciousness is memory. Memory is controlled by the hippocampus, which also handles organizing and storing information, navigation, and spatial awareness (Bailey). It is the part of your brain that triggers past memories when you smell something familiar or recall how you once felt, and it is extremely important for long-term memory. This part of the brain is not unique to us, however, as other species have similar structures. Robots also have the capacity to store information, react to it, and even learn from it. However, there is a big difference in how robots and biological entities deal with memory.
Nicholas Carr provides a good example: an individual learns something, and the information is stored in the brain. Time passes, the idea is mostly forgotten, and then it is relearned. Compare this with a computer file that is created, deleted, and created again. The brain that relearns is different from the brain that first learned, while the computer simply restores the exact information as it was before. And the computer does so without any relationship to the other information in its storage device, unlike the brain, which makes connections among all the pieces of information it holds (Carr).
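Carr's contrast can be sketched in code. This is a toy illustration of my own, not Carr's example: re-creating a file yields a byte-identical artifact, while a toy associative memory "relearns" against a state that intervening experience has already changed.

```python
import hashlib

# A computer "remembers" exact bytes: create, delete, re-create
# yields an identical artifact.
def store(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

first = store(b"the capital of France is Paris")
# ... file deleted, then re-created from the same source ...
second = store(b"the capital of France is Paris")
assert first == second  # byte-for-byte the same

# A toy associative memory: relearning happens in a network that other
# experiences have changed in the meantime, so the trace differs.
class AssociativeMemory:
    def __init__(self):
        self.links = {}  # (concept_a, concept_b) -> strength

    def learn(self, a: str, b: str):
        # Each exposure strengthens the link relative to everything
        # already stored, so "relearning" never restores the old state.
        self.links[(a, b)] = self.links.get((a, b), 0) + 1

brain = AssociativeMemory()
brain.learn("France", "Paris")
snapshot = dict(brain.links)
brain.learn("Paris", "Eiffel Tower")   # unrelated experience in between
brain.learn("France", "Paris")         # "relearning"
assert brain.links != snapshot         # the relearned brain differs
```

The point of the sketch is only the asymmetry: the file round-trips unchanged, while the associative store never returns to its earlier state.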
We come to the question of whether humans can actually build a robot that creates the same level of connections the human brain can. Is a synthetic version of the brain possible? Some scientists believe a fully functioning cybernetic brain could be built within a decade. This year, an artificial neuron was developed that can function like its organic counterpart (Simon). A neuron is the basic component of the brain, and it allows information to be transferred. This artificial neuron releases a signal to the neighboring neuron, whether that neuron is organic or another artificial one. This means the technology could be used to fix disorders on the neurological scale with artificial parts (Simon). It also suggests we could construct entire sections of a brain from mechanical parts. That second step is far from development: even though we can replicate the tiniest aspect of how the brain functions, creating a frontal lobe, for example, would require far more study and testing. How a human brain would react if we replaced part of it with an artificial frontal lobe is entirely a matter of speculation. If we continue on this path of development, however, it could be possible within our lifetime for humans to develop a new form of existence. We would call this an artificial intelligence, which is completely different from the robots discussed earlier. In fact, this entity wouldn't even require a body that looks human; it would be aware of its surroundings and conscious of its own thoughts and ideas. It would be able to do things no human mind could ever do, an entity without the limitations imposed by DNA.
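The neuron's basic role as an information-transferring component can be illustrated with the textbook "leaky integrate-and-fire" model. This is a generic software sketch of that idea, not the mechanism of the organic electronic neuron described by Simon and colleagues: the cell accumulates incoming signal, leaks toward rest, and fires once a threshold is crossed, passing the signal on.

```python
# Minimal leaky integrate-and-fire neuron: it accumulates input
# "current," leaks toward rest each step, and emits a spike (1)
# once its potential crosses the threshold, then resets.
def simulate(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)   # fire, passing the signal onward
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# Steady weak input sums past threshold and fires periodically.
print(simulate([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Whether the neighbor receiving the spike is organic or artificial makes no difference to the model, which is exactly the interchangeability the artificial neuron demonstrates in hardware.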
Human beings are afraid of this; it's something we've been trying to avoid for a long time. There is something about another form of intelligence with capacities similar to our own that causes us concern. Why is this? Could it be that society has portrayed forms of intelligence greater than our own as aggressors? How many Hollywood movies portray artificial intelligence as the destroyer of human civilization? The astrophysicist Neil deGrasse Tyson argues that this fear of a higher intelligence is actually a result of what humans have done to other humans in history. When one group of humans has held a technological advantage over another, the less advanced side has usually been consumed or destroyed. A clear example is how colonists obtained American land from the Native Americans. We are afraid that these beings will do to us what we have done to ourselves. This idea is rooted in a much deeper human conceit: that human consciousness is the only form of conscious existence, and that an artificial intelligence would therefore behave according to humanity's consciousness. Now, if humans do create an artificial intelligence, it will undoubtedly be based on the human brain; we have no other framework from which to design a self-aware being. Even so, we have to recognize that the way this being experiences life may be completely different from what a human mind experiences. An odd conclusion follows: this being, even though it may have awareness, would be a distinct consciousness that doesn't exist within the realm of humanity. Humanity and this new entity would be separate windows of experience within the overarching category of sentience.
What happens when we take this a step further and move beyond the consciousness of humanity and robotics to the case of animals? It is important to recognize that humanity is part of the animal kingdom. This is another of our conceits: humans view themselves as entities distinct from the rest of nature. Even our depictions of evolution show humans rising from shorter, more primitive ancestors in a biased way, a perspective that reads as though humans are the epitome of all life. We give ourselves special titles, place our species on a pedestal, and define what consciousness has to be for an entity to count as alive. Even our religions contain these conceits, with texts stating that deities created animal life for humanity's benefit alone. Yet when we study non-human animal species, we find them engaging in activities that we would describe as human behavior, or as requiring significant intelligence (Goodall). In discussing these animals and their intelligence, I am asking whether these beings are actually aware of what they are doing, rather than how well acclimated they are to their environment.
In David McFarland's book Guilty Robots, Happy Dogs, he gives the example of his dog going into the kitchen for food, and he walks through the different perspectives one can take on the situation. From the anthropomorphic stance, we would say the dog is hungry like a person and wants to get food. From a scientific viewpoint, we would say that through associative learning the dog has learned that food is in the kitchen and goes there to satisfy its hunger. Finally, from a realist perspective, one could say the dog believes there is food in the kitchen and that is why it goes there to eat. McFarland goes on to describe how these ideas extend into philosophical debate, since whether these animals know what they are doing is beyond scientific testability at the moment. In philosophy, the standard starting point has been the behaviorist position. Behaviorism bases a species' capacity for awareness on its behaviors; such behavior has been documented many times in animals that act a certain way for a reward such as food, a clear result of Pavlovian training. It is now seen as an outdated model, as will be discussed later. Philosophers have instead broken this level of awareness into categories of intention (McFarland). It is important to note that these non-human beings may not have every quality of feeling, or qualia, that humans have, but if they have a level of awareness they would at least possess a few of them. The levels of intention are defined as zero-, first-, and second-order systems. To describe them, consider a killdeer, a bird that acts injured to lure a predator away from its nest. In a zero-order system, the bird behaves based on associative learning from other birds about how to react in this situation to keep the nest from being attacked.
In a first-order system, the bird believes that doing so will cause the predator to follow it away from the nest. In a second-order system, the bird believes the predator will believe it is injured and will follow it away from the nest. The problem in nature is that we have not been able to determine whether any other species can believe that others believe, as humans can (McFarland). There is also the underlying issue of relying on an animal's behavior alone to determine whether it is actually aware. Relating this back to robotics: someone with no knowledge of the inner workings of these entities would assume the most sophisticated of these machines actually possess a form of consciousness, while the engineer who designed the software and its behavior would say otherwise, because that individual understands the ins and outs of the machine. In the case of animals, we do not completely know what is going on in their brains, whether they are merely autonomous or actually hold beliefs (McFarland).
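One way to see what separates the three orders is to treat each order as one more layer of "believes that." The sketch below is my own illustration of McFarland's taxonomy, with hypothetical names; it models a belief as a structure whose content can itself be another belief, so the order is simply the nesting depth.

```python
from dataclasses import dataclass
from typing import Union

# A belief: an agent holds that some content is true. The content may
# itself be another belief, which is what produces higher orders.
@dataclass
class Believes:
    agent: str
    that: Union[str, "Believes"]

def order(state) -> int:
    """Zero order is plain behavior; each layer of belief adds one."""
    depth = 0
    while isinstance(state, Believes):
        depth += 1
        state = state.that
    return depth

# The killdeer example at each level:
zero = "feign injury when a predator nears the nest"      # learned response
first = Believes("killdeer", "the predator will follow me")
second = Believes("killdeer", Believes("predator", "the bird is injured"))

print(order(zero), order(first), order(second))  # 0 1 2
```

The open empirical question in the text maps onto whether any non-human animal's mental state is ever genuinely a depth-2 structure rather than a depth-0 or depth-1 one.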
When we compare ourselves to the animal kingdom and try to define consciousness, a common species to compare ourselves to is the chimpanzee. Chimp DNA differs from ours by approximately 2 percent, and chimps are a great case study of animals that have developed a primitive level of culture (Goodall). In fact, they are genetically closer to our species than to gorillas. The development of culture and the stratification of individual members of a group is especially apparent in bonobos. Our two species have similar gestation periods and similarly lengthy child development, with infants requiring constant care for many years. Chimps also display many behaviors that parallel those of human infants, such as playfulness, curiosity, and the need for attention and physical contact. The central nervous systems of humans and chimps overlap considerably, so it would not be a stretch to say they are also capable of forming abstract thoughts. They can even recognize themselves in mirrors, which is rare: out of the roughly 7.7 million known animal species (with about 10,000 more discovered each year), only about 10 can pass this test (Wall). They include humans, chimps, gorillas, orangutans, dolphins, elephants, orcas, bonobos, macaques, and magpies. This is not to say the other millions of species lack a sense of self; we just cannot determine whether they have one. Returning to chimps in particular, they also have a wide range of communication capabilities and emotional states. There are several differences, however: they do not have a vocal tract that allows complex language (Goodall). They can learn sign language and even teach it to their young, but nothing beyond that. And the intellectual capacities of even their most gifted individuals are dwarfed by our own. When we bring the earlier discussion of levels of intention to chimps, we find something similar occurring as well.
Consider a mother teaching her young how to use tools; we can divide the situation into the same three levels. In the zero-order system, the mother is merely undergoing associative learning and is not aware of what she is doing; this has been shown to be false in this case. In the first-order system, the mother wants the child to understand and believes the youngling will learn via association; this has been shown to be the case. We are still stuck on the second-order system, where we cannot prove that the mother believes the child will understand what it is doing and realize that it is doing so. We also cannot simply ask, as such a question would require an abstraction of thought the chimps may not possess. This is where a mixture of behaviorism and biological understanding can come together to give us a general idea of whether a species is actually aware of what it is doing.
Why do we care so much about whether these beings are aware? Much of it probably comes from our interactions with them. Perhaps we want to know whether the beings we interact with should be treated a certain way based on their level of awareness. How terrible would it be to find out that all the animal species we have been consuming possessed a concept of self and a form of awareness? This leads into the conversation about the ethics of how we treat other species in the animal kingdom. When Peter Singer discusses the human concept of equality for all in his book Practical Ethics, he explains that this equality concerns the interests of all parties. These parties can be very different from one another, just as the variation among human individuals can be described as a wide chasm, yet no matter the difference we should view all as equals. The reasoning is understandable within our species, especially from the perspective of racial or sexist discrimination. He goes on to describe how groups of people view others outside their group: say, a population falling under the social race of white viewing non-whites as less than human. These groups viewed others this way because of the level of melanin in their skin, treating certain stereotypes and socioeconomic situations as evidence of their value as individuals. When we alter what we consider part of a group, we alter who is going to be exploited, and who will be discriminated against and belittled (Singer). Singer extends this reasoning while setting aside the question of whether animals are aware: we should extend our equality toward them, because the very nature of equality for all depends on overlooking the differences within a group. That, he argues, is the moral stance we should take toward the ethical treatment of forms of life.
Now let's take this idea a bit further: should we extend it to an artificial intelligence? By its nature, such an entity would also be a different form of life. Or does the conversation change because an artificial intelligence could be far smarter than us and therefore a threat? With robotics, the worry is understandable: as our society becomes infused with robots and their intelligence increases, we want to know how to manage and control them. As stated before, we don't like the idea of a higher intelligence with the capacity to get rid of us. There will be many political and social changes when they become commonplace, meant to ensure they serve their sole purpose and only that. But is it ethical to force an entity that is considered alive to commit to a life of service? Can humans and artificial intelligence be equals?
In the context of extending rights to animals, the idea is usually framed in terms of anthropomorphism (Singer). We see animals in pain and assume that they are suffering. If people were placed in a room watching a video of animals being slaughtered for meat, they would feel that anthropomorphic extension of feelings onto those creatures. We are an anthropocentric species, so it is understandable that we project our feelings onto beings and objects that may or may not be self-aware. This is where Singer's ideas and McFarland's study raise interesting and conflicting questions. On the one hand, there is the perspective of treating all creatures as equals, no matter their intellectual capacities. On the other, there is the perspective of treating animals according to their level of self-awareness. When we say the word "animals," we should try not to view them as one overarching consciousness, but as many individual species with varying levels of consciousness, or none at all. With this in mind, is it morally acceptable to consume species that have no sense of self or awareness? Can such autonomous beings be mass-produced without concern for their well-being, since their autonomy is equivalent to that of a robotic machine that only responds to its environment? How do we determine where the line of self-awareness begins for a species? Or, for the sake of our anthropomorphic desires, should we stop eating them altogether because of how we personally feel when we see an animal in pain?
Our perspective on whether a being is of equal or lesser value can have ripples through society. An excellent example is racial and gendered discrimination. For long stretches of human history, it was acceptable to view a black person as less than human. This reflected itself in our politics: we once had laws that counted a slave as less than a full American citizen, and we denied certain groups voting rights for a time. Our change of perspective on these individuals, albeit gradual, eventually led to a better society today. A change in views toward robotics could have two possible outcomes according to Martin Ford, head of a software development firm. One is a world where capitalism drives artificial intelligence programs to take over not only white-collar jobs but jobs we would call "good jobs," ones that cannot be replaced by a simple repetitious machine. Statistics provided by Ford suggest that roughly 47 percent of the human workforce could be susceptible to automation within the next two decades. To put that in perspective: with a U.S. labor force of roughly 150 million, that would mean on the order of 70 million Americans out of work in the 2030s. Programs like Narrative Science, which take information and thread it into a news article, could put journalists and reporters out of business. Google's development of driverless vehicles could eliminate many transportation jobs as well. The resulting switch to a robotic workforce, although seemingly a cheaper alternative, could actually lead to the collapse of the global economy, since the robots themselves aren't going to put any money back into the economy while the corporate heads reap the benefits (Ford).
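Ford's 47 percent figure can be made concrete with a back-of-the-envelope calculation. The labor force size here is my own rough assumption for the mid-2010s United States, not a number from Ford:

```python
us_labor_force = 150_000_000   # rough mid-2010s U.S. figure (assumption)
automatable_share = 0.47       # Ford's susceptibility estimate

# Jobs at risk if the 47% share were realized across the labor force.
jobs_at_risk = round(us_labor_force * automatable_share)
print(f"{jobs_at_risk:,} jobs susceptible to automation")
```

Even under conservative assumptions about the labor force, the share translates to tens of millions of workers, which is what gives the projection its force.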
The other possibility is a society whose basic, mundane tasks are completely automated and handled by machinery, with policy restrictions on what artificial intelligence can be used for. This would leave higher-end jobs for the general human population to pursue. Doing so would also require adequate access to education, which could push many modern nations toward tuition-free universities. Human society would then have people free to pursue what they have always wanted while robotics support its basic necessities. These are two polar opposites, a dystopia and a utopia at the extremes of what could happen with artificial intelligence (Ford).
With regard to animals, whether people like it or not, we will have to shift toward a more vegetarian diet for the sake of global progress. The rest of the world is catching up with developed countries, especially in terms of consumption. If the whole world consumed like developing countries such as Bangladesh, India, and Uganda, about half of the planet would be untouched by humans. The main reason is that developing countries tend to eat far less meat than developed countries do, and America loves its meat. There is a major problem with this: it takes roughly 13 pounds of grain to produce just 1 pound of meat. Some frightening statistics show that 80 percent of the corn grown and 95 percent of the oats are fed to livestock ("State of Consumption"). We would need four Earths growing mostly grain to produce feed for the meat required for everyone to eat like an American. There are also societal impacts from our unhealthy diets: an estimated 65 percent of US adults are overweight or obese, which has contributed to an annual loss of 300,000 lives and at least 117 billion dollars in health care costs since 1999 ("State of Consumption"). Americans eat about 815 billion calories of food each day, roughly 200 billion more than needed; that surplus alone is enough to feed 80 million people ("State of Consumption"). This is sobering, since 10 million people die each year from hunger-related causes. Developing countries do not require as much land because it is much easier to grow plants than animals, and plant crops deliver more food energy to people per acre, since energy is lost at every step up the food chain. Their impact on the environment is also negligible compared to Americans'. Growing food for animal farming has contributed to the loss of 50 percent of wetlands, 90 percent of northwestern old-growth forests, and 99 percent of the tall-grass prairies, and every day an estimated nine square miles of rural land are lost to development.
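The "80 million people" figure above follows directly from the source's calorie numbers; the per-person daily requirement used below is my own rounding assumption of about 2,500 calories.

```python
# Figures from the "State of Consumption" source.
grain_per_lb_meat = 13               # pounds of grain per pound of meat
us_daily_intake = 815_000_000_000    # calories eaten per day in the US
us_daily_surplus = 200_000_000_000   # calories beyond what is needed

# Per-person daily calorie need (rounding assumption, not from source).
calories_per_person = 2_500

people_fed_by_surplus = us_daily_surplus // calories_per_person
print(f"{people_fed_by_surplus:,} people")  # 80,000,000

# The grain ratio also means every pound of meat forgoes 13 pounds of
# grain that could have been eaten directly.
print(f"{grain_per_lb_meat} lb grain per lb meat")
```

So the surplus-feeds-80-million claim is internally consistent with the calorie statistics the source reports.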
While a future with robotics may be skirting the edge of dystopia, a future without meat-based products means a brighter future for the planet and a positive move toward a healthier society.
Whether we like it or not, this century will bring many societal changes as human technology and progress accelerate. As Peter Singer, quoting Jeremy Bentham, has said:
"The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognized that the number of the legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate." (Singer)
And, if I may add, so too for the treatment of other forms of consciousness that do not fall under the traditional definition of life. May our changes in the ethical treatment of beings help create a better world, as they have for our society today.
Bailey, Regina. “Hippocampus.” About Education. About. Jan. 1, 2014. http://biology.about.com/od/anatomy/p/hippocampus.htm. (accessed Nov. 11, 2015)
Carr, Nicholas. Tools of the Mind. The Shallows. New York: Norton, 2011.
Doane, Seth. “A night in Japan’s robot hotel”. CBS. July 22, 2015. http://www.cbsnews.com/news/inside-japan-robot-hotel-hennna-where-staff-are-robots/ (accessed Nov. 10, 2015)
Ford, Martin. 2015. Rise of the robots: technology and the threat of a jobless future. New York. PBG.
Goodall, Jane. “About Chimpanzees”. Jane Goodall Institute of Canada. http://www.janegoodall.ca/about-chimp-so-like-us.php (accessed Nov. 11, 2015)
Grimes, S. Cruickshank, Allan. “Injury Feigning: By Birds”. American Ornithologists’ Union. (Oct. 1936). http://www.jstor.org/stable/4078314 (accessed Nov. 11, 2015)
McFarland, David. 2008. Guilty robots, happy dogs: the question of alien minds. Oxford: Oxford University Press.
NASA. “University of Edinburgh’s NASA Valkyrie Robot”. Edinburgh. Sept. 2015. http://valkyrie.inf.ed.ac.uk/ (accessed Nov. 10, 2015)
Siddique, Ashik. “Frontal Lobe Size in Brain does not explain Human Intelligence”. Medical Daily. May 13, 2013. http://www.medicaldaily.com/frontal-lobe-size-brain-does-not-explain-human-intelligence-245843 (accessed Nov. 10, 2015)
Simon, Daniel. Larsson, Karin. Nilsson, David. “An organic electronic biomimetic neuron enables auto-regulated neuromodulation”. ScienceDirect. Sept. 15, 2015. http://www.sciencedirect.com/science/article/pii/S0956566315300610 (accessed Nov. 10, 2015)
Singer, Peter. 1985. In defense of animals. New York, NY, USA: Blackwell.
Singer, Peter. 1979. Practical ethics. Cambridge: Cambridge University Press.
Spiegel, Alix. “No Mercy for Robots: Experiment Tests how Humans Relate to Machines”. NPR. Jan. 28, 2013. http://www.npr.org/sections/health-shots/2013/01/28/170272582/do-we-treat-our-gadgets-like-they-re-human (accessed Nov. 10, 2015)
“State of Consumption.” Public. WSU. Jan. 1, 2013. http://www.worldwatch.org/node/810 (accessed Nov. 10, 2015)
Serkan, Toto. “Actroid-F: Japanese Super realistic Robot”. TC. Oct. 18, 2011. http://techcrunch.com/2011/10/18/actroid-f-japans-super-realistic-humanoid-gets-a-brother-video/ (accessed Nov. 10, 2015)
Turner, Rebecca. “10 Animals with Self Awareness”. Lucid Dreaming. Oct. 31, 2015. http://www.world-of-lucid-dreaming.com/10-animals-with-self-awareness.html (accessed Nov. 11, 2015)
Wall, Tim. “8.74 Million Species on Earth”. Discovery. Aug. 13, 2011. http://news.discovery.com/earth/plants/874-million-species-on-earth-110823.htm (accessed Nov. 11, 2015)