Chapter 5: What is sentience?

In the middle of Chapter 5, Patchett describes how Gen, despite being a genius, is at a loss when it comes to using his own words. Shortly after, the author writes, “It had occurred to him in his life that he had the soul of a machine and was only capable of motion when someone else turned the key.” This sparked my interest in robotics, specifically sentient machines. I began researching the most advanced robots on the market and came across ASIMO. The name stands for Advanced Step in Innovative Mobility. It is a robot developed by Honda that can walk, climb stairs, and communicate, with advanced agility and state-of-the-art sensors. (A video link is included in the sources below.) It is intended to serve as a service robot that helps people. The project has been ongoing since the 1980s, when Honda was first trying to get the machine to walk at all, and now, in 2015, it can play sports! I also came across another robot, known as Valkyrie, a project being developed by NASA. It looks very much like a superhero robot, which was the design intention. Its actual function is to be an autonomous machine that can maintain systems aboard the orbiting International Space Station.

Even though these robots are extremely advanced for their kind, they do not possess self-consciousness, the quality we would call an artificially generated intelligence. Google, however, is actually investing in computers that may eventually be capable of just that. Google's director of engineering, Ray Kurzweil, believes in what he calls the singularity, the point at which artificial intelligence will overtake human intelligence; he predicts that by the year 2029, robots will be able to do all the things human beings can do, only better. This does not bode well for the economy, especially as many corporations seek cost savings by making their production and services fully automated. A Japanese hotel opening this year in Nagasaki will have no human workers: everything will be automated, and you will be greeted, and your luggage carried to your room, by a service robot. These robots have been made to look extremely humanlike and can even respond with gestures and the proper facial expressions. I thought this was quite odd, but then I came across Joseph Weizenbaum's famous experiment from the 1960s, in which people answered questions from a program, ELIZA, designed to mimic how a human would communicate. The results were profound: many participants became emotional, and some even asked the observers to leave so they could have a personal conversation with the software, as if it were a person. It is theorized that with enough advancement in software, you would not be able to distinguish whether you were talking to a person or a robot, since our brains pick up on social cues that can be replicated in software.
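What is striking is how little machinery it took to trigger those reactions. A toy sketch of the kind of pattern-matching Weizenbaum's program relied on (these rules and reflections are my own invented examples, not his actual script) might look like this:

```python
import re

# A toy ELIZA-style responder: it has no understanding at all, just
# keyword patterns and canned reflections. Rules like these were enough
# to make some people treat the original program as a confidant.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when nothing matches

print(respond("I feel alone today"))  # Why do you feel alone today?
```

The program merely mirrors the speaker's own words back, yet that reflection is exactly the social cue our brains latch onto.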

In his book The Shallows, Nicholas Carr, an avid writer on technology and culture, brings up biologists who argue that intelligence, or what makes us alive, is not just a matter of how much information can be processed. Carr discusses a misconception common in the wider population: that if you create a computer that can process more and more information, it will eventually become sentient. This completely misunderstands how the human brain works. What makes us who we are is how our brains operate: how they experience the world, how they store memory and emotion, and how they learn. These functions can be externally replicated with software, but that would not create a sentient being. Carr offers a good example: say you create a file on a computer, delete it, and then create it again. It is, for all practical purposes, the exact same file. Compare that to the human equivalent, where we learn something, leave it alone for a time, and then refresh ourselves on it. The brain that relearns the idea is different and processes the relearned information differently, while the computer restores the file exactly as it was the first time. The brain is an enormously complicated organ, and even though it processes a ton of information, what makes us what we are is how the brain interacts with each piece of information and relates it to the others.
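Carr's file analogy is easy to demonstrate in a few lines (the file name and contents here are arbitrary placeholders of my own):

```python
import hashlib
import os
import tempfile

# Demonstrate Carr's point: a file deleted and then recreated with the
# same bytes is indistinguishable from the original. The computer keeps
# no trace of the deletion, unlike a brain that relearns something.
def write_and_hash(path: str, data: bytes) -> str:
    with open(path, "wb") as f:
        f.write(data)
    return hashlib.sha256(data).hexdigest()

path = os.path.join(tempfile.mkdtemp(), "note.txt")
first = write_and_hash(path, b"what I learned")
os.remove(path)                                    # "forget" it
second = write_and_hash(path, b"what I learned")   # "relearn" it
print(first == second)  # True: byte-for-byte identical
```

The two hashes match exactly, whereas no two passes of human learning leave the brain in an identical state.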

Even though Carr makes a good argument that robots will not be sentient the way we are, he has a bias, which came up while I was reading, that I would like to discuss further. He takes on the assumption that sentience is a characteristic that can exist only within humans. Such an assumption, I think, limits our perspective, as it is quite self-centered to judge what existence can be on the basis of a single dominant species. Many other animal species on Earth exhibit qualities we have taken for granted, such as the lively social communities of our chimpanzee relatives. Similar behavior is common in other mammals like dolphins, and animals like crows have been shown to exhibit intelligent capabilities. Every time we take a closer look at the animal kingdom, to quote the primatologist Jane Goodall, “We find animals doing things that we, in our arrogance, used to think was just human.” Could it be possible to create sentient beings out of machinery that, in a sense, mimic the human brain's way of understanding the world? Some scientists believe it is possible to create a synthetic counterpart of the human brain; however, such beings would experience the world differently from humans, and they would be alive in a completely different sense, one as invisible to us as certain wavelengths of light are invisible to our retinas. They would be new life. So in a sense Carr was right: they will not be sentient, not sentient like *we are*.

This idea of creating artificial intelligence draws arguments from both sides of the spectrum. Some say machines should become intelligent so they can solve problems without the need for human intervention. Others say that creating such beings would be unethical, as it would amount to forcing a group of beings into slave labor, so to speak. It raises new questions about what it means to be a sentient being and what other forms of existence there can be. Homo sapiens have existed on Earth for about 200,000 years, and our universe is calculated to be around 13.7 billion years old; the universe was already roughly nine billion years old before our star even fused into existence. Who is to say we are the measure of what it means to be alive and sentient, when there could be beings across the stars that have existed far longer than we have and who also experience the universe? These beings, most likely composed of a different arrangement of matter, would have a perspective on existence completely alien to our own. But they may feel just as “alive” and “aware” in their own right as we do. People use the words sentience and humanity synonymously, extending the idea of a human consciousness to other beings, when perhaps sentience is not a state of being that only the human species can hold.

Sources:

Ackerman, Evan. “NASA JSC Unveils Valkyrie DRC Robot.” IEEE Spectrum. Accessed October 1, 2015. http://spectrum.ieee.org/automaton/robotics/military-robots/nasa-jsc-unveils-valkyrie-drc-robot.

Cadwalladr, Carole. “Are the Robots about to Rise?” The Guardian. Accessed October 1, 2015. http://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence.

Bridge, Adrian. “Robots to Serve Guests in Japanese Hotel.” The Telegraph. Accessed October 1, 2015. http://www.telegraph.co.uk/travel/destinations/asia/japan/11387330/Robots-to-serve-guests-in-Japanese-hotel.html.

Zeidner, Moshe, Gerald Matthews, and Richard D. Roberts. What We Know about Emotional Intelligence: How It Affects Learning, Work, Relationships, and Our Mental Health. Cambridge, Mass.: MIT Press, 2009.

Carr, Nicholas G. The Shallows: What the Internet Is Doing to Our Brains. New York: W.W. Norton, 2011.

Artificial Intelligence (journal). Amsterdam: Elsevier. http://www.sciencedirect.com/science/journal/00043702.

Video of ASIMO:

https://www.youtube.com/watch?v=QdQL11uWWcI


4 thoughts on “Chapter 5: What is sentience?”

  1. Michael Pedersen says:

I’ve been casually following the discussion on artificial intelligence for quite some time, and one huge topic that has come up in recent years is the concept of a basic income. Although this idea didn’t emerge from the artificial intelligence debate, the idea of giving every American citizen a basic, unconditional income is one that appeals to many as the economic threats of artificial intelligence loom. In general, basic income would serve as an alternative to the current welfare system and is a major point you should look into if you plan on continuing with this research.

As I mentioned in class, the main driving force accelerating the development of highly advanced artificial intelligences is the car industry. Honda, Tesla, Google, and the rest all have stakes in the future of autonomous driving cars, and many, like Google, are combining a lot of distinct artificial intelligences into a network and having them all work in concert to fully automate the car’s driving. Although this would only register as a “weak” AI, it is an important step in developing general-purpose “strong” AIs that can function approximately as well as a human in most tasks.

As you hinted at while describing the fully automated Japanese hotel, a lot of effort is being put into creating robots that not only understand humans but actually look like us too. If you want to expand your research and look into the social perception of human-like robots, I highly suggest looking into the term “uncanny valley,” which describes how a robot that looks almost real, but betrays itself through the fluidity of its motion or the way the motors move beneath the skin, ends up seeming just… strange. Suffice it to say that it’s a phenomenon best demonstrated visually.

    Overall, I enjoyed your post and appreciated the breadth of discussion covered.

    Like

  2. slaudeman says:

    I really like your research idea this week. As I see it, there are a number of different ways you could take this in the future. You bring up the idea of creating consciousness, and whether or not it is possible. You also talk about the fact that consciousness in humans may not be the same as consciousness in other animals, or robots. However, I agree that this should not be taken to mean that only humans are sentient. Thus, I think that another research question that could be of interest to you is the explanation of consciousness and how we define consciousness. Additionally, you could also address the ways in which the definition of consciousness could be expanded in order to apply to other types of awareness. I think that breaking away from the human-centric mentality will be key in order to accept any kind of artificial intelligence in society.
    The ethics of machine integration and artificial intelligence are an entirely different field of questions. The idea of addressing whether or not AIs are “human” or “conscious” becomes an issue all its own, and then there is the question of human interactions. Should there be laws to govern the ways in which AIs can act, react, and interact? Should the programming in AIs be inherently bound by a series of rules which are safe, or is that an infringement on the consciousness and freedom of the AI? You also touched on the idea of robots taking over human jobs, which could also be ethically unreasonable. It seems to me that all of the issues are intertwined.

    -Sara Laudeman

    Like

  3. ballen68 says:

I thought that your research this week was extremely interesting. I enjoyed looking at the things you had to say about new robotic technology; it made me think about I, Robot! I enjoy innovation and the mystery of how far humans will go with our technology, given our limitless capacity for knowledge. The question you provoke about “what is sentience” is quite interesting.
I think it taps into a lot of moral issues, and whether or not we are the only ones that deserve to be called sentient. I definitely believe that animals perceive emotions and have feelings. However, I am not sure we will ever be able to create something that truly has feelings and can discern those feelings and take action on them. Man is not capable of being God. That being the case, I think it is foolish of us to try. Look at what happened with the Tower of Babel when man attempted to become like God. It wasn’t the fact that they were innovating and working together to build something amazing that caused the issue; it was the goal of being on an equal level with God. Being God comes with responsibility that we as finite beings would crumble beneath. The idea of creating robots is awesome, and I believe that we should make them as smart and efficient as possible, but not capable of emotions. Emotions are totally different from intelligence. Scientists have made robots that are starting to make decisions on their own; that is intelligence, and that is what we should strive to improve. Creating robots that serve us with no emotions is ethical, but attempting to create ones that will serve us and have emotions is wrong.

    -Bryson

    Like

  4. katelynzander says:

I really love how deep you take your topics. I always enjoy hearing you talk about your research and how you find an answer to a question, and if you do not agree with it, you end up researching a different avenue. With the current advancement of technology, this subject of self-aware robots is relevant in today’s world. Your research mentions that Japan will have a hotel with only robot employees. What kind of issues with safety would this cause? Without actual humans, how can there be safety precautions for the residents? What would an economy be like with robots running factories? I would imagine it would fall apart because no one would be employed. You mentioned that someone believes robots within the workplace would be a good thing… good for whom? Owners of corporations? It certainly wouldn’t benefit the millions of laborers left unemployed.
This topic really lets your imagination run wild and makes you relate it to many of the sci-fi movies about robots. You mentioned in class that an issue with self-aware robots would be what happens if they no longer wanted to perform the jobs they were created for. What I started to think about was the justice system with self-aware robots. Would we start to see robots as people? We have rights for animals, and people are able to go to jail for animal cruelty. Would robots end up having rights, and would they be subject to trial? I do not see the benefit of creating self-aware robots. Robots who can perform daily tasks to make everyday life easier are one thing, but sentience sounds like society creating a huge problem for itself.

    Like
