The first time I watched it, it made me uneasy. Very uneasy. Nonetheless, it is quite thought-provoking. It reminded me of the machines in the movies “The Matrix” and “I, Robot.” Both share the underlying theme of robots or A.I.s developing their own consciousness, emotions, and intelligence. When is this going to happen, and will it ever? Is it ethical to create machines that feel human emotions? Considering how eager many scientists are to create a machine that can “feel” like a human, and how almost everything we have thought of has eventually been invented, I can’t really deny the possibility that such a machine will exist one day.

If that day comes, at what point do we recognize these machines as sentient beings? How will humanity handle it? That brings me to wonder how we define humanity. What does it mean to be human? What are the requirements for being considered a “life form”? Is the definition of life restricted to having flesh and blood, or can it be extended? If such a machine fits the entire definition of being human, then perhaps we should consider it a human being, or at least a sentient life form.

Sometimes I wonder whether there will be serious conflicts between A.I.s and humans in the future, like in The Matrix. A.I.s would radically change our culture, and honestly, I believe that culturally we aren’t ready to handle them just yet. As Einstein put it: “It has become appallingly obvious that our technology has exceeded our humanity.” I don’t believe technological progress is going to stop anytime soon, so I can’t help but ask: if we want to prevent such a conflict, should we become one with machines? That is something most people might not be comfortable thinking about, because such an existence is very hard to imagine. It’s a scary thought, but nonetheless I feel it may be something very important to think about.