Wednesday, December 18, 2019

Assignment #16 - Luke Plummer - Speech about robots

For this entire speech I will be defining accomplishment as a measure of one's ability to reach goals in different environments: the ability to produce results, and the ability to complete tasks effectively and efficiently. Artificial Intelligence can maximize results; it can accomplish a great deal in a very short amount of time. As Köse states, Artificial Intelligence started its journey as a result of innovative developments within computer science. Since its beginning it has not only grown exponentially in usefulness, it has been implemented in various fields with real success, and it is taking an active part in the scientific arena, with massive potential to solve real-world problems. Yet we are faced with a real concern.

Can we rely too much on Artificial Intelligence?

Artificial Intelligence is already a part of our everyday lives, something we use every day without realizing it. Think of Siri, Alexa, Google's search engine, and your social media feeds (The Manifest). More importantly, Artificial Intelligence is on the path to becoming a tangible, physical part of our lives. Self-driving cars are being programmed and built, AI is being integrated into military drones, and AI systems are being put to work in business. With AI becoming more and more intertwined with our lives and our societies, we must confront a major problem that arises with its implementation.

These AI-equipped machines, which we are seeing more frequently in our daily lives, need to be acceptable to society. As they are developed and woven into the areas of our lives that matter, they will be capable of making decisions that directly affect humans. As Boer Deng states, researchers are increasingly convinced that society's acceptance of AI-equipped machines will depend on whether they can be programmed in ways that maximize safety, fit in with social norms, and encourage trust. We will need deliberate progress to determine what developments are required to allow AI to reason successfully about ethical decisions.

The only issue is, that is simply impossible.

The unsolvable problem that arises with Artificial Intelligence is that we currently cannot program morality into it, yet we need to be able to trust machines to make moral decisions. According to Etzioni, much attention has been paid to the need for AI to choose between two harms in cases where inflicting some harm cannot be avoided. In short, AI-equipped machines that make decisions on their own seem to need ethical guidance, and moral decisions have posed a problem for AI. A classic example of this situation is found in self-driving cars: autonomous vehicles may face situations in which it is inevitable that one person is killed in order to save others. Suppose a self-driving car has no choice but to run into one of two groups of people. Is it relevant that one group is following the traffic laws while the other group is crossing against a red light? What happens if the self-driving car can only save the lives of other traffic participants by sacrificing its passengers? If we cannot find a way for AI to make moral or ethical decisions, it is unsafe to implement these machines in society. This raises the question: should we continue to develop and rely on AI? Although such dilemmas may be rare, they arouse heated, extensive discussion that has led to virtually no progress on how we could safely implement this new technology.

And say that car actually is faced with the decision and has to crash into one group or the other: who gets prosecuted? Who is to blame for the injuries, and maybe the deaths? Is it the user of the self-driving car, or is it the programmer? Neither of them was in control when people got hurt, so who is to blame? There is also this to consider: because we cannot program morals into an AI-equipped machine, is it up to the programmer to decide who lives and who dies in those situations? The AI would be equipped with sets of instructions to follow in all situations. Does the programmer get to set those instructions? Under the current implementation of AI, yes, we are "accomplishing" more, but are we striving towards a safe and applicable form of this technology? Sure, the AI already knows the best route to your destination and can get you there faster, but how can we allow it to put your life at risk when we have so many questions without answers?
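To make that worry concrete, here is a minimal, purely hypothetical sketch of what a programmer's "set of instructions" for that dilemma could look like. The Group class, the choose_path function, and the rules inside it are all illustrative assumptions of mine, not code from any real vehicle; the point is only that whoever writes the rule effectively decides who is put at risk.

# Hypothetical sketch only: a toy "instruction set" for the dilemma above.
# None of these names come from a real self-driving system; they illustrate
# how a programmer's hard-coded rules end up deciding the outcome.

from dataclasses import dataclass

@dataclass
class Group:
    size: int                    # how many people are on this path
    obeying_traffic_laws: bool   # are they crossing legally?

def choose_path(left: Group, right: Group) -> str:
    """Return which path the car takes when a collision is unavoidable."""
    # Rules written by the programmer, not by the passenger or the pedestrians:
    # prefer harming the smaller group; break ties against the law-breaking group.
    if left.size != right.size:
        return "left" if left.size < right.size else "right"
    if left.obeying_traffic_laws != right.obeying_traffic_laws:
        return "right" if left.obeying_traffic_laws else "left"
    return "left"  # arbitrary tie-breaker, which is itself a moral choice

# Example: two people crossing against the light vs. three obeying the signal.
print(choose_path(Group(2, False), Group(3, True)))  # prints "left"

In this toy version the car hits the smaller group even though they were breaking the law, simply because of the order in which the rules were written. Whether that ordering is the "right" one is exactly the unanswered question.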

Immoral machines that do not care whether they cause destruction are not the only downside to relying on artificial intelligence. Artificial Intelligence can "accomplish" more than a normal person. As stated before, these systems are more efficient at tasks than we are and more effective at reaching goals than we are; they beat us in a lot of areas. AI poses a threat to many jobs. Artificial Intelligence has already taken jobs away, and continued adoption will mean the continued loss of jobs to AI. According to analysts at Oxford Economics, more than 20 million retail jobs could be replaced by Artificial Intelligence: 20 million people with unreliable sources of income in the retail industry alone. More recently, McDonald's has begun replacing order-takers with Artificial Intelligence at its drive-throughs (BBC). This allows for easier, and cheaper, completion of tasks. AI is cheaper to install than it is to train and keep paying human workers, so McDonald's is starting to switch, and nothing is stopping other large corporations from doing the same. Artificial Intelligence can complete tasks better than humans can, but it puts many of those same people out of jobs.

Currently, AI poses too many threats to both the safety and the wages of many people. By continuing research on AI while remaining wary of actual implementation, we enable further work in this vast field while continuing to ensure the comfort and safety of the public. AI should be implemented to assist the public, not place it in danger, and until we can ensure safety we should be wary of how much reliance we place on these machines.

Annotated Bibliography
BBC. “McDonald's Uses AI for Ordering at Drive-Throughs.” BBC News, BBC, 11 Sept. 2019, www.bbc.com/news/technology-49664633.
Deng, Boer. “Machine Ethics: The Robot’s Dilemma.” Nature, vol. 523, no. 7558, July 2015, pp. 24–26. EBSCOhost, doi:10.1038/523024a.
Etzioni, Amitai, and Oren Etzioni. “Incorporating Ethics into Artificial Intelligence.” Journal of Ethics, vol. 21, no. 4, Dec. 2017, pp. 403–418. EBSCOhost, doi:10.1007/s10892-017-9252-2.
Köse, Utku. “Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety.” BRAIN: Broad Research in Artificial Intelligence & Neuroscience, vol. 9, no. 4, Nov. 2018, pp. 184–197. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&AuthType=ip,uid,cpid,url&custid=s117692&db=a9h&AN=133436011.
Manifest, The. “16 Examples of Artificial Intelligence (AI) in Your Everyday Life.” Medium, Medium, 26 Sept. 2018, medium.com/@the_manifest/16-examples-of-artificial-intelligence-ai-in-your-everyday-life-655b2e6a49de.
Rotman, David. “Will Advances in Technology Create a Jobless Future?” MIT Technology Review, MIT Technology Review, 6 June 2019.
(Visual Aid)
