Dewyan Thilakasiri
May 31, 2021

AI and Ethics

The very concept of artificial intelligence, or AI, was first recorded in the late 1800s. Even though those early approaches were not especially scientific, they match the definition of artificial intelligence well. The Oxford Dictionary defines artificial intelligence as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." In simple terms, it is a machine or a piece of virtual software with the ability to make decisions with minimal or no intervention from a user. Even though the first computer was invented in 1822, it took until 1948 for the first software to be conceived and developed. So it is safe to assume that the early concepts of artificial intelligence did not include virtual software. Even today, we tend to picture artificial intelligence as virtual software rather than as physical machines. Many tasks once performed by humans are now automated, and increasingly they are operated via artificial intelligence. Almost every field in the modern world is touched by artificial intelligence, at least to some degree.

In "Future Crimes: How Our Radical Dependence on Technology Threatens Us All", Marc Goodman describes artificial intelligence this way: "When the computer scientist John McCarthy coined the term "artificial intelligence" in 1956, he defined it succinctly as "the science and engineering of making intelligent machines." Today artificial intelligence (AI) more broadly refers to the study and creation of information systems capable of performing tasks that resemble human problem-solving capabilities, using computer algorithms to do things that would normally require human intelligence, such as speech recognition, visual perception, and decision making. These computers and software agents are not self-aware or intelligent in the way people are; rather, they are tools that carry out functionalities encoded in them and inherited from the intelligence of their human programmers. This is the world of narrow or weak AI, and it surrounds us daily.
Weak AI can be a powerful means for accomplishing specific and narrow tasks. When Amazon, TiVo, or Netflix recommends a book, TV show, or film to you, it is doing so based on your prior purchases, viewing history, and demographic data that it crunches through its AI algorithms. When you get an automated phone call from your credit card company flagging possible fraud on your account, it’s AI saying, "Hmm, Jane doesn’t normally purchase cosmetics in Manhattan and a laptop in Lagos thirty minutes apart." Google Translate could not be accomplished without AI, nor could your car’s GPS navigation or your chat with Siri."
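Goodman's credit-card example boils down to a simple "impossible travel" rule: two purchases in distant cities, too close together in time, are suspicious. A minimal, hypothetical sketch of such a rule in Python (real fraud systems use statistical models over far richer data; the function name, threshold, and records below are invented for illustration):

```python
from datetime import datetime, timedelta

def flag_impossible_travel(transactions, min_gap=timedelta(hours=5)):
    """Flag consecutive transactions in different cities that occur too
    close together for the cardholder to have traveled between them.
    `transactions` is a list of (timestamp, city) pairs."""
    flags = []
    ordered = sorted(transactions, key=lambda t: t[0])
    for (t1, city1), (t2, city2) in zip(ordered, ordered[1:]):
        if city1 != city2 and (t2 - t1) < min_gap:
            flags.append((city1, city2))
    return flags

# Jane's cosmetics in Manhattan and a laptop in Lagos, thirty minutes apart:
purchases = [
    (datetime(2021, 5, 31, 12, 0), "Manhattan"),
    (datetime(2021, 5, 31, 12, 30), "Lagos"),
]
print(flag_impossible_travel(purchases))  # [('Manhattan', 'Lagos')]
```

The AI systems Goodman describes go far beyond this hand-written rule, learning what is "normal" for each cardholder from demographic and purchase history, but the underlying idea is the same kind of anomaly check.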

"As society and the development of artificial moral agents increase in industry and in everyday life, moral cognition and machine ethics becomes increasingly relevant (Cervantes, Rodríguez, López, Ramos, & Robles, 2016; Wallach, 2008). The fundamental basis for understanding how to develop advanced machines that can make decisions for humans, is consistent with understanding of the human brain and its structures and functions laying ground for morality, as neuroscience provide knowledge for the development of artificial intelligence (Cervantes et al., 2020). Moral cognitive components such as moral judgment, cognitive control, theory of mind and empathy, are a big part of our moral system (Bzdok et al., 2012). By investigating the functions of human morality, this knowledge will perhaps contribute to the development of machine morality. In contrast, moral judgment involves a complex kind of decision making, as one needs to take values, norms, and principles into account. With the exception of complex rules, implementation of machine morality might be one of the hardest tasks to solve in regards to having both rational and affective decision making in mind (Wallach, Franklin, & Allen, 2010), something that is going to be further discussed. The field of moral cognition is developing rapidly (Greene, 2015) and its underpinnings in moral psychology and cognitive neuroscience, can increase the understanding of these functions for machine intelligence as they compose of several vital functions for human morality."

(Moral Machines: The Neural Correlates of Moral Judgment and Its Importance for the Implementation of Artificial Moral Agency)

In addition to the references above and the facts we deal with day-to-day, it is safe to conclude that AI is not the same as ordinary software. Plain software behaves in the way its creator intended, whereas AI software does its own thinking and analysis before determining an appropriate response, which makes its outcome hard to predict. It is still questionable whether AI should be placed in sensitive sectors of society. This statement may seem like an overreaction, yet consider bizarre real-world incidents such as Microsoft's AI chatbot Tay being trolled into offensive behavior, or a French medical chatbot built on OpenAI's GPT-3 suggesting suicide to a patient who sought medical help. The sensitive fields, separated from the rest by only a thin, made-up social layer, are best off without artificial intelligence, at least for now, because of the unpredictable social and ethical conflicts it can cause.
