What is Artificial Intelligence?

08 November, 2018

Defining AI

For a number of reasons, artificial intelligence (AI) is a difficult term to define precisely. Firstly, intelligent behavior doesn’t necessarily require intelligence, let alone sentient intelligence. Psychology can measure intelligence against multiple criteria, but it still fails to provide a clear definition, or even a way to prove that intelligence is present. The Turing Test tells us whether a human is able to distinguish human behavior from that of a machine such as a chatbot. It deals with perception, but there is no direct way to test whether a machine manifests the same kind of intelligence that humans do.


Furthermore, many of the tasks assigned to AI systems have little to do with the kinds of intelligence measured by conventional tests such as an IQ test, or with solving verbal or mathematical problems. In many cases, AI tasks have a mechanical component, such as the ability to pick up an object.


Where does this leave us?

A pragmatic definition of AI may include the following elements: the capacity of machines to be autonomous (meaning they operate without human interference) and adaptive (meaning they learn and adjust through interaction with their surroundings) in solving a specific task. The qualifier “specific task” is an important disclaimer, since we still haven’t achieved artificial general intelligence (AGI). AGI would be a machine capable of solving any intellectual task, and it remains in the realm of science fiction.
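
To make that definition concrete, here is a minimal, purely illustrative sketch (not from the article) of what “autonomous and adaptive on a specific task” can look like in practice: a tiny agent that repeatedly chooses between two options and adjusts its preference based on the feedback it receives. The option names and success rates are invented for the example.

```python
# Toy illustration of "autonomous and adaptive" on one specific task:
# the agent picks between two options and learns which one pays off.
import random

rewards = {"A": 0.3, "B": 0.7}     # hidden environment: true success rates (assumed for the example)
estimates = {"A": 0.0, "B": 0.0}   # the agent's learned estimates
counts = {"A": 0, "B": 0}

for step in range(1000):
    # Autonomous: the agent chooses on its own (mostly greedy, occasionally exploring).
    if random.random() < 0.1:
        action = random.choice(list(rewards))
    else:
        action = max(estimates, key=estimates.get)

    # Interaction with its surroundings: the environment returns success or failure.
    reward = 1 if random.random() < rewards[action] else 0

    # Adaptive: the agent updates its estimate from the outcome (incremental average).
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # after enough steps, the agent clearly prefers option "B"
```

The agent is narrow in exactly the sense described above: it handles this one task without human interference and improves through feedback, but it has no ability to take decisions outside of it.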

Though a large percentage of researchers say we will achieve AGI within the next 60 years, AI-based systems today range from chatbots and music composers to self-driving cars. Even though such systems are assigned specific tasks and cannot freely take decisions outside of them, we are still burdened by various ethical problems. The problem of self-driving cars will most likely be resolved through the lens of insurance policies, but it is still worth asking which ethical approaches should be applied to AI systems as they become an increasingly integral part of our lives. Thinking about who the decision-makers are is also crucially important. Should our approach be purely utilitarian, categorical, or something else? Will it be inclusive enough, and will the rapid growth of technology reduce or amplify inequality, depending on how we deal with it?

Not to mention that if AGI is ever achieved, we aren’t sure how we could test whether its behavior is intelligent or whether we merely perceive it as intelligent. Intelligence did not evolve in a vacuum, separate from emotions and sensations, and neither did self-awareness. Our present reality is that not even all humans have equal rights, be it de jure or de facto, let alone animals, and then machines. The political capital for considering the rights of AGIs would be even weaker, which paves the way for potential exploitation.

Laws of robotics

It would be nice if we could use Isaac Asimov’s three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But since we cannot do so, at least not categorically, and even in Asimov’s fiction the laws produce unintended consequences, we still need to figure out which ethical approaches to apply in dealing with AI, whose use influences all spheres of society. There is also the question of liability for those unintended consequences, especially when they could gravely affect our lives.



Teodora Rešetar
