For a number of reasons, artificial intelligence (AI) is a difficult term to define
By: Teodora Rešetar
Firstly, intelligent behavior doesn’t necessarily require intelligence, or, more specifically, sentient intelligence. Psychology can measure intelligence against certain criteria, but it still fails to provide a clear definition of it, or even to prove its existence. The Turing Test tells us whether a human is able to distinguish human behavior from that of machines such as a chatbot. It deals with perception, but there is no innate way to determine whether a machine manifests the same type of intelligence as humans do.
Furthermore, a lot of the tasks assigned to AI systems don’t deal much with the types of intelligence measured by conventional tests such as an IQ test, or with solving verbal or mathematical problems. In many cases, AI assignments have a mechanical component, such as measuring the ability to pick up an object.
Where does this leave us?
A pragmatic definition of AI may include the following elements: the capacity of machines to be autonomous (meaning they can operate without human interference) and adaptive (meaning they learn and adjust through interaction with their surroundings) in a specific task. Designating “specific tasks” is an important disclaimer, because we still haven’t invented Artificial General Intelligence (AGI). AGI would be a machine able to solve any intellectual task, and this is still in the realm of science fiction.
Though a large percentage of researchers say we will develop AGI within the next 60 years, AI-based systems today vary from chatbots and music composers to self-driving cars. But even though such systems are assigned specific tasks and lack the ability to freely take initiative on decisions outside of those tasks, we are still burdened by various ethical problems. The problem of self-driving cars will most likely be solved through the lens of insurance policies, but it is still worth asking which ethical approaches should be applied when dealing with AI systems that are becoming an increasingly integral part of our lives. Thinking about who the decision-makers are is crucially important. Should our approach be purely utilitarian, categorical, or something else? Will it be inclusive enough, and is the rapid growth of technology going to amplify inequality depending on how we deal with it? Not to mention that in the event of developing AGI, we aren’t sure how we can test whether its behavior is intelligent or whether we merely perceive it as intelligent. Intelligence, through evolution, did not develop in a vacuum, separate from emotions and sensations, nor did self-awareness. Our present reality is that not even all humans have equal rights, be it de jure or de facto, let alone animals, and then machines. The political capital for considering the rights of AGIs would be even weaker, which paves the way for potential exploitation.
Laws of robotics
It would be nice if we could use Isaac Asimov’s three laws of robotics:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
But we can’t do so categorically, and even in his fictional work there are unintended consequences, so we still need to figure out which ethical approaches to apply when dealing with AI systems, since their use influences all spheres of society. There is also the question of liability when it comes to those unintended consequences, especially if they could have a grave influence on our lives.