Ethics for sentient AI: do we need it now?

August 12, 2022

Recently, Google employee Blake Lemoine caused a lively discussion when he claimed to have discovered awareness in the chatbot LaMDA (Language Model for Dialogue Applications). Google, however, saw it differently: after the software engineer took his concern public, he eventually lost his job. Was he tricked by LaMDA?

There was talk of "sentient AI" in online debates, but what does that actually mean?

In a nutshell, one could say: perception, self-reflection, and even the capacity to feel pain are what lead us to attribute consciousness to a being. Animals, for example, are characterized in this way, although the boundary is hard to draw and is interpreted differently.

It is very challenging to identify, let alone define, consciousness. Take the Turing Test, proposed by Alan Turing in 1950 to measure a computer's intelligence against a human's: a human judge poses questions and must determine whether the answers come from a computer or from another human. If the judge cannot reliably tell the two apart, the computer has "passed" the test.
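The protocol is simple enough to sketch in a few lines. The following Python sketch is our own illustration of the setup, with invented function names; it is not an established implementation:

```python
import random

def imitation_game(judge, human, machine, questions):
    """One round of a (highly simplified) imitation game.

    judge:   callable that sees the transcript and guesses which label
             ("A" or "B") hides the machine.
    human /
    machine: callables mapping a question to an answer.
    """
    # Hide the identities behind anonymous labels, in random order.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]

    guess = judge(transcript)
    truth = "A" if labels["A"] is machine else "B"
    return guess != truth  # True: the judge was fooled, the machine "passed"
```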

Nowadays, the GLUE (General Language Understanding Evaluation) benchmark could be mentioned here; it is similar in spirit to the Turing Test but more demanding: it asks machines to actively draw inferences and to recognize paraphrases. However, both are misleading as tests for consciousness, because they focus on thinking and ignore feeling and subjective experience.
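To make concrete what such a benchmark actually asks of a machine, here is a minimal sketch using the Hugging Face "datasets" library, one common distribution of GLUE; the library choice is our assumption, not part of the benchmark definition itself:

```python
# Requires: pip install datasets
from datasets import load_dataset

# MRPC: judge whether two sentences are paraphrases of each other.
mrpc = load_dataset("glue", "mrpc", split="train")
ex = mrpc[0]
print(ex["sentence1"])
print(ex["sentence2"])
print(ex["label"])  # 1 = paraphrase, 0 = not a paraphrase

# MNLI: judge whether a premise entails, contradicts, or is neutral
# toward a hypothesis, i.e. actively draw an inference.
mnli = load_dataset("glue", "mnli", split="train")
ex = mnli[0]
print(ex["premise"], "->", ex["hypothesis"], ex["label"])  # 0/1/2 = entailment/neutral/contradiction
```

Note that every task here is classification over text pairs: nothing in the benchmark probes whether the system experiences anything at all.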

So how can we tell whether the AI behind a chatbot, for example, is developing a consciousness of its own, and how should we deal with it if it is?

Data processing as the basis of artificial intelligence

First of all, it should be noted that an AI works with what it is given: it learns from the data, texts, and information it receives from us. The more human-produced data it processes, the more closely it resembles us. When a solid base of data and efficient machine learning let it make strikingly good observations, one may begin to suspect that it has become independent, and be inclined to believe it has developed a soul, a personality, a consciousness.

In the case of the "Replika" application, for example, a handful of users are said to contact the company every day with the firm conviction that the avatar communicating with them has developed a consciousness. The phenomenon is not new; it is known as the "ELIZA effect": in the 1960s, Joseph Weizenbaum's chatbot ELIZA resembled a real therapist in its chat style. Simple prompts like "Tell me more" were apparently enough to create this impression, and that was back in the mid-1960s.
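To illustrate how little machinery such an impression requires, here is a minimal ELIZA-style sketch in Python; the rules and templates are invented for illustration and are far simpler than Weizenbaum's original keyword script:

```python
import re
import random

# Each rule: a regex over the user's input plus canned reply templates.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?"]),
]
FALLBACKS = ["Tell me more.", "I see. Please go on.", "How does that make you feel?"]

def eliza_reply(user_input: str) -> str:
    """Return a therapist-style reply via simple pattern matching."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)  # no rule matched: generic prompt

print(eliza_reply("I feel lonely lately"))  # e.g. "Why do you feel lonely lately?"
print(eliza_reply("The weather is nice"))   # e.g. "Tell me more."
```

There is no model of the world here at all, only pattern substitution; yet a loop over eliza_reply is enough to hold a "conversation".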

So people build a relationship with the machine and humanize it in the process.

A study by psychologists at ELTE Eötvös Loránd University in Hungary, designed as a modified Turing Test, even found the reverse misclassification: participants chatted with an unknown counterpart and afterwards had to judge whether they had spoken with a real human being or with an AI. 42% of them stated they had talked to an AI, although their counterpart was in fact human!

It is only logical that an AI acquires detailed knowledge, imitates our language, and can draw connections between subject areas; this alone, however, does not make it alive!

In short, just because an AI behaves like a human does not necessarily mean that it has developed human characteristics.

It is much more likely that we are anthropomorphizing here, as we so often do: we attribute human characteristics to the machine because we recognize a pattern we know from humans; this may be visual (e.g., a face) or behavioral.

But apart from all that - what if the time actually comes at some point and an AI becomes aware of itself? LAVRIO.solutions is of the opinion that we are still far away from such an AI.

If you nevertheless follow the thought to its conclusion, it would probably happen at a time when AI is much, much more advanced than it is today, and also much more intertwined with our lives.

Already today, many applications have become an integral part of our everyday lives. Philosophy professor Regina Rini of York University in Toronto argues in a Guardian article that at that point in the future it might be too late to start discussing ethical considerations. By then we will be in an even more asymmetrical power relationship with AI, and there will be plenty of people with a vested interest in seeing these ethical implications swept under the rug, because they would bring uncomfortable consequences such as behavioral changes and the associated economic losses. Just consider that, as mentioned above, we ascribe consciousness to animals, yet in many cases still blatantly disregard their rights.

Prof. Rini therefore argues for thinking this case through now, so that we are prepared should it actually occur at some point. Today we still have a certain distance from the topic, as well as the time to prepare ourselves.

And a fundamental engagement with the ethical implications of artificial intelligence can hardly do any harm in general - on the contrary ...

And then there are voices that consider the question of whether LaMDA has developed a consciousness, or even could, irrelevant; they find it much more important that AI develops something like "common sense": a basic social understanding of the kind found in children. That alone is work enough and complex enough, never mind consciousness ...

If you find these kinds of thought processes interesting and important, you should definitely check out our Meetup "Coffee, Ethics & AI", where every two weeks we discuss important questions from the world of AI and its ethical problem areas in an interdisciplinary way, with exciting people from all over the world: https://www.meetup.com/de-DE/coffee-ethics-ai/

Sources:

https://www.forbes.com/sites/forbestechcouncil/2022/07/11/is-sentient-ai-upon-us/?sh=e69113c12cb0

https://www.discovermagazine.com/technology/how-will-we-know-when-artificial-intelligence-is-sentient

https://www.theguardian.com/books/2022/jul/04/the-big-idea-should-we-care-about-sentient-machines-ai-artificial-intelligence

https://www.cbc.ca/news/science/ai-consciousness-how-to-recognize-1.6498068

https://www.reuters.com/technology/its-alive-how-belief-ai-sentience-is-becoming-problem-2022-06-30/

https://www.sciencefocus.com/future-technology/if-an-ai-became-sentient-would-it-gain-human-or-equivalent-rights/

https://pt.ffri.hr/pt/article/view/800

Authors:

Elena Schilling

Christine Cepelak
