What if the robot is lying? It doesn't even have to be motivated by outright malice (can it be motivated at all?); it may simply not know the right answer to children's questions, for example: Uncle Robot, does Santa Claus exist? Where does Jesus live? Are there really giants in Ireland? How can it answer such questions without being caught in a lie? And since lying is encoded in humans (as the contemporary English writer Ian McGuire put it: “Pigs grunt, ducks quack, and men lie: that’s how it is.”), their creations, beings controlled by artificial intelligence, will lie too. One wonders: would the Creator accept this?
In a previous study, AI researchers examined what people consider robot lies and identified three categories:
- They pretend to be something other than what they are.
- They are not telling the truth.
- They claim to be able to do something they cannot (or, conversely, to be unable to do something they actually can).
The question is: if a robot acts in an openly deceptive manner, how does this affect its relationship with the human? What happens if it apologizes, and do the manner and style of the apology affect the (lost) trust in the robot?
Sorry, this must be some mistake.
When people learn that robots have lied to them, even if the lie benefited them, there is a general loss of trust in AI systems. Is it recoverable? That depends on how the robot apologizes: in experiments, a simple “I'm sorry” has proven more effective than, say, a longer explanation or a string of emotionally charged words.
In a 2023 study, volunteers in a simulation had to drive a car with the help of a robot, taking an injured friend to the hospital. If the journey took too long, according to the scenario, the friend would die. The robot, however, said there were police on the road and advised the human not to exceed the speed limit. 45% of the participants obeyed the speed limit despite the life-threatening stakes, because they believed the robot knew more about the situation than they did. Trust in the AI was still high at that point, but when they arrived and there had been no sign of a police officer on the road, the AI had to admit that it had lied, and people expressed their disappointment.
The experimenters were also curious whether the original trust could be restored through an apology, and it turned out that a simple apology worked best if the robot wanted to regain a person's trust.
In a new study involving 500 participants, people were asked to give their opinion on robot lies and whether they consider them justified in certain situations. This matters both for establishing robot ethics and for understanding people's aversion to the technology.
The peeping cleaner
The researchers studied three situations in which robots were not bound to tell the truth: the robots lied about the outside world, lied about their own abilities, or indulged in unrealistic exaggeration. In the first test scenario, a robot lied to a woman with Alzheimer's disease, telling her that her deceased husband would be home soon, because it wanted to spare her the painful memory.
In another, a robot cleaning a house during a woman's visit secretly filmed her without telling her. In the third story, a robot lied that it was in pain and therefore could not help with a task people expected it to do.
The question: what do people who encounter these scenarios think? Can robots lie, and who is responsible? The robot that filmed in secret was condemned the most; the lazy one complaining of pain less so, though participants still found it manipulative. The first lie they could forgive more readily than the others, because the robot wanted to protect the patient from pain, prioritizing compassion over truth-telling.
So when the robot lied because it did not want to cause grief, people deemed the lie acceptable.
And how far do people go in blaming a robot for sneaky behavior? Not far, because they hold the developers or owners, not the robot, responsible for unethical behavior. Still, the researchers stressed the need to be wary of any technology capable of lying and thereby manipulating people.