What makes a human a human being? And is it possible to replicate this “humanity”, whatever it is, and transfer it to a robot? A team of scientists from the Social Cognition in Human-Robot Interaction laboratory at the Italian Institute of Technology (IIT) in Genoa, led by Agnieszka Wykowska, has just tackled the problem, setting up an experiment to clarify how and when human beings “see” robots as “intentional agents”, that is, as entities very close to their fellow humans. To do this they implemented a non-verbal Turing test in a human-robot interaction setting, involving the now famous iCub: as they report in the journal Science Robotics, they discovered that it is indeed possible to “transfer” to robots some characteristics typical of human beings, in particular response time, in such a way that a human being cannot tell whether they are interacting with a conspecific or with a machine.
The Turing test
Let’s take a step back. One of the first scientists to question the “humanity” of machines was Alan Turing, who over sixty years ago proposed to “consider the question: Can machines think?”, imagining to “describe a new form of the problem in terms of a game which we call the ‘imitation game’. It is played by three people, a man (A), a woman (B) and an interrogator (C) […] The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of A and B is the man and which is the woman. He knows them only as X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’”. To prevent the interrogator from helping himself by listening to the tone of voice or examining handwriting, A’s and B’s answers are typewritten. “Now – Turing continues – let us ask the following question: what would happen if a machine took the part of A in this game? Will the interrogator decide wrongly as often as when the game is played between a man and a woman? These questions replace our original one: can machines think?”.
Over the years, hundreds of experiments have been performed to answer this question, and lately there have been some positive results. One example is the dialogue between Eugene Goostman, a computer program built to hold conversations, and human volunteers who had to figure out who they were talking to. On that occasion, Goostman managed to convince a third of the judges that it was a 13-year-old boy of flesh and blood.
The IIT experiment
The one just described is a “classic” Turing test. The IIT scientists, on the other hand, proposed a “non-verbal” version, one that does not involve an exchange of messages. “The most interesting result of our study – Wykowska tells Wired – lies in the fact that the human brain is highly sensitive to the nuances of behavior that reveal ‘humanity’. In the non-verbal Turing test, human participants had to judge whether they were interacting with a machine or with a person by considering only the reaction time of pressing a button”. To prepare for the experiment, Wykowska’s team first precisely measured the response times and accuracy of an average human profile. They then recruited volunteers and divided them into human-robot pairs: each person was paired with a robot that had to press a button whenever it saw a certain signal on a screen. The robot was controlled either by a person or by an algorithm programmed to act similarly, but not identically, to a human being. And of course its human “companion” did not know who was controlling it.
“In our experiment – adds Francesca Ciardo, first author of the study – we pre-programmed the robot by slightly modifying the reaction time and accuracy parameters of the average human profile. In this way, the possible responses of the robot were of two kinds: the first completely human – the case in which the robot is actually controlled by a human – and the second slightly different from a human’s, since the robot is controlled by a pre-programmed algorithm”. The result: the robot appears to have passed this particular type of non-verbal Turing test. In other words, the volunteers who interacted with the robot were unable to tell that it was controlled by an algorithm rather than a human in the situations where it actually was. “The next step of the experiment – concludes Wykowska – will involve the implementation of more complex behavior, so as to have a more elaborate interaction with humans and understand which other parameters of this interaction are perceived as human or mechanical”.
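The manipulation described above can be sketched in a few lines of code. This is only an illustration of the idea, not the researchers’ actual implementation: the article does not report the measured human parameters, so the mean reaction time, its spread, and the size of the algorithmic shift used here are all made-up placeholder values.

```python
import random

# Hypothetical parameters: the study does not publish these exact numbers.
HUMAN_MEAN_MS = 450.0   # assumed mean reaction time of the "average human profile"
HUMAN_SD_MS = 80.0      # assumed trial-to-trial variability
ALGO_SHIFT_MS = 30.0    # assumed slight deviation given to the pre-programmed algorithm

def reaction_time(controller: str, rng: random.Random) -> float:
    """Sample one simulated button-press reaction time in milliseconds."""
    if controller == "human":
        return rng.gauss(HUMAN_MEAN_MS, HUMAN_SD_MS)
    # Algorithm: same noise profile, mean shifted slightly away from the human one.
    return rng.gauss(HUMAN_MEAN_MS + ALGO_SHIFT_MS, HUMAN_SD_MS)

rng = random.Random(0)
human_trials = [reaction_time("human", rng) for _ in range(1000)]
algo_trials = [reaction_time("algorithm", rng) for _ in range(1000)]

mean_human = sum(human_trials) / len(human_trials)
mean_algo = sum(algo_trials) / len(algo_trials)
print(f"human mean:     {mean_human:.1f} ms")
print(f"algorithm mean: {mean_algo:.1f} ms")
```

The point of the experiment is that the two distributions overlap heavily: on any single trial, a shift of a few tens of milliseconds is buried in the natural variability of human responses, which is why observers could not tell the two controllers apart.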
