It's no surprise that, as technology continues to advance, we encounter it more and more in our everyday lives. However, despite the obvious benefits technology brings, we must consider how readily new advances are accepted into society. Nearly every technological innovation has been met with unease in one way or another.
For example, laptops' deployment into mainstream society was initially doubted, as they were 'heavy, pricey and had a poor battery'. Quite clearly, those concerns did not hold the technology back. Yet, as technology continues to advance, concerns about a technology's capability give way to social anxieties, raising questions such as 'can we trust future technology?' One area where this question is particularly warranted is robotics.
In a recent study between Yard and Cardiff Metropolitan University, we set out to understand how to win the public's acceptance of robots and, more importantly, their trust. With much of the public's knowledge of robotics drawn from TV shows and movies, it is easy to understand why there is such negative perception of, and apprehension about, trusting them.
After all, when movies like The Terminator portray robots as evil machines bent on taking over humanity, we can't blame anyone for being concerned. The idea of 'robots taking over the world' is frightening; robots will indeed take over, just not in the way sci-fi movies present. Robotics has already had a positive impact in many sectors of society, from manufacturing to health care, but the challenge remains: how can we get people to trust robots further? To explore this, we must look at human-to-human behaviour and how humans form trusting relationships.
Research in human psychology shows that facial expressions and the display of emotion are contributing factors in the decision to trust. Whilst a robot's ability to display empathy is limited, its visual output can be manipulated, and that is something worth exploring.
To test the theory that a robot's facial expressions and features can influence trust between robot and human, we designed a short study using the Canbot U03S robot. By manipulating the robot's facial features (e.g. changing the colour and displaying cartoon faces, human faces, hand-drawn faces, cracked screens, evil eyes, smiling faces and angry faces), we asked participants a simple question: would you trust this robot?
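As a rough illustration of how responses from a study like this might be tallied, here is a minimal Python sketch. The face conditions and yes/no answers below are entirely hypothetical placeholders, not data from our study; the code simply computes, for each face condition, the proportion of participants who said they would trust the robot.

```python
from collections import defaultdict

# Hypothetical responses: (face_condition, trusted) pairs standing in for
# real participant answers to "would you trust this robot?"
responses = [
    ("cartoon_face", True), ("cartoon_face", True), ("cartoon_face", False),
    ("human_face", False), ("human_face", False), ("human_face", True),
    ("cracked_screen", False), ("cracked_screen", False),
    ("smiling", True), ("smiling", True), ("smiling", False),
]

# Tally "yes" answers and totals per face condition.
tallies = defaultdict(lambda: [0, 0])  # condition -> [trusted, total]
for condition, trusted in responses:
    tallies[condition][0] += int(trusted)
    tallies[condition][1] += 1

# Report the proportion of participants who would trust each face.
for condition, (trusted, total) in sorted(tallies.items()):
    print(f"{condition}: {trusted}/{total} trusted ({trusted / total:.0%})")
```

Comparing these proportions across conditions is what lets a study like ours say which facial designs invite trust and which undermine it.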
One thing this study made very clear is that people do not like robots with too many human characteristics; we like some similarities, but not too many. This finding was very much expected and falls in line with the 'uncanny valley' theory: when something that is not human displays human characteristics, we can accept it as familiar up to a point; too much human likeness, however, and we fall into the 'uncanny valley' (a zombie, for instance, bears a strong human resemblance but is far from familiar).
The process of attributing human characteristics to non-human organisms or objects is called anthropomorphism. Our research shows that to successfully deploy robots into society, we must carefully consider their design. Something as simple as the design of the robot's eyes can be the difference between human acceptance and rejection. Don't get me wrong; there is much more to human acceptance of robots than facial expressions alone. Elements such as the robot's size, abilities, voice, target audience and area of deployment, along with the social and ethical implications, must also be considered.
For the full story, this research has been published in several venues, including the RITA (Robot Intelligence Technology and Applications) conference, PeerJ Computer Science and The Conversation (where we explored robotics in health care).
By Joel Pinney, co-authored by Paul Newbury