Japanese scientists have found ways to make the faces of human-like robots more expressive, paving the way for machines to show a greater range of emotions and, ultimately, to interact more deeply with people.
Researchers at Osaka University in Japan have found a method for identifying and quantitatively evaluating facial movements on their child android robot head, named Affetto. The android's first-generation model was unveiled in 2011.
The researchers have now developed a system to make the second-generation Affetto more expressive.
Their findings, published in the journal Frontiers in Robotics and AI, offer a path for androids to express a greater range of emotions.
The researchers investigated 116 different facial points on Affetto to measure their three-dimensional movement. The facial points are underpinned by so-called deformation units. Each unit comprises a set of mechanisms that create a distinctive facial contortion, such as lowering or raising part of a lip or eyelid.
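To make the idea of a deformation unit concrete, here is a minimal, purely illustrative sketch in Python. The class and field names (FacialPoint, DeformationUnit, rest_position, and so on) are assumptions made for illustration and are not taken from the researchers' published work.

```python
# Illustrative sketch only: the names below are hypothetical and do not
# come from the Affetto study.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FacialPoint:
    """One of the measured points on the android's facial skin."""
    point_id: int
    rest_position: Tuple[float, float, float]      # 3D position with no actuation
    actuated_position: Tuple[float, float, float]  # 3D position under actuation

    def displacement(self) -> Tuple[float, float, float]:
        """Three-dimensional movement of the point relative to its rest pose."""
        return tuple(a - r for a, r in zip(self.actuated_position, self.rest_position))

@dataclass
class DeformationUnit:
    """A set of mechanisms producing one distinctive facial contortion,
    such as raising part of an eyelid or lowering part of a lip."""
    name: str
    command: float                 # normalized actuation command, e.g. 0.0 to 1.0
    affected_point_ids: List[int]  # facial points this unit is expected to move
```

In this framing, each of the 116 facial points would be one FacialPoint instance, and each deformation unit would record which points it is expected to move and how strongly it is being driven.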
Measurements from these units were then subjected to a mathematical model to quantify their surface motion patterns. While the researchers encountered challenges in balancing the applied force and in adjusting the synthetic skin, they were able to use their system to adjust the deformation units for precise control of Affetto's facial surface motions. (PTI)
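The article does not state which mathematical model the researchers used, so the sketch below is only a generic illustration of the idea of quantifying surface motion: it fits an ordinary least-squares map from hypothetical deformation-unit commands to the measured displacement of a single facial point. All numbers are invented for illustration.

```python
# Hedged illustration: the actual model in the study is not specified here,
# so ordinary least squares stands in to show the general idea of relating
# deformation-unit commands to measured skin movement.
import numpy as np

# Hypothetical data: each row is one trial; columns are normalized commands
# sent to three deformation units.
commands = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 0.5, 0.0],
    [0.0, 0.0, 1.0],
])

# Measured vertical displacement (mm) of one facial point in each trial.
displacement = np.array([0.0, 1.1, 2.3, 0.8, 1.5])

# Fit a linear map from unit commands to displacement (least squares).
coeffs, residuals, rank, _ = np.linalg.lstsq(commands, displacement, rcond=None)

print("Estimated contribution of each unit (mm per unit command):", coeffs)

# The fitted coefficients quantify how strongly each deformation unit moves
# this point; the residuals capture motion the linear map cannot explain,
# for example effects of the synthetic skin noted above.
```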