HEFES (Hybrid Engine for Facial Expressions Synthesis) is an engine for generating and controlling facial expressions on both physical androids and 3D avatars. HEFES is part of a software library that controls FACE. It was designed to let users create facial expressions without requiring artistic or animatronics skills, and it can animate both FACE and its 3D replica.
HEFES includes four modules: synthesis, morphing, animation and display.
The synthesis module allows users to generate new facial expressions on the selected emotional display, i.e. the FACE robot or the 3D avatar. Both displays provide a graphical user interface (GUI) with as many slider controls as there are servo motors (FACE robot) or anchor points (3D avatar) in the corresponding emotional display. Using the editor it is possible to create facial expressions beyond the six basic expressions, i.e. happiness, sadness, anger, surprise, fear and disgust, defined as 'universally accepted' by Paul Ekman. A widely accepted emotional theory proposes that affective experiences can be described by two independent dimensions: valence, which ranges from a positive to a negative feeling, and arousal, which ranges from a calming to an exciting state. Each expression can therefore be associated with a pair e(v, a) that describes it in terms of valence and arousal.
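The idea of a slider-driven posture tagged with a valence-arousal pair can be sketched as follows. This is a minimal illustration, not the actual HEFES data model: the class name, actuator identifiers, and the [0, 1] slider normalization are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Expression:
    """One synthesized expression: a set of slider positions plus its (v, a) tag.

    Each slider drives one servo motor (robot) or anchor point (avatar).
    Names and value ranges here are illustrative assumptions.
    """
    name: str
    valence: float   # v: negative to positive feeling
    arousal: float   # a: calming to exciting state
    sliders: dict = field(default_factory=dict)  # actuator id -> position in [0, 1]

# A hand-edited expression, as a user might produce it with the GUI sliders:
happiness = Expression("happiness", valence=0.9, arousal=0.5,
                       sliders={"mouth_corner_left": 0.8,
                                "mouth_corner_right": 0.8,
                                "brow_left": 0.4})
# The neutral expression sits at the origin of the valence-arousal plane:
neutral = Expression("neutral", valence=0.0, arousal=0.0)
```

Storing the (v, a) tag alongside the actuator positions is what later lets the morphing module place each expression as a point in the valence-arousal plane.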
The morphing module generates an interpolated emotional space called the Emotional Cartesian Space (ECS). The x coordinate represents valence and the y coordinate represents arousal, so each expression is a point in the valence-arousal plane, with the neutral expression e(0, 0) placed at the origin. Starting from an initial set of expressions, for example the six basic ones, the morphing module generates the ECS through an interpolation algorithm. New expressions created manually with the synthesis module can be added to the ECS; this ability to update the ECS lets users progressively fill in its more sparsely covered zones.
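The interpolation step can be sketched with inverse-distance weighting over the valence-arousal plane. The source does not specify which interpolation algorithm HEFES uses, so IDW here is an illustrative stand-in, and the actuator names are assumptions.

```python
# Sketch: synthesize an expression at an arbitrary ECS point (v, a) by
# inverse-distance weighting over a known set of expressions. Each known
# expression is (v_i, a_i, sliders_i), with sliders_i mapping actuator -> value.

def interpolate_ecs(known, v, a, eps=1e-9):
    weights = []
    for vi, ai, sliders in known:
        d2 = (v - vi) ** 2 + (a - ai) ** 2
        if d2 < eps:                 # exact hit: return the stored expression
            return dict(sliders)
        weights.append((1.0 / d2, sliders))
    total = sum(w for w, _ in weights)
    out = {}
    for w, sliders in weights:       # blend slider values, weighted by proximity
        for key, val in sliders.items():
            out[key] = out.get(key, 0.0) + (w / total) * val
    return out

known = [(0.0, 0.0, {"mouth": 0.5}),   # neutral at the origin
         (0.9, 0.5, {"mouth": 1.0})]   # e.g. a happiness expression
mid = interpolate_ecs(known, 0.45, 0.25)  # halfway between the two points
```

Adding a hand-made expression to `known` immediately reshapes the interpolated space around its (v, a) point, which is how updating the ECS fills in sparsely covered zones.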
The animation module is designed to combine and merge multiple requests coming from the various modules that can run in parallel in the robot/avatar control library. The animation module is therefore responsible for mixing movements, such as eye blinking or head turning, with expression requests. For example, eye blinking conflicts with an expression of amazement, since amazed people normally react by opening their eyes wide.
The display module represents the output of the system through dedicated emotional displays: the FACE android and the 3D avatar. It receives the facial motion request from the animation module and converts it into movements according to the selected output display.
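The conversion step can be illustrated by mapping the same normalized motion request onto hardware servo commands for the robot. The pulse-width range and actuator names below are assumptions for illustration, not FACE's actual hardware interface; an avatar backend would instead map the same values onto anchor-point coordinates.

```python
# Sketch: convert normalized actuator positions into servo pulse widths.
# A typical hobby-servo range of 500-2500 microseconds is assumed here.

def to_servo_pulses(motions, pulse_min=500, pulse_max=2500):
    """Map normalized positions in [0, 1] to pulse widths in microseconds."""
    return {k: int(pulse_min + v * (pulse_max - pulse_min))
            for k, v in motions.items()}

pulses = to_servo_pulses({"jaw": 0.5, "brow_left": 0.0})
# → {"jaw": 1500, "brow_left": 500}
```

Keeping the animation module's output normalized and display-agnostic is what lets one motion request drive either the android or the avatar.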