Abstract:
Beyond words, non-verbal behaviors (NVB) are known to play important roles in face-to-face interactions. However, decoding NVB is a challenging problem that involves both extracting subtle physical NVB cues and mapping them to higher-level communication behaviors or social constructs. Gaze, in particular, serves as a fundamental indicator of attention and interest, with functions related to communication and social signaling, and plays an important role in many fields, such as the design of intuitive human-computer or human-robot interfaces, or in medical diagnosis, for instance when assessing Autism Spectrum Disorders (ASD) in children.
However, estimating the visual attention of others - that is, estimating their gaze (3D line of sight) and Visual Focus of Attention (VFOA) - is a challenging task, even for humans. It often requires not only inferring an accurate 3D gaze direction from the person's face and eyes, but also understanding the global context of the scene to decide which object in the field of view is actually looked at. Such context includes the activities of the person or of other people, which provide priors about which objects are likely to be looked at, as well as the scene structure, which helps detect obstructions in the line of sight. Hence, two lines of research have been followed recently. The first has focused on improving appearance-based 3D gaze estimation from images and videos, while the second has investigated gaze following - the task of estimating the 2D pixel location of where a person looks in an image.
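To make the first line of research concrete, the sketch below shows the generic structure of an appearance-based 3D gaze estimator: a small CNN regresses a (yaw, pitch) gaze direction from an eye crop, which is then converted to a 3D line of sight. The architecture, input size, and angle convention are illustrative assumptions, not a specific model from the presentation.

```python
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Minimal appearance-based gaze regressor (illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress two angles: yaw and pitch of the line of sight.
        self.head = nn.Linear(64, 2)

    def forward(self, eye_image):
        # eye_image: (batch, 1, H, W) grayscale eye crop
        x = self.features(eye_image).flatten(1)
        return self.head(x)  # (batch, 2) gaze angles in radians

def angles_to_vector(angles):
    # Convert (yaw, pitch) to a unit 3D gaze vector; sign conventions vary by dataset.
    yaw, pitch = angles[:, 0], angles[:, 1]
    x = torch.cos(pitch) * torch.sin(yaw)
    y = torch.sin(pitch)
    z = torch.cos(pitch) * torch.cos(yaw)
    return torch.stack([x, y, z], dim=1)

model = GazeNet()
pred = model(torch.randn(4, 1, 36, 60))  # 36x60 eye patches, an assumed resolution
gaze_3d = angles_to_vector(pred)         # 3D line of sight per sample
```

Gaze following, by contrast, typically predicts a 2D heatmap over the image rather than angles, combining the person's head crop with the full scene.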
In this presentation, we will discuss different methods that address the two cases mentioned above. We will first focus on several methodological ideas for improving 3D gaze estimation, including building personalized models through few-shot learning and gaze-redirection eye synthesis, differential gaze estimation (see the sketch below), and taking advantage of priors on social interactions to obtain weak labels for model adaptation. In the second part, we will introduce recent models aimed at estimating gaze targets in the wild, showing how to take advantage of different modalities, including an estimate of the person's 3D field of view, as well as methods for inferring social labels (eye contact, shared attention).
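The following is a hedged sketch of the differential-estimation idea: instead of predicting absolute gaze, a Siamese network predicts the gaze difference between a calibrated reference image and a query image of the same person, so a handful of labelled calibration samples is enough to personalize the output. The network names, sizes, and calibration procedure are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class DifferentialGazeNet(nn.Module):
    """Siamese network predicting the gaze difference between two eye images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Predict the (yaw, pitch) difference between reference and query.
        self.diff_head = nn.Linear(128, 2)

    def forward(self, reference_eye, query_eye):
        f_ref = self.encoder(reference_eye)
        f_qry = self.encoder(query_eye)
        return self.diff_head(torch.cat([f_ref, f_qry], dim=1))

def personalized_gaze(model, reference_eye, reference_gaze, query_eye):
    # reference_gaze: known (yaw, pitch) label from a calibration sample.
    with torch.no_grad():
        delta = model(reference_eye, query_eye)
    return reference_gaze + delta  # personalized estimate for the query image

model = DifferentialGazeNet()
ref_img, qry_img = torch.randn(1, 1, 36, 60), torch.randn(1, 1, 36, 60)
ref_gaze = torch.tensor([[0.1, -0.05]])  # calibrated ground-truth angles (assumed)
print(personalized_gaze(model, ref_img, ref_gaze, qry_img))
```

In practice, several calibration samples can be used and their predictions averaged, which is one way such differential models are adapted to a new person with very little labelled data.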