Deep Facial Representation for Quantification of Emotion Dynamics

Piyush Karande | 22-FS-032

Project Overview

Facial imagery is used in many applications, including social media analytics, identity recognition, sentiment analysis, biomedical research, and clinical practice. One facet of facial imagery important to all of these areas is the expression of emotion. Existing approaches to detecting emotion largely rely on a few generic categories, such as happy, sad, and angry. However, the range of human expressions is far richer and is driven by deeply rooted neural mechanisms. The Facial Action Coding System (FACS) describes the fundamental lexicon of facial movements (Action Units, or AUs) from which facial expressions are composed. In this work we collaborated with emotion scientists and trained FACS coders at UCSF to create a one-of-a-kind facial-imagery dataset spanning a range of emotions, annotated with micro-expressions defined by AUs. While data collection and annotation were underway, we also developed deep neural-network models that detect high-level emotions using two previously released facial-emotion datasets. In future work we plan to leverage these models and the newly collected dataset to create subject-specific models that accurately annotate detailed micro-expressions in facial video. Such models would substantially accelerate scientific research across numerous areas of neuroscience and pave the way for highly detailed models applicable to cybersecurity problems such as identity recognition and deception detection.
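
To illustrate how AU annotation can be framed computationally, the sketch below casts AU detection as multi-label classification with a small convolutional network in PyTorch. This is a minimal illustrative sketch under assumed choices; the input size, channel widths, the number of AUs, and names such as AUDetector are all hypothetical and do not describe the models developed in this project.

    # Illustrative sketch only: a minimal multi-label Action Unit (AU) detector.
    # All architecture choices (input size, channel widths, NUM_AUS) are
    # assumptions for demonstration, not this project's actual models.
    import torch
    import torch.nn as nn

    NUM_AUS = 12  # assumed number of FACS Action Units to detect

    class AUDetector(nn.Module):
        """Small CNN that predicts independent per-AU presence probabilities."""
        def __init__(self, num_aus: int = NUM_AUS):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, num_aus)  # one logit per AU

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.features(x).flatten(1)
            return self.head(z)  # raw logits; apply sigmoid for probabilities

    model = AUDetector()
    # Multi-label loss: each AU is an independent binary presence/absence target,
    # since several AUs can be active in a single facial expression.
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy batch: 8 grayscale 64x64 face crops with binary AU annotations.
    images = torch.randn(8, 1, 64, 64)
    targets = torch.randint(0, 2, (8, NUM_AUS)).float()

    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, targets)
    loss.backward()
    optimizer.step()

    probs = torch.sigmoid(logits)  # per-AU probabilities in [0, 1]

The multi-label framing matters here: unlike single-label emotion classification (happy vs. sad vs. angry), FACS coding allows many AUs to co-occur in one frame, so each AU gets its own independent binary output.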

Mission Impact

Our work creating a clean facial-imagery dataset annotated with detailed facial movements has paved the way for high-fidelity facial-expression AI/ML models. The preliminary modeling work and data curation have strengthened our strategic connection with UCSF in the field of neuroscience. Our machine-learning approaches to detecting emotions from facial imagery were included in a proposal that led to a five-year NIH P01 grant awarded to UCSF, which will support continued development of subject-specific models of facial imagery. Such individualized models, particularly for high-profile personnel, could potentially be used to tackle emerging security challenges in identity verification, malicious-intent detection, and distinguishing synthesized facial videos from real ones.