🤖 AI Summary
Current sign language generation lacks a unified framework for modeling non-manual signals, particularly facial expressions, which leads to inconsistent emotional annotation for virtual signers and limits linguistic accuracy. To address this, we propose a dual-parameter affective representation grounded in the EASIER annotation scheme: affective intensity and category are orthogonally decoupled into two interpretable numerical dimensions, enabling fine-grained, semantically transparent control over a virtual signer's facial expressions. Integrated into the Paula sign language avatar platform, the method combines numeric parameterization with linguistically grounded textual labels to support automated synthesis of affective non-manual signals. The approach shows promise for improving expressive naturalness, cross-context consistency, and compatibility with sign language grammar, and it offers a reusable, extensible, and standardized control framework for affective sign language generation.
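As a concrete illustration of the dual-parameter idea, the Python sketch below models the two decoupled dimensions as a small data structure. The class name `AffectiveExpression`, the numeric category scale, and the intensity range are assumptions made for illustration; the paper does not specify an implementation or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AffectiveExpression:
    """Hypothetical container for the two decoupled affective parameters.

    category:  numeric position along an ordered emotion dimension
               (e.g., 0 = neutral, 1 = happy, 2 = sad, ...); fractional
               values could interpolate between neighboring categories.
    intensity: strength of the expression, from 0.0 (neutral face)
               to 1.0 (maximal expression).
    """
    category: float
    intensity: float

    def __post_init__(self) -> None:
        # Keep intensity normalized so downstream facial controls could
        # scale expression weights directly from it.
        if not 0.0 <= self.intensity <= 1.0:
            raise ValueError("intensity must lie in [0.0, 1.0]")

# Same emotion category at two strengths: each dimension varies
# independently, which is the point of the orthogonal decoupling.
mild_joy = AffectiveExpression(category=1.0, intensity=0.3)
strong_joy = AffectiveExpression(category=1.0, intensity=0.9)
```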
📝 Abstract
Non-manual signals in sign languages remain a challenge for signing avatars. In particular, emotional content has been difficult to incorporate due to the lack of a standard method for specifying an avatar's emotional state. This paper explores the application of an intuitive two-parameter representation for emotive non-manual signals to the Paula signing avatar, an approach that shows promise for specifying emotional facial expressions linguistically in a more coherent manner than previous methods. Users can apply these parameters to control Paula's emotional expressions through a textual representation called the EASIER notation. The representation allows avatars to express more nuanced emotional states using two numerical parameters, and it has the potential to enable more consistent specification of emotional non-manual signals in the linguistic annotations that drive signing avatars.
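To suggest how a textual annotation might carry the two parameters into an avatar pipeline, here is a minimal parsing sketch. The `@emo(cat=..., int=...)` tag format is invented purely for illustration and is not the actual EASIER notation, whose concrete syntax is not reproduced here.

```python
import re

# Invented tag format for illustration only -- NOT the real EASIER syntax.
# Example annotated gloss line: "HAPPY @emo(cat=1.0, int=0.7) BIRTHDAY"
_EMO_TAG = re.compile(r"@emo\(cat=([\d.]+),\s*int=([\d.]+)\)")

def extract_expressions(gloss_line: str):
    """Yield (category, intensity) pairs found in an annotated gloss line."""
    for match in _EMO_TAG.finditer(gloss_line):
        yield float(match.group(1)), float(match.group(2))

if __name__ == "__main__":
    line = "HAPPY @emo(cat=1.0, int=0.3) BIRTHDAY @emo(cat=1.0, int=0.9)"
    for category, intensity in extract_expressions(line):
        # A signing avatar such as Paula could map these values onto
        # its facial controls; that mapping is out of scope here.
        print(f"category={category}, intensity={intensity}")
```

Keeping the emotion specification in the annotation stream, rather than hand-tuning facial controls per utterance, is what would make the specification reusable across signed sentences.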