🤖 AI Summary
This paper summarizes the 2025 Sign Language Translation and Avatar Technology (SLTAT) workshop, the 9th edition since 2011, hosted at the International Conference on Intelligent Virtual Agents (IVA). The workshop addresses communication barriers between deaf and hearing people through non-invasive means, using digital humans both as virtual sign language interpreters and as interactive conversational agents. Its accepted contributions cover the components of an end-to-end sign language translation pipeline: sign language recognition models, multimodal affect perception, data collection and analysis, and tooling for lightweight, real-time interactive avatars, alongside work on ethics and usability. By pairing affective computing and ethics-aware design with avatar technology, the workshop extends the digital human's role from a mere translator toward a socially aware interaction agent, and by bringing the sign language and IVA communities together it advances the field from lab-based prototypes toward real-world, disability-inclusive deployment.
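To make the modular pipeline described above concrete, here is a minimal illustrative sketch of how recognition, affect perception, and avatar rendering might compose into one interaction loop. All class names, method signatures, and stub outputs are assumptions for illustration only, not code from the paper or any workshop contribution.

```python
# Hypothetical composition of the pipeline stages named in the summary:
# sign language recognition -> multimodal affect perception -> signing avatar.
# Every name and stub behavior here is an illustrative assumption.

from dataclasses import dataclass


@dataclass
class PerceivedInput:
    text: str     # recognized signing, rendered as spoken-language text
    emotion: str  # dominant affect label from the multimodal module


class SignRecognizer:
    def transcribe(self, video_frames) -> str:
        # A trained recognition model would map video frames to text here.
        return "hello, how are you?"


class AffectPerceiver:
    def classify(self, video_frames) -> str:
        # A multimodal model would fuse face, hand, and body cues here.
        return "friendly"


class SigningAvatar:
    def respond(self, perceived: PerceivedInput) -> None:
        # A lightweight real-time avatar would render a signed reply,
        # modulated by the perceived affect.
        print(f"[avatar|{perceived.emotion}] signing reply to: {perceived.text}")


def interact(frames) -> None:
    recognizer, affect, avatar = SignRecognizer(), AffectPerceiver(), SigningAvatar()
    perceived = PerceivedInput(recognizer.transcribe(frames), affect.classify(frames))
    avatar.respond(perceived)


interact(frames=[])  # placeholder input; a real system would stream camera frames
```

The point of the sketch is only the separation of concerns: recognition and affect perception run on the same input stream, and the avatar consumes both outputs, which is what lets affect-aware rendering sit alongside translation rather than inside it.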
📝 Abstract
The Sign Language Translation and Avatar Technology (SLTAT) workshops continue a series of gatherings to share recent advances in improving deaf/hearing communication through non-invasive means. This 2025 edition, the 9th since the workshop first appeared in 2011, is hosted by the International Conference on Intelligent Virtual Agents (IVA), creating an opportunity for cross-fertilization between two research communities that use digital humans either as virtual interpreters or as interactive conversational agents. As presented in this summary paper, SLTAT sees contributions beyond avatar technologies, with a substantial number of submissions on sign language recognition, and further work on data collection, data analysis, tools, ethics, usability, and affective computing.