9th Workshop on Sign Language Translation and Avatar Technologies (SLTAT 2025)

📅 2025-08-11
🤖 AI Summary
This study addresses communication barriers between Deaf and hearing individuals by proposing a non-intrusive virtual digital human interaction framework integrating sign language recognition, affective computing, and ethics-aware design. Methodologically, it establishes an end-to-end sign language translation pipeline comprising a high-accuracy sign language recognition model, a multimodal affect perception module, an interpretable data collection and analysis system, and a lightweight, real-time interactive avatar toolchain. Its key contribution lies in being the first to jointly embed affective computing and ethics-by-design principles into a sign language translation digital human architecture—thereby extending its role from a mere translator to a socially aware interaction agent. The framework significantly improves usability and social inclusion, advancing sign language technology from lab-based prototypes to real-world deployment. It offers a reusable methodology and practical paradigm for cross-modal human–AI interaction and disability-inclusive AI.

📝 Abstract
The Sign Language Translation and Avatar Technology (SLTAT) workshops continue a series of gatherings to share recent advances in improving deaf/hearing communication through non-invasive means. This 2025 edition, the 9th since the workshop's first appearance in 2011, is hosted by the International Conference on Intelligent Virtual Agents (IVA), offering an opportunity for cross-fertilization between two research communities that use digital humans as either virtual interpreters or interactive conversational agents. As presented in this summary paper, SLTAT receives contributions beyond avatar technologies, with a substantial number of submissions on sign language recognition, alongside work on data collection, data analysis, tools, ethics, usability, and affective computing.
Problem

Research questions and friction points this paper is trying to address.

Advancing sign language translation for deaf communication
Developing avatar technologies as virtual interpreters
Exploring sign language recognition and data analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sign language recognition technology
Digital human virtual interpreters
Affective computing for communication
Fabrizio Nunnari
German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus
Computer-Human Interaction, Sign Language, Avatars, Virtual Interpreters
Cristina Luna Jiménez
University of Augsburg, Augsburg, Germany
Rosalee Wolfe
DePaul University
accessibility, deaf communication, computer graphics, human computer interaction
John C. McDonald
DePaul University, Chicago, IL, USA
Michael Filhol
Université Paris-Saclay, Paris, France
Eleni Efthimiou
Institute for Language and Speech Processing, Athena RC, Athens, Greece
Evita Fotinea
Institute for Language and Speech Processing, Athena RC, Athens, Greece
Thomas Hanke
University of Hamburg, Hamburg, Germany