SignAgent: Agentic LLMs for Linguistically-Grounded Sign Language Annotation and Dataset Curation

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing computational approaches to sign language, which are largely confined to lexical transcription and lack fine-grained linguistic annotations, while manual labeling remains prohibitively expensive for building large-scale, phonologically-aware datasets. To overcome this, we propose SignAgent, a novel linguistically-driven annotation framework built on an agent-based architecture. The framework employs a SignAgent Orchestrator to integrate multimodal evidence with linguistic knowledge, and leverages SignGraph to anchor representations at both the lexical and phonological levels. By combining large language model agents, joint vision–phonology clustering, and constrained pseudo-transcription generation, our method achieves state-of-the-art performance on pseudo-transcription and ID Glossing tasks. This represents the first scalable, linguistically principled approach to automatic large-scale sign language annotation and lexeme variant clustering.

📝 Abstract
This paper introduces SignAgent, a novel agentic framework that utilises Large Language Models (LLMs) for scalable, linguistically-grounded Sign Language (SL) annotation and dataset curation. Traditional computational methods for SLs often operate at the gloss level, overlooking crucial linguistic nuances, while manual linguistic annotation remains a significant bottleneck: too slow and expensive for creating large-scale, phonologically-aware datasets. SignAgent addresses these challenges through the SignAgent Orchestrator, a reasoning LLM that coordinates a suite of linguistic tools, and SignGraph, a knowledge-grounded LLM that provides lexical and linguistic grounding. We evaluate our framework on two downstream annotation tasks. First, Pseudo-gloss Annotation, where the agent performs constrained assignment, using multimodal evidence to extract and order suitable gloss labels for signed sequences. Second, ID Glossing, where the agent detects and refines visual clusters by reasoning over both visual similarity and phonological overlap to correctly identify and group lexical sign variants. Our results demonstrate that our agentic approach achieves strong performance for large-scale, linguistically-aware data annotation and curation.
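The "constrained assignment" step described for Pseudo-gloss Annotation can be illustrated with a minimal sketch: given fused evidence scores per segment over candidate glosses, pick one gloss per segment while using each candidate at most once. Note this is a hypothetical greedy illustration of the general idea, not the paper's actual algorithm; all names and scores below are invented.

```python
def assign_glosses(scores):
    """scores: dict mapping segment index -> {gloss: evidence_score}.

    Returns an ordered list of (segment, gloss) picks, greedily taking
    the highest-scoring not-yet-used gloss for each segment in sequence
    order. A segment with no remaining candidate gets None.
    """
    used = set()
    assignment = []
    for segment in sorted(scores):
        # Restrict to glosses not already assigned to an earlier segment.
        candidates = {g: s for g, s in scores[segment].items() if g not in used}
        if not candidates:
            assignment.append((segment, None))
            continue
        best = max(candidates, key=candidates.get)
        used.add(best)
        assignment.append((segment, best))
    return assignment

# Illustrative input: three signed segments with overlapping candidates
# (scores stand in for combined visual + phonological evidence).
scores = {
    0: {"HELLO": 0.9, "HI": 0.6},
    1: {"HELLO": 0.8, "THANKS": 0.7},
    2: {"THANKS": 0.5},
}
print(assign_glosses(scores))  # → [(0, 'HELLO'), (1, 'THANKS'), (2, None)]
```

A greedy pass is the simplest way to enforce the one-gloss-per-candidate constraint; an optimal variant would solve the same problem as a bipartite matching instead.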
Problem

Research questions and friction points this paper is trying to address.

Sign Language
Linguistic Annotation
Dataset Curation
Phonological Awareness
Gloss-level Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic LLMs
Sign Language Annotation
Linguistically-Grounded
SignGraph
Pseudo-gloss Annotation