Dynaword: From One-shot to Continuously Developed Datasets

📅 2025-08-04
🤖 AI Summary
Current large-scale NLP datasets face three critical bottlenecks: ambiguous licensing impedes sharing and reuse, static release cycles hinder community engagement, and quality assurance rests solely with centralized publishing teams. To address these, the authors propose Dynaword, a framework for community-driven, continuously evolving datasets. Dynaword enforces explicit open licensing (CC-BY-4.0), embeds lightweight automated testing for format correctness, data quality, and documentation consistency, and integrates version control, collaborative validation, and CI/CD pipelines. Built entirely on open-source infrastructure, it enables sustainable, iterative dataset evolution. As a concrete instantiation, the authors release Danish Dynaword, a dataset four times larger than comparable prior resources that has already received contributions from both industry and academia, demonstrating the approach's feasibility, scalability, and practical value for collaborative, maintainable NLP data curation.

📝 Abstract
Large-scale datasets are foundational for research and development in natural language processing. However, current approaches face three key challenges: (1) reliance on ambiguously licensed sources restricting use, sharing, and derivative works; (2) static dataset releases that prevent community contributions and diminish longevity; and (3) quality assurance processes restricted to publishing teams rather than leveraging community expertise. To address these limitations, we introduce two contributions: the Dynaword approach and Danish Dynaword. The Dynaword approach is a framework for creating large-scale, open datasets that can be continuously updated through community collaboration. Danish Dynaword is a concrete implementation that validates this approach and demonstrates its potential. Danish Dynaword contains over four times as many tokens as comparable releases, is exclusively openly licensed, and has received multiple contributions across industry and research. The repository includes light-weight tests to ensure data formatting, quality, and documentation, establishing a sustainable framework for ongoing community contributions and dataset evolution.
Problem

Research questions and friction points this paper is trying to address.

Address reliance on ambiguously licensed data sources
Overcome static datasets lacking community contributions
Improve quality assurance beyond publishing teams
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework for continuously updated open datasets
Community collaboration for dataset development
Light-weight tests ensuring data quality
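The light-weight tests mentioned above can be pictured as simple record-level checks run in CI. A minimal sketch follows; the field names (`id`, `text`, `source`, `license`) and the allowed-license set are illustrative assumptions, not Dynaword's actual schema or test suite.

```python
# Hypothetical sketch of a lightweight dataset test in the spirit of
# Dynaword's CI checks. Field names and license identifiers are
# assumptions for illustration, not the project's real schema.

REQUIRED_FIELDS = {"id", "text", "source", "license"}
ALLOWED_LICENSES = {"cc-by-4.0", "cc0-1.0"}  # illustrative open licenses

def validate_record(record: dict) -> list[str]:
    """Return human-readable problems; an empty list means the record passes."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if not record.get("text", "").strip():
        problems.append("empty text")
    if record.get("license") not in ALLOWED_LICENSES:
        problems.append(f"unexpected license: {record.get('license')!r}")
    return problems

def validate_dataset(records: list[dict]) -> dict[str, list[str]]:
    """Map each failing record's id (or index) to its list of problems."""
    report = {}
    for i, rec in enumerate(records):
        problems = validate_record(rec)
        if problems:
            report[str(rec.get("id", i))] = problems
    return report

if __name__ == "__main__":
    sample = [
        {"id": "a1", "text": "Hej verden", "source": "wiki", "license": "cc-by-4.0"},
        {"id": "a2", "text": "", "source": "wiki", "license": "unknown"},
    ]
    print(validate_dataset(sample))
```

Wired into a CI pipeline, a check like this fails the build whenever a community contribution introduces a record with a missing field, empty text, or a non-open license, which is how continuous community contributions can stay trustworthy without a centralized review team.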