Transducing Language Models

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a general transduction framework based on deterministic finite-state transducers (FSTs) for adapting pretrained language models—which define distributions over strings—to diverse downstream output formats such as bytes, words, or amino-acid sequences. By treating the composition of a language model with a deterministic FST as a new probabilistic language model, the approach enables exact marginalization over source strings and conditional generation over target formats without modifying model parameters. The paper formally characterizes inference under deterministic transformations of language models and develops probability-propagation algorithms that support both exact and efficient approximate inference. Empirical evaluations on token-to-byte conversion, token-to-word conversion, and DNA-to-amino-acid mapping demonstrate the method's effectiveness, flexibly adapting pretrained models to heterogeneous output formats while keeping their parameters fixed.
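The marginalization the summary describes is the pushforward of the model's distribution through a deterministic map: the probability of a target string is the total probability of all source strings that the transducer maps to it. A minimal sketch, using a hypothetical toy distribution over tokenizations and plain string concatenation as a trivial stand-in for an FST:

```python
from collections import defaultdict

# Toy "language model": an explicit distribution over source token strings.
# In the paper this would be a pretrained neural LM; these values are
# hypothetical and chosen only for illustration.
p_source = {
    ("un", "do"): 0.4,
    ("und", "o"): 0.1,
    ("re", "do"): 0.3,
    ("red", "o"): 0.2,
}

def f(tokens):
    """Deterministic string-to-string transformation: detokenize by
    concatenation (a trivial stand-in for a finite-state transducer)."""
    return "".join(tokens)

# Exact marginalization: the induced (pushforward) distribution on targets
# sums the probability of every source string that maps to each target.
p_target = defaultdict(float)
for x, p in p_source.items():
    p_target[f(x)] += p

print(dict(p_target))  # {'undo': 0.5, 'redo': 0.5}
```

The paper's contribution is doing this summation efficiently by propagating probabilities through the transducer's states rather than enumerating the (generally infinite) set of source strings, as this sketch does.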

📝 Abstract
Modern language models define distributions over strings, but downstream tasks often require different output formats. For instance, a model that generates byte-pair strings does not directly produce word-level predictions, and a DNA model does not directly produce amino-acid sequences. In such cases, a deterministic string-to-string transformation can convert the model's output to the desired form. This is a familiar pattern in probability theory: applying a function $f$ to a random variable $X\sim p$ yields a transformed random variable $f(X)$ with an induced distribution. While such transformations are occasionally used in language modeling, prior work does not treat them as yielding new, fully functional language models. We formalize this perspective and introduce a general framework for language models derived from deterministic string-to-string transformations. We focus on transformations representable as finite-state transducers -- a commonly used state-machine abstraction for efficient string-to-string mappings. We develop algorithms that compose a language model with an FST to *marginalize* over source strings mapping to a given target, propagating probabilities through the transducer without altering model parameters and enabling *conditioning* on transformed outputs. We present an exact algorithm, an efficient approximation, and a theoretical analysis. We conduct experiments in three domains: converting language models from tokens to bytes, from tokens to words, and from DNA to amino acids. These experiments demonstrate inference-time adaptation of pretrained language models to match application-specific output requirements.
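The *conditioning* the abstract mentions is Bayesian inversion of the deterministic map: given that the transformed output equals some target $y$, the posterior over source strings is $p(x \mid f(x)=y) \propto p(x)\,\mathbf{1}[f(x)=y]$. A minimal sketch under the same hypothetical toy distribution, again with concatenation standing in for an FST:

```python
# Hypothetical toy distribution over source token strings (illustrative only).
p_source = {
    ("un", "do"): 0.4,
    ("und", "o"): 0.1,
    ("re", "do"): 0.3,
    ("red", "o"): 0.2,
}

def f(tokens):
    """Deterministic transformation: detokenize by concatenation."""
    return "".join(tokens)

# Condition on the transformed output: restrict to source strings mapping
# to the observed target, then renormalize by the marginal probability.
y = "undo"
support = {x: p for x, p in p_source.items() if f(x) == y}
z = sum(support.values())  # marginal probability p(f(X) = y)
posterior = {x: p / z for x, p in support.items()}

print(posterior)  # {('un', 'do'): 0.8, ('und', 'o'): 0.2}
```

The exact algorithm in the paper computes this posterior without enumerating the support, by composing the language model with the transducer; the approximate variant trades exactness for efficiency when exact propagation is too costly.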
Problem

Research questions and friction points this paper is trying to address.

language models
string-to-string transformation
finite-state transducers
output format adaptation
probabilistic modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

finite-state transducer
language model transformation
output marginalization
inference-time adaptation
string-to-string mapping