TRACE Back from the Future: A Probabilistic Reasoning Approach to Controllable Language Generation

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Controlling large language models (LLMs) globally (e.g., suppressing toxicity, ensuring personalization, and maintaining topic consistency) remains challenging because autoregressive decoding cannot plan ahead for such multi-faceted attributes. Method: We propose a tractable probabilistic inference framework that distills an LLM into an analytically tractable hidden Markov model (HMM), augmented with lightweight attribute classifiers. This enables closed-form, exact computation of the expected attribute probability (EAP) and zero-shot adaptation to new attributes; next-token probabilities are then reweighted during decoding based on EAP. Contribution/Results: Our method achieves state-of-the-art performance on detoxification with only 10% decoding overhead, adapts to 76 low-resource personalized LLMs within seconds, and natively supports joint control over multiple attributes. By moving beyond purely local next-token prediction, it establishes an efficient, precise, and broadly applicable pathway for controllable LLM generation.

📝 Abstract
As large language models (LMs) advance, there is an increasing need to control their outputs to align with human values (e.g., detoxification) or desired attributes (e.g., personalization, topic). However, autoregressive models focus on next-token predictions and struggle with global properties that require looking ahead. Existing solutions either tune or post-train LMs for each new attribute (expensive and inflexible) or approximate the Expected Attribute Probability (EAP) of future sequences by sampling or training, which is slow and unreliable for rare attributes. We introduce TRACE (Tractable Probabilistic Reasoning for Adaptable Controllable gEneration), a novel framework that efficiently computes EAP and adapts to new attributes through tractable probabilistic reasoning and lightweight control. TRACE distills a Hidden Markov Model (HMM) from an LM and pairs it with a small classifier to estimate attribute probabilities, enabling exact EAP computation over the HMM's predicted futures. This EAP is then used to reweigh the LM's next-token probabilities for globally compliant continuations. Empirically, TRACE achieves state-of-the-art results in detoxification with only 10% decoding overhead, adapts to 76 low-resource personalized LLMs within seconds, and seamlessly extends to composite attributes.
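The pipeline the abstract describes (distill an HMM from the LM, score attributes with a small classifier, compute EAP over predicted futures, reweight next-token probabilities) can be sketched on a toy model. Everything below is illustrative only: the random HMM, the per-token attribute scores standing in for the classifier, the assumption that a future's attribute score factorizes over tokens (which is what makes the expectation push through the chain in closed form here), and the `strength` exponent are all assumptions of this sketch, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_tokens = 4, 6

# Toy HMM standing in for the one distilled from the LM.
A = rng.dirichlet(np.ones(n_states), size=n_states)  # state-transition matrix
B = rng.dirichlet(np.ones(n_tokens), size=n_states)  # per-state token emissions

# Toy per-token attribute scores standing in for the lightweight classifier.
attr = rng.uniform(size=n_tokens)                    # pseudo p(attribute | token)

def expected_attribute_prob(belief, horizon):
    """Expected attribute probability over the HMM's predicted futures.

    Assumes a future's attribute score is the product of per-token scores,
    so the expectation factorizes and can be computed exactly, one step of
    matrix arithmetic per future position (no sampling).
    """
    state_factor = B @ attr        # expected per-token factor in each state
    eap = np.ones(n_states)
    for _ in range(horizon):
        eap = state_factor * (A @ eap)
    return belief @ eap

def reweight_next_token(lm_probs, belief, horizon=3, strength=4.0):
    """Reweight LM next-token probabilities by each candidate's EAP."""
    eaps = np.empty(n_tokens)
    for tok in range(n_tokens):
        # Hidden-state belief after hypothetically emitting `tok`.
        post = belief * B[:, tok]
        post = post / post.sum() @ A               # advance one HMM step
        eaps[tok] = attr[tok] * expected_attribute_prob(post, horizon)
    scores = lm_probs * eaps**strength
    return scores / scores.sum()

belief = np.full(n_states, 1.0 / n_states)         # uniform initial belief
lm_probs = rng.dirichlet(np.ones(n_tokens))        # stand-in LM distribution
controlled = reweight_next_token(lm_probs, belief)
```

The key property the sketch preserves is that the lookahead is exact given the HMM: each candidate token's score accounts for the attribute probability of the futures it leads to, not just the token itself.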
Problem

Research questions and friction points this paper is trying to address.

Control language model outputs for human values and desired attributes
Address inefficiency in existing methods for attribute adaptation
Enable exact computation of expected attribute probabilities for compliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Hidden Markov Model for exact EAP computation
Pairs HMM with small classifier for attribute probabilities
Reweighs next-token probabilities for compliant continuations
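The "seamlessly extends to composite attributes" claim has a natural reading under a conditional-independence assumption: per-attribute EAPs for each candidate token are simply multiplied into the same reweighting step. The sketch below is hypothetical; the EAP vectors are made-up numbers standing in for whatever the per-attribute computation produces.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens = 6
lm_probs = rng.dirichlet(np.ones(n_tokens))  # stand-in LM next-token distribution

# Hypothetical per-candidate EAP estimates for two attributes
# (e.g. non-toxic and on-topic); values are illustrative only.
eap_nontoxic = rng.uniform(0.5, 1.0, size=n_tokens)
eap_on_topic = rng.uniform(0.5, 1.0, size=n_tokens)

def joint_reweight(lm_probs, *eaps):
    """Compose attribute controls by multiplying each attribute's EAP into
    the LM scores, assuming (as a simplification) the attributes are
    conditionally independent given the continuation."""
    scores = lm_probs.copy()
    for e in eaps:
        scores = scores * e
    return scores / scores.sum()

controlled = joint_reweight(lm_probs, eap_nontoxic, eap_on_topic)
```

Because each attribute contributes only a per-token multiplicative factor, adding an attribute costs one extra vector multiply at decode time, which is consistent with the low-overhead claim.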