DecompressionLM: Deterministic, Diagnostic, and Zero-Shot Concept Graph Extraction from Language Models

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing knowledge probing methods, which rely on predefined queries and struggle to cover unknown or long-tail concepts due to cross-sequence coupling and decoding competition. The authors propose DecompressionLM, a novel framework that, for the first time, enables stateless, zero-shot, and parallelizable concept graph extraction without preset queries or shared cross-sequence state. By integrating Van der Corput low-discrepancy sequences with arithmetic decoding, the method demonstrates strong empirical performance across multiple models. Comparative analysis between activation-aware quantization (AWQ) and uniform quantization (GPTQ) reveals that AWQ-4bit improves concept coverage by 30-170%, whereas GPTQ-Int4 causes a 71-86% drop. Additionally, the study uncovers a 19.6-point hallucination gap among models on the MMLU-Pro Law task.

📝 Abstract
Existing knowledge probing methods rely on pre-defined queries, limiting extraction to known concepts. We introduce DecompressionLM, a stateless framework for zero-shot concept graph extraction that discovers what language models encode without pre-specified queries or shared cross-sequence state. Our method targets three limitations of common decoding-based probing approaches: (i) cross-sequence coupling that concentrates probability mass on high-frequency prefixes, (ii) competitive decoding effects that suppress long-tail concepts, and (iii) scalability constraints arising from sequential exploration. Using Van der Corput low-discrepancy sequences with arithmetic decoding, DecompressionLM enables deterministic, embarrassingly parallel generation without shared state across sequences. Across two model families and five quantization variants, we find that activation-aware quantization (AWQ-4bit) expands concept coverage by 30-170%, while uniform quantization (GPTQ-Int4) induces a 71-86% coverage collapse; these divergent behaviors are not reliably reflected by explanation-level perplexity. Corpus-based verification further reveals a 19.6-point hallucination gap between top- and bottom-ranked MMLU-Pro Law models. DecompressionLM establishes concept coverage as a complementary evaluation dimension for assessing knowledge breadth and factual grounding in compressed models intended for deployment.
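The core mechanism described in the abstract, pairing a Van der Corput low-discrepancy sequence with arithmetic-style (inverse-CDF) token selection, can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function names and the 4-token distribution below are hypothetical, and a real system would apply the selection step recursively over a model's next-token distributions.

```python
import bisect

def van_der_corput(n: int, base: int = 2) -> float:
    """Radical inverse of n in `base`: the n-th Van der Corput point in [0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def pick_token(u: float, probs: list[float]) -> int:
    """Deterministically map a code point u in [0, 1) onto the token CDF
    (inverse-CDF selection, the basic step of arithmetic-style decoding)."""
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)
    return bisect.bisect_right(cdf, u)

# Hypothetical 4-token distribution. Each sequence index n has its own
# well-spread code point, so every sequence can be generated independently:
# no sampler state is shared, which is what makes the scheme stateless,
# deterministic, and embarrassingly parallel.
probs = [0.50, 0.25, 0.15, 0.10]
points = [van_der_corput(n) for n in range(1, 9)]
tokens = [pick_token(u, probs) for u in points]
```

Because the Van der Corput points fill [0, 1) evenly rather than clustering, low-probability (long-tail) regions of the CDF are reached at a rate proportional to their mass, rather than being starved by repeated high-probability prefixes.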
Problem

Research questions and friction points this paper is trying to address.

knowledge probing
concept extraction
language models
zero-shot
model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

zero-shot concept extraction
deterministic decoding
low-discrepancy sequences
concept coverage
quantization effects