Compression versus Accuracy: A Hierarchy of Lifted Models

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In Advanced Colour Passing (ACP), manually tuning the hyperparameter ε compromises interpretability and makes the trade-off between compression ratio and accuracy hard to control. Method: This paper proposes a hyperparameter-free hierarchical probabilistic modeling approach that automatically constructs a model hierarchy in which ε and the corresponding error bound increase strictly monotonically. Leveraging factor-similarity-based hierarchical clustering and error-propagation analysis, the method generates ε values adaptively across levels in a single run, yielding multi-granularity compressed models with theoretically guaranteed per-level error bounds. Contribution/Results: The key innovation is the first hyperparameter-free hierarchical modeling framework for ACP, achieving efficient inference while preserving model interpretability and enabling a precise, controllable compression/accuracy trade-off.

📝 Abstract
Probabilistic graphical models that encode indistinguishable objects and relations among them use first-order logic constructs to compress a propositional factorised model for more efficient (lifted) inference. To obtain a lifted representation, the state-of-the-art algorithm Advanced Colour Passing (ACP) groups factors that represent matching distributions. In an approximate version using $\varepsilon$ as a hyperparameter, factors are grouped that differ by a factor of at most $(1 \pm \varepsilon)$. However, finding a suitable $\varepsilon$ is not obvious and may require a lot of exploration, possibly entailing many ACP runs with different $\varepsilon$ values. Additionally, varying $\varepsilon$ can yield wildly different models, decreasing interpretability. Therefore, this paper presents a hierarchical approach to lifted model construction that is hyperparameter-free. It efficiently computes a hierarchy of $\varepsilon$ values that ensures a hierarchy of models: once factors are grouped together for some $\varepsilon$, they remain grouped together for every larger $\varepsilon$ as well. The hierarchy of $\varepsilon$ values also leads to a hierarchy of error bounds. This allows compression and accuracy to be weighed explicitly when choosing specific $\varepsilon$ values to run ACP with, and makes the different models interpretable relative to one another.
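The $(1 \pm \varepsilon)$ grouping criterion from the abstract can be pictured with a small sketch. This is not code from the paper; in particular, the elementwise-ratio check below is an assumption about how "differ by a factor of at most $(1 \pm \varepsilon)$" is operationalised:

```python
import numpy as np

def within_epsilon(phi1, phi2, eps):
    """Check whether two factors' potential tables differ by a factor
    of at most (1 ± eps) in every entry -- an illustrative reading of
    the approximate ACP grouping criterion."""
    ratio = phi1 / phi2
    return bool(np.all(ratio >= 1 - eps) and np.all(ratio <= 1 + eps))

phi_a = np.array([0.5, 0.5])
phi_b = np.array([0.52, 0.48])
print(within_epsilon(phi_a, phi_b, 0.05))  # True: entrywise ratios 1.04 and 0.96
print(within_epsilon(phi_a, phi_b, 0.01))  # False: 1.04 exceeds 1.01
```

With a larger $\varepsilon$, more factor pairs pass the check, so the model compresses further at the cost of accuracy.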
Problem

Research questions and friction points this paper is trying to address.

Finding suitable epsilon for lifted models efficiently
Balancing compression and accuracy in model grouping
Ensuring interpretability across different epsilon values
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical lifted model construction without hyperparameters
Efficient computation of epsilon hierarchy for models
Explicit compression versus accuracy trade-off control
Jan Speller
Computer Science Department, University of Münster, Germany
Malte Luttermann
German Research Center for Artificial Intelligence (DFKI), Lübeck, Germany; Institute for Humanities-Centered Artificial Intelligence, University of Hamburg, Germany
M. Gehrke
Institute for Humanities-Centered Artificial Intelligence, University of Hamburg, Germany
Tanya Braun
University of Münster
Probabilistic Inference