🤖 AI Summary
Multi-metric speech quality assessment faces challenges stemming from heterogeneous metrics (e.g., PESQ, STOI, MOS), including scale inconsistency, conflicting statistical assumptions, and complex inter-metric dependencies. To address this, we propose a novel chain-based autoregressive modeling framework—the first to jointly predict both perceptual and objective metrics within a unified architecture. Our method introduces a hierarchical speech information tokenization scheme, constructs a dynamic classifier chain to explicitly model conditional dependencies among metrics, and incorporates confidence-guided two-stage decoding to enhance robustness and interpretability. It synergistically integrates end-to-end speech representation learning with multi-stage confidence-weighted inference. Evaluated across three major tasks—speech enhancement, generative synthesis, and noisy speech assessment—our approach consistently outperforms state-of-the-art baselines, achieving significant improvements in multi-metric prediction accuracy (average +12.6%) and cross-scenario generalization capability.
📝 Abstract
Speech signal analysis poses significant challenges, particularly in tasks such as speech quality evaluation and profiling, where the goal is to predict multiple perceptual and objective metrics. For instance, metrics like PESQ (Perceptual Evaluation of Speech Quality), STOI (Short-Time Objective Intelligibility), and MOS (Mean Opinion Score) each capture different aspects of speech quality. However, these metrics often have different scales, assumptions, and dependencies, making joint estimation non-trivial. To address these issues, we introduce ARECHO (Autoregressive Evaluation via Chain-based Hypothesis Optimization), a chain-based, versatile evaluation system for speech assessment grounded in autoregressive dependency modeling. ARECHO is distinguished by three key innovations: (1) a comprehensive speech information tokenization pipeline; (2) a dynamic classifier chain that explicitly captures inter-metric dependencies; and (3) a two-step confidence-oriented decoding algorithm that enhances inference reliability. Experiments demonstrate that ARECHO significantly outperforms the baseline framework across diverse evaluation scenarios, including enhanced speech analysis, speech generation evaluation, and noisy speech evaluation. Furthermore, its dynamic dependency modeling improves interpretability by capturing inter-metric relationships.