🤖 AI Summary
This paper addresses the challenge of quantitatively evaluating subjective voice impressions (e.g., "cute voice" or "my favorite voice"), which lack objective, measurable ground truth. To this end, it introduces subjective voice descriptors (SVDs)—a concept enabling personalized semantic modeling—and proposes a unified evaluation framework. Methodologically, it jointly leverages absolute category rating (ACR) and comparison category rating (CCR) data, incorporates RankNet-based learning-to-rank, and introduces a new evaluation metric, *ppref*. Key contributions include: (1) a formal definition of SVDs and a learnable modeling pathway; (2) empirical evidence that CCR data significantly outperforms ACR data in few-shot settings, enabling effective personalization with minimal annotated samples; and (3) experimental results demonstrating moderate *ppref* performance even under extremely low-data regimes, thereby establishing the feasibility of personalized subjective voice assessment.
📝 Abstract
We tackle a new task of training neural network models that can assess subjective impressions conveyed through speech and assign scores accordingly, inspired by work on automatic speech quality assessment (SQA). Speech impressions are often described using phrases like 'cute voice.' We define such phrases as subjective voice descriptors (SVDs). Focusing on the difference in usage scenarios between the proposed task and automatic SQA, we design a framework capable of accommodating SVDs personalized to each individual, such as 'my favorite voice.' In this work, we compiled a dataset containing speech labels derived from both absolute category rating (ACR) and comparison category rating (CCR) protocols.
As an evaluation metric for assessment performance, we introduce ppref, the accuracy with which the predicted score ordering of two samples matches human preference on CCR test pairs. Alongside the conventional model and training methods based on ACR data, we also investigated RankNet training using CCR data. We experimentally find that ppref is moderate even with very limited training data, and that CCR training is superior to ACR training. These results support the idea that assessment models based on personalized SVDs, which typically must be trained on limited data, can be effectively learned from CCR data.
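The two ingredients above—RankNet training on pairwise CCR labels and the ppref metric—can be sketched in a few lines of PyTorch. This is a hedged illustration, not the paper's implementation: the scorer architecture, embedding dimension, and data below are hypothetical placeholders. RankNet models the probability that sample A outranks sample B as a sigmoid of the score difference and trains with binary cross-entropy; ppref is simply the fraction of held-out CCR pairs whose predicted score ordering agrees with the human preference.

```python
import torch
import torch.nn as nn

class SVDScorer(nn.Module):
    """Maps a fixed-size speech embedding to a scalar SVD score.
    (Hypothetical architecture; the paper's actual model is not shown here.)"""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # (batch,) scalar scores

def ranknet_loss(s_a: torch.Tensor, s_b: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
    # RankNet: P(A preferred over B) = sigmoid(s_a - s_b);
    # pref holds 1.0 where A was preferred in the CCR pair, else 0.0.
    return nn.functional.binary_cross_entropy_with_logits(s_a - s_b, pref)

def ppref(s_a: torch.Tensor, s_b: torch.Tensor, pref: torch.Tensor) -> float:
    # Accuracy of the predicted score ordering against human CCR preferences.
    pred = (s_a > s_b).float()
    return (pred == pref).float().mean().item()
```

A training step would score both members of each CCR pair with the same model and backpropagate `ranknet_loss`; at test time, `ppref` is computed over the CCR test pairs.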