MetaMetrics: Calibrating Metrics For Generation Tasks Using Human Preferences

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
🤖 AI Summary
To address the misalignment between automatic evaluation metrics and human preferences in generative tasks, this paper proposes MetaMetrics, a calibratable meta-metric that learns, in a supervised manner, to weight and fuse existing metrics so as to model fine-grained human preferences across multimodal (language/vision), multilingual, and multi-domain settings. Methodologically, it introduces a preference-dimension-aware metric calibration framework that enables unified cross-modal evaluation and plug-and-play integration, combining supervised meta-learning, multi-task joint optimization, and explicit modeling of human preference annotations. Experiments show that MetaMetrics significantly improves correlation with human judgments on multilingual text and vision generation tasks (average Kendall's τ increase of +18.7%). Moreover, it generalizes well to unseen domains and models, maintaining robust alignment with human preferences without task-specific retraining.
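The core idea described above, calibrating a weighted combination of existing metrics so that the fused score correlates with human preference judgments, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the simplex-constrained random search, the choice of Kendall's τ as the calibration objective, and the function name `calibrate_weights` are all assumptions for this sketch.

```python
# Hedged sketch of supervised metric calibration in the spirit of MetaMetrics:
# search for non-negative weights (summing to 1) over base metrics that
# maximize Kendall's tau between the fused score and human preference scores.
# Random search is used here because tau is rank-based (piecewise constant),
# so gradient methods are a poor fit; the real method may differ.
import numpy as np
from scipy.stats import kendalltau

def calibrate_weights(metric_scores, human_scores, n_trials=2000, seed=0):
    """metric_scores: (n_examples, n_metrics); human_scores: (n_examples,).

    Returns (weights, tau): the best weight vector found on the probability
    simplex and its Kendall's tau correlation with the human scores.
    """
    rng = np.random.default_rng(seed)
    n_metrics = metric_scores.shape[1]
    best_w, best_tau = None, -np.inf
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(n_metrics))   # random point on the simplex
        tau, _ = kendalltau(metric_scores @ w, human_scores)
        if tau > best_tau:                       # keep the best fusion so far
            best_w, best_tau = w, tau
    return best_w, best_tau
```

In this toy setup, a base metric that ranks outputs the same way humans do should receive most of the weight, while an anti-correlated metric should be down-weighted.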

📝 Abstract
Understanding the quality of a performance evaluation metric is crucial for ensuring that model outputs align with human preferences. However, it remains unclear how well each metric captures the diverse aspects of these preferences, as metrics often excel in one particular area but not across all dimensions. To address this, it is essential to systematically calibrate metrics to specific aspects of human preference, catering to the unique characteristics of each aspect. We introduce MetaMetrics, a calibrated meta-metric designed to evaluate generation tasks across different modalities in a supervised manner. MetaMetrics optimizes the combination of existing metrics to enhance their alignment with human preferences. Our metric demonstrates flexibility and effectiveness in both language and vision downstream tasks, showing significant benefits across various multilingual and multi-domain scenarios. MetaMetrics aligns closely with human preferences and is highly extendable and easily integrable into any application. This makes MetaMetrics a powerful tool for improving the evaluation of generation tasks, ensuring that metrics are more representative of human judgment across diverse contexts.
Problem

Research questions and friction points this paper is trying to address.

How to calibrate evaluation metrics so they align with specific aspects of human preference.
How to evaluate generation tasks consistently across different modalities.
How to optimize combinations of existing metrics for multilingual and multi-domain scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Calibrates metrics using human preferences
Optimizes combination of existing metrics
Enhances alignment with human judgment
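The plug-and-play aspect highlighted above amounts to this: once weights are calibrated, scoring new outputs is just a weighted fusion of the base metrics' scores. The helper names `meta_metric_score` and `best_candidate` below are hypothetical, introduced only for this illustration.

```python
# Hypothetical plug-and-play use of calibrated weights: fuse base metric
# scores for new outputs and pick the highest-scoring candidate.
import numpy as np

def meta_metric_score(metric_scores, weights):
    """metric_scores: (n_candidates, n_metrics); returns fused scores, shape (n_candidates,)."""
    return np.asarray(metric_scores, dtype=float) @ np.asarray(weights, dtype=float)

def best_candidate(metric_scores, weights):
    """Index of the candidate generation with the highest fused score."""
    return int(np.argmax(meta_metric_score(metric_scores, weights)))
```

Because the fused score is a fixed linear combination at evaluation time, it can be dropped into any pipeline that already computes the base metrics, with no task-specific retraining.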