The Unanticipated Asymmetry Between Perceptual Optimization and Assessment

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a critical asymmetry between image quality assessment (IQA) and perceptual optimization: high-fidelity IQA metrics (e.g., LPIPS) perform poorly in perceptual optimization tasks, especially under adversarial training, revealing a structural mismatch between evaluation and optimization objectives.

Method: We systematically compare convolutional, patch-based, and Transformer-based discriminators in both generative optimization and IQA, analyzing their architectural impact on detail reconstruction, artifact suppression, and representation transferability.

Contribution/Results: We find that patch-based convolutional discriminators achieve the best perceptual optimization performance, yet their learned representations yield limited gains when transferred to IQA model backbones. This demonstrates that discriminator architecture critically governs optimization efficacy but does not produce universally transferable IQA representations. To our knowledge, this is the first empirical characterization of such an evaluation-optimization misalignment, establishing a new paradigm for discriminator design and IQA model development.

📝 Abstract
Perceptual optimization is primarily driven by the fidelity objective, which enforces both semantic consistency and overall visual realism, while the adversarial objective provides complementary refinement by enhancing perceptual sharpness and fine-grained detail. Despite their central role, the correlation between their effectiveness as optimization objectives and their capability as image quality assessment (IQA) metrics remains underexplored. In this work, we conduct a systematic analysis and reveal an unanticipated asymmetry between perceptual optimization and assessment: fidelity metrics that excel in IQA are not necessarily effective for perceptual optimization, with this misalignment emerging more distinctly under adversarial training. In addition, while discriminators effectively suppress artifacts during optimization, their learned representations offer only limited benefits when reused as backbone initializations for IQA models. Beyond this asymmetry, our findings further demonstrate that discriminator design plays a decisive role in shaping optimization, with patch-level and convolutional architectures providing more faithful detail reconstruction than vanilla or Transformer-based alternatives. These insights advance the understanding of loss function design and its connection to IQA transferability, paving the way for more principled approaches to perceptual optimization.
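The abstract describes perceptual optimization as a fidelity objective (semantic consistency and realism) plus a complementary adversarial objective (sharpness and fine detail). A minimal sketch of such a two-term objective is below; the L1 fidelity term, the non-saturating adversarial form, and the weight `lam` are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def perceptual_objective(generated, reference, disc_logits, lam=0.01):
    """Sketch of a two-term perceptual objective: a fidelity term
    enforcing consistency with the reference image, plus an
    adversarial term driven by discriminator real/fake logits.
    All concrete choices here (L1, softplus, lam) are assumptions."""
    # Fidelity: pixel-wise L1 distance to the reference image.
    fidelity = np.abs(generated - reference).mean()
    # Adversarial (non-saturating form): the generator is rewarded
    # when the discriminator assigns high real-logits to its output.
    # softplus(-x) computed stably as log(1 + exp(-x)).
    adversarial = np.mean(np.log1p(np.exp(-disc_logits)))
    return fidelity + lam * adversarial
```

With a perfect reconstruction and neutral (zero) discriminator logits, the fidelity term vanishes and only the weighted adversarial term remains.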
Problem

Research questions and friction points this paper is trying to address.

Analyzing the asymmetry between perceptual optimization and image quality assessment metrics.
Investigating why fidelity metrics that excel in IQA can fail as perceptual optimization objectives.
Evaluating how discriminator design impacts detail reconstruction during optimization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fidelity metrics misalign with optimization under adversarial training
Discriminator design decisively shapes detail reconstruction during optimization
Patch-level convolutional architectures outperform Transformer-based alternatives
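The finding above concerns patch-level discrimination (PatchGAN-style): rather than producing one real/fake score per image, the discriminator scores each local patch independently, which sharpens supervision on fine detail. A minimal sketch of this scoring scheme follows; `score_fn` stands in for a small convolutional scorer, and the patch size and names are assumptions for illustration.

```python
import numpy as np

def patch_scores(image, score_fn, patch=8):
    """Illustrative patch-level discrimination: score every
    non-overlapping patch with `score_fn` (a stand-in for a small
    convolutional real/fake scorer), then average the per-patch
    scores into a single value."""
    h, w = image.shape
    scores = [
        score_fn(image[i:i + patch, j:j + patch])
        for i in range(0, h - patch + 1, patch)
        for j in range(0, w - patch + 1, patch)
    ]
    return float(np.mean(scores))
```

Because each patch is judged on its own, local artifacts lower the aggregate score even when the image looks plausible globally, which is one intuition for why patch-level convolutional discriminators reconstruct detail more faithfully.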
Jiabei Zhang
Institute of Microelectronics of the Chinese Academy of Sciences
Qi Wang
Institute of Microelectronics of the Chinese Academy of Sciences
Siyu Wu
Beihang University
Du Chen
The Hong Kong Polytechnic University
Tianhe Wu
City University of Hong Kong, OPPO Research Institute
Reinforcement Learning · VLM/LLM · Low-level Vision