🤖 AI Summary
This work addresses the limited zero-shot performance of GPT models on industrial video quality classification across seven fine-grained dimensions (e.g., sharpness, stability, color fidelity). We propose a “decompose–aggregate” prompt-engineering framework: complex quality judgments are decomposed into sequential, interpretable subtask prompts, whose outputs are then logically aggregated to improve consistency; in addition, a simplified decision policy substantially reduces false negatives. Experiments show that, without fine-tuning or frame encoders, the method achieves significant zero-shot accuracy gains over single-prompt baselines, generalizes well to real-world industrial video data, and helps address a key bottleneck in large language models’ zero-shot understanding of video content.
📝 Abstract
In this study, we tackle industry challenges in video content classification by exploring and optimizing GPT-based models for zero-shot classification across seven critical categories of video quality. We improve GPT's performance through prompt optimization and policy refinement, demonstrating that simplifying complex policies significantly reduces false negatives. We also introduce a decomposition-aggregation prompt engineering technique that outperforms traditional single-prompt methods. Experiments on real industry problems show that thoughtful prompt design can substantially enhance GPT's performance without additional fine-tuning, offering an effective and scalable way to improve video classification systems across industry domains.
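The decompose–aggregate idea can be sketched in a few lines: each quality dimension gets its own yes/no subtask prompt, and the per-dimension verdicts are then combined with a simple logical rule. This is a minimal illustrative sketch, not the paper's implementation; `call_llm`, the `SUBTASKS` wording, and the toy keyword heuristic inside the stub are all hypothetical stand-ins for a real chat-completion API.

```python
# Hypothetical sketch of decompose-aggregate prompting for video quality.
# One subtask prompt per quality dimension (three of the seven shown here).
SUBTASKS = {
    "sharpness": "Is the clip described as sharp and in focus? Answer yes or no.",
    "stability": "Is the footage described as stable (not shaky)? Answer yes or no.",
    "color_fidelity": "Are the colors described as natural and accurate? Answer yes or no.",
}

def call_llm(prompt: str, context: str) -> str:
    """Stub LLM: replace with a real chat-completion call in practice.

    Uses a toy keyword heuristic only so the sketch runs end to end.
    """
    negatives = {"blurry": "sharpness", "shaky": "stability", "washed-out": "color_fidelity"}
    for word, dim in negatives.items():
        if word in context and f"[{dim}]" in prompt:
            return "no"
    return "yes"

def classify_quality(context: str) -> dict:
    """Decompose: one prompt per dimension. Aggregate: logical AND of verdicts."""
    verdicts = {dim: call_llm(f"[{dim}] {q}", context) == "yes"
                for dim, q in SUBTASKS.items()}
    verdicts["overall_pass"] = all(verdicts.values())
    return verdicts

result = classify_quality("clip looks shaky but colors are fine")
# stability fails, so the aggregated overall_pass is False
```

A single monolithic prompt would ask one question covering all seven dimensions at once; splitting it as above keeps each judgment interpretable and lets the aggregation rule be tuned (e.g., relaxed from a strict AND) to trade false negatives against false positives.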