Content Moderation in TV Search: Balancing Policy Compliance, Relevance, and User Experience

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
In TV search, conventional retrieval systems frequently return non-compliant or irrelevant results due to model uncertainty, erroneous metadata, or architectural limitations, undermining regulatory compliance, result relevance, and user trust. To address this, the authors propose the first large language model (LLM)-based post-retrieval auditing framework for intent alignment in TV search. The method treats the user's actual retrieval intent as the ground-truth criterion and dynamically identifies and filters out content that does not align with it, whether policy-violating or merely off-topic, thereby closing critical safety and quality gaps. It also establishes an audit-feedback loop that drives iterative refinement of the underlying retrieval system. Experimental evaluation demonstrates significant improvements in compliance and relevance, reduced exposure to harmful or off-topic content, and a scalable quality-assurance baseline. This work introduces a novel paradigm for multimodal search content governance grounded in intent-aware auditing.
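The post-retrieval audit described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `llm_judge` stands in for an actual LLM call (in practice a prompt asking whether a result matches the user's search intent), and here it is replaced by a naive token-overlap stub so the sketch is self-contained.

```python
# Hypothetical sketch of a post-retrieval intent-alignment audit.
# All names (Result, llm_judge, audit) are illustrative assumptions,
# not APIs from the paper.

from dataclasses import dataclass


@dataclass
class Result:
    title: str
    metadata: str


def llm_judge(query: str, result: Result) -> bool:
    """Stand-in for an LLM prompt such as:
    'Given the query and the result's metadata, does this result match
    the user's search intent? Answer yes or no.'
    Here: naive token overlap, for illustration only."""
    q_tokens = set(query.lower().split())
    r_tokens = set((result.title + " " + result.metadata).lower().split())
    return bool(q_tokens & r_tokens)


def audit(query: str, results: list[Result]) -> tuple[list[Result], list[Result]]:
    """Split retrieved results into intent-aligned and flagged sets,
    using the user's retrieval intent as the filtering criterion."""
    aligned, flagged = [], []
    for r in results:
        (aligned if llm_judge(query, r) else flagged).append(r)
    return aligned, flagged
```

Flagged results would be withheld from the user and logged for the feedback loop, while aligned results pass through unchanged.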

📝 Abstract
Millions of people rely on search functionality to find and explore content on entertainment platforms. Modern search systems use a combination of candidate generation and ranking approaches, with advanced methods leveraging deep learning and LLM-based techniques to retrieve, generate, and categorize search results. Despite these advancements, search algorithms can still surface inappropriate or irrelevant content due to factors like model unpredictability, metadata errors, or overlooked design flaws. Such issues can misalign with product goals and user expectations, potentially harming user trust and business outcomes. In this work, we introduce an additional monitoring layer using Large Language Models (LLMs) to enhance content moderation. This additional layer flags content if the user did not intend to search for it. This approach serves as a baseline for product quality assurance, with collected feedback used to refine the initial retrieval mechanisms of the search model, ensuring a safer and more reliable user experience.
Problem

Research questions and friction points this paper is trying to address.

Enhancing content moderation in TV search systems
Balancing policy compliance, relevance, and user experience
Mitigating inappropriate or irrelevant search results
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adds an LLM-based content moderation layer on top of retrieval
Automatically flags results that do not match the user's search intent
Refines the underlying retrieval model via an audit-feedback loop
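The feedback-loop idea in the bullets above could look roughly like this. This is a hedged sketch under simple assumptions: the paper does not specify a mechanism, so here flagged (query, result) pairs are simply logged and used to penalize those results in later rankings; the class names and the fixed penalty weight are illustrative.

```python
# Hypothetical audit-feedback loop: results flagged by the moderation
# layer are recorded and down-weighted in future retrieval rankings.
# FeedbackStore, record, penalty, and rerank are assumed names.

from collections import defaultdict


class FeedbackStore:
    def __init__(self) -> None:
        # (query, result_id) -> number of times the auditor flagged it
        self.flags: dict[tuple[str, str], int] = defaultdict(int)

    def record(self, query: str, result_id: str) -> None:
        """Log one audit flag for this query-result pair."""
        self.flags[(query, result_id)] += 1

    def penalty(self, query: str, result_id: str) -> float:
        # Each accumulated flag subtracts a fixed amount (0.5, arbitrary)
        # from the retrieval score.
        return 0.5 * self.flags[(query, result_id)]


def rerank(
    query: str,
    scored: list[tuple[str, float]],
    store: FeedbackStore,
) -> list[tuple[str, float]]:
    """Apply audit-derived penalties, then sort by adjusted score."""
    adjusted = [(rid, s - store.penalty(query, rid)) for rid, s in scored]
    return sorted(adjusted, key=lambda x: x[1], reverse=True)
```

In a production system the logged flags would more likely feed offline retraining (e.g., as hard negatives) rather than a per-query penalty table, but the loop structure, audit, log, adjust, is the same.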