🤖 AI Summary
This work addresses the challenge of optimizing document ranking in competitive search environments. We propose a controllable, prompt-engineering-based document rewriting method that leverages large language models (LLMs). Unlike conventional retrieval-augmented or end-to-end re-ranking approaches, our method explicitly models the LLM as a **faithful and controllable editor**: structured prompts guide it to strengthen the match between a document and its target query, particularly for high-competition queries, while preserving semantic fidelity and textual quality. The framework integrates competitive ranking simulation, multi-granularity faithfulness constraints, and a deployment-oriented evaluation protocol. Evaluated on multiple real-world search ranking competitions, our approach achieves significant ranking gains (average +12.7% top-10 recall) while maintaining high content accuracy (>93.5%) and linguistic quality (4.6/5.0 in human evaluation). To our knowledge, this is the first work to formulate LLMs as explicit, fidelity-aware editors for competitive document rewriting.
📝 Abstract
We study prompting-based approaches with Large Language Models (LLMs) for modifying documents so as to promote their ranking in a competitive search setting. Our methods are inspired by prior work on leveraging LLMs as rankers. We evaluate our approach by deploying it as a bot in previous ranking competitions and in competitions we organized. Our findings demonstrate that our approach effectively improves document ranking while preserving high levels of faithfulness to the original content and maintaining overall document quality.
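The prompting-based rewriting described above can be sketched as a structured prompt handed to an LLM. The function name and the exact instruction wording below are illustrative assumptions, not the authors' actual prompt; the sketch only shows the general shape of a faithfulness-constrained rewriting prompt.

```python
# Hypothetical sketch: build a structured prompt that asks an LLM to rewrite
# a document for better ranking on a target query while staying faithful to
# the original. The prompt text is an assumption, not the paper's prompt.

def build_rewrite_prompt(query: str, document: str) -> str:
    """Compose a rewriting prompt with explicit faithfulness constraints."""
    return (
        "You are a faithful and controllable document editor.\n"
        f"Target query: {query}\n"
        "Rewrite the document below so it is more likely to be ranked highly "
        "for the target query, subject to these constraints:\n"
        "1. Preserve every factual claim made in the original document.\n"
        "2. Do not add information the original does not support.\n"
        "3. Keep the text fluent and of high overall quality.\n"
        f"Document:\n{document}\n"
        "Rewritten document:"
    )

prompt = build_rewrite_prompt(
    query="best wireless headphones",
    document="Our headphones offer 30-hour battery life and Bluetooth 5.3.",
)
```

The resulting string would then be sent to an LLM via whatever completion API is in use; the constraints in the prompt are what make the edit "controllable" rather than a free-form rewrite.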