🤖 AI Summary
To address unfair contributor attribution and high computational overhead in generative search engines, this paper proposes MaxShapley: the first algorithm integrating Shapley values with a decomposable max-sum utility function. It ensures attribution fairness—satisfying efficiency, symmetry, and null-player axioms—while reducing computational complexity from exponential to linear. Embedded within retrieval-augmented generation (RAG) pipelines, MaxShapley enables document-level contribution quantification in multi-hop question answering. Experiments on HotPotQA, MuSiQUE, and MS MARCO demonstrate that its attribution quality closely matches exact Shapley values (Spearman ρ > 0.92), with up to 8× faster runtime and significantly reduced memory consumption. This provides an efficient, scalable, and incentive-compatible foundation for fair content attribution in generative search ecosystems.
📝 Abstract
Generative search engines based on large language models (LLMs) are replacing traditional search, fundamentally changing how information providers are compensated. To sustain this ecosystem, we need fair mechanisms to attribute and compensate content providers based on their contributions to generated answers. We introduce MaxShapley, an efficient algorithm for fair attribution in generative search pipelines that use retrieval-augmented generation (RAG). MaxShapley is a special case of the celebrated Shapley value; it leverages a decomposable max-sum utility function to compute attributions with linear computation in the number of documents, as opposed to the exponential cost of Shapley values. We evaluate MaxShapley on three multi-hop QA datasets (HotPotQA, MuSiQUE, MS MARCO); MaxShapley achieves comparable attribution quality to exact Shapley computation, while consuming a fraction of its tokens--for instance, it gives up to an 8x reduction in resource consumption over prior state-of-the-art methods at the same attribution accuracy.