🤖 AI Summary
In edge AI-generated content (AIGC) services, information asymmetry arises between teleoperators and application service providers (ASPs): operators cannot observe ASPs' private settings, including the chosen number of diffusion steps, which hinders optimal incentive design.
Method: We propose the first online framework integrating large language models (LLMs) into incentive contract design. Our approach decomposes the contract problem into private-setting inference and contract derivation: an LLM iteratively refines a naive seed solver to approximately solve the NP-hard inference subproblem, and convex optimization then yields a near-optimal contract.
Contribution/Results: The core innovations are LLM-driven inference of the ASP's private settings and convex-optimization-based contract derivation. Evaluated on a Unity-based teleoperation platform, our framework improves the teleoperator's utility by 5–40% over baselines while preserving the ASP's participation incentives and maintaining service quality.
📝 Abstract
With the rapid growth in demand for AI-generated content (AIGC), edge AIGC service providers (ASPs) have become indispensable. However, designing incentive mechanisms that motivate ASPs to deliver high-quality AIGC services remains challenging, especially in the presence of information asymmetry. In this paper, we address bonus design between a teleoperator and an edge ASP when the teleoperator cannot observe the ASP's private settings and chosen actions (diffusion steps). We formulate this as an online learning contract design problem and decompose it into two subproblems: ASP setting inference and contract derivation. To tackle the NP-hard setting-inference subproblem with an unknown number of variables, we introduce a large language model (LLM)-empowered framework that iteratively refines a naive seed solver using the LLM's domain expertise. With the solution from the LLM-evolved solver, we solve the contract derivation problem directly via convex optimization and obtain a near-optimal contract. Simulation results on our Unity-based teleoperation platform show that our method boosts the teleoperator's utility by 5–40% compared to benchmarks, while preserving positive incentives for the ASP. The code is available at https://github.com/Zijun0819/llm4contract.
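To make the two-subproblem decomposition concrete, the following is a minimal, purely illustrative sketch. All model choices here are assumptions, not the paper's implementation: a toy concave quality function of diffusion steps, a stand-in `infer_asp_type` in place of the LLM-evolved inference solver, and a grid search in place of the convex-optimization step.

```python
def quality(s):
    """Toy service quality: concave, increasing in diffusion steps s (assumption)."""
    return s / (s + 10)

def asp_best_response(r, c, max_steps=50):
    """The ASP privately picks the step count maximizing its payoff
    r * quality(s) - c * s, where c is its private cost type."""
    return max(range(1, max_steps + 1), key=lambda s: r * quality(s) - c * s)

def infer_asp_type(r_observed, s_observed, candidate_types):
    """Subproblem 1 (stand-in for the LLM-evolved inference solver):
    pick the candidate cost type whose best response matches the observation."""
    return min(candidate_types,
               key=lambda c: abs(asp_best_response(r_observed, c) - s_observed))

def derive_contract(c, value=5.0, grid=None):
    """Subproblem 2 (stand-in for the convex-optimization step):
    search over bonuses r, keeping the ASP's payoff nonnegative (participation)."""
    grid = grid or [0.1 * k for k in range(1, 101)]
    best_r, best_u = None, float("-inf")
    for r in grid:
        s = asp_best_response(r, c)
        if r * quality(s) - c * s < 0:   # ASP would decline this contract
            continue
        u = (value - r) * quality(s)     # teleoperator's utility
        if u > best_u:
            best_r, best_u = r, u
    return best_r, best_u
```

The decomposition mirrors the abstract: the hard part (recovering the ASP's hidden type from observed behavior) is isolated in `infer_asp_type`, after which contract derivation becomes a tractable single-variable optimization.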