SibylSense: Adaptive Rubric Learning via Memory Tuning and Adversarial Probing

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of traditional scoring rubrics in open-ended generation tasks: high construction cost, poor generalization, and vulnerability to reward gaming. To overcome these issues, the authors propose an inference-time learning framework that keeps the rubric generator's parameters frozen while employing a tunable memory bank to store validated rubric items and an adversarial probing mechanism to continually refine the generated rubrics. To the authors' knowledge, this is the first approach to combine memory-based tuning with adversarial strategies for continuous, self-adaptive evolution of scoring rubrics. Experiments on two open-ended generation benchmarks show that the generated rubrics are more discriminative and robust, significantly improving downstream reinforcement learning performance over static and non-adaptive baselines.

📝 Abstract
Designing aligned and robust rewards for open-ended generation remains a key barrier to RL post-training. Rubrics provide structured, interpretable supervision, but scaling rubric construction is difficult: expert rubrics are costly, prompted rubrics are often superficial or inconsistent, and fixed-pool discriminative rubrics can saturate and drift, enabling reward hacking. We present SibylSense, an inference-time learning approach that adapts a frozen rubric generator through a tunable memory bank of validated rubric items. Memory is updated via verifier-based item rewards measured by reference-candidate answer discriminative gaps from a handful of examples. SibylSense alternates memory tuning with a rubric-adversarial policy update that produces rubric-satisfying candidate answers, shrinking discriminative gaps and driving the rubric generator to capture new quality dimensions. Experiments on two open-ended tasks show that SibylSense yields more discriminative rubrics and improves downstream RL performance over static and non-adaptive baselines.
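The loop described in the abstract (validate rubric items by their reference-candidate discriminative gap, keep the most discriminative ones in memory, then let an adversarial policy satisfy them so their gaps shrink and new items must surface) can be sketched as a toy. Everything below is an illustrative assumption, not the paper's implementation: rubric items are keyword checks, the "policy" simply learns to include the kept keywords, and the frozen generator is replaced by a fixed list of proposals.

```python
# Toy sketch of the SibylSense-style alternation from the abstract.
# Assumptions (not from the paper): rubric items are keywords, item_score is a
# substring check, and the adversarial policy trivially satisfies memory items.

def item_score(item, answer):
    """Binary verifier: does the answer satisfy this rubric item (keyword)?"""
    return 1.0 if item in answer else 0.0

def discriminative_gap(item, references, policy_words):
    """Mean score gap between reference answers and the policy's candidate."""
    candidate = " ".join(policy_words)  # rubric-adversarial candidate answer
    gaps = [item_score(item, ref) - item_score(item, candidate)
            for ref in references]
    return sum(gaps) / len(gaps)

def sibylsense_round(memory, proposals, references, policy_words, keep=3):
    # Memory tuning: score current memory plus newly proposed items by their
    # discriminative gap, and retain only the most discriminative ones.
    pool = memory + [it for it in proposals if it not in memory]
    scored = {it: discriminative_gap(it, references, policy_words) for it in pool}
    memory = sorted(pool, key=scored.get, reverse=True)[:keep]
    # Adversarial probing: the policy adapts to satisfy the kept items,
    # shrinking their gaps and forcing new quality dimensions next round.
    policy_words = policy_words + [it for it in memory if it not in policy_words]
    return memory, policy_words

references = ["cites evidence and states limitations clearly",
              "cites evidence and gives a concrete example"]
proposals = ["evidence", "limitations", "example", "clearly"]  # "frozen generator"
memory, policy = [], []
for _ in range(2):
    memory, policy = sibylsense_round(memory, proposals, references, policy)
print(memory)  # items the policy already satisfies drop in rank after round 2
```

After round one the policy satisfies the top items, so in round two their gaps go to zero or negative and a previously unkept item ("clearly") rises into memory, mirroring the self-adaptive evolution the abstract claims.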
Problem

Research questions and friction points this paper is trying to address.

reward design
open-ended generation
rubric learning
reward hacking
reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive rubric learning
memory tuning
adversarial probing
reward alignment
inference-time learning