Meta-TTRL: A Metacognitive Framework for Self-Improving Test-Time Reinforcement Learning in Unified Multimodal Models

📅 2026-03-16
📈 Citations: 0
Influential: 0
📝 Abstract
Existing test-time scaling (TTS) methods for unified multimodal models (UMMs) in text-to-image (T2I) generation primarily rely on search or sampling strategies that produce only instance-level improvements, limiting the ability to learn from prior inferences and accumulate knowledge across similar prompts. To overcome these limitations, we propose Meta-TTRL, a metacognitive test-time reinforcement learning framework. Meta-TTRL performs test-time parameter optimization guided by model-intrinsic monitoring signals derived from the meta-knowledge of UMMs, achieving self-improvement and capability-level improvement at test time. Extensive experiments demonstrate that Meta-TTRL generalizes well across three representative UMMs, including Janus-Pro-7B, BAGEL, and Qwen-Image, achieving significant gains on compositional reasoning tasks and multiple T2I benchmarks with limited data. We provide the first comprehensive analysis to investigate the potential of test-time reinforcement learning (TTRL) for T2I generation in UMMs. Our analysis further reveals a key insight underlying effective TTRL: metacognitive synergy, where monitoring signals align with the model's optimization regime to enable self-improvement.
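The core loop the abstract describes, updating model parameters at test time from a self-generated (intrinsic) reward rather than external labels, can be sketched in miniature. The sketch below is not the paper's method: the one-parameter "model", the quadratic `intrinsic_score`, and the group size and learning rate are all illustrative assumptions. It only shows the shape of a TTRL update: sample a group of outputs, score them with the model's own monitoring signal, and reinforce above-average samples (a group-normalized REINFORCE step).

```python
import random

random.seed(0)


def ttrl_step(theta, score_fn, group_size=8, lr=0.1, sigma=0.5):
    """One test-time RL update on a toy one-parameter 'model'.

    Samples a group of outputs around theta, scores each with the
    model-intrinsic signal, and nudges theta toward samples whose
    reward beats the group mean (group-normalized baseline).
    """
    samples = [random.gauss(theta, sigma) for _ in range(group_size)]
    rewards = [score_fn(s) for s in samples]
    baseline = sum(rewards) / len(rewards)
    # REINFORCE for a Gaussian policy: grad log p(s|theta) ∝ (s - theta)
    grad = sum(
        (r - baseline) * (s - theta) for r, s in zip(rewards, samples)
    ) / len(samples)
    return theta + lr * grad


# Hypothetical intrinsic monitor: the 'model' prefers outputs near 1.0.
intrinsic_score = lambda s: -(s - 1.0) ** 2

theta = 0.0
for _ in range(200):
    theta = ttrl_step(theta, intrinsic_score)
print(f"theta after 200 test-time updates: {theta:.2f}")
```

Because the reward comes from the model itself, repeated test-time updates accumulate across prompts instead of being discarded after each inference, which is the capability-level improvement the abstract contrasts with instance-level search.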
Authors:
- Lit Sin Tan (Tsinghua University)
- Junzhe Chen (Tsinghua University)
- Xiaolong Fu (JD.COM)
- Lichen Ma (JD.COM)
- Junshi Huang (Meituan)
- Jianzhong Shi (JD.COM)
- Yan Li (JD.COM)
- Lijie Wen (Tsinghua University)