BeamPERL: Parameter-Efficient RL with Verifiable Rewards Specializes Compact LLMs for Structured Beam Mechanics Reasoning

📅 2026-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether parameter-efficient reinforcement learning with verifiable rewards (RLVR) can enable compact language models (1.5B parameters) to genuinely acquire physical reasoning capabilities rather than relying on superficial pattern matching. The model is trained without teacher-provided reasoning traces to solve static beam problems, using binary correctness rewards derived from a symbolic solver. Results indicate that outcome-only rewards often lead the model to converge on fixed solution templates instead of internalizing the underlying physical principles. Notably, generalization peaks at intermediate training stages, with further optimization degrading robustness. The best-performing model achieves a 66.7% improvement over the baseline in Pass@1 accuracy and demonstrates compositional generalization; however, its performance deteriorates under variations in support topology, revealing anisotropic physical reasoning capabilities.

📝 Abstract
Can reinforcement learning with hard, verifiable rewards teach a compact language model to reason about physics, or does it primarily learn to pattern-match toward correct answers? We study this question by training a 1.5B-parameter reasoning model on beam statics, a classic engineering problem, using parameter-efficient RLVR with binary correctness rewards from symbolic solvers, without teacher-generated reasoning traces. The best BeamPERL checkpoint achieves a 66.7% improvement in Pass@1 over the base model. However, the learned competence is anisotropic: the model generalizes compositionally (more loads) but fails under topological shifts (moved supports) that require the same equilibrium equations. Intermediate checkpoints yield the strongest reasoning, while continued optimization degrades robustness while maintaining reward. These findings reveal a key limitation of outcome-level alignment: reinforcement learning with exact physics rewards induces procedural solution templates rather than internalization of governing equations. The precision of the reward signal - even when analytically exact - does not by itself guarantee transferable physical reasoning. Our results suggest that verifiable rewards may need to be paired with structured reasoning scaffolding to move beyond template matching toward robust scientific reasoning.
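The reward mechanism described above can be sketched concretely. Below is a minimal, hedged illustration of a binary verifiable reward in the RLVR spirit: a symbolic solver (here SymPy) computes the exact support reactions of a simply supported beam under a single point load, and the model's proposed answer earns reward 1 only if it matches. The specific setup (span `L`, load `P` at position `a`, a `model_answer` dict of reactions) is an assumption for illustration, not the paper's actual problem generator or training harness.

```python
# Hedged sketch of a binary correctness reward from a symbolic solver.
# Beam setup (span L, point load P at distance a from support A) is
# illustrative, not the paper's actual dataset.
import sympy as sp

def beam_reaction_reward(model_answer: dict, L: float, P: float, a: float,
                         tol: float = 1e-6) -> int:
    """Return 1 if the model's support reactions satisfy static
    equilibrium for a simply supported beam with one point load, else 0."""
    Ra, Rb = sp.symbols("Ra Rb")
    # Static equilibrium: sum of vertical forces = 0,
    # and sum of moments about support A = 0.
    eqs = [sp.Eq(Ra + Rb - P, 0), sp.Eq(Rb * L - P * a, 0)]
    sol = sp.solve(eqs, (Ra, Rb), dict=True)[0]
    ok = (abs(model_answer["Ra"] - float(sol[Ra])) < tol
          and abs(model_answer["Rb"] - float(sol[Rb])) < tol)
    return int(ok)

# Example: 10 m beam, 100 N load at 4 m -> exact reactions Ra=60 N, Rb=40 N
print(beam_reaction_reward({"Ra": 60.0, "Rb": 40.0}, L=10, P=100, a=4))
```

Because the solver is exact, the reward is "hard" in the paper's sense: no partial credit, and no signal about *why* an answer is wrong, which is consistent with the finding that such outcome-only rewards can induce template matching rather than internalized equilibrium reasoning.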
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
verifiable rewards
physical reasoning
language models
beam mechanics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-Efficient RL
Verifiable Rewards
Structured Reasoning
Physical Reasoning
Outcome-Level Alignment
Tarjei Paule Hage
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
Markus J. Buehler
Massachusetts Institute of Technology
Materials science · artificial intelligence · biomaterials · bioinspiration · failure