On Memorization of Large Language Models in Logical Reasoning

πŸ“… 2024-10-30
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 7
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
While large language models (LLMs) achieve high accuracy on logical reasoning benchmarks, it remains unclear whether such performance stems from genuine reasoning or mere memorization of training instances. Method: We introduce a dynamically generated Knights-and-Knaves benchmark and propose a per-sample "memory score" to quantify a model's reliance on memorized training examples. We further employ perturbation analysis, cross-difficulty generalization evaluation, representation probing, and fine-tuning on erroneous answers to systematically disentangle memorization from reasoning. Contribution/Results: Fine-tuned models achieve near-perfect accuracy on seen puzzles yet falter on slightly perturbed variants of them. Crucially, memorization and reasoning co-develop rather than reflecting simple overfitting; moreover, models exhibit interpretable, familiarity-driven switching between memory- and reasoning-based strategies. This work provides quantitative evidence and an analytical framework for characterizing the interplay between memorization and logical reasoning in LLMs.

πŸ“ Abstract
Large language models (LLMs) achieve good performance on challenging reasoning benchmarks, yet could also make basic reasoning mistakes. This contrasting behavior is puzzling when it comes to understanding the mechanisms behind LLMs' reasoning capabilities. One hypothesis is that the increasingly high and nearly saturated performance on common reasoning benchmarks could be due to the memorization of similar problems. In this paper, we systematically investigate this hypothesis with a quantitative measurement of memorization in reasoning tasks, using a dynamically generated logical reasoning benchmark based on Knights and Knaves (K&K) puzzles. We find that LLMs could interpolate and memorize the training puzzles (achieving near-perfect accuracy) after fine-tuning, yet they struggle with slight variations of these puzzles. On the other hand, we show that while fine-tuning leads to heavy memorization, it also consistently improves generalization performance. Through in-depth analyses with perturbation tests, cross difficulty-level transferability, probing model internals, and fine-tuning with wrong answers, we establish that LLMs develop reasoning skills on K&K puzzles alongside memorization. Finally, our analysis based on a per-sample memorization score sheds light on how LLMs switch between reasoning and memorization when solving logical puzzles. Our code and data are available at https://memkklogic.github.io.
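The K&K puzzles behind the benchmark are mechanically checkable: each islander is either a knight (always truthful) or a knave (always lying), and a candidate truth assignment is valid only if every knight's statement holds and every knave's fails. A minimal brute-force solver along these lines (an illustrative sketch, not the paper's actual puzzle generator) could be:

```python
from itertools import product

def solve_kk(n, statements):
    """Brute-force a Knights-and-Knaves puzzle with n islanders.

    Each islander is a knight (True) or a knave (False).
    statements[i] is a predicate over the full truth assignment;
    a knight's statement must hold, a knave's must fail.
    """
    solutions = []
    for assign in product([True, False], repeat=n):
        # An assignment is consistent iff each speaker's truthfulness
        # matches the truth value of what they said.
        if all(assign[i] == stmt(assign) for i, stmt in enumerate(statements)):
            solutions.append(assign)
    return solutions

# Classic two-person puzzle:
#   A says: "We are both knaves."  B says nothing.
statements = [
    lambda a: (not a[0]) and (not a[1]),  # A's claim
    lambda a: True,                       # B imposes no constraint
]
print(solve_kk(2, statements))  # [(False, True)]: A is a knave, B is a knight
```

Dynamically regenerating such puzzles (fresh names, statement structures, and difficulty levels) is what lets the benchmark distinguish models that memorized specific instances from models that can actually solve the underlying logic.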
Problem

Research questions and friction points this paper is trying to address.

Investigates memorization in LLMs during logical reasoning tasks.
Examines LLMs' ability to generalize versus memorize training data.
Explores how LLMs balance reasoning and memorization in puzzle solving.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamically generated logical reasoning benchmark
Quantitative measurement of memorization in reasoning
Per-sample memorization score analysis
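One plausible formulation of a per-sample memorization signal, consistent with the perturbation tests described in the abstract but not necessarily the paper's exact definition: count a solved puzzle as "memorized" when the model answers the original correctly but fails a slightly perturbed variant.

```python
def memory_score(solved_original, solved_perturbed):
    """Hypothetical memorization score (illustrative, not the paper's
    exact formula): the fraction of correctly solved original puzzles
    whose perturbed variants the model gets wrong.

    solved_original[i] / solved_perturbed[i] are 0/1 flags for
    puzzle i and its perturbation.
    """
    memorized = sum(
        1 for orig, pert in zip(solved_original, solved_perturbed)
        if orig and not pert
    )
    solved = sum(solved_original)
    return memorized / solved if solved else 0.0

# Model solves 4 of 5 originals, but only 1 of those survives perturbation:
print(memory_score([1, 1, 1, 1, 0], [1, 0, 0, 0, 0]))  # 0.75
```

A score near 1 indicates answers that collapse under minor rewording (memory-like behavior); a score near 0 indicates answers robust to perturbation (reasoning-like behavior).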
πŸ”Ž Similar Papers
No similar papers found.