Teaching by Failure: Counter-Example-Driven Curricula for Transformer Self-Improvement

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Transformer models exhibit fragile generalization on long sequences and structurally complex inputs. Method: We propose an automated curriculum-learning framework driven by failure cases, centered on an executable-verifier-guided counter-example discovery mechanism that dynamically generates challenging instances and constructs adaptive training curricula, without manual difficulty annotation. The approach integrates counter-example-informed data augmentation, verifier-based correctness checks, and Transformer fine-tuning to enable continuous self-correction. Contribution/Results: Experiments demonstrate up to a 30× improvement in sequence-length extrapolation on algorithmic reasoning and natural language tasks. Compared to uniform data augmentation, the method is 3.75× more computationally efficient, and it significantly outperforms both static training and conventional curriculum-learning baselines.

📝 Abstract
Transformer models often exhibit brittle extrapolation, failing on inputs that are longer or structurally more complex than those seen during training. We introduce Counter-Example-Driven Curricula (CEDC), an automated framework that improves model robustness by iteratively focusing on its own failures. At each step, CEDC uses the current model to generate a diverse set of candidate problems, employs a fast, executable verifier to identify incorrect predictions (counter-examples), and then fine-tunes the model on a dataset enriched with these discovered failures. We evaluate CEDC on a suite of algorithmic and natural language tasks, including integer addition, sorting, Dyck-2 language recognition, and three text classification benchmarks. Compared to static training and standard curriculum learning baselines, CEDC achieves up to 30x greater length extrapolation, is 3.75x more computationally efficient than uniform data augmentation, and requires no manual difficulty heuristics. We provide a detailed analysis of the counter-examples, showing how the curriculum naturally adapts to target progressively more complex error modes. Our findings establish verifier-guided, failure-driven learning as a simple, powerful, and efficient paradigm for enhancing the generalization capabilities of Transformer models.
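The generate–verify–fine-tune loop described in the abstract is compact enough to sketch directly. Below is a minimal, illustrative Python version of one CEDC round on the sorting task from the evaluation suite. All names here (`cedc_round`, `brittle_sorter`, the random candidate generator) are our own assumptions for exposition, not the authors' code: in the paper, candidates are proposed by the current model and the discovered failures are mixed into a fine-tuning set, which the stub below only gestures at.

```python
import random

def verify_sort(problem, prediction):
    # Executable verifier: exact check against the ground-truth sorter.
    return prediction == sorted(problem)

def generate_candidates(rng, n, max_len):
    # Sample candidate problems. In CEDC these would be proposed by the
    # current model; random integer lists stand in for illustration.
    return [[rng.randint(0, 99) for _ in range(rng.randint(1, max_len))]
            for _ in range(n)]

def cedc_round(model_predict, rng, n_candidates=200, max_len=32):
    # One CEDC iteration: propose problems, verify the model's
    # predictions, and return the discovered counter-examples.
    counter_examples = []
    for problem in generate_candidates(rng, n_candidates, max_len):
        prediction = model_predict(problem)
        if not verify_sort(problem, prediction):
            counter_examples.append((problem, sorted(problem)))
    return counter_examples

def brittle_sorter(problem):
    # Stand-in for a trained model that extrapolates poorly:
    # correct on short inputs, wrong beyond length 8.
    return sorted(problem) if len(problem) <= 8 else list(problem)

if __name__ == "__main__":
    rng = random.Random(0)
    failures = cedc_round(brittle_sorter, rng)
    print(f"discovered {len(failures)} counter-examples")
    # In the full framework, `failures` would be mixed into the training
    # set and the model fine-tuned before the next round.
```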
Problem

Research questions and friction points this paper is trying to address.

Improves Transformer robustness against extrapolation failures
Automates curriculum learning using model-generated counter-examples
Efficiently enhances generalization on algorithmic and language tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated framework iteratively focuses on model failures
Uses a fast executable verifier to flag incorrect predictions for fine-tuning (see the verifier sketch after this list)
Achieves significant extrapolation gains without manual difficulty heuristics
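The framework's key primitive is a fast, exact, executable verifier. For Dyck-2 recognition, one of the evaluated tasks, such a verifier is a few lines of stack-based logic. The sketch below is a hypothetical example of that primitive, not code from the paper.

```python
def is_dyck2(s: str) -> bool:
    # Executable verifier for the Dyck-2 language: strings over (), []
    # that are balanced and properly nested.
    pairs = {")": "(", "]": "["}
    stack = []
    for ch in s:
        if ch in "([":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            return False  # character outside the Dyck-2 alphabet
    return not stack

# A model's label becomes a counter-example whenever it disagrees
# with this check, e.g.:
assert is_dyck2("([()])")
assert not is_dyck2("([)]")
```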