Are Language Models Up to Sequential Optimization Problems? From Evaluation to a Hegelian-Inspired Enhancement

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit significant performance degradation on sequential optimization problems (SOPs) as problem complexity increases. Method: The authors propose a philosophy-driven reasoning enhancement paradigm, introducing WorldGen, a framework for dynamically generating SOPs of controllable complexity, and formalizing Hegelian dialectical logic into ACE (Abstraction–Contradiction–Elimination), a training-free reasoning paradigm that integrates chain-of-thought reasoning with reflective prompt engineering. Contribution/Results: This work pioneers the systematic integration of philosophical principles into LLM inference pipelines, yielding substantial zero-shot gains in SOP solving. Evaluated across diverse SOP benchmarks, ACE achieves an average accuracy improvement of 37.2% over strong baselines, demonstrating its effectiveness, generalizability, and interpretability. The approach requires no parameter updates or task-specific training, offering a principled pathway for enhancing LLM reasoning in combinatorial optimization domains.
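The card does not spell out how the ACE cycle is wired together, but a training-free Abstraction–Contradiction–Elimination loop over an LLM can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual implementation: the `llm` callable and the prompt templates are hypothetical stand-ins for whatever model and prompts the authors use.

```python
def ace_solve(problem: str, llm, rounds: int = 3) -> str:
    """Hypothetical sketch of an Abstraction-Contradiction-Elimination loop.

    `llm` is any callable mapping a prompt string to a text response
    (e.g. a wrapper around a chat model). The prompt templates below
    are illustrative assumptions, not taken from the paper.
    """
    # Abstraction: distill the SOP into its essential structure and
    # propose an initial solution (the "thesis").
    thesis = llm(
        f"Abstract the core structure of this problem, reason step by step, "
        f"and propose a solution:\n{problem}"
    )
    for _ in range(rounds):
        # Contradiction: prompt the model to attack its own proposal
        # (the "antithesis"), i.e. reflective self-critique.
        antithesis = llm(f"Find flaws or counterexamples in this solution:\n{thesis}")
        if "no flaws" in antithesis.lower():
            break  # critique found nothing to eliminate; stop early
        # Elimination: resolve the contradiction into a revised
        # solution (the "synthesis"), which becomes the next thesis.
        thesis = llm(
            f"Problem:\n{problem}\nCurrent solution:\n{thesis}\n"
            f"Criticism:\n{antithesis}\n"
            f"Eliminate the identified flaws and give a revised solution:"
        )
    return thesis
```

Because the loop only manipulates prompts and responses, it requires no retraining or fine-tuning, matching the summary's "no parameter updates" claim; any chat-capable model can be dropped in as `llm`.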

📝 Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities across numerous fields, presenting an opportunity to revolutionize optimization problem-solving, a crucial, ubiquitous, and complex domain. This paper explores the proficiency of LLMs in handling Sequential Optimization Problems (SOPs). We introduce WorldGen, a dynamic framework for generating unseen SOPs with controllable complexities, to evaluate LLM performance. Our initial observations reveal that while LLMs perform well on simple SOPs, their performance significantly degrades with increased complexity. Motivated by this, we revisit philosophical hypotheses on reasoning to enhance LLM performance. Inspired by the influential framework of Hegelian Dialectics, we propose ACE, demonstrating how the performance of LLMs in SOP contexts can be significantly improved without any retraining or further fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs on sequential optimization problems
Enhance LLM performance using Hegelian Dialectics
Develop WorldGen for dynamic SOP generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic framework for SOP generation
Hegelian Dialectics-inspired enhancement
Improves LLM performance without retraining