🤖 AI Summary
Existing evaluations of large language models' (LLMs) programming capabilities lack systematic, multidimensional rigor, hindering fine-grained assessment of reasoning, robustness, and cognitive alignment. Method: We propose Code Triangle, the first evaluation framework to span three orthogonal dimensions: editorial analysis, code implementation, and test case generation. Built upon competitive programming benchmarks, it integrates human-written solutions, diverse test suites, and model ensembling to expose LLM limitations in solution diversity and robustness. Contribution/Results: Empirical analysis reveals a structural misalignment between the cognitive distributions of LLMs and those of human experts. To address this, we introduce a collaborative optimization paradigm, "human-authored content guidance plus hybrid model reasoning," which significantly improves performance and stability across code understanding and generation tasks. This work establishes a novel, interpretable paradigm for evaluating LLM programming competence and advancing cognitive alignment with human expertise.
📝 Abstract
Large language models (LLMs) have achieved remarkable progress in code generation, yet their true programming competence remains underexplored. We introduce the Code Triangle framework, which systematically evaluates LLMs across three fundamental dimensions: editorial analysis, code implementation, and test case generation. Through extensive experiments on competitive programming benchmarks, we find that while LLMs can form a self-consistent system across these dimensions, their solutions often lack the diversity and robustness of human programmers. We identify a significant distribution shift between model cognition and human expertise, with model errors tending to cluster due to training data biases and limited reasoning transfer. Our study demonstrates that incorporating human-generated editorials, solutions, and diverse test cases, as well as leveraging model mixtures, can substantially enhance both the performance and robustness of LLMs. Furthermore, we reveal both consistencies and inconsistencies in LLM cognition that may facilitate self-reflection and self-improvement, pointing to a promising direction for developing more powerful coding models.