Rethinking Code Similarity for Automated Algorithm Design with LLMs

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code similarity metrics struggle to capture essential differences in algorithmic logic and thus fail to effectively evaluate the novelty of algorithms generated by large language models (LLMs). To address this limitation, this work proposes BehaveSim, a novel approach that represents algorithms through their problem-solving trajectories (PSTrajs) and quantifies behavioral similarity during execution using dynamic time warping (DTW), thereby overcoming the constraints of syntax- or output-based methods. Integrated into LLM-driven automated algorithm design frameworks such as FunSearch and EoH, BehaveSim significantly enhances performance across three benchmark tasks and enables behavior-based clustering and strategic analysis of generated algorithms. The implementation and associated data are publicly released.
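The core mechanic described above — aligning two problem-solving trajectories with dynamic time warping and reading off a similarity score — can be sketched as follows. This is a minimal illustration of standard DTW over sequences of intermediate objective values, not the paper's released implementation; the function names (`dtw_distance`, `behave_sim`) and the distance-to-similarity mapping are assumptions for illustration.

```python
def dtw_distance(traj_a, traj_b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping between two problem-solving
    trajectories (PSTrajs), each given here as a sequence of
    intermediate objective values recorded during execution."""
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    # D[i][j] = minimal accumulated cost aligning traj_a[:i] with traj_b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(traj_a[i - 1], traj_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # step in traj_a
                                 D[i][j - 1],      # step in traj_b
                                 D[i - 1][j - 1])  # step in both
    return D[n][m]

def behave_sim(traj_a, traj_b):
    """Map DTW distance to a (0, 1] similarity score; 1.0 means the two
    algorithms traced identical trajectories (hypothetical mapping)."""
    return 1.0 / (1.0 + dtw_distance(traj_a, traj_b))
```

Because DTW warps the time axis, two algorithms that follow the same strategy at different speeds (e.g., one converging in 5 steps, the other in 20) still score as behaviorally similar, whereas output-equivalent algorithms with divergent search dynamics do not.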

📝 Abstract
The rise of Large Language Model-based Automated Algorithm Design (LLM-AAD) has transformed algorithm development by autonomously generating code implementations of expert-level algorithms. Unlike traditional expert-driven algorithm development, in the LLM-AAD paradigm, the main design principle behind an algorithm is often implicitly embedded in the generated code. Therefore, assessing algorithmic similarity directly from code, distinguishing genuine algorithmic innovation from mere syntactic variation, becomes essential. While various code similarity metrics exist, they fail to capture algorithmic similarity, as they focus on surface-level syntax or output equivalence rather than the underlying algorithmic logic. We propose BehaveSim, a novel method to measure algorithmic similarity through the lens of problem-solving behavior as a sequence of intermediate solutions produced during execution, dubbed as problem-solving trajectories (PSTrajs). By quantifying the alignment between PSTrajs using dynamic time warping (DTW), BehaveSim distinguishes algorithms with divergent logic despite syntactic or output-level similarities. We demonstrate its utility in two key applications: (i) Enhancing LLM-AAD: Integrating BehaveSim into existing LLM-AAD frameworks (e.g., FunSearch, EoH) promotes behavioral diversity, significantly improving performance on three AAD tasks. (ii) Algorithm analysis: BehaveSim clusters generated algorithms by behavior, enabling systematic analysis of problem-solving strategies--a crucial tool for the growing ecosystem of AI-generated algorithms. Data and code of this work are open-sourced at https://github.com/RayZhhh/behavesim.
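Application (i) in the abstract — promoting behavioral diversity inside an LLM-AAD loop such as FunSearch or EoH — could be realized by blending each candidate's fitness with its behavioral novelty when selecting parents. The sketch below is a hypothetical scoring rule, not the paper's exact integration; for simplicity it uses an element-wise L1 distance between trajectories as a stand-in where BehaveSim would use DTW alignment.

```python
import statistics

def novelty(idx, trajectories, traj_dist):
    """Mean distance from trajectory idx to every other trajectory:
    higher means more behaviorally novel (illustrative definition)."""
    others = [traj_dist(trajectories[idx], t)
              for j, t in enumerate(trajectories) if j != idx]
    return statistics.mean(others) if others else 0.0

def select_parents(population, k=2, alpha=0.5):
    """population: list of (code, trajectory, fitness) triples.
    Rank candidates by a weighted blend of raw fitness and
    behavioral novelty, then return the top-k codes."""
    trajs = [traj for _, traj, _ in population]
    # Simple stand-in distance; BehaveSim's DTW would replace this.
    traj_dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    scored = [
        (alpha * fit + (1 - alpha) * novelty(i, trajs, traj_dist), code)
        for i, (code, traj, fit) in enumerate(population)
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [code for _, code in scored[:k]]
```

The intent is that a candidate with a slightly lower score but a distinct problem-solving trajectory can still be kept in the population, preventing the search from collapsing onto syntactic variants of one strategy.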
Problem

Research questions and friction points this paper is trying to address.

algorithmic similarity
code similarity
Large Language Models
Automated Algorithm Design
problem-solving behavior
Innovation

Contribution

Methods, ideas, or system contributions that make the work stand out.

algorithmic similarity
problem-solving trajectories
dynamic time warping
LLM-based Automated Algorithm Design
behavioral diversity
Rui Zhang
Department of Computer Science, City University of Hong Kong
Zhichao Lu
City University of Hong Kong
Evolutionary Computation · Bilevel Optimization · Neural Architecture Search