RevoNAD: Reflective Evolutionary Exploration for Neural Architecture Design

📅 2025-12-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing three core challenges in LLM-driven Neural Architecture Design (NAD): non-differentiable feedback, mode collapse, and structural infeasibility, this paper proposes RevoNAD, a reflective evolutionary framework for NAD. RevoNAD integrates multi-round, multi-expert consensus reasoning; reward-variance-driven adaptive reflective exploration; and Pareto-front-guided non-dominated sorting for evolutionary selection. This co-optimizes architectural reasoning and search while preserving diversity, significantly improving feasibility and deployment efficiency. Evaluated across diverse benchmarks, including CIFAR, ImageNet16-120, COCO-5K, and Cityscapes, RevoNAD achieves state-of-the-art performance, demonstrating strong cross-task generalization and stability. Its key contribution lies in reformulating the non-differentiable feedback problem as a reflective evolutionary modeling task and in mitigating redundancy and feasibility drift via a dynamic multi-objective trade-off mechanism.

📝 Abstract
Recent progress in leveraging large language models (LLMs) has enabled Neural Architecture Design (NAD) systems to generate new architectures that are not limited to manually predefined search spaces. Nevertheless, LLM-driven generation remains challenging: the token-level design loop is discrete and non-differentiable, preventing feedback from smoothly guiding architectural improvement. As a result, these methods commonly suffer from mode collapse into redundant structures or drift toward infeasible designs when constructive reasoning is not well grounded. We introduce RevoNAD, a reflective evolutionary orchestrator that bridges LLM-based reasoning with feedback-aligned architectural search. First, RevoNAD presents a Multi-round Multi-expert Consensus to transform isolated design rules into meaningful architectural clues. Then, Adaptive Reflective Exploration adjusts the degree of exploration by leveraging reward variance: it explores when feedback is uncertain and refines once stability is reached. Finally, Pareto-guided Evolutionary Selection promotes architectures that jointly optimize accuracy, efficiency, latency, confidence, and structural diversity. Across CIFAR10, CIFAR100, ImageNet16-120, COCO-5K, and Cityscapes, RevoNAD achieves state-of-the-art performance. Ablation and transfer studies further validate the effectiveness of RevoNAD in enabling practically reliable and deployable neural architecture design.
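The abstract's "explore when feedback is uncertain, refine when stability is reached" behavior can be sketched as a variance-to-temperature mapping. This is an illustrative assumption, not the paper's actual formulation: the function name `exploration_temperature`, the linear mapping, and the parameters `t_min`, `t_max`, and `var_ref` are all hypothetical.

```python
import statistics

def exploration_temperature(rewards, t_min=0.2, t_max=1.0, var_ref=0.05):
    """Map the variance of recent rewards to an exploration temperature.

    High variance (uncertain feedback) -> temperature near t_max (explore more);
    low variance (stable feedback) -> temperature near t_min (refine).
    A minimal sketch of reward-variance-driven exploration; all parameter
    names and the linear mapping are assumptions for illustration.
    """
    if len(rewards) < 2:
        return t_max  # no evidence yet: keep exploring
    var = statistics.variance(rewards)
    ratio = min(var / var_ref, 1.0)  # clamp to [0, 1]
    return t_min + (t_max - t_min) * ratio
```

With a window of identical rewards the sketch returns `t_min` (pure refinement), while noisy rewards push the temperature toward `t_max`, widening the search.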
Problem

Research questions and friction points this paper is trying to address.

Bridges LLM reasoning with feedback-aligned architectural search
Prevents mode collapse into redundant or infeasible neural designs
Optimizes accuracy, efficiency, latency, confidence, and structural diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-round Multi-expert Consensus transfers design rules into architectural clues
Adaptive Reflective Exploration adjusts exploration based on reward variance
Pareto-guided Evolutionary Selection optimizes multiple objectives like accuracy and latency
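The Pareto-guided selection described above can be sketched with standard non-dominated sorting: a candidate survives if no other candidate is at least as good on every objective and strictly better on one. This is a generic sketch, not the paper's implementation; it assumes objective vectors are pre-normalized so that larger is better on every axis (e.g. accuracy up, negated latency).

```python
def dominates(a, b):
    """True if a dominates b: no worse on every objective, better on at least one.
    Assumes all objectives are oriented so that larger is better."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return indices of non-dominated candidates (the first Pareto front)."""
    front = []
    for i, a in enumerate(candidates):
        if not any(dominates(b, a) for j, b in enumerate(candidates) if j != i):
            front.append(i)
    return front
```

For example, with objective vectors `(accuracy, -latency)` of `(0.9, 0.5)`, `(0.8, 0.8)`, and `(0.7, 0.4)`, the first two trade off against each other and both survive, while the third is dominated and is pruned.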
Gyusam Chang
Korea University

Jeongyoon Yoon
Korea University

Shin han yi
Korea University

JaeHyeok Lee
Korea University

Sujin Jang
Principal Researcher, Samsung AI Center (DS Division)
Machine Learning, Robotics, Computer Vision, Human-Computer Interaction

Sangpil Kim
Korea University
Computer Vision