🤖 AI Summary
AI research is constrained by human cognitive biases that impose prior assumptions on the neural architecture search space. Method: We propose ASI-Arch, a system enabling a paradigm shift from automated optimization to automated innovation: it autonomously generates hypotheses, implements them as code, and closes the training-validation loop through large-scale experimentation (1,773 trials, 20,000 GPU-hours) and empirical learning. Contribution/Results: We empirically establish, for the first time, that scientific breakthroughs in architecture design obey a computational scaling law. Without any human-imposed architectural priors, ASI-Arch discovers 106 novel linear-attention architectures that outperform state-of-the-art baselines, uncovering emergent design principles beyond human intuition. This constitutes the first demonstration of AI-driven, scientifically grounded, autonomous neural architecture innovation, heralding a self-accelerating era in AI research and development.
📝 Abstract
While AI systems demonstrate exponentially improving capabilities, the pace of AI research itself remains linearly bounded by human cognitive capacity, creating an increasingly severe development bottleneck. We present ASI-Arch, the first demonstration of Artificial Superintelligence for AI research (ASI4AI) in the critical domain of neural architecture discovery--a fully autonomous system that shatters this fundamental constraint by enabling AI to conduct its own architectural innovation. Moving beyond traditional Neural Architecture Search (NAS), which is fundamentally limited to exploring human-defined spaces, we introduce a paradigm shift from automated optimization to automated innovation. ASI-Arch conducts end-to-end scientific research in the domain of architecture discovery: it autonomously hypothesizes novel architectural concepts, implements them as executable code, trains them, and empirically validates their performance through rigorous experimentation informed by accumulated past experience. ASI-Arch conducted 1,773 autonomous experiments over 20,000 GPU hours, culminating in the discovery of 106 innovative, state-of-the-art (SOTA) linear attention architectures. Like AlphaGo's Move 37, which revealed strategic insights invisible to human players, our AI-discovered architectures exhibit emergent design principles that systematically surpass human-designed baselines and illuminate previously unknown pathways for architectural innovation. Crucially, we establish the first empirical scaling law for scientific discovery itself--demonstrating that architectural breakthroughs can be scaled computationally, transforming research progress from a human-limited to a computation-scalable process. We provide a comprehensive analysis of the emergent design patterns and autonomous research capabilities that enabled these breakthroughs, establishing a blueprint for self-accelerating AI systems.