🤖 AI Summary
This work proposes a large language model–driven multi-agent framework capable of end-to-end generation and validation of a complete deep learning software stack from high-level instructions with minimal human intervention. The approach yields VibeTensor, a fully functional deep learning framework autonomously synthesized by AI agents, featuring a C++20 core, CUDA runtime (including streams, events, and graphs), reverse-mode automatic differentiation, a stream-ordered caching allocator, nanobind-based language bindings, and a C ABI plugin mechanism. The system successfully executes end-to-end training of models such as ViT and miniGPT on H100 and Blackwell GPUs, with microbenchmarks—including fused attention—demonstrating competitive performance. The authors also release the AI-generated kernel suite and a comprehensive test suite, substantially advancing the frontier of AI-assisted programming.
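To make the "stream-ordered caching allocator" component concrete, here is a minimal Python toy of the general technique (as used by, e.g., PyTorch's CUDA caching allocator). All names and structure here are illustrative assumptions, not VibeTensor's actual design: freed blocks are cached per (stream, rounded size) and reused by later same-stream allocations, avoiding both repeated device mallocs and cross-stream synchronization.

```python
from collections import defaultdict
import itertools

class ToyCachingAllocator:
    """Toy model of a stream-ordered caching allocator.

    Hypothetical sketch: freed blocks are not returned to the device;
    they go into a per-(stream, size) free pool and are handed back to
    later allocations on the same stream, where reuse is safe because
    operations on one stream execute in order.
    """
    def __init__(self, granularity=512):
        self.granularity = granularity
        self.free_pools = defaultdict(list)  # (stream, size) -> [blocks]
        self.device_mallocs = 0              # stand-in for real cudaMalloc calls
        self._ids = itertools.count()

    def _round(self, nbytes):
        g = self.granularity
        return ((nbytes + g - 1) // g) * g   # round up to reduce fragmentation

    def allocate(self, nbytes, stream):
        size = self._round(nbytes)
        pool = self.free_pools[(stream, size)]
        if pool:                             # cache hit: no device call needed
            return pool.pop()
        self.device_mallocs += 1             # would be a real device allocation
        return (next(self._ids), size)

    def free(self, block, stream):
        # Stream-ordered free: the block returns to this stream's pool only,
        # so reuse never requires synchronizing against another stream.
        self.free_pools[(stream, block[1])].append(block)

alloc = ToyCachingAllocator()
b = alloc.allocate(1000, "stream0")          # first allocation hits the device
alloc.free(b, "stream0")
b2 = alloc.allocate(900, "stream0")          # same stream, same rounded size
assert b2 == b and alloc.device_mallocs == 1 # cache hit: still one device malloc
```

The per-stream pooling is the key design point: caching freed memory avoids expensive device allocations on the hot path, while keying the cache by stream preserves ordering guarantees without events or syncs in the common case.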
📝 Abstract
VIBETENSOR is an open-source research system software stack for deep learning, generated by LLM-powered coding agents under high-level human guidance. In this paper, "fully generated" refers to code provenance: implementation changes were produced and applied as agent-proposed diffs; validation relied on agent-run builds, tests, and differential checks, without per-change manual diff review. It implements a PyTorch-style eager tensor library with a C++20 core (CPU+CUDA), a torch-like Python overlay via nanobind, and an experimental Node.js/TypeScript interface. Unlike thin bindings, VIBETENSOR includes its own tensor/storage system, schema-lite dispatcher, reverse-mode autograd, CUDA runtime (streams/events/graphs), a stream-ordered caching allocator with diagnostics, and a stable C ABI for dynamically loaded operator plugins. We view this release as a milestone for AI-assisted software engineering: it shows coding agents can generate a coherent deep learning runtime spanning language bindings down to CUDA memory management, validated primarily by builds and tests. We describe the architecture, summarize the workflow used to produce and validate the system, and evaluate the artifact. We report repository scale and test-suite composition, and summarize reproducible microbenchmarks from an accompanying AI-generated kernel suite, including fused attention versus PyTorch SDPA/FlashAttention. We also report end-to-end training sanity checks on three small workloads (sequence reversal, ViT, miniGPT) on NVIDIA H100 (Hopper, SM90) and Blackwell-class GPUs; multi-GPU results are Blackwell-only and use an optional CUTLASS-based ring-allreduce plugin gated on CUDA 13+ and sm103a toolchain support. Finally, we discuss failure modes in generated system software, including a "Frankenstein" composition effect where locally correct subsystems interact to yield globally suboptimal performance.
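For readers unfamiliar with reverse-mode autograd, the mechanism the abstract refers to can be sketched in a few lines. This is a generic scalar example of the technique (in the style of micrograd), not VibeTensor's actual tensor-level implementation: each operation records its inputs and a local backward rule, and `backward()` replays the chain rule over a reverse topological order of the graph.

```python
class Value:
    """Minimal scalar reverse-mode autograd node (illustrative only;
    VIBETENSOR's autograd operates on tensors, not scalars)."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():                      # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():                      # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward_fn()

x, y = Value(3.0), Value(4.0)
z = x * y + x                 # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
assert x.grad == 5.0 and y.grad == 3.0
```

The same structure scales from scalars to tensors: the core bookkeeping (recording parents, accumulating gradients, reverse topological traversal) is what a framework's autograd engine implements, with each operator contributing its own backward rule.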