AI Summary
Current artificial intelligence systems predominantly rely on monolithic models that tightly couple perception, reasoning, and decision-making, resulting in low transparency, limited scalability, and difficulty evolving over time. This work proposes a neuro-symbolic architecture centered on composability, introducing a "symbolic seam" mechanism that explicitly defines typed boundary objects, versioned constraint bundles, and decision traces at module boundaries. This approach enables data-driven components to be integrated cleanly with formal symbolic constraints. The architecture supports modular composition and dynamic evolution, significantly enhancing system verifiability, transparency, and scalability, and thereby offers a new paradigm for building evolvable intelligent systems.
Abstract
Current Artificial Intelligence (AI) systems are frequently built around monolithic models that entangle perception, reasoning, and decision-making, a design that often conflicts with established software architecture principles. Large Language Models (LLMs) amplify this tendency, offering scale but limited transparency and adaptability. To address this, we argue for composability as a guiding principle that treats AI as a living architecture rather than a fixed artifact. We introduce symbolic seams: explicit architectural breakpoints where a system commits to inspectable, typed boundary objects, versioned constraint bundles, and decision traces. We describe how seams enable a composable neuro-symbolic design that combines the data-driven adaptability of learned components with the verifiability of explicit symbolic constraints, yielding strengths neither paradigm achieves alone. By treating AI systems as assemblies of interchangeable parts rather than indivisible wholes, we outline a direction for intelligent systems that are extensible, transparent, and amenable to principled evolution.
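To make the seam idea concrete, the following is a minimal sketch, not the paper's implementation: all names (`BoundaryObject`, `ConstraintBundle`, `DecisionTrace`, `cross_seam`) are hypothetical. It shows a typed boundary object emitted by a learned component being checked against a versioned bundle of symbolic constraints, with the outcome recorded as an inspectable decision trace.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical typed boundary object: the inspectable value a learned
# component commits to at the seam (here, a toy classification result).
@dataclass(frozen=True)
class BoundaryObject:
    label: str
    confidence: float

# Versioned constraint bundle: explicit symbolic checks applied at the seam.
@dataclass
class ConstraintBundle:
    version: str
    constraints: List[Callable[[BoundaryObject], bool]]

# Decision trace: records which constraints passed for a given object,
# tagged with the bundle version that produced it.
@dataclass
class DecisionTrace:
    bundle_version: str
    results: Dict[str, bool]

def cross_seam(obj: BoundaryObject, bundle: ConstraintBundle) -> DecisionTrace:
    """Validate a boundary object against a constraint bundle and emit a trace."""
    results = {c.__name__: c(obj) for c in bundle.constraints}
    return DecisionTrace(bundle_version=bundle.version, results=results)

# Example symbolic constraints (versioned together with the bundle).
def confidence_in_range(o: BoundaryObject) -> bool:
    return 0.0 <= o.confidence <= 1.0

def label_known(o: BoundaryObject) -> bool:
    return o.label in {"cat", "dog"}

bundle = ConstraintBundle(version="1.0.0",
                          constraints=[confidence_in_range, label_known])
trace = cross_seam(BoundaryObject(label="cat", confidence=0.93), bundle)
print(trace.bundle_version, trace.results)
```

Because the boundary object is typed and the constraint bundle is versioned, either side of the seam can be swapped out independently while every decision remains auditable through its trace.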