🤖 AI Summary
Current AI coding systems suffer from opaque decision-making, which hinders comprehension and trust among non-expert users. To address this, we propose an “explanation-first” intelligent coding paradigm that mandates concurrent generation of executable code and natural-language explanations, governed by two core principles: cognitive alignment (explanations must match users’ mental models) and semantic faithfulness (explanations must strictly preserve program semantics). Methodologically, we design a neuro-symbolic architecture that integrates symbolic logic constraints with neural representations, supporting automatic consistency verification at runtime and incorporating explanation-aware regularization during training. Unlike conventional post-hoc explanation or static-analysis approaches, our framework provides intrinsic, generation-time explainability. It establishes both theoretical foundations and practical implementation pathways for explainable AI programming, yielding substantial improvements in transparency, reliability, and user trust.
📝 Abstract
Intelligent coding systems are transforming software development by enabling users to specify code behavior in natural language. However, the opaque decision-making of AI-driven coders raises trust and usability concerns, particularly for non-expert users who cannot inspect low-level implementations. We argue that these systems should not only generate code but also produce clear, consistent justifications that bridge model reasoning and user understanding. To this end, we identify two critical justification properties, cognitive alignment and semantic faithfulness, and highlight the limitations of existing methods, including formal verification, static analysis, and post-hoc explainability. We advocate exploring neuro-symbolic approaches to justification generation, in which symbolic constraints guide model behavior during training, program semantics are enriched through neural representations, and consistency checks are automated at inference time.
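To make the final idea concrete, here is a minimal sketch of what an automated inference-time consistency check between generated code and its justification might look like. All names here (`ExplanationClaim`, `check_consistency`, the example claims) are hypothetical illustrations, not part of the proposed system; the sketch assumes claims have already been extracted from the natural-language justification as executable predicates.

```python
# Hypothetical sketch: checking a generated program against symbolic
# claims extracted from its natural-language justification.
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

@dataclass
class ExplanationClaim:
    """A symbolic constraint derived from a justification,
    e.g. 'the output is always sorted'."""
    description: str
    # Predicate over (input, output): returns True if the claim holds.
    predicate: Callable[[Any, Any], bool]

def check_consistency(program: Callable[[Any], Any],
                      claims: List[ExplanationClaim],
                      test_inputs: List[Any]) -> List[Tuple[str, Any, Any]]:
    """Run the program on each test input and verify every claim.
    Returns a list of (claim, input, output) violations; empty means
    the justification is consistent with observed behavior."""
    violations = []
    for x in test_inputs:
        y = program(x)
        for claim in claims:
            if not claim.predicate(x, y):
                violations.append((claim.description, x, y))
    return violations

# Example: a generated deduplicate-and-sort function, plus two claims
# its justification makes about the output.
generated = lambda xs: sorted(set(xs))
claims = [
    ExplanationClaim("output is sorted",
                     lambda x, y: y == sorted(y)),
    ExplanationClaim("output has no duplicates",
                     lambda x, y: len(y) == len(set(y))),
]
print(check_consistency(generated, claims, [[3, 1, 2, 1], [], [5, 5]]))
# → [] (no violations: behavior matches the justification)
```

A real system would derive the predicates from the explanation automatically (the neuro-symbolic component) rather than hand-writing them, and could fall back to retraining or regeneration when violations are found.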