Position: Intelligent Coding Systems Should Write Programs with Justifications

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI coding systems suffer from opaque decision-making, hindering comprehension and trust among non-expert users. To address this, we propose an “explanation-first” intelligent coding paradigm that mandates concurrent generation of executable code and natural-language explanations, governed by two core principles: cognitive alignment, ensuring explanations match users’ mental models, and semantic faithfulness, guaranteeing strict preservation of program semantics. Methodologically, we design a neuro-symbolic architecture that integrates symbolic logic constraints with neural representations, enabling automatic consistency verification at runtime and incorporating explanation-aware regularization during training. Unlike conventional post-hoc explanation or static-analysis approaches, our framework provides intrinsic, generation-time explainability. It establishes both theoretical foundations and practical implementation pathways for explainable AI programming, yielding substantial improvements in transparency, reliability, and user trust.

📝 Abstract
Intelligent coding systems are transforming software development by enabling users to specify code behavior in natural language. However, the opaque decision-making of AI-driven coders raises trust and usability concerns, particularly for non-expert users who cannot inspect low-level implementations. We argue that these systems should not only generate code but also produce clear, consistent justifications that bridge model reasoning and user understanding. To this end, we identify two critical justification properties, cognitive alignment and semantic faithfulness, and highlight the limitations of existing methods, including formal verification, static analysis, and post-hoc explainability. We advocate exploring neuro-symbolic approaches for justification generation, where symbolic constraints guide model behavior during training and program semantics are enriched through neural representations, enabling automated consistency checks at inference time.
Problem

Research questions and friction points this paper is trying to address.

Intelligent coding systems lack transparent justifications for non-experts
Existing methods fail to ensure cognitive alignment and semantic faithfulness
Neuro-symbolic approaches may bridge reasoning gaps in code generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-symbolic approaches for justification generation
Symbolic constraints guide model behavior
Automated consistency checks at inference