🤖 AI Summary
Problem: Code auditing poses significant comprehension barriers for non-expert developers due to the cognitive overhead of navigating complex codebases and formulating effective prompts for large language models (LLMs).
Method: This paper proposes a hierarchical, progressive code understanding paradigm enabled by an interactive LLM-based analysis system. The system integrates three core components: (1) CodeMap—a structural code visualization module; (2) context-aware conversational interfaces; and (3) a stepwise guidance engine. Crucially, it introduces the “Chain-of-Understanding” cognitive framework, the first to unify hierarchical reasoning with tightly coupled visualization–dialogue interaction.
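The paper does not publish CodeMap's implementation, but the hierarchical progression it describes (module overview, then individual functions, then variables) can be sketched as a prompt-chaining helper. Everything below is illustrative: the `build_chain` function and its prompt templates are hypothetical stand-ins for the stepwise guidance engine, not the authors' actual code.

```python
# Hypothetical sketch of a "Chain-of-Understanding" prompt sequence:
# coarse-to-fine prompts that a guidance engine could feed to an LLM.
# Function name and templates are assumptions, not from the paper.

def build_chain(module: str, functions: list[str],
                variables: dict[str, list[str]]) -> list[str]:
    """Return stepwise prompts, from high-level overview down to variables."""
    prompts = [f"Give a high-level overview of module '{module}'."]
    for fn in functions:
        prompts.append(f"Within '{module}', explain what function '{fn}' does.")
        for var in variables.get(fn, []):
            prompts.append(f"In '{fn}', explain the role of variable '{var}'.")
    return prompts

# Example: auditing a hypothetical auth module, one function deep.
chain = build_chain("auth.py", ["login"], {"login": ["session_token"]})
for prompt in chain:
    print(prompt)
```

Each prompt in the chain would be sent to the LLM together with the answers to the earlier, coarser prompts, so the model's explanation of a variable is grounded in the already-established module and function context.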
Contribution/Results: The design substantially reduces prompt engineering effort: user studies show a 62% reduction in manual prompt authoring time compared to both pure-LLM baselines and static visualization tools. It improves auditing efficiency and user engagement, earning consistent endorsement from both expert and novice developers.
📝 Abstract
Code auditing demands a robust understanding of codebases, an especially challenging task for end-user developers with limited expertise. To address this, we conducted formative interviews with experienced auditors and identified a Chain-of-Understanding approach, in which Large Language Models (LLMs) guide developers through hierarchical code comprehension, from high-level overviews down to specific functions and variables. Building on this, we incorporated the Chain-of-Understanding concept into CodeMap, a system offering interactive visualizations, stepwise guided analysis, and context-aware chatbot support. Through a within-subject user study with 10 participants of diverse backgrounds, along with interviews with 5 experts and 2 novices, CodeMap proved effective in reducing the manual effort of prompt engineering while enhancing engagement with visualization, outperforming both standalone LLMs and traditional static visualization tools.