🤖 AI Summary
In code review, pull request (PR)–embedded links are frequently overlooked by automated tools, leading to contextual gaps and increased cognitive load. This paper introduces the first large language model (LLM)–based approach for generating intelligent previews of PR-embedded links, integrating PR metadata (title, description, comments) with linked content to produce context-aware summaries. Automatic evaluation using BLEU, BERTScore, and compression ratio shows that context-aware generation significantly outperforms baseline methods. However, a user study reveals developers’ preference for concise, context-agnostic summaries—highlighting a critical misalignment between automatic metrics and real-world usability. Key contributions include: (1) the first LLM-based preview framework specifically designed for PR-embedded links; (2) a systematic analysis of context modeling and empirical comparison across variants; and (3) an open-source implementation and accompanying demonstration video.
📝 Abstract
Code review is a key practice in software engineering, where developers evaluate code changes to ensure quality and maintainability. Links to issues and external resources are often included in Pull Requests (PRs) to provide additional context, yet they are typically discarded in automated tasks such as PR summarization and code review comment generation. This limits the richness of information available to reviewers and increases cognitive load by forcing context-switching. To address this gap, we present AILINKPREVIEWER, a tool that leverages Large Language Models (LLMs) to generate previews of links in PRs using PR metadata, including titles, descriptions, comments, and link body content. We analyzed 50 engineered GitHub repositories and compared three approaches: Contextual LLM summaries, Non-Contextual LLM summaries, and Metadata-based previews. Results on metrics such as BLEU, BERTScore, and compression ratio show that contextual summaries consistently outperform the other approaches. However, in a user study with seven participants, most preferred non-contextual summaries, suggesting a trade-off between metric performance and perceived usability. These findings demonstrate the potential of LLM-powered link previews to enhance code review efficiency and to provide richer context for developers and automation in software engineering. The video demo is available at https://www.youtube.com/watch?v=h2qH4RtrB3E, and the tool and its source code can be found at https://github.com/c4rtune/AILinkPreviewer.
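To make the context-aware approach concrete, the pipeline described above (extract links embedded in a PR, then combine PR metadata with the linked content into an LLM prompt) could be sketched roughly as follows. This is a minimal illustration, not the tool's actual implementation: the function names, prompt wording, and data shapes are all hypothetical, and the real LLM call is omitted.

```python
import re

# Simple pattern for http(s) links embedded in PR text (illustrative, not exhaustive).
URL_PATTERN = re.compile(r"https?://\S+")


def extract_links(pr_text: str) -> list[str]:
    """Return all http(s) links found in a PR description or comment."""
    return URL_PATTERN.findall(pr_text)


def build_contextual_prompt(
    pr_title: str,
    pr_description: str,
    pr_comments: list[str],
    link_url: str,
    link_body: str,
) -> str:
    """Assemble a context-aware summarization prompt from PR metadata
    and the fetched body of the linked resource (hypothetical format)."""
    comments = "\n".join(f"- {c}" for c in pr_comments)
    return (
        "Summarize the linked content as a preview for a code reviewer, "
        "using the PR context below.\n"
        f"PR title: {pr_title}\n"
        f"PR description: {pr_description}\n"
        f"PR comments:\n{comments}\n"
        f"Link: {link_url}\n"
        f"Linked content:\n{link_body}\n"
        "Preview:"
    )


# Example: pull a link out of a PR description and build the prompt that
# would be sent to an LLM (the LLM call itself is not shown here).
description = "Fixes https://github.com/org/repo/issues/42 by caching results."
links = extract_links(description)
prompt = build_contextual_prompt(
    pr_title="Cache expensive lookups",
    pr_description=description,
    pr_comments=["LGTM after tests pass."],
    link_url=links[0],
    link_body="Issue #42: repeated lookups cause slow reviews...",
)
```

A non-contextual variant, by contrast, would prompt the model with only `link_body`, omitting the PR title, description, and comments.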