NeMo-Inspector: A Visualization Tool for LLM Generation Analysis

📅 2025-05-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of ensuring quality in large-scale synthetic data and the high cost of manual auditing, this paper introduces NeMo-Inspector, the first open-source, visualization-based analytical framework for closed-loop synthetic data quality optimization. It integrates a web-based interactive frontend (Plotly/Dash), LLM inference interfaces, and multi-dimensional quality assessment modules evaluating consistency, factual accuracy, and format compliance. The framework supports error attribution, sample provenance tracing, and model-level generation bias diagnosis, with fine-grained quality visualization enabled via interpretable heatmap representations. Evaluated on the GSM-Plus dataset, NeMo-Inspector reduces the low-quality sample rate from 46.99% to 19.51%. Furthermore, fine-tuning Meta-Llama-3-8B on cleaned data yields accuracy improvements of 1.92% on MATH and 4.17% on GSM8K, empirically validating its effectiveness in synthetic data curation and downstream model performance enhancement.
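The multi-dimensional quality assessment the summary describes (consistency, factual accuracy, format compliance) can be illustrated with a minimal rule-based filter over synthetic math samples. This is a hedged sketch only: the sample fields (`question`, `solution`, `reference`) and the `Answer:` convention are hypothetical illustrations, not NeMo-Inspector's actual data format or API.

```python
import re

# Hypothetical sample layout: each synthetic sample pairs a question with a
# model-generated solution that should end with an "Answer: <value>" line.
samples = [
    {"question": "2 + 3?", "solution": "2 + 3 = 5. Answer: 5", "reference": "5"},
    {"question": "4 * 6?", "solution": "4 * 6 = 26. Answer: 26", "reference": "24"},
    {"question": "10 - 7?", "solution": "three", "reference": "3"},
]

ANSWER_RE = re.compile(r"Answer:\s*(\S+)\s*$")

def check_format(sample):
    """Format compliance: the solution must end with an 'Answer: ...' line."""
    return ANSWER_RE.search(sample["solution"]) is not None

def check_accuracy(sample):
    """Factual accuracy: the extracted final answer must match the reference."""
    m = ANSWER_RE.search(sample["solution"])
    return m is not None and m.group(1) == sample["reference"]

def low_quality(sample):
    # A sample is flagged if any quality check fails.
    return not (check_format(sample) and check_accuracy(sample))

flagged = [s for s in samples if low_quality(s)]
print(f"low-quality rate: {len(flagged)}/{len(samples)}")
```

In the paper's workflow such automated flags would only surface candidates for inspection in the visual frontend; the numbers reported (46.99% down to 19.51%) come from the authors' analysis of GSM-Plus, not from this toy filter.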

๐Ÿ“ Abstract
Adapting Large Language Models (LLMs) to novel tasks and enhancing their overall capabilities often requires large, high-quality training datasets. Synthetic data, generated at scale, serves as a valuable alternative when real-world data is scarce or difficult to obtain. However, ensuring the quality of synthetic datasets is challenging, as developers must manually inspect and refine numerous samples to identify errors and areas for improvement. This process is time-consuming and requires specialized tools. We introduce NeMo-Inspector, an open-source tool designed to simplify the analysis of synthetic datasets with integrated inference capabilities. We demonstrate its effectiveness through two real-world cases. Analysis and cleaning of the synthetically generated GSM-Plus dataset with NeMo-Inspector led to a significant decrease in low-quality samples, from 46.99% to 19.51%. The tool also helped identify and correct generation errors in OpenMath models, improving accuracy by 1.92% on the MATH dataset and by 4.17% on the GSM8K dataset for a Meta-Llama-3-8B model fine-tuned on synthetic data generated from Nemotron-4-340B.
Problem

Research questions and friction points this paper is trying to address.

Ensuring quality of synthetic datasets for LLMs
Simplifying manual inspection of synthetic data errors
Improving model accuracy via synthetic data analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source tool for synthetic dataset analysis
Integrated inference capabilities for quality inspection
Visualization tool reduces low-quality samples significantly
Daria Gitman
NVIDIA Corporation, United States
Igor Gitman
Applied Scientist, NVIDIA
Large Language Models · Math Reasoning · Deep Learning
E. Bakhturina
NVIDIA Corporation, United States