🤖 AI Summary
To address the challenges of ensuring quality in large-scale synthetic data and the high cost of manual auditing, this paper introduces NeMo-Inspector, the first open-source, visualization-based analytical framework for closed-loop synthetic data quality optimization. It integrates a web-based interactive frontend (Plotly/Dash), LLM inference interfaces, and multi-dimensional quality assessment modules evaluating consistency, factual accuracy, and format compliance. The framework supports error attribution, sample provenance tracing, and model-level generation bias diagnosis, with fine-grained quality visualization enabled via interpretable heatmap representations. Evaluated on the GSM-Plus dataset, NeMo-Inspector reduces the low-quality sample rate from 46.99% to 19.51%. Furthermore, fine-tuning Meta-Llama-3-8B on cleaned data yields accuracy improvements of 1.92% on MATH and 4.17% on GSM8K, empirically validating its effectiveness in synthetic data curation and downstream model performance enhancement.
📝 Abstract
Adapting Large Language Models (LLMs) to novel tasks and enhancing their overall capabilities often requires large, high-quality training datasets. Synthetic data, generated at scale, serves as a valuable alternative when real-world data is scarce or difficult to obtain. However, ensuring the quality of synthetic datasets is challenging, as developers must manually inspect and refine numerous samples to identify errors and areas for improvement. This process is time-consuming and requires specialized tools. We introduce NeMo-Inspector, an open-source tool designed to simplify the analysis of synthetic datasets with integrated inference capabilities. We demonstrate its effectiveness through two real-world cases. Analysis and cleaning of the synthetically generated GSM-Plus dataset with NeMo-Inspector led to a significant decrease in low-quality samples, from 46.99% to 19.51%. The tool also helped identify and correct generation errors in OpenMath models, improving accuracy by 1.92% on the MATH dataset and by 4.17% on the GSM8K dataset for a Meta-Llama-3-8B model fine-tuned on synthetic data generated from Nemotron-4-340B.
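To make the quality-screening idea above concrete, here is a minimal sketch of rule-based consistency and format-compliance checks over synthetic math samples, plus a low-quality-rate metric. The specific checks, field names (`generation`, `expected_answer`), and the `#### <answer>` marker convention are illustrative assumptions for this sketch, not NeMo-Inspector's actual implementation.

```python
# Illustrative sketch: rule-based quality screening of synthetic math samples.
# The field names and the "#### <answer>" convention are assumptions, not
# NeMo-Inspector's real interface.
import re

def format_compliant(sample: dict) -> bool:
    # Format check: the generation must end with a final-answer marker.
    return re.search(r"####\s*-?\d+", sample["generation"]) is not None

def consistent(sample: dict) -> bool:
    # Consistency check: the extracted final answer must match the reference.
    m = re.search(r"####\s*(-?\d+)", sample["generation"])
    return m is not None and m.group(1) == str(sample["expected_answer"])

def low_quality_rate(samples: list) -> float:
    # Fraction of samples failing any quality check.
    bad = [s for s in samples if not (format_compliant(s) and consistent(s))]
    return len(bad) / len(samples)

samples = [
    {"generation": "3 + 4 = 7\n#### 7", "expected_answer": 7},
    {"generation": "The answer is seven.", "expected_answer": 7},  # no marker
    {"generation": "3 + 4 = 8\n#### 8", "expected_answer": 7},    # wrong answer
]
print(low_quality_rate(samples))  # 2 of 3 samples fail -> 0.6666666666666666
```

A framework like the one described would surface such per-check failure rates visually (e.g., as heatmaps over samples and checks) so developers can filter or regenerate the failing subset rather than inspect every sample by hand.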