🤖 AI Summary
Existing text-similarity metrics (e.g., BLEU, BERTScore) fail to capture logical dependencies and spatiotemporal constraints among sewing steps, hindering accurate evaluation of temporal ordering and spatial coherence in automatically generated sewing instructions.
Method: We propose a tree-structured automatic evaluation framework that models instruction sequences as ordered dependency trees, explicitly encoding both temporal precedence and spatial reliance among steps. Leveraging tree-aware representation learning and large language model (LLM)-generated counterfactual perturbations, we construct a domain-adapted robustness validation pipeline.
Contribution/Results: Our metric achieves strong correlations with human annotations (ρ = 0.89 with manual error counts and ρ = 0.92 with human quality scores), significantly outperforming baseline methods. It provides a novel, interpretable, and robust evaluation paradigm for embodied reasoning–oriented generation tasks.
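The core idea of the dependency-tree metric can be illustrated with a minimal sketch. This is not the paper's actual implementation; the step names, the `count_order_violations` helper, and the violation-counting rule are illustrative assumptions, showing only how encoding prerequisite (temporal-precedence) edges among steps lets a metric flag ordering errors that surface-level text similarity would miss.

```python
# Illustrative sketch (not the paper's implementation): each sewing step is a
# node whose prerequisite steps must appear earlier in the generated sequence.
# Counting precedence violations gives a simple order-sensitivity signal.

def count_order_violations(deps, order):
    """deps: {step: [prerequisite steps]}; order: generated step sequence.
    Returns how many prerequisite edges are violated by the ordering."""
    position = {step: i for i, step in enumerate(order)}
    violations = 0
    for step, prereqs in deps.items():
        for p in prereqs:
            if position[p] > position[step]:
                violations += 1
    return violations

# Toy dependency tree for a sleeve: hemming requires attaching,
# which requires cutting the fabric first.
deps = {
    "cut_sleeve": [],
    "attach_sleeve": ["cut_sleeve"],
    "hem_sleeve": ["attach_sleeve"],
}

count_order_violations(deps, ["cut_sleeve", "attach_sleeve", "hem_sleeve"])  # → 0
count_order_violations(deps, ["hem_sleeve", "attach_sleeve", "cut_sleeve"])  # → 2
```

A paraphrased but correctly ordered sequence scores zero violations here, while a fluent-sounding but reordered one is penalized, which is the behavior BLEU- or embedding-based scores cannot guarantee.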
📝 Abstract
In this paper, we propose a novel, automatic tree-based evaluation metric for LLM-generated step-by-step assembly instructions that more accurately reflects spatiotemporal aspects of construction than traditional metrics such as BLEU and BERT similarity scores. We apply our proposed metric to the domain of sewing instructions and show that it correlates better with manually annotated error counts as well as human quality ratings, demonstrating its superiority for evaluating the spatiotemporal soundness of sewing instructions. Further experiments show that our metric is more robust than traditional approaches against counterfactual examples specifically constructed to confound metrics that rely on textual similarity.