🤖 AI Summary
This work adapts self-consistency, a lightweight unsupervised technique for improving LLM reasoning, to a visual domain: the generation and verification of LLM-produced motion graphics trajectories. To reconcile trajectory diversity with semantic consistency, it models the family of shapes matching a prompt as a prototype trajectory paired with a group of geometric transformations (rigid, similarity, or affine), and introduces a clustering algorithm that automatically recovers a shape family by exploiting hierarchical relationships among candidate transformation groups. Incorporating these group-structured priors into the self-consistency framework improves LLM-based trajectory generation accuracy by 4-6% and yields 11% precision gains over vision-language model baselines on trajectory verification.
📝 Abstract
Self-consistency has proven to be an effective technique for improving LLM performance on natural language reasoning tasks in a lightweight, unsupervised manner. In this work, we study how to adapt self-consistency to visual domains. Specifically, we consider the generation and verification of LLM-produced motion graphics trajectories. Given a prompt (e.g., "Move the circle in a spiral path"), we first sample diverse motion trajectories from an LLM, and then identify groups of consistent trajectories via clustering. Our key insight is to model the family of shapes associated with a prompt as a prototype trajectory paired with a group of geometric transformations (e.g., rigid, similarity, and affine). Two trajectories can then be considered consistent if one can be transformed into the other under the warps allowable by the transformation group. We propose an algorithm that automatically recovers a shape family, using hierarchical relationships between a set of candidate transformation groups. Our approach improves the accuracy of LLM-based trajectory generation by 4-6%. We further extend our method to support verification, observing 11% precision gains over VLM baselines. Our code and dataset are available at https://majiaju.io/trajectory-self-consistency.
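The abstract's consistency test — two trajectories agree if one can be warped onto the other within a transformation group — can be sketched for the similarity group (rotation + isotropic scale + translation) using a closed-form Procrustes/Umeyama fit. This is a minimal illustration, not the paper's implementation: the function names, the equal-length 2-D point-sequence representation, and the relative-RMSE threshold `tol` are all assumptions made here for the sketch.

```python
import numpy as np

def fit_similarity(P, Q):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping 2-D point set P onto Q, via the Umeyama/Procrustes solution.
    P and Q are (n, 2) arrays with corresponding rows."""
    mu_P, mu_Q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_P, Q - mu_Q
    # SVD of the cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, d])           # 2-D case: guard against reflections
    R = Vt.T @ D @ U.T
    # Optimal isotropic scale and translation.
    s = (S * np.diag(D)).sum() / (Pc ** 2).sum()
    t = mu_Q - s * (R @ mu_P)
    return s, R, t

def consistent(P, Q, tol=1e-2):
    """Declare two trajectories consistent under the similarity group if the
    residual of the best similarity warp, relative to Q's spread, is < tol."""
    s, R, t = fit_similarity(P, Q)
    P_warp = s * (P @ R.T) + t
    rmse = np.sqrt(((P_warp - Q) ** 2).sum(axis=1).mean())
    spread = np.sqrt(((Q - Q.mean(axis=0)) ** 2).sum(axis=1).mean())
    return rmse / max(spread, 1e-12) < tol
```

Under this sketch, a spiral and a rotated, rescaled, translated copy of it would test as consistent, while a spiral and a straight line would not; swapping the similarity group for the rigid or affine group (the hierarchy the paper searches over) only changes the fitting step.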