🤖 AI Summary
Generative AI struggles to simultaneously ensure output validity (i.e., syntactic correctness) and stability (i.e., cross-sample consistency) in feature transformation. To address this, we propose the first LLM–ML collaborative framework that synergistically integrates symbolic generation with gradient-driven search: a teacher LLM generates high-quality exemplars; a student LLM models semantic constraints via knowledge distillation; and latent-space embedding-guided directed search, coupled with multi-solution probability fusion, jointly guarantees validity and stability. Our method uniquely unifies LLMs' symbolic reasoning capabilities, ML-based gradient optimization, and sequence embedding representations. Evaluated across multiple datasets, it improves downstream task performance by 5%, reduces syntactic error rates by 48%, and significantly enhances robustness, while also revealing LLMs' latent capacity for deep structural understanding of the original features.
📝 Abstract
Feature transformation enhances data representation by deriving new features from the original data. Generative AI offers potential for this task, but faces challenges in stable generation (consistent outputs) and valid generation (error-free sequences). Existing methods, hampered by traditional ML's low validity and LLMs' instability, fail to resolve both. We find that LLMs ensure valid syntax, while ML's gradient-steered search stabilizes performance. To bridge this gap, we propose a teaming framework combining LLMs' symbolic generation with ML's gradient optimization. The framework comprises four steps: (1) golden example generation, which prepares high-quality samples using the grounded knowledge of the teacher LLM; (2) feature transformation sequence embedding and search, which uncovers potentially superior embeddings within the latent space; (3) student LLM feature transformation, which distills knowledge from the teacher LLM; and (4) LLM-ML decoder teaming, which combines the ML and student-LLM probabilities for valid and stable generation. Experiments on various datasets show that the teaming policy achieves a 5% improvement in downstream performance while eliminating nearly half of the error cases. The results also demonstrate the efficiency and robustness of the teaming policy. Additionally, we report findings on LLMs' capacity to understand the original data.
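As a rough illustration of step (4), decoder teaming can be viewed as fusing two per-token probability distributions at each decoding step. The sketch below is a minimal illustration, not the paper's exact formulation: the linear interpolation weight `alpha`, the grammar `valid_mask`, and the toy operator/feature vocabulary are all our own illustrative assumptions.

```python
import numpy as np

def fuse_decoder_probs(p_llm, p_ml, alpha=0.5, valid_mask=None):
    """Fuse student-LLM and ML decoder token distributions.

    Linear interpolation weighted by alpha, then an optional mask that
    zeroes out tokens that would make the transformation sequence
    syntactically invalid, followed by renormalization.
    (alpha, the mask, and linear fusion are illustrative assumptions.)
    """
    p = alpha * np.asarray(p_llm, dtype=float) + (1 - alpha) * np.asarray(p_ml, dtype=float)
    if valid_mask is not None:
        p = p * np.asarray(valid_mask, dtype=float)  # suppress invalid tokens
    total = p.sum()
    return p / total if total > 0 else p

# One greedy decoding step over a toy vocabulary ["+", "*", "f1", "f2"]:
p_llm = [0.1, 0.2, 0.6, 0.1]   # student LLM favors feature token "f1"
p_ml  = [0.3, 0.4, 0.2, 0.1]   # ML decoder favors operator "*"
mask  = [1, 1, 1, 0]           # suppose "f2" is invalid in the current state
fused = fuse_decoder_probs(p_llm, p_ml, alpha=0.6, valid_mask=mask)
next_token = int(np.argmax(fused))  # index of the chosen token
```

Here the mask guarantees validity (no invalid token can be emitted) while the interpolation lets the gradient-searched ML decoder temper the LLM's variability, which is the intuition the abstract describes.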