🤖 AI Summary
Existing feature transformation methods predominantly rely on computationally intensive encoder-decoder architectures, resulting in excessive parameter counts, slow inference, and poor scalability. This paper proposes a lightweight generative feature transformation framework that reformulates feature transformation as an autoregressive sequence reconstruction task built on a revised GPT architecture, jointly optimizing embedding-space continuity and downstream task performance. Crucially, the framework introduces a gradient-ascent-guided, learnable transformation mechanism that eliminates the explicit encoder, reducing model parameters by 62% on average and accelerating inference by 2.3×. Evaluated on multiple benchmark datasets, the method achieves state-of-the-art or competitive performance on diverse downstream tasks, including classification and regression, while demonstrating strong generalization and deployment efficiency.
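To make the gradient-ascent-guided search concrete, the following is a minimal sketch, assuming a learned sequence embedding and a frozen performance-estimation head that maps an embedding to a predicted downstream score; the function name, step count, learning rate, and the stand-in MLP head are illustrative assumptions, not the paper's implementation.

```python
import torch

def gradient_ascent_search(embedding: torch.Tensor,
                           performance_head: torch.nn.Module,
                           steps: int = 50,
                           lr: float = 0.1) -> torch.Tensor:
    """Refine a transformation-sequence embedding by gradient ascent on the
    downstream-task performance predicted from it (hypothetical sketch)."""
    z = embedding.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)  # only the embedding is updated; the head stays fixed
    for _ in range(steps):
        optimizer.zero_grad()
        score = performance_head(z)            # scalar estimate of downstream performance
        (-score).sum().backward()              # ascend by minimizing the negative score
        optimizer.step()
    return z.detach()

# Usage with a stand-in performance head (a tiny MLP), purely for illustration:
head = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
z_optimized = gradient_ascent_search(torch.randn(1, 64), head)
```

Because the search happens in a continuous embedding space, the discrete choice of transformation operations never has to be enumerated directly; the optimized embedding is later decoded back into a transformation sequence.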
📝 Abstract
Feature transformation plays a critical role in enhancing machine learning model performance by optimizing data representations. Recent state-of-the-art approaches formulate this task as a continuous embedding optimization problem, converting the discrete search into a learnable process. Although effective, these methods often rely on sequential encoder-decoder structures that incur high computational costs and large parameter counts, limiting scalability and efficiency. To address these limitations, we propose a novel framework that accomplishes automated feature transformation in four steps: transformation records collection, embedding space construction with a revised Generative Pre-trained Transformer (GPT) model, gradient-ascent search, and autoregressive reconstruction. In our approach, the revised GPT model serves two primary functions: (a) reconstructing feature transformation sequences and (b) constructing an embedding space from which downstream model performance can be estimated and improved. This multi-objective optimization framework reduces parameter size and accelerates the transformation process. Experimental results on benchmark datasets show that the proposed framework matches or exceeds baseline performance, with significant gains in computational efficiency. This work highlights the potential of transformer-based architectures for scalable, high-performance automated feature transformation.
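To illustrate the two roles of the revised GPT model described above, here is a hedged sketch of a decoder-only transformer with one head for autoregressive sequence reconstruction and one for performance estimation, trained under a joint loss. The class name `RevisedGPT`, the layer sizes, and the loss weight `alpha` are assumptions for illustration, not the paper's actual architecture or API.

```python
import torch
import torch.nn as nn

class RevisedGPT(nn.Module):
    """Illustrative decoder-only model with (a) a reconstruction head and
    (b) a performance-estimation head over transformation-token sequences."""
    def __init__(self, vocab_size=128, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)   # causal mask applied in forward
        self.lm_head = nn.Linear(d_model, vocab_size)            # (a) sequence reconstruction
        self.perf_head = nn.Linear(d_model, 1)                   # (b) performance estimation

    def forward(self, tokens):
        seq_len = tokens.size(1)
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=tokens.device), diagonal=1)
        h = self.backbone(self.embed(tokens), mask=causal_mask)
        return self.lm_head(h), self.perf_head(h.mean(dim=1))

def multi_objective_loss(model, tokens, scores, alpha=0.5):
    """Joint objective: next-token reconstruction plus performance regression."""
    logits, pred_scores = model(tokens[:, :-1])
    recon = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
    perf = nn.functional.mse_loss(pred_scores.squeeze(-1), scores)
    return alpha * recon + (1 - alpha) * perf

# Usage on dummy transformation records, purely for illustration:
model = RevisedGPT()
tokens = torch.randint(0, 128, (8, 12))   # token-encoded transformation sequences
scores = torch.rand(8)                    # measured downstream performance per record
loss = multi_objective_loss(model, tokens, scores)
loss.backward()
```

Coupling both heads to a single backbone is what lets the embedding space double as a performance surrogate, which is the property the gradient-ascent search step relies on.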