P-TA: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models

📅 2024-06-17
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of GANs (which lack commonsense knowledge) and of LLMs (which struggle to align with real data distributions and suffer from gradient truncation during discrete token decoding), this paper proposes P-TA, a framework that integrates Proximal Policy Optimization (PPO) into tabular data augmentation. It treats an LLM as the policy network within a GAN-style setup, so that discriminator feedback drives reinforcement-learning parameter updates. By casting LLM generation as a policy-optimization problem, P-TA sidesteps the non-differentiability of autoregressive token sampling, unifying knowledge-guided synthesis with distributional fidelity. Evaluated on three real-world tabular datasets, downstream models trained on P-TA-generated synthetic data achieve roughly 4% higher accuracy than state-of-the-art methods, with improvements in semantic coherence and statistical quality.
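To make the core mechanism concrete, here is a minimal sketch of PPO-style policy optimization on a toy categorical "token" policy, with a fixed reward standing in for the learned discriminator's feedback. Everything here (the vocabulary size, the `REAL_FREQ` reward table, hyperparameters) is an illustrative assumption, not the paper's actual implementation, which operates on an LLM's full token distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 4  # toy vocabulary standing in for an LLM's token space

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical stand-in for discriminator feedback: a sampled token's reward
# is that value's frequency in the "real" column, so real-looking tokens score
# higher. (In P-TA the discriminator is learned, not a fixed table.)
REAL_FREQ = np.array([0.1, 0.6, 0.2, 0.1])

def ppo_step(theta, n_samples=512, n_epochs=4, eps=0.2, lr=0.5):
    """One PPO iteration: sample from the frozen old policy, then run a few
    clipped-surrogate gradient-ascent epochs on the current logits."""
    pi_old = softmax(theta)
    tokens = rng.choice(VOCAB, size=n_samples, p=pi_old)
    rewards = REAL_FREQ[tokens]
    adv = rewards - rewards.mean()              # baseline-subtracted advantage
    one_hot = np.eye(VOCAB)[tokens]
    for _ in range(n_epochs):
        pi = softmax(theta)
        ratio = pi[tokens] / pi_old[tokens]     # importance-sampling ratio
        unclipped = ratio * adv
        clipped = np.clip(ratio, 1 - eps, 1 + eps) * adv
        # min(unclipped, clipped): the clipped branch has zero gradient,
        # so only samples where the unclipped term is selected contribute.
        coef = np.where(unclipped <= clipped, ratio * adv, 0.0)
        # d log pi(a) / d theta = one_hot(a) - pi  for a softmax policy
        grad = (coef[:, None] * (one_hot - pi)).mean(axis=0)
        theta = theta + lr * grad               # ascend the surrogate objective
    return theta

theta = np.zeros(VOCAB)                         # start from a uniform policy
for _ in range(50):
    theta = ppo_step(theta)
final_pi = softmax(theta)
print(final_pi)  # probability mass shifts toward the highest-reward token
```

Note that no gradient ever flows through the sampling step itself; the reward only reweights the policy's own log-probability gradients, which is exactly how PPO circumvents the gradient breakpoints that the paper identifies in discrete token decoding.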

📝 Abstract
A multitude of industries depend on accurate and reasonable tabular data augmentation for their business processes. Contemporary methodologies in generating tabular data revolve around utilizing Generative Adversarial Networks (GAN) or fine-tuning Large Language Models (LLM). However, GAN-based approaches are documented to produce samples with common-sense errors attributed to the absence of external knowledge. On the other hand, LLM-based methods exhibit a limited capacity to capture the disparities between synthesized and actual data distribution due to the absence of feedback from a discriminator during training. Furthermore, the decoding of LLM-based generation introduces gradient breakpoints, impeding the backpropagation of loss from a discriminator, thereby complicating the integration of these two approaches. To solve this challenge, we propose using proximal policy optimization (PPO) to apply GANs, guiding LLMs to enhance the probability distribution of tabular features. This approach enables the utilization of LLMs as generators for GANs in synthesizing tabular data. Our experiments demonstrate that PPO leads to an approximately 4% improvement in the accuracy of models trained on synthetically generated data over state-of-the-art across three real-world datasets.
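For reference, the objective that standard PPO maximizes is the clipped surrogate of Schulman et al.; in this paper's setting the advantage estimate \(\hat{A}_t\) would be derived from discriminator feedback (an assumption about the reward wiring, stated here for orientation):

```latex
L^{\mathrm{CLIP}}(\theta)
  = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
    \operatorname{clip}\!\bigl(r_t(\theta),\,1-\epsilon,\,1+\epsilon\bigr)\,\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

The clipping keeps each update close to the sampling policy, which is what makes it safe to fine-tune an LLM generator against a shifting discriminator signal.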
Problem

Research questions and friction points this paper is trying to address.

Improving tabular data augmentation accuracy
Integrating GANs and LLMs effectively
Reducing common-sense errors in synthesized data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proximal Policy Optimization (PPO) integration
Enhanced tabular data augmentation
Guided Large Language Models (LLMs)
Shuo Yang
Technical University of Munich, Germany
Chenchen Yuan
Technical University of Munich, Germany
Yao Rong
Technical University of Munich, Germany
Felix Steinbauer
Technical University of Munich, Germany
Gjergji Kasneci
Professor at the Technical University of Munich
Responsible Data Science · Responsible AI · Explainable Machine Learning · Algorithmic Accountability