AI Summary
Standard Lasso regression lacks domain-specific prior knowledge and is vulnerable to large language model (LLM) hallucinations. To address this, we propose LLM-Lasso: a novel framework that integrates retrieval-augmented generation (RAG) to extract semantic knowledge from domain-specific texts, enabling adaptive, tunable penalty weights for each feature, thereby jointly optimizing data-driven modeling and domain semantics. Our method is the first to directly embed LLM-based reasoning into the Lasso objective function and introduces an internal validation mechanism to quantify and suppress LLM output bias, significantly enhancing robustness. Crucially, the LLM never accesses raw training data, preserving data privacy and improving generalization. Extensive experiments across multiple biomedical tasks demonstrate that LLM-Lasso consistently outperforms standard Lasso and state-of-the-art feature selection methods.
Abstract
We introduce LLM-Lasso, a novel framework that leverages large language models (LLMs) to guide feature selection in Lasso ($\ell_1$) regression. Unlike traditional methods that rely solely on numerical data, LLM-Lasso incorporates domain-specific knowledge extracted from natural language, enhanced through a retrieval-augmented generation (RAG) pipeline, to seamlessly integrate data-driven modeling with contextual insights. Specifically, the LLM generates penalty factors for each feature, which are converted into weights for the Lasso penalty using a simple, tunable model. Features identified as more relevant by the LLM receive lower penalties, increasing their likelihood of being retained in the final model, while less relevant features are assigned higher penalties, reducing their influence. Importantly, LLM-Lasso includes an internal validation step that determines how much to trust the contextual knowledge in our prediction pipeline; this addresses key robustness challenges and mitigates potential inaccuracies or hallucinations from the LLM. In various biomedical case studies, LLM-Lasso outperforms standard Lasso and existing feature selection baselines, all while ensuring the LLM operates without prior access to the datasets. To our knowledge, this is the first approach to effectively integrate conventional feature selection techniques directly with LLM-based domain-specific reasoning.
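The core mechanism, per-feature penalty weights inside the $\ell_1$ term, can be sketched with off-the-shelf tools. The weighted problem $\min_\beta \frac{1}{2n}\|y - X\beta\|_2^2 + \alpha \sum_j w_j |\beta_j|$ reduces to a standard Lasso by rescaling each column of $X$ by $1/w_j$ and then dividing the fitted coefficients by $w_j$. The snippet below is a minimal illustration of that reduction, not the paper's implementation: the penalty weights are hard-coded stand-ins for what the RAG/LLM stage would produce, and `weighted_lasso` is a hypothetical helper name.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(X, y, penalty_weights, alpha=0.1):
    """Solve min (1/2n)||y - Xb||^2 + alpha * sum_j w_j |b_j|.

    Uses the standard column-rescaling trick: fit an ordinary Lasso on
    X / w (columns divided by their weights), then map coefficients back
    via b_j = b~_j / w_j, so each feature's effective penalty is alpha*w_j.
    """
    w = np.asarray(penalty_weights, dtype=float)
    model = Lasso(alpha=alpha, fit_intercept=True)
    model.fit(X / w, y)          # broadcasting divides each column j by w[j]
    return model.coef_ / w

# Toy data: only feature 0 is truly predictive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

# Illustrative weights standing in for LLM-derived relevance: a feature the
# LLM deems relevant gets a low weight (0.2), an irrelevant one a high
# weight (5.0), so the latter is pushed toward exactly zero.
weights = np.array([0.2, 1.0, 5.0])
coef = weighted_lasso(X, y, weights, alpha=0.05)
```

In the full framework these weights would come from the LLM's penalty factors after the internal validation step; here they simply demonstrate how low-weight features survive selection while high-weight features are shrunk out.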