LLM-Lasso: A Robust Framework for Domain-Informed Feature Selection and Regularization

📅 2025-02-15
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Standard Lasso regression relies solely on numerical data and cannot incorporate domain-specific prior knowledge, while naively injecting large language model (LLM) guidance risks inheriting hallucinations. To address this, we propose LLM-Lasso: a novel framework that uses retrieval-augmented generation (RAG) to extract semantic knowledge from domain-specific texts and turn it into tunable, feature-specific penalty weights, thereby jointly optimizing data-driven modeling and domain semantics. Our method is the first to embed LLM-based reasoning directly into the Lasso objective, and it introduces an internal validation mechanism that quantifies how much to trust the LLM's output, suppressing bias from inaccurate or hallucinated responses and significantly enhancing robustness. Crucially, the LLM never accesses the raw training data, preserving data privacy and improving generalization. Extensive experiments across multiple biomedical tasks demonstrate that LLM-Lasso consistently outperforms standard Lasso and state-of-the-art feature selection methods.

๐Ÿ“ Abstract
We introduce LLM-Lasso, a novel framework that leverages large language models (LLMs) to guide feature selection in Lasso $\ell_1$ regression. Unlike traditional methods that rely solely on numerical data, LLM-Lasso incorporates domain-specific knowledge extracted from natural language, enhanced through a retrieval-augmented generation (RAG) pipeline, to seamlessly integrate data-driven modeling with contextual insights. Specifically, the LLM generates penalty factors for each feature, which are converted into weights for the Lasso penalty using a simple, tunable model. Features identified as more relevant by the LLM receive lower penalties, increasing their likelihood of being retained in the final model, while less relevant features are assigned higher penalties, reducing their influence. Importantly, LLM-Lasso has an internal validation step that determines how much to trust the contextual knowledge in our prediction pipeline. Hence it addresses key challenges in robustness, making it suitable for mitigating potential inaccuracies or hallucinations from the LLM. In various biomedical case studies, LLM-Lasso outperforms standard Lasso and existing feature selection baselines, all while ensuring the LLM operates without prior access to the datasets. To our knowledge, this is the first approach to effectively integrate conventional feature selection techniques directly with LLM-based domain-specific reasoning.
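The core mechanism described in the abstract, per-feature penalty weights in the Lasso objective, can be illustrated with a weighted Lasso. The sketch below is not the paper's implementation: the penalty factors `w` stand in for hypothetical LLM-derived relevance scores, and the weighting is applied via the standard column-rescaling trick (penalizing $\beta_j$ with weight $w_j$ is equivalent to running an ordinary Lasso on $X_j / w_j$ and dividing the fitted coefficients by $w_j$).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
# Synthetic target: only the first two features carry signal.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

# Hypothetical LLM-derived penalty factors: features the LLM deems
# relevant get a lower weight, so they are penalized less.
w = np.ones(p)
w[:2] = 0.2

# Weighted Lasso via column rescaling: a standard Lasso on X / w
# penalizes alpha * sum_j |w_j * beta_j|; recover beta by dividing
# the fitted coefficients by w.
lasso = Lasso(alpha=0.1).fit(X / w, y)
beta = lasso.coef_ / w

selected = np.flatnonzero(beta != 0)
print(selected)
```

Driving `w_j` toward zero guarantees retention of feature `j`, while a large `w_j` effectively excludes it; the paper's internal validation step would, in this picture, tune how far `w` departs from the uniform weights of standard Lasso.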
Problem

Research questions and friction points this paper is trying to address.

How to integrate LLM-derived knowledge into Lasso regression
How to enhance feature selection with domain knowledge absent from the numerical data
How to remain robust when the LLM's output is inaccurate or hallucinated
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-Lasso embeds LLM-derived penalty weights directly in the Lasso objective
Uses RAG pipeline for domain-specific knowledge
Internal validation ensures robustness and accuracy
Erica Zhang
Stanford University
Stochastic Processes · Optimization · Machine Learning

Ryunosuke Goto
Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, USA

Naomi Sagan
EE PhD Student, Stanford University

Jurik Mutter
Divisions of Oncology and Hematology, Stanford University School of Medicine, Stanford, USA

Nick Phillips
Divisions of Oncology and Hematology, Stanford University School of Medicine, Stanford, USA

Ash Alizadeh
Divisions of Oncology and Hematology, Stanford University School of Medicine, Stanford, USA

Kangwook Lee
University of Wisconsin-Madison, KRAFTON AI
Machine Learning · Information Theory

Mert Pilanci
Stanford University
Machine Learning · Optimization · Neural Networks · Signal Processing · Information Theory

Robert Tibshirani
Professor of Biomedical Data Sciences, and of Statistics, Stanford University
Statistics · Data Science · Machine Learning