🤖 AI Summary
This study addresses the lack of traceability in scientific hypothesis generation. We propose a literature-grounded framework for automated hypothesis generation. Methodologically, we introduce a novel multi-task small language model (SLM) jointly trained on four objectives: reasoning-chain veracity classification, robustness to controlled perturbations, interpretable modeling, and evidence-source alignment, further integrated with retrieval-augmented literature search and structured evidence alignment. Contributions include: (1) pioneering reasoning-chain veracity classification as the primary supervisory signal, improving logical reliability (F1 +22%); (2) achieving an evidence-support score of 0.327 (p<0.01), significantly surpassing the baseline (0.305); and (3) expert evaluations indicating high feasibility and impact (both >3.5/5). Our framework advances hypothesis generation from opaque, "black-box" outputs toward verifiable, high-impact hypotheses supported by auditable, traceable reasoning.
📝 Abstract
Large language models have demonstrated promising performance in research ideation across scientific domains. Hypothesis development, the process of generating a highly specific declarative statement connecting a research idea with empirical validation, has received relatively less attention. Existing approaches trivially deploy retrieval augmentation and focus only on the quality of the final output, ignoring the underlying reasoning process behind ideation. We present $\texttt{HypER}$ ($\textbf{Hyp}$othesis Generation with $\textbf{E}$xplanation and $\textbf{R}$easoning), a small language model (SLM) trained for literature-guided reasoning and evidence-based hypothesis generation. $\texttt{HypER}$ is trained in a multi-task setting to discriminate between valid and invalid scientific reasoning chains in the presence of controlled distractions. We find that $\texttt{HypER}$ outperforms the base model in distinguishing valid from invalid reasoning chains (+22% average absolute F1), and generates better evidence-grounded hypotheses (0.327 vs. 0.305 for the base model) with high feasibility and impact as judged by human experts ($>$3.5 on a 5-point Likert scale).