🤖 AI Summary
Existing foundation models struggle to generate novel and scientifically feasible hypotheses, primarily due to the absence of a structured natural language generation (NLG) dataset tailored for scientific hypothesis generation (SHG).
Method: We introduce HypoGen—the first large-scale, structured NLG benchmark for SHG, comprising over 5,500 question–hypothesis pairs. It explicitly models hypothesis formation mechanisms and reasoning chains via a novel Bit-Flip-Spark ternary schema, formalizing SHG as a controllable and evaluable conditional language generation task. Our approach integrates structured data construction, fine-tuning of LLMs (Llama/Mistral), automated metrics (BLEU, ROUGE, Novelty), and LLM-as-judge ranking evaluation.
Contribution/Results: Experiments demonstrate significant improvements over baselines in hypothesis novelty, feasibility, and overall quality. HypoGen is publicly released on Hugging Face to advance reproducible research in SHG.
📝 Abstract
Generating novel and creative scientific hypotheses is a cornerstone in achieving Artificial General Intelligence. Large language and reasoning models have the potential to aid in the systematic creation, selection, and validation of scientifically informed hypotheses. However, current foundation models often struggle to produce scientific ideas that are both novel and feasible. One reason is the lack of a dedicated dataset that frames Scientific Hypothesis Generation (SHG) as a Natural Language Generation (NLG) task. In this paper, we introduce HypoGen, the first dataset of approximately 5,500 problem–hypothesis pairs, extracted from top-tier computer science conferences and structured with a Bit-Flip-Spark schema, where the Bit is the conventional assumption, the Spark is the key insight or conceptual leap, and the Flip is the resulting counterproposal. HypoGen uniquely integrates an explicit Chain-of-Reasoning component that reflects the intellectual process from Bit to Flip. We demonstrate that framing hypothesis generation as conditional language modelling, with the model fine-tuned on Bit-Flip-Spark and the Chain-of-Reasoning (and, at inference, providing only the Bit), leads to improvements in the overall quality of the hypotheses. Our evaluation employs automated metrics and LLM judge rankings for overall quality assessment. We show that by fine-tuning on our HypoGen dataset we improve the novelty, feasibility, and overall quality of the generated hypotheses. The HypoGen dataset is publicly available at huggingface.co/datasets/UniverseTBD/hypogen-dr1.
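To make the conditional-generation framing concrete, the sketch below formats a Bit-Flip-Spark record into a supervised training target and an inference-time prompt that contains only the Bit. The field names (`bit`, `spark`, `flip`, `chain_of_reasoning`), the prompt template, and the example record are illustrative assumptions, not the dataset's published schema.

```python
# Hedged sketch of SHG as conditional language modelling.
# Field names, template, and example record are assumptions
# for illustration, not the actual HypoGen schema.

def format_training_example(record: dict) -> str:
    """Full supervision target: Bit -> Chain-of-Reasoning -> Spark -> Flip."""
    return (
        f"Bit: {record['bit']}\n"
        f"Chain-of-Reasoning: {record['chain_of_reasoning']}\n"
        f"Spark: {record['spark']}\n"
        f"Flip: {record['flip']}"
    )

def format_inference_prompt(bit: str) -> str:
    """At inference only the Bit is supplied; the model generates the rest."""
    return f"Bit: {bit}\nChain-of-Reasoning:"

# Hypothetical record for demonstration purposes only.
example = {
    "bit": "Larger models always require proportionally more labelled data.",
    "chain_of_reasoning": "If representations transfer, label demand need not scale.",
    "spark": "Self-supervision can substitute for explicit labels.",
    "flip": "Pre-train on unlabelled corpora, then fine-tune with few labels.",
}

print(format_training_example(example))
print(format_inference_prompt(example["bit"]))
```

During fine-tuning the full formatted string would serve as the target sequence; at test time only the prompt from `format_inference_prompt` is given, so the model must complete the reasoning chain, Spark, and Flip itself.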