FRIDA to the Rescue! Analyzing Synthetic Data Effectiveness in Object-Based Common Sense Reasoning for Disaster Response

📅 2025-02-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited physical common sense reasoning capability of small language models (SLMs)—particularly regarding object functionality and physical state changes—in disaster response scenarios. To this end, the authors propose FRIDA, a collaborative synthetic data generation framework. Methodologically, FRIDA combines expert-curated seed data, synthetic data generation, instruction tuning (on LLaMA and Mistral models), and a systematic ablation analysis. Its key contributions are twofold: (1) a collaborative protocol in which domain experts and linguists jointly curate high-quality seed instructions; and (2) an empirical demonstration that training on only two core physical common sense categories—object function and physical state—outperforms training on the full synthetic dataset. Evaluated on 119 disaster-related common sense reasoning prompts, FRIDA-tuned models consistently outperform their base models. Notably, the model trained exclusively on the function + state subset achieves peak performance, supporting both the efficacy and the efficiency of decoupling common sense categories.

📝 Abstract
Large Language Models (LLMs) have the potential for substantial common sense reasoning. However, these capabilities often emerge only in larger models, which means smaller models that can be run locally are less helpful and less capable on certain reasoning tasks. To meet our problem space requirements, we fine-tune smaller LLMs to disaster domains, as these domains involve complex and low-frequency physical common sense knowledge. We introduce a pipeline to create Field Ready Instruction Decoding Agent (FRIDA) models, in which domain experts and linguists combine their knowledge to produce high-quality seed data that is used to generate synthetic data for fine-tuning. We create a set of 130 seed instructions for synthetic generation, a synthetic dataset of 25,000 instructions, and 119 evaluation instructions relating to both general and earthquake-specific object affordances. We fine-tune several LLaMA and Mistral instruction-tuned models and find that FRIDA models outperform their base models at a variety of sizes. We then run an ablation study to understand which kinds of synthetic data most affect performance and find that training on physical state and object function common sense knowledge alone improves over FRIDA models trained on all data. We conclude that the FRIDA pipeline is capable of instilling general common sense, but needs to be augmented with information retrieval for specific domain knowledge.
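The seed-to-synthetic pipeline described above can be sketched in miniature. In the paper, an LLM expands the 130 expert-curated seeds into 25,000 synthetic instructions; as a minimal runnable stand-in, the sketch below uses fixed paraphrase templates instead of an LLM call. All seed content, category names, and field names here are illustrative assumptions, not the paper's actual data.

```python
import random

# Hypothetical seed instructions: in FRIDA these are curated jointly by
# domain experts and linguists. The examples below are invented for illustration.
SEED_INSTRUCTIONS = [
    {"category": "object_function",
     "instruction": "What can a crowbar be used for during an earthquake rescue?"},
    {"category": "physical_state",
     "instruction": "What happens to a glass window when a building collapses?"},
]

# Paraphrase templates standing in for the LLM-based generator.
TEMPLATES = [
    "In a disaster response setting: {q}",
    "A first responder asks: {q}",
    "Answer using physical common sense: {q}",
]

def generate_synthetic(seeds, per_seed=3, rng=None):
    """Expand each seed into several synthetic training instructions,
    preserving the seed's common sense category label."""
    rng = rng or random.Random(0)
    synthetic = []
    for seed in seeds:
        for template in rng.sample(TEMPLATES, k=min(per_seed, len(TEMPLATES))):
            synthetic.append({
                "category": seed["category"],
                "instruction": template.format(q=seed["instruction"]),
            })
    return synthetic

data = generate_synthetic(SEED_INSTRUCTIONS)
print(len(data))  # 2 seeds x 3 templates = 6 synthetic instructions
```

Keeping the category label on every synthetic example is what later makes the per-category ablation study possible.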
Problem

Research questions and friction points this paper is trying to address.

Enhancing small LLMs for disaster response
Synthesizing data for domain-specific reasoning
Improving common sense in low-frequency scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tune smaller LLMs
Generate synthetic data
Enhance common sense reasoning
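The ablation finding—that training on physical state and object function data alone beats training on everything—amounts to filtering the synthetic dataset by category before fine-tuning. A minimal sketch, assuming each synthetic example carries a category label (the category names are assumptions, not the paper's exact taxonomy):

```python
# Hypothetical ablation: keep only the two categories the paper reports as
# most effective (object function + physical state).
KEEP_CATEGORIES = {"object_function", "physical_state"}

def ablation_subset(synthetic):
    """Return only the examples from the selected common sense categories."""
    return [ex for ex in synthetic if ex["category"] in KEEP_CATEGORIES]

examples = [
    {"category": "object_function", "instruction": "..."},
    {"category": "location", "instruction": "..."},
    {"category": "physical_state", "instruction": "..."},
]
print(len(ablation_subset(examples)))  # keeps 2 of the 3 examples
```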