AI Summary
This paper addresses the systemic absence of responsible practices in foundation model development by introducing the first comprehensive, multimodal resource guide covering text, vision, and speech modalities. Through systematic literature review, cross-modal taxonomy construction, and tool-to-capability mapping, it identifies four critical structural gaps: (1) scarcity of multimodal and multilingual tooling; (2) weak capabilities in data curation and safety evaluation; (3) insufficient system-level monitoring and reproducibility infrastructure; and (4) lack of environmental impact assessment and release governance frameworks. The project delivers a curated practice inventory comprising 250+ open-source tools and resources spanning data governance, training optimization, safety auditing, carbon footprint analysis, and responsible deployment. Empirically grounded, the findings inform policy formulation, tool development, and standardization efforts, advancing AI development from heuristic practice toward a verifiable, auditable, and sustainable engineering paradigm.
Abstract
Foundation model development attracts a rapidly expanding body of contributors, scientists, and applications. To help shape responsible development practices, we introduce the Foundation Model Development Cheatsheet: a growing collection of 250+ tools and resources spanning text, vision, and speech modalities. We draw on a large body of prior work to survey resources (e.g., software, documentation, frameworks, guides, and practical tools) that support informed data selection, processing, and understanding; precise and limitation-aware artifact documentation; efficient model training; advance awareness of the environmental impact from training; careful model evaluation of capabilities, risks, and claims; as well as responsible model release, licensing, and deployment practices. We hope this curated collection of resources helps guide more responsible development. The process of curating this list enabled us to review the AI development ecosystem, revealing which tools are critically missing, misused, or over-used in existing practices. We find that (i) tools for data sourcing, model evaluation, and monitoring are critically under-serving ethical and real-world needs, (ii) evaluations for model safety, capabilities, and environmental impact all lack reproducibility and transparency, (iii) text-centric and particularly English-centric analyses continue to dominate over multilingual and multi-modal analyses, and (iv) evaluation of systems, rather than just models, is needed so that capabilities and impact are assessed in context.