Fox-1: Open Small Language Model for Cloud and Edge

📅 2024-11-08
📈 Citations: 1
Influential: 0
🤖 AI Summary
Goal: Advance LLM democratization and enable efficient edge deployment in cloud-edge collaborative scenarios. Method: The authors propose Fox-1, an open-source small language model (1.6B parameters), featuring a novel three-stage data-curriculum pretraining strategy; a deeper architecture with an expanded vocabulary and Grouped Query Attention (GQA) for joint performance-efficiency optimization at scale; and training on 3 trillion tokens (pretraining) plus 5 billion tokens (instruction tuning), supporting variable-length sequences from 2K to 8K. Contribution/Results: Released under the Apache 2.0 license, Fox-1 matches or outperforms comparable models, including StableLM-2-1.6B and Gemma-2B, across multiple benchmarks. It achieves high inference throughput and low latency, empirically validating the feasibility of lightweight architectures that balance openness, efficiency, and practical utility.

📝 Abstract
We present Fox-1, a series of small language models (SLMs) consisting of Fox-1-1.6B and Fox-1-1.6B-Instruct-v0.1. These models are pre-trained on 3 trillion tokens of web-scraped document data and fine-tuned with 5 billion tokens of instruction-following and multi-turn conversation data. To improve pre-training efficiency, the Fox-1-1.6B model introduces a novel 3-stage data curriculum across all the training data, with sequence lengths ranging from 2K to 8K. In architecture design, Fox-1 features a deeper layer structure, an expanded vocabulary, and Grouped Query Attention (GQA), offering a performant and efficient architecture compared to other SLMs. Fox-1 achieves better or on-par performance on various benchmarks compared to StableLM-2-1.6B, Gemma-2B, Qwen1.5-1.8B, and OpenELM-1.1B, with competitive inference speed and throughput. The model weights have been released under the Apache 2.0 license, where we aim to promote the democratization of LLMs and make them fully accessible to the whole open-source community.
Problem

Research questions and friction points this paper aims to address.

Develop efficient small language models for cloud and edge
Improve pre-training efficiency with novel data curriculum
Enhance performance and accessibility of open-source SLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

3-stage data curriculum for efficient pre-training
Deeper layers and expanded vocabulary design
Grouped Query Attention for performance efficiency
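The card does not specify Fox-1's head configuration, so the following is a minimal NumPy sketch of the Grouped Query Attention idea under assumed dimensions (8 query heads sharing 2 key/value heads): multiple query heads attend against a shared, smaller set of KV heads, which shrinks the KV cache and speeds up inference relative to full multi-head attention.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """GQA sketch: query heads share a reduced set of KV heads.

    q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d).
    Query head h uses KV head h // (n_q_heads // n_kv_heads).
    """
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads        # query heads per KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                    # shared KV head for this group
        scores = q[h] @ k[kv].T / np.sqrt(d)          # (seq, seq)
        scores -= scores.max(axis=-1, keepdims=True)  # stable softmax
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)
        out[h] = w @ v[kv]
    return out

# Toy shapes (illustrative only, not Fox-1's actual configuration).
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 16))   # 8 query heads
k = rng.standard_normal((2, 4, 16))   # 2 shared KV heads
v = rng.standard_normal((2, 4, 16))
out = grouped_query_attention(q, k, v, n_kv_heads=2)
print(out.shape)  # (8, 4, 16)
```

With `n_kv_heads == n_q_heads` this reduces to standard multi-head attention, and with `n_kv_heads == 1` to multi-query attention; GQA interpolates between the two, cutting KV-cache memory by the group factor.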