JT-Safe: Intrinsically Enhancing the Safety and Trustworthiness of LLMs

📅 2025-10-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLMs’ hallucination and reliability issues originate in the pretraining phase, where training data contains factual inaccuracies, logical inconsistencies, and distributional biases, and lacks grounding in real-world semantic contexts. To address this, we propose Data with World Context (DWC), the first systematic approach to anchoring pretraining data to its spatiotemporal and real-world semantic context, enabling the construction of a high-quality dataset enriched with world context. Integrated with industrial-scenario data injection, continual pretraining, and targeted post-training, DWC enhances model safety and factual consistency at the source. Applying this framework to JT-35B-Base yields JT-Safe-35B, which achieves a +1.79% average improvement on safety and trustworthiness benchmarks. Remarkably, it surpasses comparably sized Qwen models while using only 6.2T pretraining tokens, empirically validating the effectiveness and necessity of world-context modeling for establishing foundational LLM reliability.

📝 Abstract
The hallucination and credibility concerns of large language models (LLMs) are global challenges that the industry is collectively addressing. Recently, significant advances have been made in post-training and inference techniques to mitigate these challenges. However, it is widely agreed that the unsafe behaviors and hallucinations of LLMs intrinsically originate from pre-training, involving both the pre-training data and the next-token-prediction learning mechanism. In this paper, we focus on enhancing pre-training data to improve the trustworthiness and safety of LLMs. Since the data is vast, it is almost impossible to purge it entirely of factual errors, logical inconsistencies, or distributional biases. Moreover, pre-training data lacks grounding in real-world knowledge: each piece of data is treated as a sequence of tokens rather than as a representation of a part of the world. To overcome these issues, we propose approaches to enhancing our pre-training data with its context in the world and to adding a substantial amount of data reflecting industrial scenarios. We argue that most source data are created by their authors for specific purposes in a particular spatiotemporal context; they have played a role in the real world. By incorporating the related world-context information, we aim to better anchor pre-training data within real-world scenarios, thereby reducing uncertainty in model training and enhancing the model's safety and trustworthiness. We refer to our Data with World Context as DWC. We continue pre-training an earlier checkpoint of JT-35B-Base with 1.5 trillion DWC tokens, and we introduce post-training procedures to activate the potential of DWC. Compared with the Qwen model of a similar scale, JT-Safe-35B achieves an average performance improvement of 1.79% on safety and trustworthiness evaluation benchmarks, while being pretrained on only 6.2 trillion tokens.
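The paper does not publish a data schema for DWC, but the core idea of anchoring each document to its spatiotemporal and provenance context can be illustrated concretely. Below is a minimal sketch, assuming a hypothetical tag-based record format: the WorldContext fields (author, date, location, source, purpose) and the to_dwc_record template are illustrative assumptions, not the paper's actual format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorldContext:
    """Hypothetical world-context metadata for one pretraining document.
    The paper does not specify a schema; all field names are illustrative."""
    author: Optional[str] = None    # who created the document
    date: Optional[str] = None      # when it was created (temporal anchor)
    location: Optional[str] = None  # where it was created (spatial anchor)
    source: Optional[str] = None    # venue/platform where it played a role
    purpose: Optional[str] = None   # why it was written

def to_dwc_record(text: str, ctx: WorldContext) -> str:
    """Prefix a raw document with its world context, so the model sees the
    text anchored in a real-world scenario rather than as a bare token
    sequence. The tag format is an assumption, not the paper's format."""
    fields = [(k, v) for k, v in vars(ctx).items() if v is not None]
    header = "\n".join(f"<{k}>{v}</{k}>" for k, v in fields)
    return f"<world_context>\n{header}\n</world_context>\n{text}"

# Example: anchoring one (synthetic) document before tokenization.
print(to_dwc_record(
    "The operator reported fewer call failures after the rollout.",
    WorldContext(author="network operations team", date="2024-03",
                 source="internal incident report",
                 purpose="post-rollout performance review"),
))
```

The design point is that the context travels with the text into the token stream, so during next-token prediction the model can condition on when, where, and why a document was written rather than treating it as an unanchored token sequence.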
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM safety by improving pre-training data quality
Addressing hallucinations through real-world context integration
Reducing factual errors and biases in language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhancing pre-training data with world context
Adding industrial scenario data to pre-training
Continuing pre-training with world-contextualized data tokens (see the mixing sketch after this list)
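The abstract states that an earlier JT-35B-Base checkpoint was continually pre-trained on 1.5 trillion DWC tokens within a 6.2T-token overall budget, but the mixing schedule is not disclosed. The sketch below shows one generic way to interleave DWC batches with replayed general-corpus batches during continual pretraining; the 25% DWC ratio, the stream names, and the batch placeholders are all assumptions for illustration.

```python
import random
from itertools import islice
from typing import Iterator

def mix_streams(general: Iterator[str], dwc: Iterator[str],
                dwc_ratio: float = 0.25, seed: int = 0) -> Iterator[str]:
    """Randomly interleave general-corpus and DWC batches for continual
    pretraining. The 25% ratio is illustrative only; the paper reports
    token totals (1.5T DWC within 6.2T), not a mixing schedule."""
    rng = random.Random(seed)
    while True:
        try:
            yield next(dwc if rng.random() < dwc_ratio else general)
        except StopIteration:  # stop when either stream is exhausted
            return

# Toy usage with placeholder batch identifiers.
general = (f"general_batch_{i}" for i in range(1000))
dwc = (f"dwc_batch_{i}" for i in range(1000))
for batch in islice(mix_streams(general, dwc), 8):
    print(batch)  # each item stands in for one causal-LM training step
```

In practice each string would be a tokenized batch feeding one training step; replaying general-corpus data alongside the new DWC data is a common way to limit catastrophic forgetting when continuing pre-training from an existing checkpoint.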
👥 Authors
Junlan Feng
Chief Scientist at China Mobile Research
Natural Language, Machine Learning, Speech Processing, Data Mining
Fanyu Meng
China Mobile Jiutian Research, Beijing, China
Chong Long
China Mobile Jiutian Research, Beijing, China
Pengyu Cong
China Mobile Jiutian Research, Beijing, China
Duqing Wang
China Mobile Jiutian Research, Beijing, China
Yan Zheng
China Mobile Jiutian Research, Beijing, China
Yuyao Zhang
Renmin University of China
Artificial Intelligence
Xuanchang Gao
China Mobile Jiutian Research, Beijing, China
Ye Yuan
China Mobile Jiutian Research, Beijing, China
Yunfei Ma
MIT media lab/Alibaba Group/Uber Technologies
Mobile networks, WAN, Transport Layer Protocols, ML
Zhijie Ren
China Mobile Jiutian Research, Beijing, China
Fan Yang
China Mobile Jiutian Research, Beijing, China
Na Wu
China Mobile Jiutian Research, Beijing, China
Di Jin
China Mobile Jiutian Research, Beijing, China
Chao Deng
China Mobile Jiutian Research, Beijing, China