We Can't Understand AI Using our Existing Vocabulary

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
A persistent conceptual gap between humans and AI systems impedes interpretability and control, because natural language lacks the vocabulary to precisely express human intent to machines or machine concepts to humans. Method: This position paper proposes deliberately constructing neologisms, new words that name precise human or machine concepts, as a mechanism for conceptual alignment, rather than relying on pre-existing semantics as conventional interpretability methods do. Through conceptual analysis and LLM prompting experiments, the authors design and empirically validate two such neologisms: a "length neologism" and a "diversity neologism" that govern output structure. Contribution/Results: These terms enable more precise, fine-grained control over large language model outputs (response length and sampling variability), establishing lexical innovation as a pathway toward a shared human-machine language for semantic alignment in human-AI collaboration.

📝 Abstract
This position paper argues that, in order to understand AI, we cannot rely on our existing vocabulary of human words. Instead, we should strive to develop neologisms: new words that represent precise human concepts that we want to teach machines, or machine concepts that we need to learn. We start from the premise that humans and machines have differing concepts. This means interpretability can be framed as a communication problem: humans must be able to reference and control machine concepts, and communicate human concepts to machines. Creating a shared human-machine language through developing neologisms, we believe, could solve this communication problem. Successful neologisms achieve a useful amount of abstraction: not too detailed, so they're reusable in many contexts, and not too high-level, so they convey precise information. As a proof of concept, we demonstrate how a "length neologism" enables controlling LLM response length, while a "diversity neologism" allows sampling more variable responses. Taken together, we argue that we cannot understand AI using our existing vocabulary, and expanding it through neologisms creates opportunities for both controlling and understanding machines better.
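One way to picture the neologism idea is to treat the new word as a single trainable embedding added to an otherwise frozen vocabulary, so that only the new word's representation is learned. This is a minimal toy sketch of that mechanics, not the paper's actual training setup: the `"<short>"` token, the embedding dimensions, and the target vector are all hypothetical stand-ins.

```python
# Toy sketch (assumed mechanics): base vocabulary embeddings stay frozen,
# and only the embedding of one new token -- a hypothetical "<short>"
# length neologism -- is trained toward a behavior-steering direction.

emb = {"the": [0.1, 0.2], "cat": [0.3, -0.1], "sat": [-0.2, 0.4]}  # frozen

neologism = [0.0, 0.0]   # the one trainable row, initialized at zero
target = [1.0, -0.5]     # stand-in for "shorter responses"; in practice this
                         # signal would come from gradients through the LLM
                         # on length-controlled training data

lr = 0.1
for _ in range(200):
    # gradient descent on ||w - target||^2; frozen rows are never touched
    neologism = [w - lr * 2 * (w - t) for w, t in zip(neologism, target)]

emb["<short>"] = neologism  # vocabulary grows by exactly one learned word
print(all(abs(w - t) < 1e-3 for w, t in zip(emb["<short>"], target)))  # True
```

The design point the sketch illustrates is the abstraction trade-off from the abstract: because the base embeddings never move, the model's existing concepts are untouched, and the new word carries exactly one reusable, precisely scoped piece of meaning.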

Problem

Research questions and friction points this paper is trying to address.

Develop new words for AI concepts
Create shared human-machine language
Enhance AI interpretability and control

Innovation

Methods, ideas, or system contributions that make the work stand out.

Develop neologisms for AI understanding
Frame interpretability as communication issue
Create shared human-machine language