🤖 AI Summary
This work addresses the limited representational flexibility of current AI systems, which hinders analogical and creative reasoning. Building on Representational Systems Theory (RST), which studies the structure and transformations of representations, the authors present Oruga, an implementation of several aspects of RST. Oruga comprises a core of data structures corresponding to RST concepts, a language for communicating with that core, and an engine that produces transformations via a method called structure transfer. The paper gives an overview of the core and language of Oruga, together with a brief example of the kind of cross-domain transformation structure transfer can execute, as a step toward machines that handle representations (e.g., diagrams, analogies) with human-like flexibility.
📝 Abstract
Humans use representations flexibly. We draw diagrams, change representations and exploit creative analogies across different domains. We want to harness this kind of power and endow machines with it to make them more compatible with human use. Previously we developed Representational Systems Theory (RST) to study the structure and transformations of representations. In this paper we present Oruga (caterpillar in Spanish; a symbol of transformation), an implementation of various aspects of RST. Oruga consists of a core of data structures corresponding to concepts in RST, a language for communicating with the core, and an engine for producing transformations using a method we call structure transfer. We give an overview of the core and language of Oruga, with a brief example of the kind of transformation that structure transfer can execute.
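To make the idea of structure transfer concrete, here is a minimal, purely illustrative sketch. It assumes a toy "construction" type (a token built from sub-tokens by a named constructor) and a hypothetical transfer schema mapping source-system constructors and tokens to target-system analogues; the names `Construction`, `structure_transfer`, and the schema format are invented for illustration and are not Oruga's actual API or RST's full formalism.

```python
from dataclasses import dataclass, field

# Toy stand-in for an RST-style construction: a token produced from
# input tokens by a named constructor. (Illustrative only; Oruga's
# actual data structures are considerably richer.)
@dataclass
class Construction:
    constructor: str
    token: str
    inputs: list = field(default_factory=list)

def structure_transfer(c: Construction, schema: dict) -> Construction:
    """Recursively rewrite a construction using a transfer schema that
    maps source-system constructors and tokens to target-system ones,
    preserving the construction's shape."""
    ctor = schema["constructors"].get(c.constructor, c.constructor)
    tok = schema["tokens"].get(c.token, c.token)
    return Construction(ctor, tok,
                        [structure_transfer(i, schema) for i in c.inputs])

# Example: carry an arithmetic sum over to a dot-diagram system,
# where "plus" becomes juxtaposition of groups of dots.
arith = Construction("plus", "1+2",
                     [Construction("id", "1"), Construction("id", "2")])
schema = {"constructors": {"plus": "juxtapose", "id": "dots"},
          "tokens": {"1+2": "•••", "1": "•", "2": "••"}}

diagram = structure_transfer(arith, schema)
print(diagram.constructor)  # juxtapose
print(diagram.token)        # •••
```

The point of the sketch is only that the transformation acts on the structure of the representation (the constructor tree), not on surface strings, which is what lets the same mechanism cross domains and modalities.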