🤖 AI Summary
This paper addresses the problem of computational implementation: what it takes for a cognitive system to perform a given computation over its internal representational vehicles. It proposes an account grounded in the theory of causal abstraction, formally characterizing the relationship between representation and computation as a causal dependency structure and evaluating the account empirically on deep learning models, in particular the intermediate representations of artificial neural networks. Its contributions are threefold: (i) it establishes causal abstraction as a rigorous theoretical foundation for computational explanation, specifying precise sufficient conditions under which a representational system implements a given computation; (ii) it bridges the philosophy of computation and machine learning, revealing an intrinsic link between representational generalizability and causal robustness; and (iii) it yields a philosophically rigorous yet computationally tractable framework for analyzing the computational character of AI and cognitive systems.
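The evaluation strategy the summary gestures at is often operationalized in the interpretability literature as an interchange intervention: patch the intermediate state produced on one input into a run on another input, and check whether the low-level output matches what the hypothesized high-level model predicts under the corresponding high-level intervention. Below is a minimal, hypothetical sketch in NumPy, not code from the paper; all names (`high_level`, `hidden`, `readout`, `interchange_accuracy`) are our own, and the toy "network" is constructed so the alignment holds by design.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): test whether a high-level
# causal model is implemented by a low-level system, via interchange
# interventions on an intermediate "representation".

# High-level model: intermediate variable S = x + y, output OUT = (S > z).
def high_level(x, y, z, s_override=None):
    s = x + y if s_override is None else s_override
    return s > z

# Low-level "system": a hidden state h computed from (x, y), then a readout.
# Here h trivially encodes S, so the alignment holds exactly; in a real
# analysis, h would be an activation vector from a trained network.
def hidden(x, y):
    return np.array([x + y])

def readout(h, z):
    return h[0] > z

def interchange_accuracy(pairs):
    """Fraction of (base, source) input pairs on which patching the source's
    hidden state into the base run yields the output the high-level model
    predicts under the analogous intervention on S."""
    hits = 0
    for (xb, yb, zb), (xs, ys, zs) in pairs:
        patched = readout(hidden(xs, ys), zb)                    # low-level patch
        predicted = high_level(xb, yb, zb, s_override=xs + ys)   # high-level patch
        hits += int(patched == predicted)
    return hits / len(pairs)

rng = np.random.default_rng(0)
pairs = [tuple(map(tuple, rng.integers(0, 10, (2, 3)))) for _ in range(200)]
print(interchange_accuracy(pairs))  # 1.0 iff the alignment holds on this data
```

A real analysis would replace `hidden` with activations extracted from a trained network and would search for (or learn) the alignment between network components and high-level variables.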
📝 Abstract
Explanations of cognitive behavior often appeal to computations over representations. What does it take for a system to implement a given computation over suitable representational vehicles within that system? We argue that the language of causality, and specifically the theory of causal abstraction, provides a fruitful lens on this topic. Drawing on current discussions in deep learning with artificial neural networks, we illustrate how classical themes in the philosophy of computation and cognition resurface in contemporary machine learning. We offer an account of computational implementation grounded in causal abstraction, and examine the role for representation in the resulting picture. We argue that these issues are most profitably explored in connection with generalization and prediction.
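As a gloss on "the theory of causal abstraction" (our paraphrase of a common formulation in that literature, not a definition taken from this paper): a high-level causal model $M_H$ abstracts a low-level model $M_L$, relative to a state map $\tau$ and an intervention map $\omega$, when intervening and then abstracting agrees with abstracting and then intervening:

$$
\tau\!\left(M_L^{\mathrm{do}(i)}\right) \;=\; M_H^{\mathrm{do}(\omega(i))} \quad \text{for every admissible low-level intervention } i.
$$

On this reading, implementation claims of the sort the abstract discusses amount to claims that such a commuting pair $(\tau, \omega)$ exists between the running system and the computation it is said to perform.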