Abstraction requires breadth: a renormalisation group approach

📅 2024-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper challenges the conventional assumption that network depth alone induces abstraction in neural networks, arguing instead that the breadth (i.e. the diversity and coverage) of the training data is the primary driver of genuinely abstract representations. Method: Drawing on renormalisation group (RG) theory from statistical physics, the authors bring RG fixed-point concepts into deep learning, constructing a data-agnostic hierarchical feature model as a theoretical benchmark for abstraction. They combine RG-based theoretical modelling with empirical evaluation and numerical validation using deep belief networks (DBNs). Contribution/Results: Experiments show that as the breadth of the training data increases, deep-layer representations converge to the RG fixed point, quantitatively supporting the breadth-driven abstraction mechanism. The work moves beyond depth-centric paradigms, establishing a theoretically grounded, quantifiable framework for characterising abstraction in neural representations.

📝 Abstract
Abstraction is the process of extracting the essential features from raw data while ignoring irrelevant details. This is similar to the process of focusing on large-scale properties, systematically removing irrelevant small-scale details, implemented in the renormalisation group of statistical physics. This analogy is suggestive because the fixed points of the renormalisation group offer an ideal candidate for a truly abstract -- i.e. data independent -- representation. It has been observed that abstraction emerges with depth in neural networks. Deep layers of a neural network capture abstract characteristics of data, such as "cat-ness" or "dog-ness" in images, by combining the lower level features encoded in shallow layers (e.g. edges). Yet we argue that depth alone is not enough to develop truly abstract representations. We advocate that the level of abstraction crucially depends on how broad the training set is. We address the issue within a renormalisation group approach where a representation is expanded to encompass a broader set of data. We take the unique fixed point of this transformation -- the Hierarchical Feature Model -- as a candidate for an abstract representation. This theoretical picture is tested in numerical experiments based on Deep Belief Networks trained on data of different breadth. These show that representations in deep layers of neural networks approach the Hierarchical Feature Model as the data gets broader, in agreement with theoretical predictions.
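The experiments described in the abstract train Deep Belief Networks (stacks of restricted Boltzmann machines) on datasets of different breadth. As a rough illustration of that setup -- not the authors' code -- here is a minimal NumPy sketch of a single RBM trained with one-step contrastive divergence (CD-1) on synthetic binary data whose "breadth" is the number of distinct prototype patterns. The dataset generator, hyperparameters, and all names (`make_dataset`, `train_rbm`, etc.) are our own assumptions for the sketch; stacking RBMs into a DBN and comparing deep representations to the Hierarchical Feature Model is beyond its scope.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_prototypes, n_samples=200, n_visible=16, flip_p=0.1):
    """Binary data whose 'breadth' is the number of prototype patterns."""
    protos = rng.integers(0, 2, size=(n_prototypes, n_visible))
    idx = rng.integers(0, n_prototypes, size=n_samples)
    data = protos[idx].copy()
    flips = rng.random(data.shape) < flip_p  # corrupt with bit flips
    return np.where(flips, 1 - data, data).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)  # visible bias
        self.c = np.zeros(n_hidden)   # hidden bias

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.05):
        """One full-batch step of contrastive divergence (CD-1)."""
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(v1)
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)

def train_rbm(data, n_hidden=8, epochs=50):
    rbm = RBM(data.shape[1], n_hidden)
    for _ in range(epochs):
        rbm.cd1_step(data)
    return rbm

# Narrow data (few prototypes) vs broad data (many prototypes).
narrow = make_dataset(n_prototypes=2)
broad = make_dataset(n_prototypes=32)
rep_narrow = train_rbm(narrow).hidden_probs(narrow)
rep_broad = train_rbm(broad).hidden_probs(broad)
```

In the paper's framework one would then stack such layers and ask how the deep-layer statistics of `rep_broad` versus `rep_narrow` compare to the Hierarchical Feature Model fixed point; the sketch only sets up the two training regimes.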
Problem

Research questions and friction points this paper is trying to address.

Does depth alone suffice to produce abstract representations in neural networks?
How does the breadth of the training data shape the level of abstraction?
Can the theoretical picture be validated empirically with Deep Belief Networks?
Innovation

Methods, ideas, or system contributions that make the work stand out.

A renormalisation group approach to abstraction
The Hierarchical Feature Model as a candidate abstract representation
Deep Belief Networks trained on data of varying breadth
Carlo Orientale Caputo
SISSA - International School for Advanced Studies, 34136 Trieste, Italy
Elias Seiffert
University of Tübingen, Germany
Matteo Marsili
Senior research scientist, Abdus Salam ICTP, Trieste
Statistical mechanics, stochastic processes, collective phenomena in socio-economic systems, networks, complex systems