Revisiting the Role of Relearning in Semantic Dementia

📅 2025-03-05
🏛️ 2023 Conference on Cognitive Computational Neuroscience
📈 Citations: 0
Influential: 0
🤖 AI Summary
Semantic dementia (SD) is traditionally attributed to anterior temporal lobe (ATL) neurodegeneration; however, it remains unclear whether its characteristic behavioral phenotypes—such as prototypicality effects and cross-category confusions—arise solely from structural atrophy or are instead shaped by dynamic post-atrophic neural reorganization. Method: Deep linear neural networks with hierarchical semantic feature representations were trained on semantic tasks, then subjected to progressive neuron deletion to simulate ATL atrophy, with continued online retraining after each lesion. Contribution/Results: In the model, SD-like cognitive deficits emerge only when structural degradation is followed by adaptive relearning—without requiring nonlinear output mechanisms. These findings challenge the canonical “atrophy → symptom” causality, reframing SD progression as a consequence of compensatory, plastic cortical reorganization triggered by damage to the semantic system. The work establishes a computational framework for modeling neurodegenerative disorders and informs theoretically grounded intervention strategies targeting maladaptive plasticity.
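The training setup the summary describes—a deep linear network learning hierarchical semantic features—can be sketched in a few lines of numpy. The dataset below is a hypothetical toy hierarchy in the spirit of classic semantic-cognition models, not the authors' actual data; item names, feature columns, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy hierarchy (illustrative, not the paper's dataset):
# four items, two categories, binary semantic features.
# Feature columns: [animal, bird, fish, sings, swims]
items = np.eye(4)                    # one-hot inputs: canary, robin, salmon, trout
features = np.array([
    [1., 1., 0., 1., 0.],            # canary
    [1., 1., 0., 0., 0.],            # robin
    [1., 0., 1., 0., 1.],            # salmon
    [1., 0., 1., 0., 0.],            # trout
])

rng = np.random.default_rng(0)
hidden = 8
W1 = rng.normal(scale=0.05, size=(4, hidden))   # small init -> stage-like learning
W2 = rng.normal(scale=0.05, size=(hidden, 5))

lr, steps = 0.05, 4000
learned_at = {}                      # first step each tracked feature column is fit
for t in range(steps):
    # Two-layer *linear* network: prediction = items @ W1 @ W2, no nonlinearity.
    H = items @ W1
    err = H @ W2 - features
    g1 = items.T @ (err @ W2.T)      # gradient of 0.5*||err||^2 w.r.t. W1
    g2 = H.T @ err                   # gradient w.r.t. W2
    W1 -= lr * g1
    W2 -= lr * g2
    col_err = np.abs(err).max(axis=0)
    for name, j in [("animal", 0), ("sings", 3)]:
        if name not in learned_at and col_err[j] < 0.3:
            learned_at[name] = t

print(learned_at)                    # shared "animal" feature fit before "sings"
print(np.abs(items @ W1 @ W2 - features).max())
```

With a small weight initialization, the network acquires broadly shared features (e.g., "animal") before item-distinctive ones (e.g., "sings")—the stage-like learning dynamic that motivates using deep linear networks as a model of human semantic development.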

📝 Abstract
Patients with semantic dementia (SD) present with remarkably consistent atrophy of neurons in the anterior temporal lobe and behavioural impairments, such as graded loss of category knowledge. While relearning of lost knowledge has been shown in acute brain injuries such as stroke, it has not been widely supported in chronic cognitive diseases such as SD. Previous research has shown that deep linear artificial neural networks exhibit stages of semantic learning akin to humans. Here, we use a deep linear network to test the hypothesis that relearning during disease progression, rather than the particular atrophy, causes the specific behavioural patterns associated with SD. After training the network to generate the common semantic features of various hierarchically organised objects, neurons are successively deleted to mimic atrophy while retraining the model. The model with relearning and deleted neurons reproduced errors specific to SD, including prototyping errors and cross-category confusions. This suggests that relearning is necessary for artificial neural networks to reproduce the behavioural patterns associated with SD in the absence of *output* non-linearities. Our results support a theory of SD progression that results from continuous relearning of lost information. Future research should revisit the role of relearning as a contributing factor to cognitive diseases.
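The lesion-then-relearn protocol in the abstract can be illustrated with a self-contained numpy sketch. Everything here is an assumed stand-in—the toy items, the hidden layer as an ATL analogue, and all hyperparameters—not the authors' code; the point is only to show how deleting hidden neurons and retraining yields prototyping-style errors.

```python
import numpy as np

# Toy hierarchical features (illustrative): columns [animal, bird, fish, sings, swims]
items = np.eye(4)                     # canary, robin, salmon, trout
features = np.array([
    [1., 1., 0., 1., 0.],             # canary (distinctive feature: sings)
    [1., 1., 0., 0., 0.],             # robin
    [1., 0., 1., 0., 1.],             # salmon (distinctive feature: swims)
    [1., 0., 1., 0., 0.],             # trout
])

def train(W1, W2, lr=0.05, steps=4000):
    """Gradient descent on squared error for the linear map items @ W1 @ W2."""
    for _ in range(steps):
        H = items @ W1
        err = H @ W2 - features
        g1 = items.T @ (err @ W2.T)
        W2 -= lr * H.T @ err
        W1 -= lr * g1
    return W1, W2

rng = np.random.default_rng(0)
W1, W2 = train(rng.normal(scale=0.05, size=(4, 8)),
               rng.normal(scale=0.05, size=(8, 5)))   # healthy net fits everything

keep = [0, 1]                          # "atrophy": delete 6 of 8 hidden neurons
W1, W2 = train(W1[:, keep], W2[keep])  # relearning with the lesioned network

pred = items @ W1 @ W2
print(np.round(pred, 2))
# Shared features survive; distinctive ones blur toward the category prototype:
# canary's "sings" falls well below 1 while robin's rises above 0.
```

After relearning, the reduced network settles on a low-rank fit that preserves category-shared structure at the cost of item-distinctive features—a prototyping error pattern. Without the retraining step, the deleted neurons would simply leave noisy, unstructured errors, which is the contrast the paper's hypothesis turns on.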
Problem

Research questions and friction points this paper is trying to address.

Examine relearning's role in semantic dementia progression.
Test if relearning during atrophy causes SD behavioral patterns.
Use neural networks to model SD-specific errors and relearning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep linear network models semantic learning.
Neuron deletion mimics semantic dementia atrophy.
Relearning reproduces SD-specific behavioral errors.