🤖 AI Summary
This study addresses the policy optimization challenge of simultaneously achieving social well-being and respecting planetary boundaries, the central concern of Doughnut Economics. We propose a parsimonious machine learning paradigm that integrates random forest classification with Q-learning reinforcement learning. The random forest efficiently identifies feasible policy regions satisfying dual sustainability constraints, while Q-learning autonomously discovers optimal dynamic policy pathways toward desired states with low computational overhead. Empirical evaluation demonstrates the framework’s ability to identify multiple policy parameter configurations that jointly ensure environmental safety and social foundation adequacy. Compared to conventional high-fidelity simulation approaches, our method substantially reduces data requirements and computational costs. It further exhibits strong model interpretability, policy relevance, and empirical validity in complex socio-ecological systems. This work establishes a novel, scalable paradigm for quantitative governance of sustainable development.
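The classification stage described above can be sketched in a few lines. This is a minimal illustration only: the two policy parameters, the feasibility thresholds, and the sample sizes below are hypothetical stand-ins for the paper's macroeconomic model, not its actual specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical policy parameters (e.g. an environmental levy rate and a
# social transfer rate), each sampled uniformly on [0, 1].
X = rng.uniform(0.0, 1.0, size=(2000, 2))

# Toy stand-in for the macroeconomic model: a policy is "inside the
# Doughnut" when it clears both an environmental ceiling and a social
# floor (purely illustrative thresholds).
env_safe = X[:, 0] > 0.4
soc_safe = X[:, 1] > 0.3
y = (env_safe & soc_safe).astype(int)

# Train the random forest on labelled model runs, then use it to screen
# new candidate policies cheaply, without re-running the full model.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

candidates = rng.uniform(0.0, 1.0, size=(5, 2))
print(clf.predict(candidates))
```

Once trained, the classifier maps out the feasible region of the parameter space at a fraction of the cost of exhaustive simulation.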
📝 Abstract
The 'Doughnut' of social and planetary boundaries has emerged as a popular framework for assessing environmental and social sustainability. Here, we provide a proof-of-concept analysis that shows how machine learning (ML) methods can be applied to a simple macroeconomic model of the Doughnut. First, we show how ML methods can be used to find policy parameters that are consistent with 'living within the Doughnut'. Second, we show how a reinforcement learning agent can identify the optimal trajectory towards desired policies in the parameter space. The approaches we test, which include a Random Forest Classifier and $Q$-learning, are frugal ML methods that are able to find policy parameter combinations that achieve both environmental and social sustainability. The next step is the application of these methods to a more complex ecological macroeconomic model.
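A tabular $Q$-learning agent of the kind the abstract describes can be sketched as follows. The one-dimensional discrete parameter grid, the reward scheme, and all hyperparameters are illustrative assumptions; the paper's actual state and action spaces will differ.

```python
import random

random.seed(0)

# States 0..4 are discrete settings of a single policy parameter;
# state 4 is the (hypothetical) "inside the Doughnut" target state.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2   # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(s, a):
    """Action 0 decreases, action 1 increases the parameter."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)   # reward only at the goal

for _ in range(500):                       # training episodes
    s = random.randrange(GOAL)             # random non-goal start
    for _ in range(50):                    # cap episode length
        if random.random() < EPS:
            a = random.randrange(2)                   # explore
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1        # exploit
        s2, r = step(s, a)
        # Standard Q-learning temporal-difference update
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if s == GOAL:
            break

# Greedy policy after training: the learned trajectory through the
# parameter space (action 1 = keep increasing the parameter).
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(GOAL)]
print(policy)
```

After training, the greedy policy traces the step-by-step path from any starting parameter setting toward the target state, which is the "optimal trajectory" role $Q$-learning plays in the abstract.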