Publications
Published 'Reasoning with trees: interpreting CNNs using hierarchies,' introducing a framework that uses hierarchical segmentation techniques to produce faithful and interpretable explanations of Convolutional Neural Networks (CNNs). Published 'Unsupervised discovery of Interpretable Visual Concepts,' proposing two methods, MAGE and Ms-IV, to enhance the global interpretability of model decisions.
Research Experience
Postdoctoral researcher at IRISA, developing low-complexity video compression algorithms; PhD research at Université Gustave Eiffel focused on understanding and explaining the reasoning processes of deep neural networks.
Education
PhD from Université Gustave Eiffel, conducted in the LIGM and LRE laboratories; Master's degree from the Institute of Computing (Unicamp), with research focused on machine learning applied to image analysis in forensics.
Background
Postdoctoral researcher focusing on developing low-complexity algorithms for video compression networks, with the aim of creating simpler, more interpretable models. Research interests include understanding the reasoning processes of Deep Neural Networks (explainable Artificial Intelligence, xAI) and presenting these complex explanations in a way that is easily interpretable by humans. Particularly intrigued by the cognitive aspects of machine learning and how machine learning compares to human learning.
Miscellany
Interested in language mechanisms, including semantics and semiotics, and believes these mechanisms can inspire a better understanding of artificial models.