Browse publications on Google Scholar
Resume (English only)
Academic Achievements
He proposed a preference-based reinforcement learning framework in which users provide binary feedback (better/worse) on trajectories demonstrated by the robot, thereby reducing the user's role to that of a critic. During his time at Darmstadt, he and his team explored various topics, including reinforcement learning and Bayesian optimization, and their applications in robotics.
Research Experience
He serves as a researcher at Inria Scool, aiming to integrate elements of symbolic AI into predominantly continuous machine learning tools. Previously, he worked as a postdoc in Jan Peters' lab at TU Darmstadt, conducting research on reinforcement learning, Bayesian optimization, deep RL, and convex optimization, with robotics applications such as in-hand object manipulation and ball catching. He also had a short stint at Aalto University, collaborating with Joni Pajarinen, Alexander Ilin, and Juho Kannala on object-centric image decomposition for and with reinforcement learning.
Education
During his PhD, he studied under the guidance of Michèle Sebag and Marc Schoenauer, focusing on reducing the expertise required to use policy learning algorithms. He holds a diploma in Computer Engineering from École Nationale Supérieure d'Informatique (Algiers, Algeria) and an MSc in Artificial Intelligence and Decision from Sorbonne Université (Paris, France).
Background
His research interest lies in understanding intelligence. As a computer scientist, he attempts to reproduce intelligence by having computer programs solve complex tasks. His work focuses mainly on reinforcement learning, drawing on areas such as computer vision, Bayesian optimization, and convex optimization. In terms of applications, he is interested in robotics and, more recently, in learning human-understandable decision-making policies.