Maximilian Benedikt Schier

Google Scholar ID: Y5jxsosAAAAJ
Institut für Informationsverarbeitung, Leibniz Universität Hannover
Machine Learning · Artificial Intelligence
Citations & Impact (all-time)
  • Citations: 44
  • H-index: 3
  • i10-index: 2
  • Publications: 9
  • Co-authors: 0
Publications
9 items
Resume
Academic Achievements
  • Explainable Reinforcement Learning via Dynamic Mixture Policies, 2025 IEEE International Conference on Robotics and Automation (ICRA)
  • Cell Tracking according to Biological Needs - Strong Mitosis-aware Multi-Hypothesis Tracker with Aleatoric Uncertainty, IEEE Transactions on Medical Imaging, 2025
  • Safe Resetless Reinforcement Learning: Enhancing Training Autonomy with Risk-Averse Agents, European Conference on Computer Vision Workshops (ECCVW), October 2024
  • Learned Fourier Bases for Deep Set Feature Extractors in Automotive Reinforcement Learning, 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), September 2023
  • Deep Reinforcement Learning for Autonomous Driving Using High-Level Heterogeneous Graph Representations, 2023 IEEE International Conference on Robotics and Automation (ICRA)
  • Constrained Mean Shift Clustering, Proceedings of the 2022 SIAM International Conference on Data Mining (SDM), April 2022
  • Adversarial N-player Search using Locality for the Game of Battlesnake, INFORMATIK 2019, September 2019
Research Experience
  • His current research focuses on Reinforcement Learning and dynamic representations, such as graphs. He is additionally involved in projects on panoptic segmentation of biomedical images.
Education
  • Master's degree in Computer Science from Leibniz University Hannover; Master's thesis on Deep Image Clustering completed in July 2021.
Background
  • Research interests include Graph Neural Networks, dynamic scene representations, Reinforcement Learning (especially in automotive/traffic context), Deep & Reinforcement Learning for autonomous racing, and Panoptic Segmentation for Biology/Medicine.
Miscellany
  • Provides a toy problem example of a Soft Actor-Critic Reinforcement Learning agent controlling a vehicle end-to-end (sensors to actuators).
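An end-to-end setup of this kind can be sketched as a minimal pure-Python loop. Everything here is an illustrative assumption rather than the actual demo: the 1-D vehicle dynamics, the reward, and the hand-coded policy mean are hypothetical stand-ins, and the only SAC-specific element retained is the tanh-squashed Gaussian action sampling (a trained SAC agent would learn the policy mean and variance from data).

```python
import math
import random

class ToyVehicleEnv:
    """Hypothetical 1-D vehicle: observation = (position, velocity),
    action = acceleration clamped to [-1, 1], goal = reach a target position."""

    def __init__(self, dt=0.1, target=1.0):
        self.dt, self.target = dt, target
        self.pos, self.vel = 0.0, 0.0

    def reset(self):
        self.pos, self.vel = 0.0, 0.0
        return (self.pos, self.vel)

    def step(self, accel):
        accel = max(-1.0, min(1.0, accel))      # actuator limits
        self.vel += accel * self.dt             # simple Euler integration
        self.pos += self.vel * self.dt
        reward = -abs(self.target - self.pos)   # dense distance penalty (<= 0)
        done = abs(self.target - self.pos) < 0.05
        return (self.pos, self.vel), reward, done

def sac_action(obs, log_std=-0.5):
    """SAC-style squashed-Gaussian action: sample u ~ N(mu, sigma), then tanh(u).
    The linear 'mean' below is a hand-coded placeholder for a policy network."""
    mu = 0.5 * (1.0 - obs[0]) - 0.2 * obs[1]    # drive toward target, damp velocity
    u = random.gauss(mu, math.exp(log_std))
    return math.tanh(u)                          # bounded action in (-1, 1)

if __name__ == "__main__":
    env = ToyVehicleEnv()
    obs, total = env.reset(), 0.0
    for _ in range(200):
        obs, reward, done = env.step(sac_action(obs))
        total += reward
        if done:
            break
    print(f"episode return: {total:.2f}")
```

The tanh squashing is the design choice worth noting: it keeps sampled actions inside the actuator range without clipping the gradient path, which is why SAC uses it for continuous control.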
Co-authors
0 (list not available)