Maciej Wołczyk

Google Scholar ID: f6Xi7aoAAAAJ
Interests: continual learning, deep learning, reinforcement learning
Citations & Impact (all-time)
  • Citations: 797
  • H-index: 12
  • i10-index: 13
  • Publications: 20
  • Co-authors: 7
Resume (English only)
Academic Achievements
  • Paper 'Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem' accepted as a spotlight at ICML 2024 (May 2024)
  • Paper 'AdaGlimpse: Active Visual Exploration with Arbitrary Glimpse Position and Scale' accepted at ECCV 2024
  • Received the FNP Start scholarship in May 2024
  • Workshop paper on mixing and retrieving states in State-Space Models accepted at the Next Generation Sequence Models workshop at ICML 2024 (June 2024)
  • Paper on continual learning with weight interval regularization accepted to ICML 2022 as a short presentation (May 2022)
  • Two papers, 'Zero Time Waste' and 'Continual World', accepted to NeurIPS 2021 as poster presentations (September 2021)
Research Experience
  • Joined Google's Paradigms of Intelligence team and moved to Zürich in January 2025
  • Gave a talk on forgetting in RL fine-tuning at the UCL DARK seminar in July 2024
  • Co-organized the Next Generation Sequence Models workshop at ICML 2024 (June 2024)
  • Defended PhD thesis with distinction in February 2024
  • Started work as a postdoc at IDEAS NCBR in October 2023
  • Research internship with João Sacramento at ETH Zurich, from March 2022 to September 2022
  • Worked on imitation learning for planning in self-driving cars at Woven Planet Level-5 (previously Lyft Level-5), from April 2021 to September 2021
Education
  • PhD at Jagiellonian University, supervised by Prof. Jacek Tabor (defended with distinction in February 2024).
Background
  • Currently a Research Scientist on the Google Paradigms of Intelligence team. Main research interests: sequential decision making, multi-agent systems, fine-tuning RL models, and adaptation in foundation models.