μLO: Compute-Efficient Meta-Generalization of Learned Optimizers. In submission to NeurIPS 2024.
Simple and Scalable Strategies to Continually Pre-train Large Language Models. Published in Transactions on Machine Learning Research (06/2024).
Learning Optimizers for Local SGD. In International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023.
Continual Pre-Training of Large Language Models: How to re-warm your model? In Efficient Systems for Foundation Models Workshop at ICML 2023, Honolulu, USA.
Object Re-Identification from Point Clouds. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024.
An Exploration of Robustness to L-infinity and Spatial Perturbations and their Composition. (arXiv)
Out-of-Distribution Detection for LiDAR-based 3D Object Detection. In 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC).
Parametric Scattering Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 (Oral: 4.2% of submissions).
CLaC-BP at SemEval-2021 Task 8: SciBERT Plus Rules for MeasEval. In Proceedings of the Fifteenth Workshop on Semantic Evaluation (SemEval-2021). Association for Computational Linguistics.
Background
Ph.D. student at Mila & Université de Montréal, focusing on efficient pre-training of foundation models through continual learning and meta-learning.