Calarina Muslimani
Google Scholar ID: 3S4LYDQAAAAJ
University of Alberta
reinforcement learning
reward alignment
human-in-the-loop
Citations & Impact (all-time)
Citations: 14
H-index: 2
i10-index: 0
Publications: 9
Co-authors: 11
Contact
No contact links provided.
Publications
6 items
The Trajectory Alignment Coefficient in Two Acts: From Reward Tuning to Reward Learning
2026 · Cited 0
Reward Learning through Ranking Mean Squared Error
2026 · Cited 0
Towards Improving Reward Design in RL: A Reward Alignment Metric for RL Practitioners
2025 · Cited 0
Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity
arXiv.org · 2024 · Cited 0
Leveraging Sub-Optimal Data for Human-in-the-Loop Reinforcement Learning
Adaptive Agents and Multi-Agent Systems · 2024 · Cited 1
Reinforcement Teaching
Trans. Mach. Learn. Res. · 2022 · Cited 1
Co-authors
11 total
Matthew E. Taylor
Professor, University of Alberta
Kerrick Johnstonbaugh
University of Alberta
Suyog H. Chandramouli
Princeton University
Alex Lewandowski
University of Alberta
Serena Booth
Brown University
W Bradley Knox
Research Associate Professor at UT Austin
Decebal Constantin Mocanu
Associate Professor in Machine Learning, University of Luxembourg, TU Eindhoven
Carrie Demmans Epp
EdTeKLA Group, Dept. of Computing Science, University of Alberta