Lisa Alazraki

Google Scholar ID: XtFMI4QAAAAJ
PhD Student, Imperial College London
Machine Learning · Natural Language Processing
Citations & Impact (all-time)
  Citations: 44
  H-index: 3
  i10-index: 1
  Publications: 10
  Co-authors: 10
Academic Achievements
  • Paper 'Reverse Engineering Human Preferences with Reinforcement Learning' accepted as a spotlight at NeurIPS.
  • Paper 'No Need for Explanations: LLMs can implicitly learn from mistakes in-context' accepted as an oral at EMNLP.
  • Released the AgentCoMa benchmark.
  • Paper 'AgentCoMa: A Compositional Benchmark Mixing Commonsense and Mathematical Reasoning in Real-World Scenarios' published on arXiv.
  • Paper 'How to Improve the Robustness of Closed-Source Models on NLI' published on arXiv.
  • Paper 'Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study' published at ICLR BuildingTrust.
  • Paper 'How (not) to ensemble LVLMs for VQA' published at NeurIPS ICBINB.
  • Paper 'Meta-Reasoning Improves Tool Use in Large Language Models' published at NAACL Findings.
  • Paper 'How Can Representation Dimension Dominate Structurally Pruned LLMs?' published at ICLR SLLM.
Research Experience
  • Currently a Research Scientist Intern at Meta Superintelligence Labs, working with Akhil Mathur. Previously, a Research Intern at Cohere (2024) and Google (2022, 2023).
Education
  • PhD student at the NLP Group at Imperial College London, advised by Marek Rei.
Background
  • PhD student, interested in generalisable learning, OOD robustness, and the relationship between reasoning and language.
Miscellany
  • Teaching Assistant for 70050 Intro to Machine Learning, 70016 Natural Language Processing, 70010 Deep Learning, and 40008 Graphs and Algorithms.