Amanda Bertsch

Google Scholar ID: G1Jw4CYAAAAJ
PhD student, Language Technologies Institute, Carnegie Mellon University
Research interests: summarization, long-context NLU, conditional generation, NLP
Citations & Impact (all-time)
  • Citations: 683
  • H-index: 10
  • i10-index: 11
  • Publications: 19
  • Co-authors: 12
Academic Achievements
  • Published several papers, including 'Efficient Many-Shot In-Context Learning with Dynamic Block-Sparse Attention' (preprint), 'In-context learning with long-context models: An in-depth exploration' (NAACL 2025), 'From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models' (TMLR 2024), 'To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing' (EMNLP 2023), and 'Unlimiformer: Long-Range Transformers with Unlimited Length Input' (NeurIPS 2023).
  • Received the NSF Graduate Research Fellowship.
Research Experience
  • Conducting PhD research at Carnegie Mellon University across multiple projects, including long-context in-context learning, a system for distilling a model from a single textual instruction, and an analysis of Minimum Bayes Risk decoding.
  • Interned at Meta GenAI and AI2, working on long-context modeling and model deployment.
Education
  • PhD: Language Technologies Institute, Carnegie Mellon University; advised by Matt Gormley and Graham Neubig.
  • Bachelor's: Mathematics and Computer Science, University of Arizona; advised by Steven Bethard.
Background
  • Her research interests center on conditional generation, particularly long-context modeling and inference-time algorithms; more broadly, she is interested in better ways to reason over large quantities of knowledge, model large-scale structure in text, and effectively integrate external knowledge into models. Currently, she is excited about evaluation for realistic long-context settings, more efficient model deployment, and understanding how community divergence affects whose work researchers engage with. She is also broadly interested in meta-analysis of the NLP community, including critically examining the benchmarks, datasets, and modeling choices the field takes as defaults.
Miscellany
  • Member of NeuLab and an organizer for Queer in AI. In her spare time, she writes and reads speculative fiction, hikes, runs, and plays tabletop games.