Akshat Gupta

Google Scholar ID: v80j6o0AAAAJ
UC Berkeley
Knowledge Editing · Natural Language Processing · Spoken Language Modeling
Citations & Impact
All-time
Citations: 539
H-index: 15
i10-index: 18
Publications: 20
Co-authors: 9
Academic Achievements
  • Publications:
    - 'How LLMs use their Depth?' released on arXiv
    - 'Lifelong Knowledge Editing requires Better Regularization' accepted to EMNLP 2025 Findings
    - 'Efficient Knowledge Editing via Minimal Precomputation' accepted to ACL 2025 Main Conference
    - 'Sylber: Syllabic Embedding Representation of Speech from Raw Audio' accepted to ICLR 2025
    - 'PokerBench: Training LLMs to become Professional Poker Players' accepted to AAAI 2025
    - 'Rebuilding ROME: Resolving Model Collapse during Sequential Editing' accepted to EMNLP 2024 Main Conference
    - 'A Unified Framework for Model Editing' accepted to EMNLP 2024 Findings
    - 'Self-Assessment Tests are Unreliable Measures of LLM Personality' accepted to BlackboxNLP 2024
    - 'Model Editing at Scale leads to Gradual and Catastrophic Forgetting' accepted to ACL 2024 Findings
  • Awards:
    - Outstanding Paper Award at the KnowFM Workshop @ AAAI 2025
Research Experience
  • Worked as an NLP Research Engineer at JPMorgan AI Research before joining Berkeley; current research focuses on continual learning, interpretability, large language models, and poker.
Education
  • UC Berkeley, advised by Gopala Anumanchipalli.
Background
  • Third-year PhD student, with research interests including continual learning, interpretability, large language models, and poker.
Miscellany
  • Interested in predicting the future and enjoys hearing people's thoughts on it.