Karen Hambardzumyan
Google Scholar ID: V3JjNJ0AAAAJ
FAIR, Meta + University College London
Interpretability · Natural Language Processing · Few-Shot Learning
Citations & Impact (all-time)
  • Citations: 1,230
  • H-index: 8
  • i10-index: 7
  • Publications: 15
  • Co-authors: 0
Resume (English only)
Academic Achievements
  • Published numerous papers, including work on adversarial training for robust LLM safeguarding, interactive tools for analyzing transformer language models, and scaling laws for generative mixed-modal language models.
  • Further publications cover BARTSmiles for molecular representations, word-level adversarial reprogramming, systems for the WMT20 biomedical translation task, and BioRelEx 1.0 for biological relation extraction.
  • Additional work includes natural language inference over interaction space, joint part-of-speech tagging and lemmatization using RNNs, and contributions to the CleverHans v2.1.0 adversarial examples library.
Research Experience
  • Contributed to several research projects as the primary maintainer of Aim, collaborating with YerevaNN, USC ISI, and Yerevan State University.
Education
  • PhD student at FAIR (Meta) and UCL NLP (University College London), supervised by Lena Voita and Pontus Stenetorp.
Background
  • Research interests include machine learning, neural networks, and natural language processing. Key skills encompass algorithms and data structures, multiple programming languages (e.g., Python, C, C++, JavaScript), and deep learning frameworks (e.g., PyTorch, AllenNLP, FairSeq, TensorFlow, Keras).
Miscellany
  • Personal links include GitHub, Google Scholar, Semantic Scholar, Twitter, and a resume.