Nihal V. Nayak

Google Scholar ID: Bx497RMAAAAJ
Postdoctoral Fellow, Harvard University
natural language processing · machine learning · deep learning
Citations & Impact (all-time)
  • Citations: 5,120
  • H-index: 7
  • i10-index: 7
  • Publications: 18
  • Co-authors: 0
Academic Achievements
  • Paper 'Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation' accepted to Findings of the Association for Computational Linguistics: ACL 2024.
  • Paper 'Does CLIP Bind Concepts? Probing Compositionality in Large Image Models' accepted to Findings of the Association for Computational Linguistics: EACL 2024.
  • Paper 'Learning to Compose Soft Prompts for Compositional Zero-Shot Learning' presented at ICLR 2023.
  • Paper 'Zero-Shot Learning with Common Sense Knowledge Graphs' published in Transactions on Machine Learning Research (TMLR), 2022.
  • Work on pre-training foundation models in academia accepted to COLM 2025.
  • Research on predicting unobserved drug interactions using graph paths with large language models accepted to KDD 2025.
Research Experience
  • Postdoctoral Fellow at Harvard University (SEAS), June 2025 - present, working with David Alvarez-Melis.
  • During the Ph.D., studied zero-shot generalization, synthetic datasets (Bonito), compositionality (CSP, CLIP Binding), and structured knowledge (ZSL-KG).
Education
  • Ph.D. in Computer Science from Brown University, advised by Stephen Bach.
Background
  • Research Interests: efficiently adapting large machine learning models through data-centric solutions.
  • Field: Computer Science.
  • Brief: Postdoctoral Fellow at Harvard University (SEAS), working with David Alvarez-Melis.
Miscellany
  • Invited talks at Ai2, Netflix, and Snowflake on Data-Centric Approaches to Adapting Foundation Models.