Greg Durrett
Google Scholar ID: EpQ_sDEAAAAJ
Associate Professor of Computer Science, New York University
Natural Language Processing
Homepage
Google Scholar
Citations & Impact
All-time
Citations: 7,493
H-index: 44
i10-index: 97
Publications: 20
Co-authors: 0
Contact
Email: gdurrett@nyu.edu
CV
Publications
28 items
CREATE: Testing LLMs for Associative Creativity (2026). Citations: 0
VeriSoftBench: Repository-Scale Formal Verification Benchmarks for Lean (2026). Citations: 0
Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents (2026). Citations: 0
SkillFactory: Self-Distillation For Learning Cognitive Behaviors (2025). Citations: 0
Adaptive Margin RLHF via Preference over Preferences (2025). Citations: 0
Report on NSF Workshop on Science of Safe AI (2025). Citations: 0
PropMEND: Hypernetworks for Knowledge Propagation in LLMs (2025). Citations: 0
Causal Graph based Event Reasoning using Semantic Relation Experts (2025). Citations: 0
Resume (English only)
Academic Achievements
Yin et al., 'Specializing LLMs with insights from interpretability', NeurIPS 2024
Tang et al., 'Learning models to assess fine-grained factuality of generation systems', EMNLP 2024
Ye et al., 'Augmenting LLMs with new capabilities like SMT solvers to improve their reasoning', NeurIPS 2023
Sprague et al., 'Assessing strengths and weaknesses of chain-of-thought', ICLR 2025
Singhal et al., 'Post-training analysis of LLMs', COLM 2024
Co-authored 'Contemporary NLP Modeling in Six Comprehensive Programming Assignments', presented at the Fifth Workshop on Teaching NLP
Background
Associate Professor in the Computer Science Department (Courant Institute) and Center for Data Science (CDS) at New York University
Previously a professor in the Computer Science Department at the University of Texas at Austin from 2017 to 2025
Primary research area is Natural Language Processing (NLP) and machine learning
Focuses on improving the ability of large language models (LLMs) to reason about knowledge in text
Addresses real-world challenges of LLMs in medical information processing, scientific discovery, and legal reasoning
Develops methods to train new capabilities, enhance reliability, and evaluate model outputs