Paper 'Closing the Loop: Learning to Generate Writing Feedback via Language Model Simulated Student Revisions' accepted to EMNLP 2024 (Oral)
Paper 'MIDGARD: Self-Consistency Using Minimum Description Length for Structured Commonsense Reasoning' accepted to ACL 2024 (Oral), awarded Area Chair Award
Paper 'Drilling Down into the Discourse Structure with LLMs for Long Document Question Answering' accepted to EMNLP 2023 Findings
Co-authored 'Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction' accepted to ACL 2023 Workshop RepL4NLP
Preprints include 'Do Language Models Think Consistently? A Study of Value Preferences Across Varying Response Lengths' and 'ExpertLongBench: Benchmarking Language Models on Expert-Level Long-Form Generation Tasks with Structured Checklists'
Background
Third-year Ph.D. candidate at the University of Michigan, Ann Arbor, advised by Prof. Lu Wang
Broad research interests in Machine Learning, Natural Language Processing (NLP), and related fields
Currently investigating deceptive and scheming behaviors that may emerge in language models during alignment
Exploring LLM-based generation of complex function/API-calling data to improve and evaluate tool-use capabilities
Previously worked on applications of LLMs in education, structured commonsense reasoning, and evaluating the consistency of value preferences in language models