ACL 2023: "Dialect-robust Evaluation of Generated Text"
NeurIPS 2023: Co-authored "LIMA: Less is More for Alignment"
EMNLP 2022: Published multiple papers, including "Controlled Pun Generation", "Pun with Explanation", "Investigating the Benefits of Free-form Rationales", and "Robustness of Bias Evaluation"
EMNLP 2021: Work on paraphrase generation (the AESOP and ESTER papers)
ACL 2021: Work on event bias in Wikipedia received a Best Paper nomination
CHI 2022: Research on bias in greeting cards received a Best Paper Honorable Mention
Selected as a 2023 EECS Rising Star
Background
Ph.D. candidate in Natural Language Processing at the University of Southern California (USC)
Amazon Fellow
Research focuses on trustworthy text generation
Specific interests include controlled text generation, robustness of NLG systems, NLG evaluation, and data efficiency
Actively involved in large language model (LLM) research, including pretraining on 60+ TB of data, fine-tuning mT5 models up to the XXL scale, distilling PaLM's dialect-rewriting capability into EdiT5, and co-developing LIMA, a high-quality LLM fine-tuned on only 1,000 curated examples
Advocates for fair AI; has studied bias in Wikipedia event descriptions and in greeting cards