Published several papers, including 'Co-LLM: Training LLMs to Decode Collaboratively', 'Reduce Hallucination in Patient Summarization', and 'Verifiable Text Generation via Symbolic References'. Also gave numerous academic talks and lectures, e.g., 'Rethinking the Design and Evaluation of Human and LLM Collaboration' at the Stanford HCI Group Lunch Seminar.
Research Experience
Involved in multiple research projects, including 'Co-LLM: Training LLMs to Decode Collaboratively', 'Reduce Hallucination in Patient Summarization', 'Verifiable Text Generation via Symbolic References', 'Chapyter: LLM Coding Assistant in JupyterLab', and 'Real-world Legal Summaries at Multiple Granularities'.
Education
Currently a fourth-year PhD student in the Computer Science department at MIT, advised by David Sontag.
Background
Research interests: collaboration between humans and AI (especially large language models) for expert tasks. The research involves developing novel NLP models and suitable interactions/interfaces to tackle challenging human-AI interaction problems such as model hallucination and generation verification, and to support expert tasks such as programming, doctors' clinical writing, and legal summarization.
Miscellany
Personal interests include visual design in scholarly communication.