Publications
Published multiple papers, including:
'Communicating Activations Between Language Model Agents' (ICML 2025)
'When Bad Data Leads to Good Models' (ICML 2025)
'Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner' (preprint)
'Designing a Dashboard for Transparency and Control of Conversational AI' (preprint)
'Measuring and Controlling Instruction (In)Stability in Language Model Dialogs' (COLM 2024)
'Q-Probe: A Lightweight Approach to Reward Maximization for Language Models' (ICML 2024)
'Inference-Time Intervention: Eliciting Truthful Answers from a Language Model' (NeurIPS 2023)
'Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task' (ICLR 2023)
Research Experience
Held a graduate-student Superalignment Fast Grant from OpenAI; interned at Microsoft Research Asia and Meta AI.
Education
Received a PhD from Harvard University in May 2025, advised by Martin Wattenberg, Fernanda Viégas, and Hanspeter Pfister; funded by a Kempner Institute Graduate Fellowship.
Background
Research interests include language models, dialogue systems, and transparency and control in AI. Completed a PhD at Harvard, advised by Martin Wattenberg, Fernanda Viégas, and Hanspeter Pfister.
Miscellany
Contact: likenneth.ai [at] gmail.com. Profiles: Google Scholar, Twitter, GitHub, LinkedIn.