Co-created the GEM benchmark for standardizing natural language generation evaluation; released MultiBERTs, a resource of many BERT checkpoints for robustness analysis; open-sourced the ToTTo table-to-text dataset; open-sourced BLEURT, a learned metric for text generation; served as a senior area chair for the LLM track at ACL 2023; served as an action editor for ACL Rolling Review until 2023; a paper co-authored in 2013 won the ACL 2023 Ten-Year Test-of-Time Paper Award; program co-chair of the COLM 2024 conference.
Research Experience
Worked at Google Brain and Google Research before joining Google DeepMind. Currently contributes to the Gemini project, collaborating with many researchers and engineers across Google DeepMind and the rest of Google to ensure that Gemini post-trained models achieve the highest possible factual accuracy in communicative scenarios.
Education
Completed a Ph.D. from the Language Technologies Institute, School of Computer Science at Carnegie Mellon University in 2012. Completed a B.Tech. in Computer Science and Engineering from IIT Kharagpur in 2005.
Background
Currently a Senior Director of Research at Google DeepMind, based in New York City. Leads teams of researchers distributed across New York, London, Mountain View, Zurich, and San Francisco, focusing on language technologies. Current research aims to ensure that large language models generate factually accurate content attributable to trustworthy sources. Broad interests lie in controllable models of language generation for communicative and collaborative scenarios.