Published multiple papers on topics including scalable AI safety via doubly efficient debate; skill-mix evaluations for AI models; faster algorithms and constant lower bounds for worst-case expected error; optimal inapproximability with universal factor graphs; extended formulation lower bounds for refuting random CSPs; formal barriers to longest-chain proof-of-stake protocols; correlation decay and the tractability of CSPs; the nonexistence of small symmetric SDPs for the matching problem; and combinatorial optimization algorithms via polymorphisms.
Research Experience
Currently a research scientist at Google DeepMind; previously an assistant professor at Chalmers University of Technology, and before that a postdoc at KTH Royal Institute of Technology.
Education
PhD from the University of California, Berkeley, advised by Prasad Raghavendra; postdoc at KTH Royal Institute of Technology with Johan Håstad.
Background
Interested in topics in algorithms and complexity theory, including constraint satisfaction problems, linear and semidefinite programming hierarchies, extended formulation lower bounds, hardness of approximation, the complexity of statistical inference, and the application of complexity-theoretic approaches to AI alignment.