Charles Lovering
Scholar

Google Scholar ID: w0hYPqEAAAAJ
Kensho Technologies, Brown University
natural language understanding · reinforcement learning · model interpretability
Citations & Impact (all-time)
Citations: 2,565
H-index: 10
i10-index: 11
Publications: 20
Co-authors: 0
Academic Achievements
  • Publications:
  • - Charles Lovering*, Jessica Forde*, George Konidaris, Ellie Pavlick, Michael Littman. Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex. NeurIPS, 2022. (*Equal contribution.)
  • - Charles Lovering, Ellie Pavlick. Unit Testing for Concepts in Neural Networks. TACL, 2022.
  • - Charles Lovering, Rohan Jha, Tal Linzen, Ellie Pavlick. Predicting Inductive Biases of Pre-Trained Models. ICLR, 2021.
  • - Rohan Jha, Charles Lovering, Ellie Pavlick. Does Data Augmentation Improve Generalization in NLP? 2020. PREPRINT.
Research Experience
  • Work Experience:
  • - Presented at Jane Street Research Symposium
  • - Spoke at NLP & Fairness, Interpretability, and Robustness at Google
  • - Gave talks at Language Understanding and Representations at Brown University
  • - Worked on projects involving concept analysis in AlphaZero for Hex, unit testing for concepts in neural networks, predicting inductive biases of pre-trained models, etc.
Education
  • Degree: PhD in Computer Science
  • School: Not explicitly mentioned
  • Advisor: Not explicitly mentioned
  • Time: Not provided
  • Field: Natural Language Understanding
Background
  • PhD Student in Computer Science (NLU).
Miscellany
  • Interests and Hobbies:
  • - Creating Lindenmayer systems
  • - Developing interactive visualizations
  • - Writing introductions to byte-encoding representations, beam search, the Transformer architecture, and Neural Turing Machines.
  • Other:
  • - This site replicates the Distill design.
  • - Uses Adobe XD CC for diagrams, D3 for visualizations, and PyTorch for deep learning.
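The Lindenmayer systems mentioned above are string-rewriting systems: starting from an axiom, every symbol is replaced in parallel according to a rule set each generation. A minimal sketch in Python, using Lindenmayer's classic algae system as an illustrative example (the rules here are standard textbook ones, not taken from this profile):

```python
def lsystem(axiom: str, rules: dict[str, str], n: int) -> str:
    """Rewrite the axiom n times, replacing every symbol in parallel."""
    s = axiom
    for _ in range(n):
        # Symbols without a rule are carried over unchanged.
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A
print(lsystem("A", {"A": "AB", "B": "A"}, 3))  # -> "ABAAB"
```

For graphical L-systems, the resulting string is typically interpreted as turtle-graphics commands (e.g. draw, turn, push/pop state), which is what makes them a natural fit for interactive visualizations.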