Tianlu Wang

Google Scholar ID: inzQqX8AAAAJ
Research Scientist at Meta AI (FAIR team)
artificial intelligence · natural language processing · computer vision
Citations & Impact (all-time)
  • Citations: 10,338
  • H-index: 26
  • i10-index: 33
  • Publications: 20
  • Co-authors: 6
Academic Achievements
  • Jointly Reinforcing Diversity and Quality in Language Model Generations
  • ASTRO: Teaching Language Models to Reason by Reflecting and Backtracking In-Context
  • J1: Incentivizing Thinking in LLM-as-a-Judge via Reinforcement Learning
  • Multi-Token Attention (COLM 2025)
  • Learning to Plan & Reason for Evaluation with Thinking-LLM-as-a-Judge (ICML 2025)
  • Self-Taught Evaluators
  • Contextual Position Encoding: Learning to Count What's Important
  • Chameleon: Mixed-Modal Early-Fusion Foundation Models
  • Shepherd: A Critic for Language Model Generation
  • Efficient Tool Use with Chain-of-Abstraction Reasoning (COLING 2025)
  • Understanding In-Context Learning via Supportive Pretraining Data (ACL 2023)
  • OPT: Open Pre-trained Transformer Language Models
  • Few-shot Learning with Multilingual Language Models (EMNLP 2022)
  • Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models (NAACL 2022 Findings)
  • VisualNews: Benchmark and Challenges in Entity-aware Image Captioning (EMNLP 2021)
  • General Multi-label Image Classification with Transformers (CVPR 2021)
  • CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation
Research Experience
  • Serves as a research scientist at Meta AI, FAIR team, focusing on post-training of large language models.
Education
  • Ph.D. in Computer Science, University of Virginia, advised by Prof. Vicente Ordóñez Román.
  • Bachelor's degree in Computer Science, Zhejiang University, China.