Taishi Nakamura
Google Scholar ID: nbPQwgUAAAAJ
Institute of Science Tokyo
artificial general intelligence
large language models
machine learning
Homepage
Google Scholar
Citations & Impact
All-time
Citations: 181
h-index: 6
i10-index: 5
Publications: 14
Co-authors: 14
Contact
No contact links provided.
Publications
12 items
On the Optimal Reasoning Length for RL-Trained Language Models (2026), cited 0
MixtureVitae: Open Web-Scale Pretraining Dataset With High Quality Instruction and Reasoning Data Built from Permissive-First Text Sources (2025), cited 0
Open-sci-ref-0.01: open and reproducible reference baselines for language model and dataset comparison (2025), cited 0
Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks (2025), cited 0
Rewriting Pre-Training Data Boosts LLM Performance in Math and Code (2025), cited 0
Building Instruction-Tuning Datasets from Human-Written Instructions with Open-Weight Large Language Models (2025), cited 0
Wider or Deeper? Scaling LLM Inference-Time Compute with Adaptive Branching Tree Search (2025), cited 0
Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization (2025), cited 0
Resume (English only)
Co-authors
14 total
Rio Yokota (Professor, Institute of Science Tokyo)
Kazuki Fujii (Institute of Science Tokyo)
Naoaki Okazaki (Institute of Science Tokyo)
Takuya Akiba (Sakana AI)
So Kuroki (Sakana AI)
Yuichi Inoue (Sakana AI)
Yuki Imajuku (Sakana AI)
Yujin Tang (Sakana AI)