Tzu-Quan Lin
Google Scholar ID: efKSVR8AAAAJ
National Taiwan University
Self-Supervised Learning
Spoken Language Models
Model Compression
Interpretability
Citations & Impact (all-time)
Citations: 167
h-index: 7
i10-index: 6
Publications: 15
Co-authors: 4
Contact
No contact links provided.
Publications (8 listed)
How Contrastive Decoding Enhances Large Audio Language Models? · 2026 · Cited: 0
Pseudo2Real: Task Arithmetic for Pseudo-Label Correction in Automatic Speech Recognition · 2025 · Cited: 0
DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with Self-Generated Cross-Modal Alignment · 2025 · Cited: 0
Identifying Speaker Information in Feed-Forward Layers of Self-Supervised Speech Transformers · 2025 · Cited: 0
An Exploration of Mamba for Speech Self-Supervised Models · 2025 · Cited: 0
Speech-FT: A Fine-tuning Strategy for Enhancing Speech Representation Models Without Compromising Generalization Ability · 2025 · Cited: 0
Building a Taiwanese Mandarin Spoken Language Model: A First Attempt · arXiv.org, 2024 · Cited: 0
Compressing Transformer-based self-supervised models for speech processing · arXiv.org, 2022 · Cited: 6
Resume (English only)
Co-authors (4)
Hung-yi Lee (National Taiwan University)
Yi-Cheng Lin (National Taiwan University)
Hao Tang (University of Edinburgh)
Guan-Ting Lin (National Taiwan University)