Agora Research Hub
Linghan Huang
Scholar

Google Scholar ID: 9b_rfZUAAAAJ
The University of Sydney
Trustworthy ML · Software Security
Homepage ↗ · Google Scholar ↗
Citations & Impact (all-time)
- Citations: 194
- H-index: 4
- i10-index: 3
- Publications: 9
- Co-authors: 0
Contact: Twitter ↗
Publications (7 items)
- Trust in One Round: Confidence Estimation for Large Language Models via Structural Signals (2026), cited 0
- Feature-Selective Representation Misdirection for Machine Unlearning (2025), cited 0
- LLMs are All You Need? Improving Fuzz Testing for MOJO with Large Language Models (2025), cited 0
- The Tower of Babel Revisited: Multilingual Jailbreak Prompts on Closed-Source Large Language Models (2025), cited 0
- From Compliance to Exploitation: Jailbreak Prompt Attacks on Multimodal LLMs (2025), cited 0
- From LLMs to LLM-based Agents for Software Engineering: A Survey of Current, Challenges and Future (arXiv.org, 2024), cited 17
- On the Challenges of Fuzzing Techniques via Large Language Models (2024), cited 14
Resume (English only)
Co-authors: 0 (list not available)
