1. Paper 'Active Code Learning: Benchmarking Sample-Efficient Training of Code Models' accepted at IEEE Transactions on Software Engineering (TSE).
2. Paper 'PromptCharm: Text-to-Image Generation through Multi-modal Prompting and Refinement' accepted at ACM CHI 2024.
3. Paper 'ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning' accepted at ICRA 2024.
4. Papers 'LRR: Language-Driven Resamplable Continuous Representation against Adversarial Tracking Attacks' and 'Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization' accepted at ICLR 2024.
5. Paper 'MultiTest: Physical-Aware Object Insertion for Testing Multi-sensor Fusion Perception Systems' accepted at ICSE 2024.
6. Paper 'LUNA: A Model-Based Universal Analysis Framework for Large Language Models' available on arXiv.
7. Paper 'Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models' available on arXiv.
Research Experience
Currently an Associate Professor at The University of Tokyo (Japan) and the University of Alberta (Canada), leading a research group that continuously recruits PhD, MSc, and postdoc candidates.
Education
Educational background information is not provided.
Background
Research interests include, but are not limited to, code learning, multi-modal prompting, and the analysis and improvement of the behaviors of large language models. Strongly self-motivated, with a drive to achieve.
Miscellany
Students with relevant backgrounds and strong self-motivation are welcome to apply and join the research team.