Browse publications on Google Scholar ↗
Resume (English only)
Academic Achievements
Published papers:
'The global landscape of academic guidelines for generative AI and LLMs' (Nature Human Behaviour, March 2025)
'Trust in AI: Progress, Challenges, and Future Directions' (Humanities and Social Sciences Communications, November 2024)
'Embedded Ethics for Responsible Artificial Intelligence Systems (EE-RAIS) in disaster management: a conceptual model and its deployment' (AI and Ethics, June 2023)
'A probabilistic theory of trust concerning artificial intelligence' (AI and Ethics, June 2022)
'Tracing app technology: an ethical review in the COVID-19 era and directions for post-COVID-19' (Ethics and Information Technology, June 2022)
Publicly available datasets: IGGA (Industrial Guidelines and Policy Statements for Generative AIs) and AGGA (Academic Guidelines for Generative AIs).
Research Experience
Currently a Senior RPC|AI Research Scientist at The University of Texas at Austin, managing the NRT Responsible AI program, overseeing both its educational and research components, and conducting research on generative AI. Previously a co-investigator in the SUNY–IBM AI Research Alliance (2021–2023).
Education
Ph.D. in Philosophy (AI Ethics & Bioethics), State University of New York at Albany; M.Sc. in Data Science (in progress)
Background
Research interests: the intersection of epistemic/normative and individual/social values, data, and machine learning across domains including engineering design and education, AI/LLM development and technology, medicine and healthcare, and psychology. Recent work focuses on responsible AI and large language models, including fairness, accountability, transparency, privacy, and explainability.