About Me
I am an incoming Ph.D. student at the University of Notre Dame, joining in Fall 2026 under the supervision of Prof. Fanny Ye. Prior to that, I completed my undergraduate studies in Cyber Science and Engineering at Sichuan University.
During my visiting research, I was mentored by Prof. Xiangliang Zhang and senior researcher Yue Huang, whose guidance shaped my research interests in trustworthy AI and foundation models. I continue to collaborate closely with Zheyuan Zhang.
My research lies at the intersection of LLM agents, reinforcement learning, and trustworthy AI, with the goal of building systems that are not only capable but also safe, fair, and reliable.
News
Two papers were accepted to ICML 2026: Capability-Oriented Training Induced Alignment Risk and Drift-Bench.
Two papers were accepted to ACL 2026 Findings.
One paper was accepted to WWW 2026 Demo Track. One paper was accepted to ICLR 2026.
One paper was accepted to AAAI 2025.
Research Interest
- LLM Agent Reliability & Failure Diagnosis
Diagnosing multi-turn agent failures under input faults, interaction noise, and cooperative breakdowns.
This includes: Drift-Bench (ICML'26), IntraAI (WWW'26 Demo).
- Alignment Risk & Guardrail Models
Studying training-induced alignment risks and building guardian/advisor models for safer LLM behavior.
This includes: Capability-Oriented Training Induced Alignment Risk (ICML'26), Guardian-as-an-Advisor (ACL'26 Findings), and Edge Alignment (Position Paper).
- Trustworthiness Evaluation of Foundation Models
Developing benchmarks and assessment protocols for generative, vision-language, and domain-specific foundation models.
This includes: TrustGen (ICLR'26), AutoDavis (arXiv'25), and PolicyLLM (ACL'26 Findings).
Publications
Education
University of Notre Dame
Ph.D. in Computer Science
Supervisor: Prof. Fanny Ye
Research Areas
Trustworthy AI · LLM Alignment · Foundation Models
Sichuan University
B.Eng. in Cyber Science and Engineering
Research Experience
Advisor: Prof. Xiangliang Zhang
Trustworthy Generative Models · LLM Alignment · Agent Evaluation