【SIST】Building Trustworthy AI Systems: Advances and Challenges for Neural Network Certification

Published: 2026-01-27 | Source: ShanghaiTech University | Editor: System Administrator

As deep learning (DL) systems become integral to safety-critical domains, from autonomous driving to healthcare, their trustworthiness is more important than ever. Despite remarkable progress, deep neural networks can exhibit instability and remain vulnerable to adversarial perturbations. Addressing these issues requires new techniques that provide robustness guarantees while remaining practically scalable. In this talk, I will introduce key ideas in the certification of neural networks and present recent results on robustness verification, testing methodologies for complex models, and adversarially robust learning. I will close by outlining several open challenges and research opportunities for certifying DL models and building trustworthy modern foundation models.
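To make the vulnerability mentioned above concrete, here is a minimal sketch (not from the talk itself) of an adversarial perturbation against a hypothetical linear classifier: for a score s(x) = w·x, the fast gradient sign method shifts each coordinate of x by a small amount against the sign of the gradient, which for a linear model is simply sign(w).

```python
import numpy as np

def fgsm_perturb(x, w, eps):
    # For a linear score s(x) = w @ x, the gradient of s with respect to x is w,
    # so the worst-case L-infinity perturbation of size eps is -eps * sign(w).
    return x - eps * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # hypothetical model weights
x = 0.1 * np.sign(w)       # an input the model scores as positive
assert w @ x > 0

x_adv = fgsm_perturb(x, w, eps=0.2)
print(float(w @ x_adv) < 0)  # a 0.2-per-coordinate shift flips the score
```

Certification techniques of the kind discussed in the talk aim for the converse guarantee: proving that no perturbation within a given budget can change the model's decision.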