Jiancong Xiao


Welcome to my homepage! I am currently a postdoctoral researcher at the University of Pennsylvania, working with Prof. Qi Long and Prof. Weijie J. Su.

Previously, I obtained my Ph.D. from The Chinese University of Hong Kong, Shenzhen, where I was advised by Prof. Zhi-Quan (Tom) Luo and worked closely with Prof. Ruoyu Sun. Prior to that, I received my M.S. degree from The Chinese University of Hong Kong and my B.S. degree from Sun Yat-sen University.

Research Interests: I am broadly interested in statistical and learning-theoretic questions in trustworthy machine learning, with a recent focus on the theory of large language models.

  1. Adversarial Robustness: Explaining adversarial examples, robust overfitting, and adversarially robust generalization from a learning-theory perspective.

  2. Large Language Models: Establishing theories for fine-tuning, algorithmic bias, calibration, hallucinations, etc.

  3. Classical Learning Theory: Optimization (non-convex and non-smooth problems, convergence, and stability); Generalization (Rademacher complexity, VC dimension, PAC-Bayes, NTK).

news

Oct 07, 2024 Attending COLM 2024 at UPenn.
Sep 20, 2024 I will attend the SIAM Conference on Mathematics of Data Science (MDS24) in Atlanta, Georgia.
Aug 03, 2024 I will attend JSM 2024 in Portland, Oregon. Welcome to my talk on the algorithmic bias of LLMs in the session titled “Harnessing Large Language Models: Opportunities and Challenges for Statistics.”
Jun 30, 2024 I will attend COLT 2024 in Edmonton, Canada. I will chair a session titled “Adversarial/Robust Learning” and also present our recent work on adversarially robust generalization.
May 08, 2024 Two papers about adversarial training theory are accepted to COLT 2024 and ICML 2024, respectively!

selected publications

  1. arXiv
    On the Algorithmic Bias of Aligning Large Language Models with RLHF: Preference Collapse and Matching Regularization
    Jiancong Xiao, Ziniu Li, Xingyu Xie, Emily Getzen, Cong Fang, Qi Long, and Weijie J. Su
    Submitted to Journal of the American Statistical Association (JASA), Major Revision, 2024
  2. COLT 2024
    Bridging the Gap: Rademacher Complexity in Robust and Standard Generalization
    Jiancong Xiao, Ruoyu Sun, Qi Long, and Weijie J. Su
    In Conference on Learning Theory, 2024
  3. ICML 2024
    Uniformly Stable Algorithms for Adversarial Training and Beyond
    Jiancong Xiao*, Jiawei Zhang*, Zhi-Quan Luo, and Asuman E. Ozdaglar
    In International Conference on Machine Learning, 2024
  4. NeurIPS 2023
    PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization
    Jiancong Xiao, Ruoyu Sun, and Zhi-Quan Luo
    In Advances in Neural Information Processing Systems, 2023
  5. NeurIPS 2022 Spotlight
    Stability Analysis and Generalization Bounds of Adversarial Training
    Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Jue Wang, and Zhi-Quan Luo
    In Advances in Neural Information Processing Systems, 2022
  6. arXiv
    Adversarial Rademacher Complexity of Deep Neural Networks
    Jiancong Xiao, Yanbo Fan, Ruoyu Sun, and Zhi-Quan Luo
    Submitted to Journal of Machine Learning Research (JMLR), Major Revision, 2022