About Me

I am a Ph.D. student at the University of Texas at Austin (UT Austin). I am honored to be advised by Prof. Qiang Liu.
I am interested in developing fundamental yet computationally feasible algorithms for the basic learning, inference, and optimization problems that underpin cutting-edge AI/ML/statistical technologies. These days, I am mostly drawn to the training efficiency of large-scale models.

If we share research interests, if you would like to explore a collaboration, or if you simply want to connect for a chat, feel free to contact me. I'm always open to conversation :)

Education

Ph.D. student in Computer Science (Aug 2022 - present)
Advisor: Prof. Qiang Liu.

Projects

This project takes an optimization-centric approach to making large-scale model training more efficient. Our primary contribution is a set of novel optimization algorithms that generalize better, require less hyperparameter tuning, and incur lower computational and memory costs: they reach comparable test accuracy in less training time and deliver better models at similar training cost. Memory efficiency comes from schemes that factorize the first- and second-order momentum statistics. We further exploit the parallel computing capabilities of GPUs, craft custom optimizers for specific model architectures, and prioritize optimization objectives that generalize robustly across diverse data distributions and architectures. This research paves the way for more efficient and practical large-scale model training across domains.
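The description above leaves the factorization scheme unspecified. As one illustration of why factorizing second-order momentum saves memory, here is a minimal NumPy sketch in the spirit of Adafactor-style rank-1 factorization: for an m × n weight matrix it stores m + n statistics instead of m·n. The function name, hyperparameters, and the rank-1 reconstruction are illustrative assumptions, not this project's actual algorithm.

```python
import numpy as np

def factored_second_moment_update(R, C, grad, beta2=0.999, eps=1e-30):
    """Illustrative Adafactor-style step: keep a row statistic R (m,) and a
    column statistic C (n,) instead of a full (m, n) second-moment matrix,
    cutting optimizer-state memory from O(m*n) to O(m + n)."""
    sq = grad ** 2 + eps
    R = beta2 * R + (1 - beta2) * sq.mean(axis=1)  # EMA of per-row mean squared grads
    C = beta2 * C + (1 - beta2) * sq.mean(axis=0)  # EMA of per-column mean squared grads
    V = np.outer(R, C) / R.mean()                  # rank-1 reconstruction of second moment
    return R, C, grad / np.sqrt(V)                 # preconditioned gradient

# Toy usage on a 4 x 3 weight matrix (hypothetical example):
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
R, C = np.zeros(4), np.zeros(3)
g = rng.normal(size=W.shape)
R, C, precond_g = factored_second_moment_update(R, C, g)
W -= 1e-2 * precond_g  # SGD-style step with the factored preconditioner
```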
Lion (Evolved Sign Momentum), a new optimizer discovered through program search, has shown promising results in training large AI models. It performs comparably or favorably to AdamW but with greater memory efficiency. This work aims to demystify Lion. Based on both continuous-time and discrete-time analysis, we demonstrate that Lion is a theoretically novel and principled approach for minimizing a general loss function f(x) while enforcing a bound constraint ‖x‖∞ ≤ 1/λ. Our findings provide valuable insights into the dynamics of Lion and pave the way for further improvements and extensions of Lion-related algorithms.
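For concreteness, the Lion update itself is short. Below is a minimal NumPy sketch of the update rule from the original Lion paper: the sign of an interpolated momentum plus decoupled weight decay. The weight-decay coefficient plays the role of λ in the bound constraint ‖x‖∞ ≤ 1/λ above; the hyperparameter values are illustrative defaults.

```python
import numpy as np

def lion_step(x, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.01):
    """One Lion update: sign of an interpolated momentum, plus decoupled decay."""
    update = np.sign(beta1 * m + (1 - beta1) * grad)  # sign(c_t)
    x = x - lr * (update + weight_decay * x)          # decoupled weight decay = lambda
    m = beta2 * m + (1 - beta2) * grad                # momentum EMA
    return x, m
```

Only the single momentum buffer m is stored per parameter, which is the source of Lion's memory advantage over AdamW (which keeps both first- and second-moment estimates).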

Selected Publications & Preprints

Lizhang Chen*, Bo Liu*, Kaizhao Liang*, Qiang Liu
Spotlight, ICLR 2024
Lizhang Chen*, Bo Liu*, Lemeng Wu*, Kaizhao Liang, Jiaxu Zhu, Chen Liang, Raghuraman Krishnamoorthi, Qiang Liu
NeurIPS 2024
Kaizhao Liang, Bo Liu, Lizhang Chen, Qiang Liu
NeurIPS 2024

Awards

  • McCombs Dean's Fellowship, University of Texas at Austin. 2022 - 2027
  • Mitacs Globalink Research Internship. 2021
  • Honorable Mention in Chinese Undergraduate Physics Tournament. 2020 (5%)
  • Meritorious in North China Undergraduate Physics Tournament. 2020 (0.5%)
  • Honorable Mention in Chinese Undergraduate Physical Experiment Competition. 2020 (2%)
  • Meritorious in Chinese Mathematical Olympiad (CMO). 2017



  • All rights reserved. Last updated April 2024.