Co-Designing Algorithms and Hardware for Machine Learning Systems
Date: Wed, October 01, 2025
Time: 9:30am - 10:30am
Location: Holmes Hall 389
Speaker: Dr. Caiwen Ding, University of Minnesota - Twin Cities
(hosted by Prof. Hanqing Guo (guohanqi@hawaii.edu), College of Engineering, ECE Department)
ECE Graduate Students: This will count towards your seminar credit.
Abstract
The rapid deployment of ML faces challenges such as prolonged computation and high memory footprint on systems. In this talk, we will present several ML acceleration frameworks built through algorithm-hardware co-design. First, we introduce a fine-grained crossbar-based ML accelerator. Rather than mapping trained positive and negative weights post hoc, we proactively ensure that all weights within the same crossbar column share the same sign, reducing area overhead. Additionally, by dividing the crossbar into sub-arrays, we enable efficient input zero-bit skipping. Next, we focus on co-designing graph neural network (GNN) training: to leverage training sparsity and enhance explainable ML, we propose a hardware-friendly nonlinearity with tailored GPU kernel support. Finally, we explore the use of Large Language Models (LLMs) for AI accelerator design, demonstrating their potential to automate and optimize hardware architectures for ML workloads. Our approaches outperform state-of-the-art methods across various tasks.
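To make the sign-sharing idea concrete: in a resistive crossbar, positive and negative weights normally require paired columns, so forcing every weight in a column to share one sign can halve that overhead. The sketch below is purely illustrative (not the speaker's actual method, and the clamp-to-dominant-sign projection is an assumption for demonstration): it checks the column-sign property and crudely enforces it on a small weight matrix.

```python
import numpy as np

def column_sign_consistent(W):
    """Return, per column, whether all nonzero weights share one sign."""
    ok = []
    for col in W.T:
        nz = col[col != 0]
        ok.append(nz.size == 0 or bool(np.all(nz > 0)) or bool(np.all(nz < 0)))
    return np.array(ok)

def enforce_column_signs(W):
    """Illustrative projection: keep each column's dominant-sign entries,
    clamp the rest to zero, so the column maps onto a single-sign crossbar
    column. (A real co-design would constrain training instead.)"""
    W = W.copy()
    for j in range(W.shape[1]):
        col = W[:, j]
        pos_mass = col[col > 0].sum()
        neg_mass = -col[col < 0].sum()
        if pos_mass >= neg_mass:
            W[:, j] = np.clip(col, 0, None)   # keep positives only
        else:
            W[:, j] = np.clip(col, None, 0)   # keep negatives only
    return W

W = np.array([[0.5, -0.2],
              [-0.1, -0.7],
              [0.9,  0.3]])
Wc = enforce_column_signs(W)
assert column_sign_consistent(Wc).all()
```

In the talk's framework the sign constraint is imposed during training rather than by post hoc clamping; this snippet only shows what the resulting column-sign invariant looks like.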
Biography
Caiwen Ding is an Associate Professor in the Department of Computer Science and Engineering at the University of Minnesota – Twin Cities. From 2019 to 2024, he was an assistant professor in the School of Computing at the University of Connecticut. He received his Ph.D. from Northeastern University, Boston, in 2019, supervised by Prof. Yanzhi Wang. His research interests include efficient embedded and high-performance systems for machine learning, and machine learning for hardware design. His work has been published in high-impact venues (e.g., DAC, ICCAD, ASPLOS, ISCA, MICRO, HPCA, SC, ICS, FPGA, Oakland, NeurIPS, ICML, ICCV, IJCAI, AAAI, ACL, EMNLP).
He is a recipient of the 2024 NSF CAREER Award, an Amazon Research Award, and a Cisco Research Award. He received Best Paper Awards at the 2025 IEEE International Conference on LLM-Aided Design (ICLAD) and at the DL-Hardware Co-Design for AI Acceleration (DCAA) workshop at AAAI 2023, an Outstanding Student Paper Award at HPEC 2023, Best Paper Nominations at DATE 2018 and DATE 2021, a Publicity Paper at DAC 2022, and the 2021 Excellence in Teaching Award. His team won first place in accuracy and fourth place overall at the 2022 TinyML Design Contest at ICCAD, and third place at the 2024 ICCAD Contest on LLM-Assisted Hardware Code Generation.
He serves as an Associate Editor for the Neural Networks track at IEEE MWSCAS and for the IEEE TCCPS Newsletter. His research has been funded by NSF, DOE, NIH, DOT, USDA, and industrial sponsors.