Hello and welcome! You can call me Hank. I am currently a Ph.D. student at MIT, advised by Prof. Song Han.
Previously, I received my Master's degree from MIT in 2020 and
my Bachelor's degree from Fudan University in 2018.
During my undergrad, I was fortunate to work with Prof. Jason Cong, Prof. C.-J. Richard Shi, Prof. Xiaoyang Zeng and Prof. Yibo Fan.
My research lies at the intersection of Computer
Architecture, Quantum Computing, and Machine Learning.
Apr. 2020. "MicroNet for Efficient Language Modeling" accepted to the Journal of Machine Learning Research (JMLR), 2020.
Apr. 2020. "HAT: Hardware-Aware Transformers for Efficient Natural Language Processing" accepted to ACL 2020.
Mar. 2020. "APQ: Joint Search for Network Architecture, Pruning and Quantization Policy" accepted to CVPR 2020.
Feb. 2020. I gave a talk at Qualcomm Research Center on "GCN-RL Circuit Designer: Transferable Transistor Sizing With Graph Neural Networks and Reinforcement Learning".
Feb. 2020. I gave a talk at HPCA 2020 on "SpArch: Efficient Architecture for Sparse Matrix Multiplication".
Feb. 2020. "GCN-RL Circuit Designer: Transferable Transistor Sizing with Graph Neural Networks and Reinforcement Learning" accepted to DAC 2020.
Dec. 2019. I gave a talk on Efficient Language Modeling at the NeurIPS 2019 MicroNet Challenge.
Nov. 2019. "SpArch: Efficient Architecture for Sparse Matrix Multiplication" accepted to HPCA 2020.
Nov. 2019. I won the NeurIPS 2019 MicroNet Challenge, code open-sourced.
Park: An Open Platform for Learning-Augmented Computer Systems
Hongzi Mao, Parimarjan Negi, Akshay Narayan, Hanrui Wang, Jiacheng Yang, Haonan Wang, Ryan Marcus, Ravichandra Addanki, Mehrdad Khani Shirkoohi, Songtao He, Vikram Nathan, Frank Cangialosi, Shaileshh Venkatakrishnan, Wei-Hung Weng, Song Han, Tim Kraska, Mohammad Alizadeh
Advances in Neural Information Processing Systems (NeurIPS), 2019
Paper /
Code
Learning to Design Circuits
Hanrui Wang*, Jiacheng Yang*, Hae-Seung Lee, Song Han (*Equal Contributions)
Advances in Neural Information Processing Systems (NeurIPS) Workshop on ML for Systems, 2018
Paper /
Project Page
Understanding Performance Differences of FPGAs and GPUs
Jason Cong, Zhenman Fang, Michael Lo, Hanrui Wang, Jingxian Xu, Shaochong Zhang (Alphabetical Order)
International Symposium On Field-Programmable Custom Computing Machines (FCCM), 2018
Paper
Honors and Awards
2021 Qualcomm Innovation Fellowship
2021 Baidu Graduate Scholarship
2021 Analog Devices Outstanding Student Designer Award
2021 Global Top 100 Chinese Rising Stars in AI Award
2020 Nvidia Graduate Fellowship Finalist
2020 DAC Young Fellowship & Young Fellow Best Presentation Award
2019 Champion of the NeurIPS 2019 MicroNet Efficient Language Model Competition
2019 Best Paper Award of the ICML 2019 Reinforcement Learning for Real Life Workshop
2018 Bronze Medal in Kaggle TensorFlow Speech Recognition Challenge
2017 UCLA CSST Fellowship & CSST Best Research Award
2016 Chun-Tsung Research Fellowship
2015/2016/2017 China National Scholarship
Talks
Efficient Natural Language Processing
Machine Learning for Analog Circuit Design
Efficient Sparse Matrix Multiplication Accelerator