Ruochen Wang

CS PhD @ UCLA, Google

Google Scholar

About Me

I am a Ph.D. student at UCLA, advised by Prof. Cho-Jui Hsieh. My research focuses on efficient and automated machine learning. Beyond research, I am also open to venture capital and entrepreneurial opportunities.

I obtained my dual B.S. degree in Computer Science and Statistics from the University of Michigan with the highest distinction. During that time, I interned at Microsoft Research and SenseTime, working on machine learning and computer vision, and helped a startup develop its prototype robots. Before that, I worked on quantitative investing at the Shanghai Key Laboratory of Finance.

Research Overview

I study the problem of AI for AI: leveraging the power of AI to automate its own development. For the past two years, I have focused mainly on a prominent direction under this umbrella, Automated Machine Learning (AutoML), which includes:

  • Neural Architecture Search (NAS)
  • Dataset Condensation (DC)
  • Optimizer Search (OS)

AutoML is a highly general field that connects to many, if not all, facets of machine learning and its applications. Because of this, I am (or was) also involved in several related topics, such as:

  • Transformers (Efficient Inference, Multi-Modality, etc.)
  • Scalable Graph Learning Algorithms
  • Adversarial Robustness
  • Federated Learning


News

  • [Sep 2022] Our first benchmark for Dataset Condensation methods was accepted at NeurIPS 2022; check out DC-BENCH.
  • [Sep 2022] Two papers (one 1st author) accepted at NeurIPS 2022.
  • [Jul 2022] We released DC-BENCH, the first benchmark for evaluating Dataset Condensation methods.
  • [May 2022] I started my internship on the Perception Team at Google, co-advised by Dr. Boqing Gong and Dr. Ting Liu.
  • [May 2022] I received the Outstanding Graduate Student Award for the Master's degree at UCLA.
  • [Jan 2022] Two papers (one 1st author) accepted at ICLR 2022.
  • [Jul 2021] One paper (1st author) accepted at ICCV 2021.
  • [May 2021] I will present our paper in the Outstanding Paper Session at ICLR 2021.
  • [Apr 2021] Our paper "Rethinking Architecture Selection in Differentiable NAS" won the Outstanding Paper Award at ICLR 2021.
  • [Jan 2021] Two papers (1st author) accepted at ICLR 2021.

Education & Experiences

  • University of California, Los Angeles
    Ph.D. in Computer Science, 2020 - 2025 (expected)
    M.S. in Computer Science, GPA=4.0/4.0

  • University of Michigan - Ann Arbor
    B.S. in Computer Science and Statistics, GPA=4.0/4.0, 2015 - 2019

  • Shanghai University of Finance and Economics
    Finance (Honors Class, 30 students selected from the entire college), GPA=3.93/4.0 (1st), 2013 - 2015 (Transferred)


Publications

FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Yuanhao Xiong*, Ruochen Wang*, Minhao Cheng, Felix Yu, Cho-Jui Hsieh
arXiv preprint, 2022



* denotes equal contribution

DC-BENCH: Dataset Condensation Benchmark
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
To appear in NeurIPS 2022, Datasets and Benchmarks Track

[Paper] [Code] [Leaderboard]

Efficient Non-Parametric Optimizer Search for Diverse Tasks
Ruochen Wang, Yuanhao Xiong, Minhao Cheng, Cho-Jui Hsieh
To appear in NeurIPS 2022

[Paper] [Code]

Generalizing Few-Shot NAS with Gradient Matching
Shoukang Hu*, Ruochen Wang*, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, Jiashi Feng
In International Conference on Learning Representations (ICLR), 2022.

[Paper] [Code]

Learning to Schedule Learning Rate with Graph Neural Networks
Yuanhao Xiong, Li-Cheng Lan, Xiangning Chen, Ruochen Wang, Cho-Jui Hsieh
In International Conference on Learning Representations (ICLR), 2022.


RANK-NOSH: Efficient Predictor-Based NAS via Non-Uniform Successive Halving
Ruochen Wang, Xiangning Chen, Minhao Cheng, Xiaocheng Tang, Cho-Jui Hsieh
In International Conference on Computer Vision (ICCV), 2021.

[Paper] [Code]

Rethinking Architecture Selection in Differentiable NAS
Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh
In International Conference on Learning Representations (ICLR), 2021. (Outstanding Paper Award)

[Paper] [Code] [Talk] [Media]

DrNAS: Dirichlet Neural Architecture Search
Xiangning Chen*, Ruochen Wang*, Minhao Cheng*, Xiaocheng Tang, Cho-Jui Hsieh
In International Conference on Learning Representations (ICLR), 2021.

[Paper] [Code]

Awards & Honors


  • Outstanding Graduate Student Award (Master's degree, 1 per department) - UCLA CS Department, 05/2022
  • Outstanding Paper Award - ICLR, 04/2021
  • Highest Distinction Graduate Award - The University of Michigan, 08/2019
  • Berkeley Fung’s Excellence Scholarship - UC Berkeley Graduate Admission, 03/2019
  • James B. Angell Scholar - The University of Michigan, 2017-2019
  • EECS Scholar - The University of Michigan, 2017-2019
  • University Honors - The University of Michigan, 2015-2018
  • Shanghai City Scholarship - Shanghai Municipal People's Government, 09/2014
  • People's Scholarship (1st) - Shanghai University of Finance and Economics, 09/2014


  • Award of Excellence - Microsoft Research Asia (MSRA), 09/2019
  • Outstanding Intern Award - SenseTime, 01/2019
  • Honorable Employee - OvoTechnologies, 09/2016


Professional Services

  • Reviewer for ICML 2021, NeurIPS 2021, ICLR 2022, ICML 2022, NeurIPS 2022, TMLR