Lili Chen

I'm a first-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, fortunate to be advised by Deepak Pathak.

Previously, I received my B.A. in Computer Science from UC Berkeley. I was an undergraduate researcher at the Robot Learning Lab, where I was advised by Pieter Abbeel and Kimin Lee. After graduating, I spent some time working on large language models at Cohere.

Email  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter


I’m broadly interested in machine learning and robotics. I hope to build intelligent agents that can generalize to a wide range of real-world scenarios.

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen*, Kevin Lu*, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas*, Igor Mordatch*
Neural Information Processing Systems (NeurIPS), 2021.
pdf / website / code / video (by Yannic Kilcher)

We propose to replace traditional offline RL algorithms with a simple transformer model trained on sequences of returns, states, and actions with an autoregressive prediction loss.
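The input construction can be sketched in a few lines. This is an illustrative NumPy sketch (not the paper's implementation): it computes returns-to-go from a reward sequence and interleaves (return, state, action) tokens in the order the transformer consumes them.

```python
import numpy as np

def returns_to_go(rewards):
    """Return-to-go at each timestep: the sum of rewards from t onward."""
    return np.cumsum(rewards[::-1])[::-1]

def interleave_tokens(rewards, states, actions):
    """Build the (return-to-go, state, action) token sequence that the
    transformer models autoregressively; at test time, conditioning on a
    desired return prompts the model to generate actions achieving it."""
    rtg = returns_to_go(np.asarray(rewards, dtype=float))
    seq = []
    for R, s, a in zip(rtg, states, actions):
        seq.extend([("R", R), ("s", s), ("a", a)])
    return seq
```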

State Entropy Maximization with Random Encoders for Efficient Exploration
Younggyo Seo*, Lili Chen*, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
International Conference on Machine Learning (ICML), 2021.
pdf / website / code

We tackle exploration for high-dimensional observation spaces using a k-NN state entropy estimator in the low-dimensional representation space of a randomly initialized CNN.
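The core idea fits in a few lines. Below is an illustrative sketch (dimensions and the linear "encoder" are placeholders for the paper's random CNN): observations pass through a fixed random encoder, and the intrinsic reward is the log distance to the k-th nearest neighbor among stored representations, a particle-based estimate of state entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64-dim observations projected to 8 dims.
W = rng.standard_normal((8, 64))  # fixed at init, never trained

def encode(obs):
    """Random, frozen encoder (the paper uses a random CNN for images)."""
    return W @ obs

def intrinsic_reward(y, buffer_ys, k=3):
    """log(1 + distance to the k-th nearest neighbor) of representation y
    among previously seen representations buffer_ys."""
    dists = np.sort(np.linalg.norm(buffer_ys - y, axis=1))
    return np.log(dists[k - 1] + 1.0)
```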

Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings
Lili Chen, Kimin Lee, Aravind Srinivas, Pieter Abbeel
Neural Information Processing Systems (NeurIPS), 2021.
pdf / code

We present a compute- and memory-efficient modification of off-policy visual RL methods by freezing lower layers of CNN encoders and storing low-dimensional embeddings.
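A toy sketch of the idea, with made-up sizes and a linear stand-in for the frozen convolutional layers: each frame is encoded once, and only the low-dimensional embedding is stored and replayed, so the frozen layers' forward pass is skipped on every gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((32, 3072))  # stand-in for frozen conv layers

def embed(frame):
    """Run the frozen encoder once, then discard the raw pixels."""
    return np.tanh(W_frozen @ frame.ravel())

class EmbeddingReplayBuffer:
    """Stores 32-dim embeddings instead of 32x32x3 frames, cutting replay
    memory and avoiding repeated encoder computation during training."""
    def __init__(self):
        self.embeddings = []

    def add(self, frame):
        self.embeddings.append(embed(frame))

    def sample(self, batch_size):
        idx = rng.integers(len(self.embeddings), size=batch_size)
        return np.stack([self.embeddings[i] for i in idx])
```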

Ising Model Optimization Problems on a FPGA Accelerated Restricted Boltzmann Machine
Saavan Patel, Lili Chen, Philip Canoza, Sayeef Salahuddin
arXiv preprint, 2020.

We demonstrate usage of RBMs to solve NP-Hard problems efficiently by mapping the RBM onto a reconfigurable FPGA.
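The sampling primitive being accelerated is block Gibbs sampling: because an RBM's visible and hidden units are conditionally independent given the other layer, each whole layer can be updated in parallel, which maps naturally onto FPGA hardware. A minimal NumPy sketch of one update (problem couplings would be encoded in W, b, c; details here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c):
    """One block-Gibbs update: sample all hidden units given the visible
    layer, then all visible units given the hidden layer. On the FPGA,
    each layer's units are sampled simultaneously."""
    h = (rng.random(W.shape[1]) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random(W.shape[0]) < sigmoid(h @ W.T + b)).astype(float)
    return v, h
```

Repeating this step draws samples whose low-energy configurations correspond to good solutions of the encoded optimization problem.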


I hope to improve the accessibility of computer science education at all levels.

CS 70: Discrete Mathematics and Probability Theory

Head Teaching Assistant: Spring 2021, Fall 2020
Teaching Assistant: Spring 2020
Reader: Fall 2019
Computer Science Mentors [Website]

Mentor: Fall 2019, Spring 2019
Berkeley ANova [Website]

Mentor: Spring 2019, Fall 2018, Spring 2018

Website template from here.