Lili Chen

I do research on large language models at Cohere. Before this, I received my B.A. in Computer Science from UC Berkeley in May 2021.

At Berkeley, I was an undergraduate researcher at the Robot Learning Lab, where I was fortunate to be advised by Prof. Pieter Abbeel and Kimin Lee. I was also a Head TA for my favorite course, CS 70.

Email  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter

Research

I'm interested in reinforcement learning, self-supervised learning, and natural language processing.

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen*, Kevin Lu*, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas*, Igor Mordatch*
Neural Information Processing Systems (NeurIPS), 2021.
pdf / website / code / video (by Yannic Kilcher)

We propose to replace traditional offline RL algorithms with a simple transformer model trained with an autoregressive prediction loss on sequences of returns, states, and actions.
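A minimal numpy sketch of the input format this describes: computing returns-to-go and interleaving (return, state, action) triples into one token sequence for autoregressive modeling. The function names and the toy trajectory are illustrative, not taken from the paper's code.

```python
import numpy as np

def returns_to_go(rewards):
    """Suffix sums of rewards: the return-to-go at each timestep."""
    return np.cumsum(rewards[::-1])[::-1]

def interleave_trajectory(rtg, states, actions):
    """Interleave (return-to-go, state, action) triples into a single
    token sequence, the layout fed to the transformer."""
    tokens = []
    for g, s, a in zip(rtg, states, actions):
        tokens.extend([("R", g), ("s", s), ("a", a)])
    return tokens

rewards = np.array([1.0, 0.0, 2.0])
rtg = returns_to_go(rewards)  # [3.0, 2.0, 2.0]
seq = interleave_trajectory(rtg, ["s0", "s1", "s2"], [0, 1, 0])
```

At evaluation time, conditioning on a desired return-to-go as the first token is what lets the model generate actions that try to achieve that return.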

State Entropy Maximization with Random Encoders for Efficient Exploration
Younggyo Seo*, Lili Chen*, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
International Conference on Machine Learning (ICML), 2021.
pdf / website / code

We tackle exploration for high-dimensional observation spaces using a k-NN state entropy estimator in the low-dimensional representation space of a randomly initialized CNN.
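A small sketch of the idea, with a fixed random linear projection standing in for the paper's randomly initialized CNN: embed observations, then reward each state by its distance to its k-th nearest neighbor in representation space (a standard k-NN entropy proxy). All names and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random "encoder" (stand-in for a randomly initialized CNN):
# never trained, it just projects observations to a low-dim space.
obs_dim, rep_dim = 32, 8
W = rng.normal(size=(obs_dim, rep_dim))

def encode(obs):
    return obs @ W

def knn_intrinsic_reward(reps, k=3):
    """Intrinsic reward per state: log of the distance to its k-th
    nearest neighbor in representation space, a k-NN entropy proxy."""
    dists = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=-1)
    kth = np.sort(dists, axis=1)[:, k]  # column 0 is the self-distance
    return np.log(kth + 1.0)

obs = rng.normal(size=(16, obs_dim))
rewards = knn_intrinsic_reward(encode(obs))
```

States whose embeddings sit far from their neighbors get larger rewards, pushing the policy toward rarely visited regions without ever training the encoder.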

Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings
Lili Chen, Kimin Lee, Aravind Srinivas, Pieter Abbeel
Neural Information Processing Systems (NeurIPS), 2021.
pdf / code

We present a compute- and memory-efficient modification of off-policy visual RL methods by freezing lower layers of CNN encoders and storing low-dimensional embeddings.
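A toy numpy illustration of the trade this describes: because the lower encoder layers are frozen, each observation's embedding can be computed once and the replay buffer can store the small embedding instead of the raw observation. The class and layer here are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen lower "encoder" layer: its weights never change, so each
# observation's embedding only needs to be computed once.
W_frozen = rng.normal(size=(64, 16))

def frozen_encoder(obs):
    return np.maximum(obs @ W_frozen, 0.0)  # one ReLU layer as stand-in

class EmbeddingReplayBuffer:
    """Replay buffer that stores low-dimensional embeddings rather than
    raw observations, saving memory and repeated encoder passes."""
    def __init__(self):
        self.embeddings = []

    def add(self, obs):
        self.embeddings.append(frozen_encoder(obs))

    def sample(self, idx):
        return self.embeddings[idx]

buf = EmbeddingReplayBuffer()
obs = rng.normal(size=(64,))
buf.add(obs)
emb = buf.sample(0)  # 16-dim embedding instead of the 64-dim observation
```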

Ising Model Optimization Problems on a FPGA Accelerated Restricted Boltzmann Machine
Saavan Patel, Lili Chen, Philip Canoza, Sayeef Salahuddin
arXiv preprint, 2020.
pdf

We demonstrate the use of RBMs to efficiently solve NP-hard optimization problems by mapping the RBM onto a reconfigurable FPGA.
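For intuition, here is a minimal block-Gibbs sampling sweep of an RBM, the update that hardware like an FPGA can parallelize across units (all hidden units are conditionally independent given the visible layer, and vice versa). The sizes and random weights are illustrative only; the paper's problem mappings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    """One block-Gibbs sweep: sample all hidden units given the visible
    layer, then all visible units given the hidden layer."""
    h = (rng.random(c.shape) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random(b.shape) < sigmoid(h @ W.T + b)).astype(float)
    return v, h

n_vis, n_hid = 6, 4
W = rng.normal(size=(n_vis, n_hid))  # couplings encoding the problem
b, c = np.zeros(n_vis), np.zeros(n_hid)

v = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(10):
    v, h = gibbs_step(v, W, b, c, rng)
```

Repeated sweeps draw samples from the RBM's Boltzmann distribution, so low-energy (high-probability) states of a suitably encoded problem are visited most often.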

Teaching

I'm passionate about improving the accessibility of computer science education at all levels.

CS 70: Discrete Mathematics and Probability Theory

Head Teaching Assistant: Spring 2021, Fall 2020
Teaching Assistant: Spring 2020
Reader: Fall 2019
Computer Science Mentors [Website]

Mentor: Fall 2019, Spring 2019
Berkeley ANova [Website]

Mentor: Spring 2019, Fall 2018, Spring 2018

Website template from here.