I am currently a Lecturer (Assistant Professor) in Artificial Intelligence in the Department of Computer Science at the University of Liverpool. Prior to Liverpool, I spent about 2.5 years as a Postdoctoral Researcher in the Whiteson Research Lab at the University of Oxford, advised by Shimon Whiteson. I was also a Non-Stipendiary Lecturer in Computer Science at St Catherine’s College, University of Oxford.
My research focuses mainly on (deep) reinforcement learning, multi-agent systems, interactive machine learning, and curriculum learning. My primary research goal is to develop reinforcement learning algorithms that are more sample-efficient, robust, and scalable, with or without human interaction. During my postdoc, I worked mainly on developing new deep multi-agent reinforcement learning algorithms for discrete and continuous cooperative multi-agent tasks. My PhD research focused on interactive machine learning and curriculum learning, where we studied how non-expert humans want to teach agents to solve complex sequential decision-making tasks, and how to incorporate these insights into the development of new machine learning algorithms.
I received my doctorate from the School of Electrical Engineering and Computer Science at Washington State University in 2018, after working for five years in the Intelligent Robot Learning Lab with my advisor Matthew E. Taylor. Before that, I worked as a front-end web developer at Tencent, after receiving my Bachelor’s degree in Computer Science from Huazhong University of Science and Technology in China in 2012.
MAY 2022: I gave an invited talk on Cooperative Multi-Agent Reinforcement Learning at the Adaptive and Learning Agents (ALA) Workshop at AAMAS 2022.
SEP 2021: Two of our papers FACMAC: Factored Multi-Agent Centralised Policy Gradients and Regularized Softmax Deep Multi-Agent Q-Learning got accepted at NeurIPS 2021.
SEP 2021: I gave an invited talk at Transparent Agency and Learning, a CINEMENTAS workshop.
MAY 2021: Two of our papers Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning and UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning got accepted at ICML 2021.
JAN 2021: Our paper RODE: Learning Roles to Decompose Multi-Agent Tasks got accepted at ICLR 2021.
SEP 2020: Our paper Weighted QMIX: Improving Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning got accepted at NeurIPS 2020.
AUG 2020: Our paper Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey got accepted to the Journal of Machine Learning Research (JMLR).
DEC 2019: Our paper Optimistic Exploration even with a Pessimistic Initialisation got accepted at ICLR 2020 in Addis Ababa, Ethiopia!
NOV 2019: I am now a Non-Stipendiary Lecturer in Computer Science at St Catherine’s College, University of Oxford.
OCT 2018: I am now doing my internship at Microsoft Research in Redmond.
JUL 2018: I defended my Ph.D. dissertation entitled “Learning from Human Teachers: Supporting How People Want to Teach in Interactive Machine Learning.”
JUL 2018: I was selected as an organizer for the Adaptive and Learning Agents (ALA) Workshop at AAMAS 2019.
MAR 2018: Our journal paper has been accepted for publication in the IEEE Transactions on Emerging Topics in Computational Intelligence.
MAR 2018: I am now doing my internship at Borealis AI in Edmonton, Canada.
JUL 2017: I am now doing my internship at Tencent AI Lab in Seattle.