About Me
I am currently a Lecturer (Assistant Professor) in Artificial Intelligence in the Department of Computer Science at the University of Liverpool. I am also a Fellow of the Higher Education Academy. Prior to joining Liverpool, I spent about 2.5 years as a Postdoctoral Researcher in reinforcement learning in the Whiteson Research Lab at the University of Oxford, advised by Shimon Whiteson. I was also a Non-Stipendiary Lecturer in Computer Science at St Catherine's College, University of Oxford.
My research focuses mainly on (deep) reinforcement learning, multi-agent systems, interactive machine learning, and curriculum learning. My primary research goal is to develop reinforcement learning algorithms that are more sample-efficient, robust, and scalable, both with and without human interaction. During my postdoc, I worked mainly on developing new deep multi-agent reinforcement learning algorithms for discrete and continuous cooperative multi-agent tasks. My PhD research focused on interactive machine learning and curriculum learning, where we studied how non-expert humans want to teach an agent to solve new, complex sequential decision-making tasks and how to incorporate these insights into the development of new machine learning algorithms.
I received my doctorate from the School of Electrical Engineering and Computer Science at Washington State University in 2018, after working for five years in the Intelligent Robot Learning Lab with my advisor Matthew E. Taylor. Before that, I worked as a front-end web developer at Tencent after receiving my Bachelor's degree in Computer Science from Huazhong University of Science and Technology in China in 2012.
Prospective Students: I am currently seeking a PhD student to work with me on reinforcement learning from human feedback. The application deadline is April 18, 2025. More details are available here.
Recent News
SEP 2024: Our paper Centralised Rehearsal of Decentralised Cooperation: Multi-Agent Reinforcement Learning for the Scalable Coordination of Residential Energy Flexibility has been accepted for publication in Applied Energy.
SEP 2024: Our paper Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning was accepted at the Findings of the Conference on Empirical Methods in Natural Language Processing.
AUG 2024: Our paper Accelerating Laboratory Automation Through Robot Skill Learning for Sample Scraping was accepted at IEEE CASE 2024 and was selected as a finalist for the Best Healthcare Automation Paper Award.
AUG 2024: Our paper Contextual Transformers for Goal-Oriented Reinforcement Learning was accepted at the SGAI International Conference on Artificial Intelligence.
FEB 2024: I served as the Co-Chair for the Competition Track of IJCAI 2024.
OCT 2023: I gave an invited talk at the Game Theory and Machine Learning Workshop at the London School of Economics.
OCT 2023: I served as a Panellist in the RL-CONFROM Workshop at IROS 2023.
AUG 2023: Our paper Deep Reinforcement Learning for Continuous Control of Material Thickness was accepted at the SGAI International Conference on Artificial Intelligence.
JUN 2023: I received the Grace Hopper Celebration 2023 Faculty Scholarship.
MAR 2023: Our paper Curriculum Learning for Relative Overgeneralization was accepted at the Adaptive and Learning Agents (ALA) Workshop at AAMAS 2023. It is Lin's first workshop paper and the result of his hard work on his final-year undergraduate project.
AUG 2022: Our paper Dependable Learning-Enabled Multiagent Systems has been accepted for publication in AI Communications.
JUL 2022: I gave a lecture on Introduction to Multi-Agent Reinforcement Learning at the CIFAR 2022 Deep Learning + Reinforcement Learning (DLRL) Summer School.
MAY 2022: I gave an invited talk on Cooperative Multi-Agent Reinforcement Learning at the Adaptive and Learning Agents (ALA) Workshop at AAMAS 2022.