In the paradigm of Reinforcement Learning (RL), an agent learns from reward feedback while interacting with the environment. However, it remains challenging to design proper reward functions for complex real-life tasks. To that end, Imitation Learning (IL), also known as Learning from Demonstration (LfD), serves as an alternative method that learns policies without any reward signal and makes better use of existing expert demonstrations. Moreover, it has been observed both theoretically and empirically that IL can help an agent learn a sub-optimal policy in a more data-efficient way than RL. Beyond single-agent tasks, IL also helps agents learn in multi-agent settings, where the expert demonstrations consist of interactions among agents.
Our workshop is a half-day workshop on Imitation Learning at DAI 2020. It aims to provide a venue that brings together academic researchers and industry practitioners (i) to discuss the principles, limitations, and applications of IL in both single-agent and multi-agent scenarios, and (ii) to foster research on innovative algorithms, novel techniques, and new applications of IL.
Autonomous Driving (AD) is the next frontier of artificial intelligence and machine learning. To further research in AD, DAI 2020 offers an international competition on autonomous decision making. The competition consists of two separate tracks: Track 1 focuses on single-agent multi-lane cruising, and Track 2 focuses on multi-agent safe driving. Your goal is to design an autonomous driving algorithm capable of driving safely and efficiently across a variety of simulated maps containing numerous tasks, such as merging, intersections, highways, and many others. The competition offers such scenarios through a simulator called SMARTS, which emulates real-world traffic at a variety of granularity levels. This workshop consists of a tutorial talk on SMARTS and reinforcement learning for AD, together with technical talks from the competition winners. Winners will be announced during this ceremony workshop. The competition is organized by the Decision Making and Reasoning Laboratory of Huawei Noah's Ark Lab and the APEX Lab of Shanghai Jiao Tong University.
Reinforcement learning (RL) is an active field of research that deals with the problem of sequential decision-making by single or multiple agents in unknown and possibly partially observable domains, whose dynamics may be deterministic, stochastic, or adversarial. In the last few years, we have seen a growing interest in RL from both research communities and industry, and recent developments in exploration-exploitation, credit assignment, policy search, model learning, transfer/hierarchical/interactive learning, online/multi-task learning, planning, and representation learning are making RL more and more appealing for real-world applications, with promising results in challenging domains such as recommendation systems, computer games, financial marketing, intelligent transportation systems, healthcare, and robotic control. Following the great success of the past four AWRL workshops held in Hamilton, New Zealand (2016), Seoul, Korea (2017), and Beijing, China (2018, 2019), the 5th AWRL workshop focuses on theoretical models, frameworks, algorithms, and analysis of RL, as well as its practical applications in various real-life domains. The half-day workshop consists of sessions devoted to invited talks on specific RL topics and presentations of publications at top conferences such as AAMAS, AAAI, IJCAI, KDD, ICML, and NeurIPS. The ultimate goal is to bring together diverse viewpoints in the RL area in an attempt to consolidate common ground, identify new research directions, and promote the rapid advancement of the RL research community.
Decision making is one of the primary goals of artificial intelligence. Building on the success of deep reinforcement learning in recent years, multi-agent (deep) reinforcement learning (MARL), which extends decision making from single-agent to multi-agent environments, has attracted increasing attention from AI researchers. Remarkable MARL algorithms have been proposed, such as CommNet, MADDPG, and COMA. However, for complicated reasons including non-stationary environments and the varied settings of training and execution, we still lack a standard measure for comparing different agent policies or different MARL algorithms. This workshop aims to bring together researchers to discuss (i) how to evaluate an agent policy in a particular multi-agent environment and (ii) how to evaluate a MARL algorithm against various opponent algorithms over various multi-agent environments.
The workshop is planned to be hosted for half a day, with one keynote and four invited talks. The keynote speaker should be a well-recognized professor or scientist working in the area. Two types of invited talks and peer-reviewed oral research talks are encouraged: (i) academic talks on fundamental reinforcement learning research with an attempt at application to IR; (ii) industrial talks on the practice of designing or applying deep reinforcement learning techniques for real-world IR tasks. Each talk is expected to be presented as a lecture with slides, with a Q&A session at the end of each talk.