Where to Add Actions in Human-in-the-Loop Reinforcement Learning
Abstract
In order for reinforcement learning systems to learn quickly in vast action spaces, such as the space of all possible pieces of text or the space of all images, leveraging human intuition and creativity is key. However, a human-designed action space is likely to be initially imperfect and limited; furthermore, humans may improve at creating useful actions with practice or new information. Therefore, we propose a framework in which a human adds actions to a reinforcement learning system over time to boost performance. In this setting, however, it is essential that we use human effort as efficiently as possible, and one significant danger is that humans waste effort adding actions at places (states) that are not very important. Therefore, we propose Expected Local Improvement (ELI), an automated method that selects states at which to query humans for a new action. We evaluate ELI on a variety of simulated domains adapted from the literature, including domains with over a million actions and domains where the simulated experts change over time. We find that ELI demonstrates excellent empirical performance, even in settings where the synthetic "experts" are quite poor.
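To make the setting concrete, the sketch below illustrates the human-in-the-loop action-addition loop described above: an automated criterion picks the state where a new human-authored action would be requested, the action is added, and learning continues with the enlarged action set. This is only a rough illustration, not the paper's ELI algorithm; the scoring function `expected_local_improvement` and the helper names here are hypothetical placeholders, and the actual criterion is defined in the paper.

```python
"""Minimal sketch of the human-in-the-loop action-addition setting.

Assumptions (not from the paper): tabular Q-values stored as
state -> {action: value}, and a placeholder scoring rule standing
in for the real Expected Local Improvement criterion.
"""
import random


def expected_local_improvement(state, q_values):
    # Hypothetical stand-in score: prefer states whose best known
    # action still looks weak, where a new action might help most.
    return 1.0 - max(q_values[state].values(), default=0.0)


def select_query_state(states, q_values):
    # Pick the state with the highest (placeholder) improvement score.
    return max(states, key=lambda s: expected_local_improvement(s, q_values))


def ask_human_for_action(state):
    # Stand-in for a human expert authoring a new action for `state`
    # (e.g. a new piece of hint text or a new image).
    return f"new_action_for_{state}_{random.randint(0, 999)}"


def human_in_the_loop_training(states, actions, q_values, num_queries):
    """Alternate RL updates with targeted requests for new actions."""
    for _ in range(num_queries):
        s = select_query_state(states, q_values)
        a = ask_human_for_action(s)
        actions[s].add(a)       # grow the action space at state s
        q_values[s][a] = 0.0    # initialize the new action's estimate
        # ... run further RL episodes with the enlarged action set ...
    return actions, q_values
```

The key design point this sketch captures is that human queries are a scarce resource, so the system, rather than the human, decides where in the state space each new action is requested.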
Authors
Travis Mandel
Yun-En Liu
Emma Brunskill
Zoran Popović
Resources
Where to Add Actions in Human-in-the-Loop Reinforcement Learning
Travis Mandel, Yun-En Liu, Emma Brunskill, Zoran Popović
AAAI Conference on Artificial Intelligence (AAAI 2017)
[Main text (551 KB PDF)]
[Appendix (186 KB PDF)]