Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping


Reward shaping is an effective technique for incorporating domain knowledge into reinforcement learning (RL). Existing approaches such as potential-based reward shaping normally make full use of a given shaping reward function. However, since the transformation of human knowledge into numeric reward values is often imperfect due to reasons such as human cognitive bias, completely utilizing the shaping reward function may fail to improve the performance of RL algorithms. In this paper, we consider the problem of adaptively utilizing a given shaping reward function. We formulate the utilization of shaping rewards as a bi-level optimization problem, where the lower level optimizes the policy using the shaping rewards and the upper level optimizes a parameterized shaping weight function for true reward maximization. We formally derive the gradient of the expected true reward with respect to the shaping weight function parameters and accordingly propose three learning algorithms based on different assumptions. Experiments in sparse-reward cartpole and MuJoCo environments show that our algorithms can fully exploit beneficial shaping rewards, while ignoring unbeneficial shaping rewards or even transforming them into beneficial ones.
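To make the bi-level idea in the abstract concrete, here is a minimal, self-contained sketch (not the paper's algorithm): a toy two-armed bandit whose shaping reward misleadingly prefers the worse arm. The lower level trains a softmax policy on the true reward plus a weighted shaping reward; the upper level adjusts the shaping weight to maximize the true return, here via a simple finite-difference estimate rather than the gradient derived in the paper. All function names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bandit: arm 0 yields true reward 1.0, arm 1 yields 0.0,
# but the (misleading) shaping reward prefers arm 1.
def true_reward(a):
    return 1.0 if a == 0 else 0.0

def shaping_reward(a):
    return 0.0 if a == 0 else 1.0

def train_policy(weight, steps=200, lr=0.1):
    """Lower level: softmax policy trained on true + weight * shaping reward."""
    theta = np.zeros(2)
    for _ in range(steps):
        p = np.exp(theta - theta.max()); p /= p.sum()
        a = rng.choice(2, p=p)
        r = true_reward(a) + weight * shaping_reward(a)
        grad = -p; grad[a] += 1.0           # REINFORCE gradient of log pi(a)
        theta += lr * r * grad
    return theta

def true_return(theta, episodes=500):
    """Evaluate the policy on the TRUE reward only."""
    p = np.exp(theta - theta.max()); p /= p.sum()
    return sum(true_reward(rng.choice(2, p=p)) for _ in range(episodes)) / episodes

# Upper level: adapt the shaping weight by finite differences on the true return.
w, eps, w_lr = 1.0, 0.5, 1.0
for _ in range(5):
    g = (true_return(train_policy(w + eps)) -
         true_return(train_policy(w - eps))) / (2 * eps)
    w += w_lr * g
    print(f"shaping weight {w:+.2f}, true return {true_return(train_policy(w)):.2f}")
```

In this toy setting the learned weight drifts negative, so the misleading shaping signal ends up pushing the policy toward the good arm, which mirrors the abstract's claim that an unbeneficial shaping reward can be ignored or even turned into a beneficial one.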

Yujing Hu, Weixun Wang, Hangtian Jia, Yixiang Wang, Yingfeng Chen, Jianye Hao, Feng Wu, Changjie Fan. Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS), December 2020.
@inproceedings{HWJnips20,
 author = {Yujing Hu and Weixun Wang and Hangtian Jia and Yixiang Wang and Yingfeng Chen and Jianye Hao and Feng Wu and Changjie Fan},
 booktitle = {Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS)},
 month = {December},
 title = {Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping},
 year = {2020}
}