Refined Risk Management in Safe Reinforcement Learning with a Distributional Safety Critic

Qisong Yang, Thiago D. Simão, Simon H. Tindemans, and Matthijs T. J. Spaan. Refined Risk Management in Safe Reinforcement Learning with a Distributional Safety Critic. In Safe Reinforcement Learning, 2022. Workshop at IJCAI22.

Abstract

Safety is critical to broadening the real-world use of reinforcement learning (RL). Modeling the safety aspects using a safety-cost signal separate from the reward is becoming standard practice, since it avoids the problem of finding a good balance between safety and performance. However, the total safety-cost distribution of different trajectories is still largely unexplored. In this paper, we propose an actor-critic method for safe RL that uses an implicit quantile network to approximate the distribution of accumulated safety-costs. Using an accurate estimate of the distribution of accumulated safety-costs, in particular of the upper tail of the distribution, greatly improves the performance of risk-averse RL agents. The empirical analysis shows that our method achieves good risk control in complex safety-constrained environments.

BibTeX Entry

@InProceedings{Yang22saferl,
  author    = {Qisong Yang and Thiago D. Sim{\~a}o and Simon H. Tindemans and Matthijs T. J. Spaan},
  title     = {Refined Risk Management in Safe Reinforcement Learning with a Distributional Safety Critic},
  booktitle = {Safe Reinforcement Learning},
  year      = 2022,
  note      = {Workshop at IJCAI22}
}

Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
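The abstract describes a critic that approximates the distribution of accumulated safety-costs with an implicit quantile network and uses the upper tail of that distribution for risk-averse control. The sketch below is an illustration only, not the authors' implementation: it shows the standard quantile-regression Huber loss used to train quantile critics, and a simple tail-mean (CVaR-style) estimate from predicted quantile values. Function names, shapes, and default parameters are assumptions.

```python
import numpy as np

def quantile_huber_loss(pred_quantiles, target_samples, taus, kappa=1.0):
    """Quantile-regression Huber loss for a distributional critic.

    pred_quantiles: (N,) predicted quantile values at fractions `taus`
    target_samples: (M,) samples of the target accumulated-cost distribution
    taus:           (N,) quantile fractions in (0, 1) for each prediction
    """
    # Pairwise TD errors: delta[i, j] = target_j - prediction_i.
    delta = target_samples[None, :] - pred_quantiles[:, None]
    abs_delta = np.abs(delta)
    # Huber loss: quadratic near zero, linear beyond `kappa`.
    huber = np.where(abs_delta <= kappa,
                     0.5 * delta ** 2,
                     kappa * (abs_delta - 0.5 * kappa))
    # Asymmetric weighting pushes each prediction toward its quantile.
    weight = np.abs(taus[:, None] - (delta < 0).astype(float))
    return (weight * huber / kappa).mean()

def upper_tail_mean(quantile_values, taus, alpha=0.9):
    """Mean of quantile estimates above fraction `alpha` (a CVaR-style
    upper-tail statistic of the safety-cost distribution)."""
    tail = quantile_values[taus >= alpha]
    return tail.mean() if tail.size else quantile_values.max()
```

A risk-averse agent would constrain or penalize `upper_tail_mean` of the safety-cost critic rather than its plain expectation, so rare high-cost trajectories dominate the safety signal.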
Generated by bib2html.pl (written by Patrick Riley) on Thu Feb 29, 2024 16:15:45 UTC