UBC - University of British Columbia (Vancouver Campus, 2329 West Mall, Vancouver, BC, V6T 1Z4 /
Okanagan Campus, 3333 University Way, Kelowna, BC, V1V 1V7 - Canada)
Abstract: Urban communities rely heavily on a system of interconnected critical infrastructures. The interdependencies in these complex systems give rise to vulnerabilities that must be considered in disaster mitigation planning. Only then will it be possible to address and mitigate major critical infrastructure disruptions in a timely manner. This paper describes an intelligent decision-making system that optimizes the allocation of resources following an infrastructure disruption. The novelty of the approach arises from the application of Monte Carlo estimation for policy evaluation in reinforcement learning to draw on experiential knowledge gained from a massive number of simulations. This method enables a learning agent to explore and exploit the available trajectories that lead to an optimal outcome in a reasonable amount of time. The specific goal of the case study described in this paper is to maximize the number of patients discharged from two hospitals in the aftermath of an infrastructure disruption by intelligently utilizing the available resources. The results demonstrate that a learning agent, through interactions with an environment of simulated catastrophic scenarios, is capable of making informed decisions in a timely manner.
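For orientation, the sketch below shows first-visit Monte Carlo policy evaluation in its generic textbook form, not the authors' implementation; the environment interface (env.reset(), env.step()), the fixed policy, and the discount factor are hypothetical stand-ins for the infrastructure-disruption simulator described in the paper.

```python
from collections import defaultdict

def mc_policy_evaluation(env, policy, num_episodes=10000, gamma=0.95):
    """First-visit Monte Carlo estimation of the state-value function V(s)
    for a fixed policy, averaging the sampled episode returns per state.
    The env/policy interface here is an assumed, simplified stand-in."""
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    V = defaultdict(float)

    for _ in range(num_episodes):
        # Generate one episode (trajectory) by following the fixed policy.
        episode = []
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            episode.append((state, reward))
            state = next_state

        # Walk the episode backwards, accumulating the discounted return G,
        # and update V(s) only at the first visit to each state.
        G = 0.0
        for t in range(len(episode) - 1, -1, -1):
            s, r = episode[t]
            G = gamma * G + r
            if all(s != episode[i][0] for i in range(t)):  # first visit to s
                returns_sum[s] += G
                returns_count[s] += 1
                V[s] = returns_sum[s] / returns_count[s]
    return V
```

In a control setting such as the resource-allocation problem studied here, the same averaging of sampled returns is typically applied to state-action values, with the policy then improved greedily between evaluations.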
https://hal.inria.fr/hal-01386763
Mohammed Khouj, Sarbjit Sarkaria, Cesar Lopez, Jose Marti. Reinforcement Learning Using Monte Carlo Policy Estimation for Disaster Mitigation. 8th International Conference on Critical Infrastructure Protection (ICCIP), Mar 2014, Arlington, United States. pp.155-172, ⟨10.1007/978-3-662-45355-1_11⟩. ⟨hal-01386763⟩