Abstract: Value and policy iteration are powerful methods for verifying quantitative properties of Markov Decision Processes (MDPs). Many approaches have been proposed to accelerate these methods, but their performance depends on the graphical structure of the MDP: experimental results show that they perform little better than standard value/policy iteration when the graph of the MDP is dense. In this paper we present an algorithm that reduces the number of updates in dense MDPs. Rather than merely skipping unnecessary updates, the algorithm uses a graph partitioning method to prioritize the more important updates.
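As background, the baseline that such accelerations compete with is standard value iteration: repeated application of the Bellman optimality update to every state until convergence. The sketch below is a minimal illustration of that baseline, assuming a dictionary-based MDP encoding; the names value_iteration, A, P, and R are hypothetical and not taken from the paper.

def value_iteration(states, A, P, R, gamma=0.95, eps=1e-6):
    """Plain value iteration: sweep every state until convergence.
    A[s]    -- iterable of actions enabled in state s (assumed non-empty)
    P[s][a] -- dict mapping successor state t to probability P(t | s, a)
    R[s][a] -- immediate reward for taking action a in state s
    """
    V = {s: 0.0 for s in states}          # value estimates, initialized to 0
    while True:
        delta = 0.0
        for s in states:                  # every state is updated each sweep,
            best = max(                   # whether or not its value can change
                R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())
                for a in A[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:                   # stop once no value moved by > eps
            return V

Note the full sweep in the inner loop: on dense MDPs most of these updates change nothing, which is exactly the waste the abstract refers to and which the proposed partitioning-based update ordering is meant to avoid.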