
Robust Preference Learning-based Reinforcement Learning

Abstract : The contributions of this thesis revolve around sequential decision making and, more precisely, Reinforcement Learning (RL). Rooted in Machine Learning alongside supervised and unsupervised learning, RL has grown quickly in popularity over the last two decades thanks to a series of achievements on both the theoretical and applied fronts. RL assumes that the learning agent and its environment follow a stochastic Markov decision process over a state space and an action space. The process is a decision process because the agent must choose an action at each time step. It is stochastic because taking a given action in a given state does not always lead to the same next state but instead defines a distribution over the state space. It is Markovian because this distribution depends only on the current state-action pair. After each action, the agent receives a reward. The goal of RL is then to solve the underlying optimization problem: finding the behaviour that maximizes the sum of rewards accumulated over the agent's interaction with its environment. On the applied side, a broad spectrum of problems can be cast as RL problems, from Backgammon (TD-Gammon, one of Machine Learning's first successes, produced a world-class player) to decision problems in industry and medicine.

However, the optimization problem solved by RL depends on the prior definition of a reward function, which requires both a certain level of domain expertise and knowledge of the internal quirks of RL algorithms. The first contribution of this thesis is therefore a learning framework that lightens the burden placed on the user. The user no longer needs to know the exact solution to the problem, only to be able to choose, between two behaviours exhibited by the agent, the one that more closely matches the desired solution. Learning proceeds interactively between the agent and the user and revolves around three steps: i) the agent demonstrates a behaviour; ii) the user compares it with the current best one; iii) the agent uses this feedback to update its preference model of the user and to select the next behaviour to demonstrate (see the sketches below).

To reduce the number of interactions required before the optimal behaviour is found, the second contribution of the thesis is a theoretically sound criterion that trades off the sometimes conflicting goals of complying with the user's preferences and demonstrating sufficiently diverse behaviours. The last contribution is to make the algorithm robust to the feedback errors the user might make. Such errors occur frequently in practice, especially in the initial phase of the interaction, when all the demonstrated behaviours are far from the expected solution.
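
To make the objective concrete, the verbal description above corresponds to the following standard formulation (a sketch only; the thesis may use a discounted or finite-horizon variant):

    \max_{\pi} \; J(\pi) \;=\; \mathbb{E}\!\left[\, \sum_{t \ge 0} r(s_t, a_t) \,\right], \qquad s_{t+1} \sim p(\cdot \mid s_t, a_t), \quad a_t \sim \pi(\cdot \mid s_t),

where p is the Markovian transition distribution, r the reward function, and \pi the agent's behaviour (policy).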
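
The three-step interactive loop of the first contribution can also be sketched in code. The following Python snippet is purely illustrative and does not reproduce the thesis's actual algorithm: behaviours are reduced to feature vectors, the user's feedback is simulated by a noisy comparison against a hidden target, and the preference model is a simple linear utility with a perceptron-style update. All names (Agent, demonstrate, user_prefers, target) are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)


    class Agent:
        """Toy agent: a 'behaviour' is a feature vector; the preference
        model of the user is a linear utility with weights w."""

        def __init__(self, dim):
            self.w = np.zeros(dim)  # current estimate of the user's utility

        def demonstrate(self):
            # i) Demonstrate a behaviour: exploit the current utility model
            # plus random exploration. The thesis's actual criterion makes
            # this compliance/diversity trade-off in a principled way.
            return self.w + rng.normal(size=self.w.shape)

        def update(self, winner, loser, lr=0.1):
            # iii) Update the preference model from the pairwise feedback:
            # push the utility of the preferred behaviour above the other.
            self.w += lr * (winner - loser)


    target = rng.normal(size=5)  # hidden behaviour the user has in mind


    def user_prefers(a, b, error_rate=0.1):
        # ii) Noisy pairwise comparison: the user usually picks the
        # behaviour closer to the target but errs with some probability,
        # the feedback noise addressed by the thesis's last contribution.
        truthful = np.linalg.norm(a - target) < np.linalg.norm(b - target)
        return truthful if rng.random() > error_rate else not truthful


    agent = Agent(dim=5)
    best = agent.demonstrate()
    for _ in range(200):
        candidate = agent.demonstrate()
        if user_prefers(candidate, best):
            agent.update(winner=candidate, loser=best)
            best = candidate
        else:
            agent.update(winner=best, loser=candidate)

    print("distance to target:", np.linalg.norm(best - target))

On this toy problem the utility estimate tends to drift towards the hidden target even under noisy comparisons; the thesis makes such robustness rigorous rather than incidental.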

Cited literature: 138 references

https://hal.inria.fr/tel-01111276
Contributor : Brigitte Briot
Submitted on : Friday, January 30, 2015 - 8:37:24 AM
Last modification on : Thursday, July 8, 2021 - 3:49:25 AM
Long-term archiving on : Saturday, April 15, 2017 - 11:10:20 PM

Identifiers

  • HAL Id : tel-01111276, version 1

Citation

Riad Akrour. Robust Preference Learning-based Reinforcement Learning. Machine Learning [cs.LG]. Université Paris Sud - Paris XI, 2014. English. ⟨NNT : 2014PA112236⟩. ⟨tel-01111276⟩

Metrics

  • Record views: 1947
  • File downloads: 1495