
Meta-Learning as a Markov Decision Process

Lisheng Sun-Hosoya 1, 2
1 TAU - TAckling the Underspecified
Inria Saclay - Île-de-France, LRI - Laboratoire de Recherche en Informatique
Abstract: Machine Learning (ML) has enjoyed huge successes in recent years, and an ever-growing number of real-world applications rely on it. However, designing promising algorithms for a specific problem still requires huge human effort. Automated Machine Learning (AutoML) aims at taking the human out of the loop by developing machines that generate or recommend good algorithms for a given ML task. AutoML is usually treated as an algorithm/hyper-parameter selection problem; existing approaches include Bayesian optimization, evolutionary algorithms, and reinforcement learning. Among them, auto-sklearn, which incorporates meta-learning techniques in its search initialization, ranks consistently well in AutoML challenges. This observation oriented my research toward the meta-learning domain, and I then developed a novel framework based on Markov Decision Processes (MDP) and reinforcement learning (RL).

After a general introduction, my thesis work starts with an in-depth analysis of the results of the AutoML challenge. This analysis oriented my work toward meta-learning, leading me first to propose a formulation of AutoML as a recommendation problem, and ultimately to formulate a novel conceptualization of the problem as an MDP. In the MDP setting, the problem reduces to filling up, as quickly and efficiently as possible, a meta-learning matrix S, in which rows correspond to ML tasks and columns to ML algorithms. A matrix element S(i,j) is the performance of algorithm j applied to task i. Searching efficiently for the best values in S allows us to quickly identify the algorithms best suited to given tasks. After reviewing the classical hyper-parameter optimization framework, I introduce my first meta-learning approach, ActivMetaL, which combines active learning and collaborative filtering techniques to predict the missing values in S. Our latest research then applies RL to the MDP we defined, in order to learn an efficient policy for exploring S.
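The core loop sketched above can be made concrete with a toy example: predict the missing entries of S with a collaborative-filtering-style model, then actively pick the most promising untested algorithm for a task. The snippet below uses a simple low-rank factorization as a stand-in for the collaborative filtering component; the function names and hyper-parameters are illustrative, not the thesis's actual ActivMetaL implementation.

```python
import numpy as np

def predict_missing(S, mask, rank=2, iters=500, lr=0.02):
    """Estimate missing entries of the score matrix S with a simple
    low-rank factorization (a stand-in for collaborative filtering).
    mask[i, j] == 1 where S[i, j] has been observed, 0 otherwise."""
    n_tasks, n_algos = S.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_tasks, rank))
    V = rng.normal(scale=0.1, size=(n_algos, rank))
    for _ in range(iters):
        err = (U @ V.T - S) * mask      # error on observed entries only
        U -= lr * err @ V
        V -= lr * err.T @ U
    return U @ V.T

def next_algorithm(S, mask, task):
    """Active-learning step: among algorithms not yet run on `task`,
    pick the one with the highest predicted score."""
    pred = predict_missing(S, mask)
    untested = np.flatnonzero(mask[task] == 0)
    return untested[np.argmax(pred[task, untested])]
```

Running `next_algorithm` repeatedly, and updating `S` and `mask` with the true score after each trial, yields a greedy active meta-learning loop over the matrix.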
We call this approach REVEAL and propose an analogy with a series of toy games to help visualize agents' strategies for revealing information progressively.

The main results of my Ph.D. project are:
- HP/model selection: I explored the Freeze-Thaw method and optimized the algorithm to enter the AutoML 2015-2016 challenge, achieving 3rd place in the final round.
- ActivMetaL: I designed a new algorithm for active meta-learning and compared it with baseline methods on real-world and artificial data. This study demonstrated that ActivMetaL is generally able to discover the best algorithm faster than the baselines.
- REVEAL: I developed a new conceptualization of meta-learning as an MDP and placed it in the more general framework of REVEAL games. With a master's student intern, I developed agents that learn (with reinforcement learning) to predict the next best algorithm to try.

The work presented in my thesis is empirical in nature. Several real-world meta-datasets were used in this research, each of which corresponds to one score matrix S. Artificial and semi-artificial meta-datasets were also used. The results indicate that reinforcement learning is a viable approach to this problem, although much work remains to be done to optimize the algorithms and make them scale to larger meta-learning problems.
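The REVEAL analogy can be illustrated with a minimal episode environment: the agent uncovers entries of one row of S, and the reward for each step is the improvement over the best score found so far, so the episode return equals the best score discovered. This is an illustrative formulation of the MDP, not the thesis's actual environment; the class and method names are assumptions.

```python
import numpy as np

class RevealEnv:
    """Toy REVEAL-style episode on one row of the score matrix S.
    Each step reveals one cell; reward = improvement over the best
    score revealed so far, so the return is the best score found."""

    def __init__(self, scores):
        self.scores = np.asarray(scores, dtype=float)

    def reset(self):
        self.revealed = np.zeros(len(self.scores), dtype=bool)
        self.best = 0.0
        return self.revealed.copy()

    def step(self, action):
        self.revealed[action] = True
        reward = max(0.0, self.scores[action] - self.best)
        self.best = max(self.best, self.scores[action])
        done = bool(self.revealed.all())
        return self.revealed.copy(), reward, done

def random_rollout(env, rng=None):
    """Baseline policy: reveal cells in uniformly random order."""
    rng = rng or np.random.default_rng(0)
    state = env.reset()
    total = 0.0
    while not state.all():
        action = rng.choice(np.flatnonzero(~state))
        state, reward, _ = env.step(action)
        total += reward
    return total
```

An RL agent trained on such episodes would learn to order its probes so that high-scoring algorithms are revealed early, which is the behavior the toy games are meant to visualize.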
Cited literature: 128 references
Contributor: Lisheng Sun-Hosoya
Submitted on: Friday, December 20, 2019 - 6:43:15 PM
Last modification on: Wednesday, September 16, 2020 - 5:51:58 PM
Long-term archiving on: Saturday, March 21, 2020 - 8:53:10 PM


Files produced by the author(s)


  • HAL Id: tel-02422144, version 1


Lisheng Sun-Hosoya. Meta-Learning as a Markov Decision Process. Computer Science [cs]. Laboratoire de recherche en informatique (LRI) UMR CNRS 8623, Université Paris-Sud; Institut national de recherche en informatique et en automatique - INRIA, 2019. English. ⟨tel-02422144v1⟩


