On the Benefit of Re-optimization in Optimal Control under Perturbations

Abstract: We consider a finite-horizon optimal control problem for a system subject to perturbations. We compare the performance of the nominal optimal control sequence under perturbations with a shrinking horizon strategy in which a re-optimization for the nominal model is performed at each sampling instant, using the current perturbed system state as the new initial value. We analyze the potential performance improvement using suitable moduli of continuity as well as stability and controllability properties, and illustrate our findings by numerical simulations.

I. INTRODUCTION

Receding horizon control (RHC), also known as model predictive control (MPC), is a control strategy based on the solution, at each sampling instant, of an optimal control problem (OCP) over a chosen horizon. In this optimization-based control technique, an OCP is solved at every sampling instant to determine a sequence of input moves that controls the current and future behavior of a physical system in an optimal manner. Typically, after applying the first element of the optimal sequence of input moves, the fixed optimization horizon is shifted by one sampling time into the future and the procedure is repeated, i.e., a re-optimization is performed. In this work, we consider the particular case of an RHC scheme in which the prediction horizon is decreased by one sampling interval in each re-optimization.

This type of RHC scheme is typically applied to batch processes, which are widely used in various sectors of the chemical and manufacturing industries, including food products, pharmaceuticals, chemical products, semiconductors, etc. [9]. Batch processes refer to the processing of specific quantities of raw materials for a finite duration of time, called a cycle, to form or produce a finite quantity of end product. At the end of a cycle, the initial process conditions are reset to run another cycle [7].
Due to the fixed final batch time, the optimal control problem to be solved by RHC is defined on a finite horizon, and consequently the prediction horizon of the RHC implementation 'shrinks' by one sampling interval in each iteration [9]. This has led to the term 'shrinking horizon MPC' [6], with early applications seen in [4], [6], [14]. As a consequence of the dynamic programming principle, in the absence of model uncertainties and disturbances the re-optimization reproduces the nominal open-loop optimal solution, so the shrinking horizon strategy and the nominal optimal control sequence coincide.
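The two strategies compared above can be sketched for a hypothetical scalar linear-quadratic example; all system and cost parameters below are illustrative assumptions and are not taken from the paper. The nominal open-loop sequence is computed once for the unperturbed model, while the shrinking horizon strategy re-solves the remaining-horizon problem (here, via a backward Riccati recursion) from the current perturbed state at every step.

```python
# Hypothetical scalar example: nominal dynamics x+ = a*x + b*u, with an
# additive perturbation w[k] acting on the real system. Stage cost
# q*x^2 + r*u^2, terminal cost q*x^2, fixed final time N (batch end).
a, b = 1.2, 1.0   # assumed system parameters (illustrative only)
q, r = 1.0, 0.1   # assumed cost weights
N = 5             # optimization horizon = remaining batch length
x0 = 1.0          # initial state

def lq_gain(horizon):
    """First-step optimal feedback gain for the scalar finite-horizon
    LQ problem, obtained from the backward Riccati recursion."""
    P, K = q, 0.0  # P_N = q (terminal weight)
    for _ in range(horizon):
        K = a * b * P / (r + b * b * P)
        P = q + a * a * P - a * b * P * K
    return K  # gain for the first step of this horizon

def cost(xs, us):
    """Total cost along a trajectory, including the terminal penalty."""
    return sum(q * x * x + r * u * u for x, u in zip(xs, us)) + q * xs[-1] ** 2

def simulate(policy, w):
    """Run the perturbed system x+ = a*x + b*u + w[k] under policy(k, x)."""
    xs, us, x = [x0], [], x0
    for k in range(N):
        u = policy(k, x)
        us.append(u)
        x = a * x + b * u + w[k]
        xs.append(x)
    return cost(xs, us)

# Nominal open-loop sequence: optimal inputs predicted with the
# unperturbed model (by dynamic programming along the nominal trajectory).
u_nom, x = [], x0
for k in range(N):
    u = -lq_gain(N - k) * x
    u_nom.append(u)
    x = a * x + b * u  # nominal prediction, no perturbation

w = [0.1, -0.05, 0.2, 0.0, -0.1]  # one assumed perturbation realization

# Strategy 1: apply the precomputed nominal sequence despite perturbations.
J_openloop = simulate(lambda k, x: u_nom[k], w)
# Strategy 2: shrinking horizon -- re-optimize over the remaining N-k steps
# from the current perturbed state at every sampling instant.
J_shrinking = simulate(lambda k, x: -lq_gain(N - k) * x, w)
```

With `w = [0.0] * N` the two strategies produce identical trajectories and costs, which is exactly the dynamic programming argument above; the interesting comparison of `J_openloop` and `J_shrinking` arises only under nonzero perturbations.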
Document type: Conference paper
MTNS 2014, 2014, Groningen, Netherlands. pp. 439-446, Proceedings of the 21st International Symposium on Mathematical Theory of Networks and Systems. 〈https://fwn06.housing.rug.nl/mtns2014/〉


https://hal.inria.fr/hal-01098332
Contributor: Estelle Bouzat <>
Submitted on: Tuesday, December 23, 2014 - 17:45:24
Last modified on: Friday, October 13, 2017 - 17:08:16
Long-term archiving on: Tuesday, March 24, 2015 - 10:46:58

File

gruene_palma_2014.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01098332, version 1

Citation

Lars Grüne, Vryan Gil Palma. On the Benefit of Re-optimization in Optimal Control under Perturbations. MTNS 2014, 2014, Groningen, Netherlands. pp.439-446, Proceedings of the 21st International Symposium on Mathematical Theory of Networks and Systems. 〈https://fwn06.housing.rug.nl/mtns2014/〉. 〈hal-01098332〉
