Book chapter
MPC with learning properties applied to finite-horizon repetitive systems
Frontiers in Advanced Control Systems
InTech, Ginalber Luiz de Oliveira Serra (Editor)
Year: 2012; pp. 193-213
A repetitive system is one that continuously repeats a finite-duration procedure (operation) along time. This kind of system can be found in several industrial fields such as robot manipulation (Tan, Huang, Lee & Tay, 2003), injection molding (Yao, Gao & Allgöwer, 2008), batch processes (Bonvin et al., 1984; Lee & Lee, 1999) and semiconductor processes (Moyne, Castillo, & Hurwitz, 2003). Because of this repetitive characteristic, these systems have two count indexes or time scales: one for the time running within the interval each operation lasts, and the other for the number of operations or repetitions in the continuous sequence. Consequently, a control strategy for repetitive systems must account for two different objectives: short-term disturbance rejection during a finite-duration single operation in the continuous sequence (this frequently means the tracking of a predetermined optimal trajectory), and long-term disturbance rejection from operation to operation (i.e., considering each operation as a single point of a continuous process). Since the continuous process essentially repeats the operations (assuming that long-term disturbances are negligible), the key point in developing a control strategy that accounts for the second objective is to use the information from previous operations to improve the tracking performance of the future sequence. Despite the finite-time nature of every individual operation, the within-operation control is usually handled by strategies typically used on continuous process systems, such as PID or more sophisticated alternatives like Model Predictive Control (MPC). The main difficulty arising in these applications is associated with the stability analysis, since the distinctive finite-time characteristic requires an approach different from the traditional one; this was clearly established in (Srinivasan & Bonvin, 2007).
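To make the two time scales concrete, the following toy sketch (hypothetical scalar dynamics and gain, not taken from the chapter) simulates several operations of a repetitive system: the inner loop over t performs within-operation proportional feedback, while the outer loop over k steps through the operation sequence. Without any cross-operation learning, every operation repeats the same imperfect trajectory.

```python
def operation(N=5, r=1.0):
    """One finite-duration operation under within-operation P feedback only."""
    x, traj = 0.0, []
    for t in range(N):                 # t: time index inside the operation
        u = 2.0 * (r - x)              # proportional feedback (assumed gain)
        x = 0.5 * x + 0.5 * u          # assumed scalar plant dynamics
        traj.append(round(x, 4))
    return traj

# k: operation (repetition) index in the continuous sequence
history = [operation() for k in range(3)]
# Without information passed between operations, every repetition produces
# the same trajectory, with a persistent tracking offset (x -> 2/3, not r = 1).
```

The persistent, identical offset in every repetition is exactly the kind of operation-to-operation information that a learning-type controller can exploit.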
The control of the operations sequence can be handled by strategies similar to standard Iterative Learning Control (ILC), which uses information from previous operations. However, ILC exhibits the limitation of running open-loop with respect to the current operation, since no feedback corrections are made during the time interval the operation lasts. In order to handle batch processes, (Lee et al., 2000) proposed the Q-ILC, which considers a model-based controller in the iterative learning control framework. As is usual in the ILC literature, only the iteration-to-iteration convergence is analyzed, since the complete input and output profiles of a given operation are considered as fixed vectors (open-loop control with respect to the current operation). Another example is the MPC with learning properties presented in (Tan, Huang, Lee & Tay, 2003), where a predictive controller that iteratively improves the disturbance estimation is proposed. From the point of view of the learning procedure, any detected state or output disturbance is treated as a set of parameters that are updated from iteration to iteration. Then, in (Lee & Lee, 1997; 1999) and (Lee et al., 2000), a real-time feedback control is incorporated into the Q-ILC, yielding the so-called BMPC. As the authors point out, some care must be taken when combining ILC with MPC. In fact, as stated in (Lee & Lee, 2003), a simple-minded combination in which ILC updates the nominal input trajectory for MPC before each operation does not work. The MPC proposed in this Chapter is formulated under a closed-loop paradigm (Rossiter, 2003). The basic idea of a closed-loop paradigm is to choose a stabilizing control law and assume that this law (underlying input sequence) is present throughout the predictions. More precisely, the MPC proposed here is an Infinite Horizon MPC (IHMPC) that includes an underlying control sequence as a (deficient) reference candidate to be improved for the tracking control.
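As a minimal illustration of the iteration-to-iteration mechanism described above (a generic P-type ILC on an assumed scalar plant, not the Q-ILC or BMPC formulations), the whole input profile of trial k is corrected with the trial-k tracking error and then applied open-loop in trial k+1:

```python
def run_trial(u, a=0.5, b=1.0):
    """Apply the full input profile u open-loop over one finite operation."""
    x, y = 0.0, []
    for ut in u:
        x = a * x + b * ut      # assumed scalar plant x(t+1) = a*x(t) + b*u(t)
        y.append(x)
    return y

def ilc_update(u, e, L=0.5):
    # P-type ILC: next trial's input = current input + L * current error
    return [ut + L * et for ut, et in zip(u, e)]

N = 10
r = [1.0] * N                   # reference profile over one operation
u = [0.0] * N                   # initial input profile
errors = []
for k in range(30):             # k: trial (operation) index
    y = run_trial(u)            # open-loop w.r.t. the current operation
    e = [rt - yt for rt, yt in zip(r, y)]
    errors.append(max(abs(et) for et in e))
    u = ilc_update(u, e)        # learning happens only between trials
```

With this assumed plant, the error contracts from one trial to the next (here |1 - L·b| = 0.5 < 1); note that nothing corrects the input while a trial is running, which is precisely the open-loop limitation discussed above.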
Then, by solving online a constrained optimization problem, the input sequence is corrected, and so the learning update is performed.
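A much-simplified sketch of this feedback-plus-learning idea follows (a one-step-ahead correction under a box input constraint, with a disturbance estimate refined from trial to trial; all dynamics, gains, and bounds are illustrative assumptions, not the chapter's IHMPC formulation):

```python
a, b, d = 0.5, 1.0, 0.2            # assumed plant; d repeats every operation
N, trials, alpha = 10, 15, 0.5     # operation length, trials, learning rate
r = [1.0] * N                      # reference over one operation
u_min, u_max = -2.0, 2.0           # input constraint (box bounds)
d_hat = [0.0] * N                  # disturbance estimate, refined trial to trial
errors = []
for k in range(trials):            # k: operation index
    x, worst = 0.0, 0.0
    for t in range(N):             # t: within-operation time, with feedback
        # One-step constrained correction: minimize (r[t] - x_pred)^2 over u,
        # then clip to the box bounds (a crude stand-in for the online QP).
        u = (r[t] - a * x - d_hat[t]) / b
        u = min(max(u, u_min), u_max)
        x_pred = a * x + b * u + d_hat[t]   # model prediction
        x = a * x + b * u + d               # true plant response
        d_hat[t] += alpha * (x - x_pred)    # learning update for next trial
        worst = max(worst, abs(r[t] - x))
    errors.append(worst)
```

Because the within-operation loop reacts to the measured state while the disturbance estimate improves across operations, the worst-case tracking error shrinks from one operation to the next; the chapter's contribution is to make this combination rigorous through the closed-loop paradigm and the infinite-horizon formulation.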