PLAPIQUI   05457
PLANTA PILOTO DE INGENIERIA QUIMICA
Executing Unit - UE
Congresses and scientific meetings
Title:
Bias Detection and Identification Using Historical Data
Author(s):
CEDEÑO MARCO; GALDEANO RUBÉN; ELWART JUAN; ALVAREZ RODRIGO; SANCHEZ MABEL
Location:
Nashville - USA
Meeting:
Conference; 2009 AIChE Annual Meeting; 2009
Organizing institution:
American Institute of Chemical Engineers
Abstract:
Basic and high-level plant activities, such as monitoring, regulatory and supervisory control, real-time optimization, planning and scheduling, provide valuable results only if reliable knowledge of the current plant state is at hand. Since measurements are subject to random and gross errors, a great effort has been made during the last four decades to reduce their detrimental effects on process variable estimation by applying data reconciliation procedures. Nowadays it is common practice in the process industries to optimally adjust measurements so that steady-state mass and energy balance constraints are satisfied. But to obtain accurate estimates, some action should be taken to reduce the influence of gross errors. Measurement bias is one type of gross error that can arise from many sources, such as poorly calibrated or malfunctioning instruments. Several model-based approaches have been proposed for bias detection and identification, which work by comparing the actual operation of the plant with that predicted by a mathematical model using statistical hypothesis tests. A good survey of these techniques is available in the books by Narasimhan and Jordache (2000) and Romagnoli and Sánchez (2000). To avoid biased estimations of process variables, other strategies incorporate the non-ideality of the data distribution into the formulation of the data reconciliation problem. Random and gross errors are thus removed simultaneously based on their probability distributions. This is usually accomplished by combining nonlinear programming with the maximum likelihood principle, after the error distribution has been suitably characterized (Arora and Biegler, 2001; Wang and Romagnoli, 2003). The T² statistic is widely used in Statistical Process Control to reliably detect an out-of-control status, but by itself it offers no assistance as a fault identification tool.
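As a rough illustration of the reconciliation step described above, the following sketch adjusts a measurement vector so that linear steady-state balance constraints hold exactly. For Gaussian errors with covariance Sigma, the maximum-likelihood adjustment has the closed form used below; the constraint matrix, variances, and flow data are hypothetical, not taken from the paper.

```python
import numpy as np

def reconcile(y, A, Sigma):
    """Weighted least-squares data reconciliation for linear
    constraints A @ x = 0. Returns the adjusted estimates
    x_hat = y - Sigma A^T (A Sigma A^T)^{-1} A y, the
    maximum-likelihood solution for zero-mean Gaussian errors."""
    S = A @ Sigma @ A.T
    correction = Sigma @ A.T @ np.linalg.solve(S, A @ y)
    return y - correction

# Hypothetical 4-stream network: x1 = x2 + x3 and x2 + x3 = x4
A = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  1.0, -1.0]])
Sigma = np.diag([0.04, 0.01, 0.01, 0.09])   # measurement variances
y = np.array([10.3, 6.1, 4.2, 10.0])        # raw flow measurements

x_hat = reconcile(y, A, Sigma)
print(np.round(x_hat, 3))   # adjusted flows; balances now close exactly
```

Note that this removes only random errors: a biased instrument would smear its error over the other adjusted variables, which is what motivates the detection and identification techniques discussed in the abstract.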
Different strategies have been proposed to calculate the contribution of each process variable to the inflated statistic; they work in the original or in the latent variable space. A straightforward method to decompose the T² statistic as a unique sum of individual variable contributions was recently developed by Alvarez et al. (2007), called the Original Space Strategy (OSS). This decomposition was successfully applied to detect and identify biases for steady-state processes (Sanchez et al., 2008). Later on, Alvarez et al. (2008) proposed a new strategy to estimate the influence of a given variable on the final value of the inflated statistic. In this approach, the contribution of each variable is measured in terms of the distance between the current observation and its Nearest In Control Neighbour (NICN). In this work, the detection and identification capabilities of the monitoring technique presented by Alvarez et al. (2008) are compared with those of the most commonly used gross error detection and identification techniques on some benchmarks. Results indicate the technique succeeds in identifying single and multiple biases and fulfills three requirements paramount to practical implementation in commercial software: robustness, uncertainty handling, and efficiency.