Citation:

Marco A.M. Saran, Vladimiro Miranda, State estimation pre-filtering with overlapping tiling of autoencoders, Electric Power Systems Research, Volume 157, April 2018, Pages 261-271, ISSN 0378-7796, DOI: 10.1016/j.epsr.2017.12.026

(https://www.sciencedirect.com/science/article/pii/S0378779617305059)

Keywords: Autoencoder; Bad data; Correntropy; Data fusion; Gross error; Neural network; State estimation

Abstract:

This paper presents a new approach to dealing with measurements contaminated with gross errors, prior to power system state estimation.

Instead of a simple filtering operation, the new procedure develops a screen-and-repair process, going through the phases of detection, identification and correction of multiple gross errors.

The method is based on the definition of the coverage of the measurement set by a tiling scheme of 3-overlapping autoencoders, trained with denoising techniques and correntropy, that produce an ensemble-like set of three proposals for each measurement.

These proposals are then subject to a process of fusion to produce a vector of proposed/corrected measurements; two fusion methods are compared, with the Parzen windows method proving advantageous.

The original measurement vector can then be recognized as clean or diagnosed with possible gross errors, together with corrections that remove these errors.

The repaired vectors can then serve as input to classical state estimation procedures, as only small noise remains.

A test case illustrates the effectiveness of the technique, which could deal with four simultaneous gross errors and achieve a result close to full recognition and correction of the errors.

Introduction:

A control center, either in transmission or distribution, cannot function without some kind of state estimation.

The huge transformation being witnessed at distribution level, with the emergence of distributed (and uncontrolled) generation, has only reinforced the need for system operators (TSOs and DSOs) to be able to monitor, at all times, the state of the networks.

However, especially when closer to the distribution level, but also with PMU measurements due to wrong or absent time-tagging, gross errors tend to appear in the measurement sets observed at any moment.

It may be stated that handling this problem properly is one central concern in the architecture of a modern SCADA/EMS-DMS system.

Handling one gross error has had, in the past, some success with classical techniques working on residuals (the difference between measurements and estimated values), but the same cannot be said about handling multiple errors – and with a widespread monitoring including distribution, the ability to handle multiple gross errors becomes a necessity.

The classical and best-known methods to identify gross errors are the Chi-square Test, the Largest Normalized Residual Test and Hypothesis Testing Identification [1, 2].

These methods are applied only after each estimation iteration and are centered on the residuals.

The obvious conceptual flaw is that they rely on post-processing and start from already contaminated results.

As a consequence, they exhibit some failure rate in detecting bad data.

Moreover, many of these methods depend on different assumptions regarding the system and the characteristics of the errors.

Some of these assumptions are controversial and generate debate, mainly about the Gaussianity [3] and the independence [4, 5] of the errors.

An additional difficulty derives from the fact that, in many of the conventional error handling methods, there is no provision to recover an assumed erroneous measurement, which is simply removed from the data set [1, 2].

This reduces the redundancy of the input measurement set, discards information that could be useful and, in severe cases, hampers the observability of the network.

Meanwhile, in a completely alternative path, the work reported in [6] made a robust demonstration that neural networks with special architectures, denoted autoencoders, are tools that, properly handled, can learn the supporting manifold of system state patterns – and then they can be used to correct measurement vectors that either have components with gross errors or are corrupted and exhibit missing signals in some of the components.
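As a rough illustration of this manifold idea (a minimal sketch only: a linear subspace fitted by SVD stands in for the paper's trained denoising autoencoders, and the data are synthetic), re-projecting a corrupted measurement vector onto the learned manifold pulls the erroneous component back toward a value consistent with the rest:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "system states": 6 measurements driven by 2 latent factors,
# so clean vectors lie near a 2-D manifold (here, a plane).
A = rng.normal(size=(6, 2))
Z = rng.normal(size=(500, 2))
X = Z @ A.T + 0.01 * rng.normal(size=(500, 6))   # small measurement noise

mu = X.mean(axis=0)
# Top-2 principal directions span the learned manifold.
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
U = Vt[:2].T                                     # shape (6, 2)

def reconstruct(x):
    """Project a measurement vector onto the learned subspace."""
    return mu + U @ (U.T @ (x - mu))

x_true = Z[0] @ A.T                              # a clean state vector
x_bad = x_true.copy()
x_bad[3] += 5.0                                  # gross error on measurement 3

x_hat = reconstruct(x_bad)
err_before = abs(x_bad[3] - x_true[3])           # 5.0
err_after = abs(x_hat[3] - x_true[3])
print(err_before, err_after)                     # reconstruction pulls it back
```

A trained nonlinear autoencoder plays the same role as the projection here, but can capture curved manifolds rather than only a plane.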

There was also a lesson learned: that a very large autoencoder, representing at its input the whole set of measurements, becomes a cumbersome artifact to be trained.

But, at the same time, the need for such a huge neural network was also questioned, and a distinct conceptual model was proposed (also in [7]): a mosaic of adjacent local areas representing the network, each cell being observed by an autoencoder.

The advantage of this scheme would derive from the small scale of each neural network to be tuned, from the fact that steady state causes have only visible local effects and from the easy adaptation of this concept to system changes, because only local retraining would be necessary in case of structural changes of the network.

This paper is devoted to exploring the potential of autoencoders to act as pre-filters to the measurement vector, and thus perform the three necessary functions of an ideal system: detection, identification and quantification.

Here is the definition for these terms:

- Detection: the ability to signal that a data set contains bad data

- Identification: the ability to pinpoint which measurement is corrupt

- Quantification (or repair, or correction): the ability to estimate the amount needed to be added to the identified corrupt measurement to bring it to a value coherent with the physical system under observation (in the power systems case, Kirchhoff's laws).
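A minimal sketch of how these three functions chain together, assuming some upstream model (left abstract here) already supplies a "proposal" value per measurement; the function name and the 0.5 threshold are illustrative, not from the paper:

```python
def screen_and_repair(measurements, proposals, threshold=0.5):
    """Detect, identify and correct gross errors against model proposals."""
    residuals = [m - p for m, p in zip(measurements, proposals)]
    detected = any(abs(r) > threshold for r in residuals)   # detection
    repaired = list(measurements)
    suspects = []
    for i, r in enumerate(residuals):
        if abs(r) > threshold:
            suspects.append(i)                              # identification
            repaired[i] = proposals[i]                      # correction
    return detected, suspects, repaired

meas = [1.02, 0.98, 4.10, 1.01]     # measurement 2 carries a gross error
prop = [1.00, 1.00, 1.00, 1.00]     # model proposals (coherent values)
flag, bad, fixed = screen_and_repair(meas, prop)
print(flag, bad, fixed)             # True [2] [1.02, 0.98, 1.0, 1.01]
```

Note that the corrupted measurement is replaced rather than discarded, so redundancy and observability are preserved.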

If these three functions are successfully performed, the data set remains intact (albeit corrected), no observability is lost and classical state estimation methods can even take over from there, if required.

No post-processing will be needed.

There are some guidelines to be followed if a successful method is to be devised: it must be fast enough to be applied in real time; it should be non-parametric, i.e. independent of network or measurement parameters or any error assumption; it should deal with the possibility of having multiple errors originating from the same cause and not being independent; and it should be applied in a pre-processing fashion.

A new form of efficient data pre-filtering would result.

The concept described in this paper has the autoencoder as a common trait with the work in [6].

However, apart from this, it displays distinct options and choices, makes use of a different mix of algorithms, based on computational intelligence with elements of information theoretic learning, machine learning and data fusion, and proposes a different arrangement for the mosaic of autoencoders observing the network – instead of a tessellation, an overlapping tiling is now used, taking advantage of having the same node monitored by more than one autoencoder.
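A toy sketch of both ideas, with illustrative parameters that are not the paper's: a circular tiling in which each measurement index is covered by exactly three tiles, and a Parzen-window (Gaussian kernel density) fusion of the three resulting proposals, which resists a single outlying proposal where a plain average does not:

```python
import numpy as np

def overlapping_tiles(n, stride):
    """Tiles of length 3*stride, shifted by stride, wrapping circularly:
    each of the n measurement indices falls in exactly three tiles."""
    return [[(start + k) % n for k in range(3 * stride)]
            for start in range(0, n, stride)]

def parzen_fuse(proposals, h=0.2):
    """Pick the mode of a Gaussian kernel density over the proposals."""
    grid = np.linspace(min(proposals) - 1, max(proposals) + 1, 2001)
    dens = sum(np.exp(-((grid - p) ** 2) / (2 * h * h)) for p in proposals)
    return grid[np.argmax(dens)]

tiles = overlapping_tiles(n=12, stride=2)
cover = [sum(i in t for t in tiles) for i in range(12)]
print(cover)                       # every index covered by exactly 3 tiles

# Three proposals for one measurement; the third comes from a tile hit by
# a gross error elsewhere and is off.
props = [1.00, 1.02, 3.00]
print(np.mean(props))              # plain average dragged toward the outlier
print(parzen_fuse(props))          # mode stays near the agreeing pair
```

The kernel density mode acts here as a robust vote among the three autoencoder proposals: two coherent proposals form a dominant peak, and the outlier's isolated bump is ignored.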

The results of this new concept in handling multiple gross errors are impressive.

References:

[1] A. Monticelli, “State Estimation in Electric Power Systems: A Generalized Approach,” Kluwer Academic Publishers, 1999

[2] A. Abur and A. G. Expósito, “Power System State Estimation: Theory and Implementation,” CRC Press, 2004

[3] R. Mínguez, A. J. Conejo and A. S. Hadi, “Advances in Mathematical and Statistical Modeling,” Chapter: Non Gaussian State Estimation in Power Systems, Birkhauser, ISBN: 978-0-8176-4625-7, Boston, 2008

[4] E. Caro, A. J. Conejo and R. Minguez, “Power System State Estimation Considering Measurement Dependencies,” IEEE Transactions on Power Systems, vol. 24, no. 4, pp. 1875-1885, 2009

[5] M. G. Cheniae, L. Mili and P. J. Rousseeuw, “Identification of multiple interacting bad data via power system decomposition,” IEEE Transactions on Power Systems, vol. 11, no. 3, pp. 1555-1563, 1996

[6] V. Miranda, J. Krstulovic, H. Keko, C. Moreira and J. Pereira, "Reconstructing Missing Data in State Estimation With Autoencoders," in IEEE Transactions on Power Systems, vol. 27, no. 2, pp. 604-611, May 2012

[7] J. Krstulovic, V. Miranda and J. Pereira, “Towards an auto-associative topology state estimator”, IEEE Transactions on PWRS, vol. 28, no. 3, pp. 3311-3318, Aug. 2013

[8] E. Parzen, "On Estimation of a Probability Density Function and Mode," The annals of mathematical statistics, 33, pp. 1065-1076, September 1962

[9] J. C. Principe, “Information Theoretic Learning: Renyi’s Entropy and Kernel Perspectives”, Springer, 2010

[10] Weifeng Liu, P. Pokharel and J. Principe, "Correntropy: A Localized Similarity Measure", IJCNN'06, International Joint Conference on Neural Networks, pp.4919-4924, 2006

[11] Weifeng Liu, P. Pokharel and J. Principe, "Error Entropy, Correntropy and M-Estimation", Machine Learning for Signal Processing, 2006. Proceedings of the 2006 16th IEEE Signal Processing Society Workshop on, pp.179-184, 2006

[12] Weifeng Liu, P. Pokharel and J. Principe, "Correntropy: Properties and Applications in Non-Gaussian Signal Processing", IEEE Transactions on Signal Processing, vol. 55, no. 11, pp. 5286-5298, 2007

[13] D. Xu and J.C. Principe, “Learning from Examples with Quadratic Mutual Information,” Proc. 1998 IEEE Signal Processing Soc. Workshop, pp. 155-164, 1998.

[14] Sainui, Janya and Masashi Sugiyama. “Direct Approximation of Quadratic Mutual Information and Its Application to Dependence-Maximization Clustering.” IEICE Transactions 96-D (2013): 2282-2285.

[15] MacQueen, J. B., "Some Methods for Classification and Analysis of Multivariate Observations," Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, University of California Press, 1:281-297, 1967

[16] Hartigan, J.A., “Clustering Algorithms”, New York: John Wiley & Sons, Inc, 1975

[17] Jain, A. K., Murthy, M.N. and Flynn, P.J., “Data Clustering: A Review”, ACM Computing Reviews, Nov 1999

[18] S. Haykin, “Neural Networks and Learning Machines,” Pearson Education, 3rd Ed., 2009

[19] R. Kamimura, “Information Theoretic Neural Computation,” World Scientific, 2002

[20] M. Hassoun and A. Sudjianto, “Compression net-free autoencoders” in Workshop on Advances in Autoencoder/Autoassociator-Based Computations at the NIPS, vol. 97, Breckenridge, Colorado, USA, 1997

[21] V. M. Stone, “The auto-associative neural network - a network architecture worth considering,” World Automation Congress, pp. 1-4, 2008

[22] Pei-Jin Wang and C. Cox, “Study on the application of auto-associative neural network,” Proceedings of 2004 International Conference on Machine Learning and Cybernetics, vol.5, pp. 3291-3295, 26-29 Aug. 2004

[23] H. Bourlard, Y. Kamp, "Auto-Association by Multilayer Perceptrons and Singular Value Decomposition," Biological Cybernetics, vol. 59, pp. 291-294, 1988

[24] G. Desjardins, R. Proulx and R. Godin, “An Auto-Associative Neural Network for Information Retrieval,” IJCNN '06. International Joint Conference on Neural Networks, pp.3492-3498, 2006

[25] J. Krstulovic, V. Miranda, A. Simões Costa and J. Pereira, “Towards an Auto-Associative Topology State Estimator”, IEEE Transactions on Power Systems, vol. 28, no. 3, pp. 3311-3318, Aug 2013

[26] J. Krstulovic, "Information Theoretic State Estimation in Power Systems," PhD thesis, Faculty of Engineering of the University of Porto, Portugal, April 2014

[27] IEEE Probability Methods Subcommittee, "IEEE Reliability Test System," IEEE Transactions on Power Apparatus and Systems, vol. PAS-98, no. 6, pp. 2047-2054, Nov. 1979

[28] Grigg, C.; Wong, P.; Albrecht, P.; Allan, R.; Bhavaraju, M.; Billinton, R.; Chen, Q.; Fong, C.; Haddad, S.; Kuruganty, S.; Li, W.; Mukerji, R.; Patton, D.; Rau, N.; Reppen, D.; Schneider, A.; Shahidehpour, M.; Singh, C., "The IEEE Reliability Test System-1996. A report prepared by the Reliability Test System Task Force of the Application of Probability Methods Subcommittee," IEEE Transactions on Power Systems, vol.14, no.3, pp.1010-1020, Aug 1999


© 2018-2020, Marco Aurélio M. Saran

All rights reserved
