IFLP   13074
INSTITUTO DE FISICA LA PLATA
Unidad Ejecutora (Executing Unit) - UE
Articles
Title:
Information Theoretic Measures and Their Applications
Author(s):
OSVALDO A. ROSSO; FERNANDO MONTANI
Journal:
ENTROPY
Publisher:
MOLECULAR DIVERSITY PRESERVATION INTERNATIONAL-MDPI
Reference:
Place: Basel; Year: 2020; Vol. 22; pp. 1382-1390
ISSN:
1099-4300
Abstract:
The concept of entropy, an ever-growing physical magnitude that measures the degree of decay of order in a physical system, was introduced by Rudolf Clausius in 1865 through an elegant formulation of the second law of thermodynamics. Seven years later, in 1872, Ludwig Boltzmann proved the famous H-theorem, showing that the quantity H always decreases in time and that, for a perfect gas in equilibrium, H is related to Clausius' entropy S. Boltzmann's dynamical approach, together with the elegant theory of statistical ensembles at equilibrium proposed by Josiah Willard Gibbs, led to the Boltzmann–Gibbs theory of statistical mechanics, which represents one of the most successful theoretical frameworks of physics. In fact, with the introduction of entropy, thermodynamics became a model of theoretical science.

In 1948, Claude E. Shannon developed a "statistical theory of communication", taking ideas from both logic and statistics that in turn opened new paths for research. The powerful notion of information entropy played a major part in the development of new statistical techniques, overhauling the Bayesian approach to probability and statistics. It provided powerful new techniques and approaches in several fields of science, extending them and shedding new light on them.

In the space of a few decades, chaos theory has jumped from the scientific literature into the popular realm, being regarded as a new way of looking at complex systems like brains or ecosystems. It is believed that the theory manages to capture the disorganized order that pervades our world. Chaos theory is a facet of the complex systems paradigm having to do with deterministic randomness. In 1959, Kolmogorov observed that Shannon's probabilistic theory of information could be applied to symbolic encodings of the phase-space descriptions of physical nonlinear dynamical systems, so that a process may be characterized in terms of its Kolmogorov–Sinai entropy. Pesin's theorem, proved in 1977, states that, for certain deterministic nonlinear dynamical systems exhibiting chaotic behavior, an estimate of the Kolmogorov–Sinai entropy is given by the sum of the positive Lyapunov exponents of the process. Thus, a nonlinear dynamical system may be viewed as an information source from which information-related quantifiers may help to characterize and visualize relevant details of the chaotic process.

Generally speaking, physics, as well as other scientific disciplines such as biology or finance, can be considered an observational science; that is, it tries to infer properties of an unfamiliar system from the analysis of a measured time record of its behavior (a time series). Dynamical systems are systems that evolve in time. In practice, one may only be able to measure a scalar time series X(t), which may be a function of variables V = {v1, v2, ⋯, vk} describing the underlying dynamics (i.e., dV/dt = f(V)). The natural question is then how much we can learn from X(t) about the dynamics of the system. More formally, given a system, be it natural or man-made, and given an observable of such a system whose evolution can be tracked through time, a natural question arises: how much information is this observable encoding about the dynamics of the underlying system? The information content of a system is typically evaluated via a probability distribution function (PDF) P describing the apportionment of some measurable or observable quantity, generally a time series X(t) = {x_t, t = 1, ⋯, M}.
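As a rough illustration of the route from a time series X(t) to a PDF P and an information theory quantifier, the following Python sketch (not taken from the editorial; the bin count and the test signal are arbitrary assumptions) estimates P from the amplitude histogram of a series and returns its normalized Shannon entropy.

```python
# Minimal sketch: histogram-based PDF of a scalar time series and its
# normalized Shannon entropy. NumPy is assumed to be available; the number
# of bins (32) is an illustrative choice, not a prescribed value.
import numpy as np

def histogram_shannon_entropy(x, n_bins=32):
    """Normalized Shannon entropy of the amplitude histogram of a time series."""
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()          # empirical PDF P = {p_i}
    p = p[p > 0]                       # drop empty bins (0 log 0 := 0)
    h = -np.sum(p * np.log(p))         # Shannon entropy S[P]
    return h / np.log(n_bins)          # normalize by the maximum entropy log(N)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)        # hypothetical scalar time series X(t)
    print(histogram_shannon_entropy(x))
```

Because the histogram only looks at amplitudes, this is exactly the kind of non-causal coarse-grained description discussed below: the temporal order of the samples plays no role in the resulting PDF.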
Quantifying the information content of a given observable is therefore largely tantamount to characterizing its probability distribution. This is often done with a wide family of measures called information theory quantifiers (e.g., Shannon entropy and generalized entropy forms, relative entropy, Fisher information, statistical complexity, etc.). Information theory quantifiers are thus measures able to characterize the relevant properties of the PDF associated with a time series, and in this way they allow us to judiciously extract information on the dynamical system under study.

The evaluation of information theory quantifiers presupposes some prior knowledge about the system; specifically, a probability distribution associated with the time series under analysis should be provided beforehand. The determination of the most adequate PDF is a fundamental problem, because the PDF P and the sample space Ω are inextricably linked. Many methods have been proposed for a proper selection of the probability space (Ω, P). The usual methodologies assign a symbol from a finite alphabet A to each time point of the series X(t), thus creating a symbolic sequence that can be regarded as a non-causal, coarse-grained description of the time series under consideration. As a consequence, order relations and the time scales of the dynamics are lost. The usual histogram technique corresponds to this kind of assignment. Time-causal information may be duly incorporated if information about the past dynamics of the system is included in the symbolic sequence, i.e., if symbols of the alphabet A are assigned to a portion of the phase space or of the trajectory.

In particular, Bandt and Pompe (BP) ["Permutation Entropy: A Natural Complexity Measure for Time Series." Phys. Rev. Lett. 2002, 88, 174102] introduced a simple and robust symbolic methodology that takes into account the time causality of the time series (a causal coarse-grained methodology) by comparing neighboring values in the series. The symbolic data are (i) created by ranking the values of the series and (ii) defined by reordering the embedded data in ascending order, which is tantamount to a phase-space reconstruction with embedding dimension (pattern length) D ≥ 2, D ∈ N, and time lag τ ∈ N. In this way, it is possible to quantify the diversity of the ordering symbols (patterns) derived from a scalar time series. Note that the appropriate symbol sequence arises naturally from the time series, and no model-based assumptions are needed. In fact, the necessary "partitions" are devised by comparing the order of neighboring relative values rather than by apportioning amplitudes according to different levels. This technique, as opposed to most of those in current practice, takes into account the temporal structure of the time series generated by the physical process under study. As such, it allows us to uncover important details concerning the ordinal structure of the time series and can also yield information about temporal correlation. Furthermore, the ordinal patterns associated with the PDF are invariant with respect to nonlinear monotonous transformations. Accordingly, nonlinear drifts or scalings artificially introduced by a measurement device will not modify the estimation of the quantifiers, a nice property when dealing with experimental data.

Among other methodologies of the non-causal coarse-grained type, we can mention frequency counting, procedures based on amplitude statistics, binary symbolic dynamics, Fourier analysis, and wavelet transform.
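The Bandt–Pompe recipe summarized above lends itself to a compact implementation. The sketch below is an illustrative reading of that procedure, with D, τ, and the test signals chosen arbitrarily and ties broken by sample index; it counts the ordinal patterns of the embedded vectors and evaluates the normalized permutation entropy.

```python
# Minimal sketch of the Bandt-Pompe symbolic methodology: embed the series
# with dimension D and lag tau, rank each embedded vector to obtain its
# ordinal pattern, estimate the pattern PDF, and compute the normalized
# permutation entropy. Only NumPy is assumed.
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, D=3, tau=1):
    """Normalized Bandt-Pompe permutation entropy of a 1D time series."""
    x = np.asarray(x)
    n_patterns = len(x) - (D - 1) * tau
    counts = Counter()
    for i in range(n_patterns):
        window = x[i : i + D * tau : tau]        # embedded vector of length D
        pattern = tuple(np.argsort(window))      # ordinal pattern (ranking)
        counts[pattern] += 1
    p = np.array(list(counts.values()), dtype=float) / n_patterns
    h = -np.sum(p * np.log(p))                   # Shannon entropy of the pattern PDF
    return h / math.log(math.factorial(D))       # normalize by log(D!)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noise = rng.normal(size=5_000)               # irregular series: entropy near 1
    ramp = np.arange(5_000, dtype=float)         # monotone series: one pattern, entropy 0
    print(permutation_entropy(noise, D=4), permutation_entropy(ramp, D=4))
```

The contrast between the two test signals reflects the point made above: the quantifier depends only on the order of neighboring values, so a monotone series yields a single pattern (entropy 0), while white noise populates all D! patterns roughly equally (entropy close to 1).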
The suitability of each of the proposed methodologies depends on the peculiarities of the data, such as stationarity, length of the series, variation of the parameters, level of noise contamination, etc. In all these cases, global aspects of the dynamics can be somehow captured, but the different approaches are not equivalent in their ability to discern all relevant physical details.

In relation to other quantifiers, we can mention those based on mutual information, which rigorously quantifies, in units known as "bits", how much information the value of one variable reveals about the value of another. It is a dimensionless quantity that can be thought of as the reduction in uncertainty about one random variable given knowledge of another. Fisher information, which predates the Shannon entropy, and the more recent statistical complexities have also proved to be useful and powerful tools in different scenarios, allowing in particular the analysis of time series and data series independently of their sources. The Fisher information measure can be variously interpreted as a measure of the ability to estimate a parameter, as the amount of information that can be extracted from a set of measurements, and as a measure of the state of disorder of a system or phenomenon.

Among the most recent entropy proposals, we can mention approximate entropy, sample entropy, delayed permutation entropy, and permutation min-entropy. That is, different methodologies have been used to understand the mechanisms behind information processing. Among them, there are also methods of frequency analysis like the wavelet transform (WT), which distinguishes itself from the others by its high efficiency in feature extraction. Wavelet analysis is the appropriate mathematical tool to analyze signals in the time and frequency domains. All these measures have important applications not only in physics but also in quite distinct areas, such as biology, medicine, economics, the cognitive sciences, numerical and computational sciences, big data analysis, complex networks, and neuroscience.

In summary, in the present Special Issue, manuscripts focused on any of the above-mentioned "Information Theoretic Measures, such as Mutual Information, Permutation Entropy Approaches, Sample Entropy, Wavelet Entropy and their Evaluations", as well as their interdisciplinary applications, are more than welcome. In this Special Issue, a series of articles under the common denominator of information theoretic measures and their applications is presented. In particular, a brief description of the content of each of the papers included is given below.
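As a small numerical companion to the description of mutual information in bits, the following sketch uses a plug-in histogram estimator; the binning and the test signals are assumptions for illustration, not the methodology of any paper in the issue. It computes I(X;Y) from a joint histogram and shows the expected contrast between independent and strongly coupled signals.

```python
# Minimal sketch: mutual information I(X;Y) in bits, interpreted as the
# reduction in uncertainty about one variable given the other, estimated
# from a 2D histogram of two discretized signals. NumPy is assumed; the
# bin count (16) is an arbitrary illustrative choice.
import numpy as np

def mutual_information_bits(x, y, n_bins=16):
    """Plug-in estimate of I(X;Y) in bits from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    pxy = joint / joint.sum()                    # joint PDF estimate
    px = pxy.sum(axis=1, keepdims=True)          # marginal of X
    py = pxy.sum(axis=0, keepdims=True)          # marginal of Y
    mask = pxy > 0                               # skip empty cells (0 log 0 := 0)
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.normal(size=20_000)
    y_indep = rng.normal(size=20_000)            # independent of x: I close to 0 bits
    y_dep = x + 0.1 * rng.normal(size=20_000)    # strongly coupled: I well above 0 bits
    print(mutual_information_bits(x, y_indep), mutual_information_bits(x, y_dep))
```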