Introduction
Filtering is an operation that removes specific frequency bands from a signal and is an important measure for suppressing and preventing interference.
In probability theory, filtering is a theory and a set of methods for estimating one stochastic process from observations of another stochastic process associated with it. The term "filtering" originates from communication theory, where it denotes a technique for extracting a useful signal from a received signal containing interference: the "received signal" corresponds to the observed process, and the "useful signal" to the process being estimated. For example, when a radar tracks an aircraft, the measured position data contain measurement errors and other random disturbances; using these data to estimate the aircraft's position, velocity, acceleration, and so on at each moment as accurately as possible, and to predict its future position, is a filtering and prediction problem. Such problems abound in electronic technology, aerospace science, control engineering, and other branches of science and technology. Historically, the earliest problem considered was Wiener filtering; later, in the 1960s, R. E. Kalman and R. S. Bucy proposed Kalman filtering. Research on general nonlinear filtering problems remains quite active.
Basic concept
Filtering is an important concept in signal processing.
Classical filtering
Classical filtering is an engineering concept based on Fourier analysis and the Fourier transform. According to mathematical analysis, any signal that satisfies certain conditions can be regarded as a superposition of infinitely many sine waves. In other words, an engineering signal is a linear superposition of sine waves of different frequencies. The sine waves of different frequencies that make up the signal are called its frequency components or harmonic components. A circuit that lets signal components in a certain frequency range pass normally while blocking the remaining frequency components is called a classical filter, or filter circuit.
In fact, any electronic system has its own bandwidth (a limit on the highest signal frequency it can pass), and its frequency characteristic reflects this basic feature of the electronic system. A filter is an engineering circuit designed by exploiting the influence of circuit parameters on the circuit's bandwidth.
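To make concrete the earlier point that an engineering signal is a linear superposition of sine waves of different frequencies, the following short sketch (using hypothetical component frequencies and the NumPy library, neither of which appears in the original text) builds a signal from three sine waves and recovers their frequencies with the discrete Fourier transform:

```python
import numpy as np

# Hypothetical example: a signal formed as a linear superposition of
# three sine waves, with its harmonic components recovered by the DFT.
fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)      # one second of samples

signal = (1.0 * np.sin(2 * np.pi * 5 * t)
          + 0.5 * np.sin(2 * np.pi * 50 * t)
          + 0.2 * np.sin(2 * np.pi * 120 * t))

# Magnitude spectrum: peaks appear at the frequencies of the components
spectrum = 2.0 * np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
for f in (5, 50, 120):
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f} Hz component amplitude: {spectrum[idx]:.2f}")
```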
Modern filtering
An analog signal is filtered by an analog electronic circuit. The basic principle is to use the frequency-selective characteristics of the circuit to pick out particular frequency components of the signal. In this view the signal is regarded as an analog waveform formed by superposing sine waves of different frequencies, and filtering is realized by selecting among those frequency components. A filter is called a high-pass filter when it allows the higher-frequency components of the signal to pass through; it is called a low-pass filter when it allows the lower-frequency components of the signal to pass through.
It is called a band-pass filter when only components within a certain frequency range of the signal are allowed to pass through.
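The three filter types just described can be sketched, for example, with SciPy's Butterworth design routine; the filter order, cutoff frequencies, and sampling rate below are illustrative assumptions, not values from the text.

```python
import numpy as np
from scipy import signal

fs = 1000.0  # assumed sampling rate in Hz

# Design 4th-order Butterworth filters of the three types.
# Cutoffs are given as fractions of the Nyquist frequency (fs / 2).
b_lp, a_lp = signal.butter(4, 50 / (fs / 2), btype='lowpass')    # passes f < 50 Hz
b_hp, a_hp = signal.butter(4, 200 / (fs / 2), btype='highpass')  # passes f > 200 Hz
b_bp, a_bp = signal.butter(4, [50 / (fs / 2), 200 / (fs / 2)], btype='bandpass')

# Apply the low-pass filter to a test signal: a 10 Hz tone plus a 300 Hz tone.
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 300 * t)
y = signal.lfilter(b_lp, a_lp, x)   # the 300 Hz component is strongly attenuated
```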
The behavior of an ideal filter is usually described by an amplitude-frequency characteristic diagram, also called the amplitude-frequency characteristic of the filter circuit.
Detailed information
For a filter, the frequency range in which the gain magnitude is not zero is called the passband, and the frequency range in which the gain magnitude is zero is called the stopband. For example, for a low-pass filter with cutoff frequency ω1, the interval from -ω1 to ω1 is the passband, and the remaining frequencies form the stopband. The passband represents the signal frequency components that pass through the filter without attenuation, and the stopband represents the signal frequency components that are attenuated by the filter. The gain a signal obtains in the passband is called the passband gain, and the attenuation it suffers in the stopband is called the stopband attenuation. In engineering practice, dB is generally used as the unit of a filter's amplitude gain.
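The passband gain and stopband attenuation in dB can be read off the filter's amplitude response; the following sketch (reusing the hypothetical 50 Hz low-pass design from above) evaluates the response with scipy.signal.freqz and converts it to dB.

```python
import numpy as np
from scipy import signal

fs = 1000.0
b, a = signal.butter(4, 50 / (fs / 2), btype='lowpass')   # assumed 50 Hz low-pass

# Amplitude-frequency characteristic, expressed in dB
w, h = signal.freqz(b, a, worN=2048, fs=fs)
gain_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))

for f in (10, 50, 200):
    idx = int(np.argmin(np.abs(w - f)))
    print(f"gain at {f:3d} Hz: {gain_db[idx]:6.1f} dB")
# Roughly 0 dB in the passband (10 Hz), about -3 dB at the 50 Hz cutoff,
# and strongly negative (stopband attenuation) at 200 Hz.
```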
According to whether filtering is carried out over a whole time interval or only at some sampling points, it can be divided into continuous-time filtering and discrete-time filtering. In the former case the time parameter set T can be the half-line [0, ∞) or the whole real line (-∞, ∞); in the latter, T can be the set of non-negative integers {0, 1, 2, ...} or the set of all integers {..., -2, -1, 0, 1, 2, ...}. Let X = {X_t, t ∈ T} and Y = {Y_t, t ∈ T} be processes with finite second moments, where X is the estimated process, which cannot be observed directly, and Y is the observed process, which contains some information about X. Using all of the observation data up to time t, if a function X̂_t of {Y_s, s ≤ t} can be found that minimizes the mean square error E|X_t - X̂_t|², it is called the optimal filter of X_t; if the minimization is restricted to linear functions of the observations, the minimizer is called the linear optimal filter of X_t. It can be proved that both the optimal filter and the linear optimal filter exist and are unique with probability one. The former is the conditional expectation of X_t given σ(Y_s, s ≤ t), the σ-field generated by the observations up to time t. If, in addition, EX_t ≡ EY_t ≡ 0, the linear optimal filter is the projection of X_t onto the Hilbert space spanned by {Y_s, s ≤ t}. If (X, Y) is a two-dimensional normal (Gaussian) process, the optimal filter coincides with the linear optimal filter.
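Restating the above definitions compactly in standard notation (a sketch added here, not notation taken from the source), with EX_t ≡ EY_t ≡ 0 assumed for the linear case:

```latex
% Optimal filter: the mean-square-optimal estimate is the conditional expectation,
% with Z ranging over square-integrable functions of the observations up to time t.
\hat X_t \;=\; \arg\min_{Z}\ \mathbb{E}\,|X_t - Z|^2
         \;=\; \mathbb{E}\bigl[\, X_t \mid Y_s,\ s \le t \,\bigr]

% Linear optimal filter: the orthogonal projection onto the closed linear span
% of the observations (assuming zero means).
\tilde X_t \;=\; \operatorname{proj}\bigl( X_t \,\big|\, \overline{\operatorname{span}}\{\, Y_s,\ s \le t \,\} \bigr)
```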
For convenience of application and exposition, the above definitions are sometimes classified in more detail. Let τ be a real number or integer determined by the filtering task, and consider estimating X at time t + τ from the observations up to time t; according to whether τ = 0, τ > 0, or τ < 0, the problem is called filtering, prediction (extrapolation), or interpolation (smoothing), respectively. When the observation noise vanishes and τ > 0, the problem becomes the linear prediction of X itself under error-free observation; when the noise N is not zero and τ ≤ 0, it becomes the filtering problem of extracting the useful signal X from the received signal Y disturbed by the noise N. From 1939 to 1941, A. N. Kolmogorov, using the Wold decomposition of stationary sequences (see stationary process), gave the general theory and treatment of linear prediction, which was later extended to continuous-time stationary processes. In 1942, N. Wiener, assuming that the stationary sequences and processes in question have spectral densities satisfying certain regularity conditions, used spectral decomposition to derive explicit expressions for the linear optimal prediction and filter, namely the Wiener filtering formula. It has been applied in anti-aircraft fire control, electronic engineering, and other areas. In the 1950s, the above model was extended to stationary processes observed only over a finite time interval and to certain special non-stationary processes, and its range of application was extended to more fields. It remains one of the powerful tools for processing various kinds of dynamic data (such as in meteorology, hydrology, and seismic exploration) and for prediction.
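The Wiener filtering formula takes a particularly simple frequency-domain form in the common special case of an additive model Y_t = X_t + N_t with X and N uncorrelated stationary processes; the non-causal (smoothing) version shown below is a standard textbook form added here for illustration, not a formula quoted from the source, and the strictly causal filtering case additionally requires spectral factorization.

```latex
% Non-causal Wiener filter for Y_t = X_t + N_t, with X and N uncorrelated;
% S_X and S_N are the spectral densities of the signal and of the noise.
H(\omega) \;=\; \frac{S_X(\omega)}{S_X(\omega) + S_N(\omega)}
```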
The Wiener filtering formula is derived from the spectral decomposition of stationary processes, and it is difficult to generalize to more general non-stationary processes and to the multi-dimensional case, so its range of application is limited. Moreover, when observations keep arriving, it is difficult to obtain the new filtered value cheaply from the previously computed filtered value and the new observation; in particular, the need to process large amounts of data quickly on a computer cannot be satisfied.
Kalman filtering
Driven by the development of high-speed electronic computers and the need for filtering techniques to determine satellite orbits and to support navigation, R. E. Kalman and R. S. Bucy proposed a new class of linear filtering models and methods in the early 1960s, now known as Kalman filtering. The basic assumption is that the estimated process X is the output of a finite-dimensional linear dynamic system driven by random noise, and the observation Y_t is the superposition of some components of X_t (or a linear function of them) and measurement noise; stationarity is not required, but the noise values at different times must be uncorrelated. In addition, observations need only start at some finite moment rather than extend over an infinitely long interval. More importantly, to suit the characteristics of electronic computers, the Kalman filter is not an explicit functional expression of the estimate in terms of the observations, but a recursive (i.e., real-time) algorithm. Specifically, for discrete-time filtering, as long as the dimension of X is suitably enlarged, the filtered value at time t can be expressed as a linear combination of the filtered value at the previous time and the observation Y_t at the current time. For continuous-time filtering, a linear stochastic differential equation satisfied by the filtered value and driven by Y_t can be given. In situations where observations keep arriving and filtered values must be output continually, such an algorithm speeds up the processing of data and reduces the amount of data that must be stored. Kalman also proved that if the linear system under consideration satisfies certain "controllability" and "observability" conditions (two important concepts introduced by Kalman in modern control theory), then the optimal filter is "asymptotically stable": roughly speaking, the effects of initial error, rounding error, and other inaccuracies gradually die out or remain bounded as the filtering time increases, so that no accumulation of error builds up. This is very important in practical applications.
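To illustrate the recursive form just described, here is a minimal sketch of the scalar discrete-time Kalman recursion; the state model x_t = a·x_{t-1} + w_t, the observation model y_t = x_t + v_t, and all numerical parameters are illustrative assumptions, not values from the text.

```python
import numpy as np

def kalman_filter(y, a=1.0, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman recursion for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t,
    with Var(w_t) = q and Var(v_t) = r (all parameters assumed)."""
    x_hat, p = x0, p0
    estimates = []
    for yt in y:
        # Predict: propagate the previous filtered value through the dynamics
        x_pred = a * x_hat
        p_pred = a * p * a + q
        # Update: the new filtered value is a linear combination of the
        # predicted value and the current observation y_t
        k = p_pred / (p_pred + r)            # Kalman gain
        x_hat = x_pred + k * (yt - x_pred)
        p = (1.0 - k) * p_pred
        estimates.append(x_hat)
    return np.array(estimates)

# Usage: recover a constant level from noisy observations
rng = np.random.default_rng(0)
y = 1.0 + np.sqrt(0.1) * rng.standard_normal(200)
print(kalman_filter(y)[-3:])   # estimates settle near the true value 1.0
```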
Kalman filtering has also been developed in a variety of forms, for example by relaxing the requirement that the noises be uncorrelated, by approximating nonlinear systems with linear ones, and through so-called "adaptive filtering"; its range of applications continues to widen.
Nonlinear filtering
As explained above, general nonlinear optimal filtering can be reduced to the problem of computing a conditional expectation. For a finite number of observations, the conditional expectation can in principle be computed with the Bayes formula. But even in relatively simple situations, the resulting expressions are quite complicated and are inconvenient for practical application or theoretical study. By analogy with Kalman filtering, one would like to give some kind of recursive algorithm for the nonlinear filter, or a stochastic differential equation that it satisfies. In general these do not exist, so the processes X and Y under consideration must be suitably restricted. Research on nonlinear filtering is quite active. It involves many modern results of stochastic process theory, such as the general theory of stochastic processes, martingales, stochastic differential equations, and point processes. One of the most important questions is to determine under what conditions there exists a martingale M such that M and Y carry the same information at every time; such an M is called the innovation process of Y. At present, for a class of so-called "conditionally normal processes," a rigorously implementable recursive equation for the nonlinear optimal filter has been given. In practical applications, various linear approximation methods are often used for nonlinear filtering problems.
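For reference, the conditional-distribution recursion that general nonlinear (Bayesian) filtering seeks to compute can be sketched as the standard prediction and update steps below; this form is added for illustration and is not quoted from the source.

```latex
% Prediction: propagate the previous posterior through the state dynamics
p(x_t \mid y_{1:t-1}) \;=\; \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1}

% Update: condition on the new observation by the Bayes formula
p(x_t \mid y_{1:t}) \;=\; \frac{p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})}{\displaystyle\int p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})\, \mathrm{d}x_t}
```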