Journal of Electronics and Communications, 2003, 57(4): 295-299

Combined Adaptive Filter with LMS-Based Algorithms

Abstract: A combined adaptive filter is proposed. It consists of parallel LMS-based adaptive FIR filters and an algorithm for choosing the better among them.
As a criterion for comparison of the considered algorithms within the proposed filter, we take the ratio between the bias and the variance of the weighting coefficients. Simulation results confirm the advantages of the proposed adaptive filter.

Keywords: Adaptive filter, LMS algorithm, Combined algorithm, Bias and variance trade-off

1. Introduction

Adaptive filters have been applied in signal processing and control, as well as in many practical problems [1, 2]. The performance of an adaptive filter depends mainly on the algorithm used for updating the filter weighting coefficients. The most commonly used adaptive systems are those based on the Least Mean Square (LMS) adaptive algorithm and its modifications (LMS-based algorithms). The LMS is simple to implement and robust in a number of applications [1-3]. However, since it does not always converge in an acceptable manner, there have been many attempts to improve its performance by appropriate modifications: the sign algorithm (SA) [8], the geometric mean LMS (GLMS) [5], and the variable step-size LMS (VS LMS) [6, 7]. Each of the LMS-based algorithms has at least one parameter that should be defined prior to the adaptation procedure (the step for LMS and SA; the step and smoothing coefficient for GLMS; various parameters affecting the step for VS LMS). These parameters crucially influence the filter output during the two adaptation phases: transient and steady state. The choice of these parameters is mostly based on some kind of trade-off between the quality of algorithm performance in the two adaptation phases.

We propose a possible approach to improving the performance of LMS-based adaptive filters. Namely, we combine several LMS-based FIR filters with different parameters, and provide a criterion for choosing the most suitable algorithm in different adaptation phases. This method may be applied to all LMS-based algorithms, although we consider only several of them here.

The paper is organized as follows. An overview of the considered LMS-based algorithms is given in Section 2. Section 3 proposes the criterion for evaluation and combination of adaptive algorithms. Simulation results are presented in Section 4.

2. LMS-based algorithms

Let us define the input signal vector $X_k = [x(k), x(k-1), \dots, x(k-N+1)]^T$ and the vector of weighting coefficients as $W_k = [W_0(k), W_1(k), \dots, W_{N-1}(k)]^T$. The weighting coefficient vector should be calculated according to:
$$W_{k+1} = W_k + 2\mu E\{e_k X_k\} \qquad (1)$$

where $\mu$ is the algorithm step, $E\{\cdot\}$ is an estimate of the expected value, $e_k = d_k - W_k^T X_k$ is the error at instant $k$, and $d_k$ is a reference signal. Depending on the estimation of the expected value in (1), one defines various forms of adaptive algorithms: the LMS, with $E\{e_k X_k\} = e_k X_k$; the GLMS, with $E\{e_k X_k\} = \sum_{i=0}^{k} a^i e_{k-i} X_{k-i}$, $0 < a \le 1$; and the SA, with $E\{e_k X_k\} = X_k\,\mathrm{sign}(e_k)$ [1, 2, 5, 8]. The VS LMS has the same form as the LMS, but in the adaptation the step $\mu(k)$ is changed [6, 7].

The considered adaptive filtering problem consists in trying to adjust the set of weighting coefficients so that the system output, $y_k = W_k^T X_k$, tracks a reference signal, assumed as $d_k = W_k^{*T} X_k + n_k$, where $n_k$ is zero-mean Gaussian noise with variance $\sigma_n^2$, and $W_k^*$ is the optimal weight vector (the Wiener vector). Two cases will be considered: $W_k^* = W^*$ constant (the stationary case), and $W_k^*$ time-varying (the nonstationary case). In the nonstationary case the unknown system parameters (i.e. the optimal vector $W_k^*$) are time-variant. It is often assumed that the variation of $W_k^*$ may be modeled as $W_k^* = W_{k-1}^* + Z_k$.
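The update rules above can be sketched in code. The following is a minimal illustration with our own toy parameters (filter length, step size, noise level); it is not the paper's simulation setup, only a concrete rendering of (1) for the LMS, SA, and GLMS estimates of $E\{e_k X_k\}$.

```python
import numpy as np

def lms_update(w, x, d, mu):
    """Plain LMS: E{e_k X_k} estimated by the instantaneous value e_k X_k."""
    e = d - w @ x
    return w + 2 * mu * e * x

def sa_update(w, x, d, mu):
    """Sign algorithm: X_k * sign(e_k) replaces e_k X_k in the update."""
    e = d - w @ x
    return w + 2 * mu * x * np.sign(e)

def glms_update(w, x, d, mu, a, acc):
    """GLMS: exponentially weighted running sum acc_k = e_k X_k + a*acc_{k-1},
    equivalent to sum_{i=0..k} a^i e_{k-i} X_{k-i}.  Returns (w, acc)."""
    e = d - w @ x
    acc = e * x + a * acc
    return w + 2 * mu * acc, acc

# Example: identify a 4-tap FIR system (toy setup) with the plain LMS.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -0.5, 0.25, 0.1])   # hypothetical Wiener vector
w = np.zeros(4)
xs = rng.standard_normal(600)               # white input signal
for k in range(4, 600):
    x = xs[k-4:k][::-1]                     # X_k = [x(k), x(k-1), ...]
    d = w_true @ x + 0.01 * rng.standard_normal()  # reference d_k with noise
    w = lms_update(w, x, d, mu=0.05)
```

After a few hundred iterations `w` settles close to `w_true`; the SA and GLMS variants plug into the same loop in the obvious way.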
Here $Z_k$ is a zero-mean random perturbation, independent of $X_k$ and $n_k$, with the autocorrelation matrix $G = E\{Z_k Z_k^T\} = \sigma_Z^2 I$. Note that the analysis for the stationary case follows directly by setting $\sigma_Z^2 = 0$. The weighting coefficient vector converges to the Wiener one if the condition from [1, 2] is satisfied.

Define the weighting coefficient misalignment [1-3], $V_k = W_k - W_k^*$. It is due to both the effects of gradient noise (weighting coefficient variations around the average value) and the weighting vector lag (the difference between the average and the optimal value) [3]. It can be expressed as:

$$V_k = [W_k - E\{W_k\}] + [E\{W_k\} - W_k^*] \qquad (2)$$

According to (2), the $i$th element of $V_k$ is:

$$V_i(k) = [W_i(k) - E\{W_i(k)\}] + [E\{W_i(k)\} - W_i^*(k)] = \rho_i(k) + \mathrm{bias}(W_i(k)) \qquad (3)$$

where $\mathrm{bias}(W_i(k))$ is the weighting coefficient bias and $\rho_i(k)$ is a zero-mean random variable with variance $\sigma^2$. The variance depends on the type of LMS-based algorithm, as well as on the external noise variance $\sigma_n^2$. Thus, if the noise variance is constant or slowly varying, $\sigma^2$ is time-invariant for a particular LMS-based algorithm. In that sense, in the analysis that follows we will assume that $\sigma^2$ depends only on the algorithm type, i.e. on its parameters.

An important performance measure for an adaptive filter is the mean square deviation (MSD) of the weighting coefficients. For adaptive filters it is given by [3]: $\mathrm{MSD} = \lim_{k\to\infty} E\{V_k^T V_k\}$.
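The bias/variance split in (3) and the MSD can be checked numerically by running many independent adaptations of the same filter. The sketch below uses a toy stationary setup of our own (system, step, and noise level are illustrative, not taken from the paper): averaging the final weights across runs estimates $E\{W_k\}$, from which the bias and the gradient-noise variance follow.

```python
import numpy as np

# Monte Carlo estimate of bias, variance, and MSD for the plain LMS
# on a toy stationary system identification problem (our own setup).
rng = np.random.default_rng(1)
N, runs, steps, mu, sigma_n = 4, 200, 400, 0.05, 0.1
w_true = np.array([1.0, -0.5, 0.25, 0.1])   # hypothetical Wiener vector

final_w = np.zeros((runs, N))
for r in range(runs):
    w = np.zeros(N)
    xs = rng.standard_normal(steps + N)
    for k in range(N, steps + N):
        x = xs[k-N:k][::-1]
        d = w_true @ x + sigma_n * rng.standard_normal()
        e = d - w @ x
        w = w + 2 * mu * e * x
    final_w[r] = w

mean_w = final_w.mean(axis=0)          # estimate of E{W_k}
bias = mean_w - w_true                 # bias term in (3); ~0 when stationary
var = final_w.var(axis=0)              # gradient-noise variance in (3)
msd = np.mean(np.sum((final_w - w_true) ** 2, axis=1))  # E{V_k^T V_k}
```

In this stationary case the bias estimate is close to zero and the MSD is dominated by the gradient-noise variance, as expected from (2)-(3).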
3. Combined adaptive filter

The basic idea of the combined adaptive filter lies in the parallel implementation of two or more adaptive LMS-based algorithms, with the choice of the best among them in each iteration [9]. The choice of the most appropriate algorithm in each iteration reduces to the choice of the best value for the weighting coefficients. The best weighting coefficient is the one that is, at a given instant, the closest to the corresponding value of the Wiener vector. Let $W_i(k, q)$ be the $i$th weighting coefficient of an LMS-based algorithm with the chosen parameter $q$ at instant $k$. Note that one may now treat all the algorithms in a unified way (LMS: $q \equiv \mu$; GLMS: $q \equiv a$; SA: $q \equiv \mu$). The behavior of an LMS-based algorithm depends crucially on $q$. In each iteration there is an optimal value $q_{opt}$ producing the best performance of the adaptive algorithm.
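The per-coefficient choice between two parallel filters can be sketched as follows. The interval bounds follow the form of the confidence intervals defined below; the decision rule (keep the lower-variance estimate when the intervals intersect, otherwise take the small-bias, larger-step estimate) is our reading of the text, not a formula quoted from the paper, and the parameter names are ours.

```python
import numpy as np

def choose_weight(w_fast, sig_fast, w_slow, sig_slow, kappa=2.0):
    """Choose per coefficient between two LMS-based estimates.

    Each estimate w gets an interval [w - 2*kappa*sig, w + 2*kappa*sig].
    If the two intervals intersect, the bias is presumed small and the
    lower-variance ("slow", small-q) estimate is kept; otherwise the slow
    filter is presumed to lag (large bias) and the "fast" (large-q,
    small-bias) estimate is taken.  Assumed decision rule, see lead-in.
    """
    lo_f, hi_f = w_fast - 2*kappa*sig_fast, w_fast + 2*kappa*sig_fast
    lo_s, hi_s = w_slow - 2*kappa*sig_slow, w_slow + 2*kappa*sig_slow
    intersect = np.maximum(lo_f, lo_s) <= np.minimum(hi_f, hi_s)
    return np.where(intersect, w_slow, w_fast)

# Steady state: both near the optimum, intervals overlap -> keep slow (1.0).
steady = choose_weight(np.array([1.02]), 0.05, np.array([1.0]), 0.01)
# Transient: slow filter lags at 0.2, intervals disjoint -> take fast (0.9).
transient = choose_weight(np.array([0.9]), 0.05, np.array([0.2]), 0.01)
```

The same comparison is applied coefficient by coefficient, so the combined filter can mix weights from different member algorithms within one iteration.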
Analyze now a combined adaptive filter with several LMS-based algorithms of the same type, but with different parameters $q$. The weighting coefficients are random variables distributed around $W_i^*(k)$, with $\mathrm{bias}(W_i(k,q))$ and variance $\sigma_q^2$, related by [4, 9]:

$$|W_i(k,q) - W_i^*(k) - \mathrm{bias}(W_i(k,q))| \le \kappa \sigma_q \qquad (4)$$

where (4) holds with probability $P(\kappa)$, dependent on $\kappa$. For example, for $\kappa = 2$ and a Gaussian distribution, $P(\kappa) = 0.95$ (the two-sigma rule). Define the confidence intervals for $W_i(k,q)$ [4, 9]:

$$D_i(k) = [W_i(k,q) - 2\kappa\sigma_q,\; W_i(k,q) + 2\kappa\sigma_q] \qquad (5)$$

Then, from (4) and (5) we conclude that, as long as $|\mathrm{bias}(W_i(k,q))| \le \kappa\sigma_q$, we have $W_i^*(k) \in D_i(k)$, independently of $q$. This means that, for small bias, the confidence intervals for different $q$'s of the same LMS-based algorithm intersect. When, on the other hand, the bias becomes large, the central positions of the intervals for different $q$'s are far apart, and they do not intersect. Since we do not have a priori information about $\mathrm{bias}(W_i(k,q))$, we will use a specific statistical approach to obtain the criterion for the choice of adaptive algorithm, i.e. for the value of $q$. The criterion follows from the trade-off condition that bias and variance are of the same order of