Sparse Adaptive Filters for Echo Cancellation [Paperback]

Adaptive filters with a large number of coefficients are usually involved in both network and acoustic echo cancellation. Consequently, it is important to improve the convergence rate and tracking of the conventional algorithms used for these applications. This can be achieved by exploiting the sparseness of the echo paths. Identification of sparse impulse responses was addressed mainly in the last decade, with the development of the so-called "proportionate"-type algorithms. The goal of this book is to present the most important sparse adaptive filters developed for echo cancellation. Besides a comprehensive review of the basic proportionate-type algorithms, we present some of the latest developments in the field and propose new solutions for further performance improvement, e.g., variable step-size versions and novel proportionate-type affine projection algorithms. An experimental study compares many of these sparse adaptive filters in different echo cancellation scenarios.
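To give a concrete flavor of the proportionate idea described above, the following is a minimal Python sketch of the improved PNLMS (IPNLMS) coefficient update covered in Section 5.4, written as it is commonly stated in the adaptive-filtering literature. The function name, argument defaults, and regularization constants are illustrative assumptions, not code from the book.

    import numpy as np

    def ipnlms_update(w, x, d, mu=0.5, alpha=0.0, delta=1e-2, eps=1e-8):
        # One IPNLMS iteration (a sketch; names and defaults are illustrative).
        #   w     : current filter coefficients, shape (L,)
        #   x     : most recent L far-end input samples (regressor), shape (L,)
        #   d     : desired sample (microphone signal containing the echo)
        #   mu    : step size, 0 < mu <= 1
        #   alpha : proportionality control in [-1, 1)
        L = w.size
        e = d - w @ x  # a priori error
        # Per-coefficient gains: a mix of a uniform term and a term
        # proportional to the coefficient magnitudes (the "proportionate" part)
        k = (1 - alpha) / (2 * L) \
            + (1 + alpha) * np.abs(w) / (2 * np.sum(np.abs(w)) + eps)
        # Normalized, proportionate coefficient update
        w = w + mu * e * k * x / (x @ (k * x) + delta)
        return w, e

With alpha = -1 the gains collapse to the uniform 1/L of the NLMS algorithm, while values of alpha closer to 1 concentrate adaptation on the large, active coefficients; on sparse echo paths this is what yields the faster convergence discussed above.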
1 Introduction
1.1 Echo Cancellation
1.2 Double-Talk Detection
1.3 Sparse Adaptive Filters
1.4 Notation
2 Sparseness Measures
2.1 Vector Norms
2.2 Examples of Impulse Responses
2.3 Sparseness Measure Based on the l0 Norm
2.4 Sparseness Measure Based on the l1 and l2 Norms
2.5 Sparseness Measure Based on the l1 and l∞ Norms
2.6 Sparseness Measure Based on the l2 and l∞ Norms
3 Performance Measures
3.1 Mean-Square Error
3.2 Echo-Return Loss Enhancement
3.3 Misalignment
4 Wiener and Basic Adaptive Filters
4.1 Wiener Filter
4.1.1 Efficient Computation of the Wiener-Hopf Equations
4.2 Deterministic Algorithm
4.3 Stochastic Algorithm
4.4 Variable Step-Size NLMS Algorithm
4.4.1 Convergence of the Misalignment
4.5 Sign Algorithms
5 Basic Proportionate-Type NLMS Adaptive Filters
5.1 General Derivation
5.2 The Proportionate NLMS (PNLMS) and PNLMS++ Algorithms
5.3 The Signed Regressor PNLMS Algorithm
5.4 The Improved PNLMS (IPNLMS) Algorithms
5.4.1 The Regular IPNLMS
5.4.2 The IPNLMS with the l0 Norm
5.4.3 The IPNLMS with a Norm-Like Diversity Measure
6 The Exponentiated Gradient Algorithms
6.1 Cost Function
6.2 The EG Algorithm for Positive Weights
6.3 The EG± Algorithm for Positive and Negative Weights
6.4 Link Between NLMS and EG± Algorithms
6.5 Link Between IPNLMS and EG± Algorithms
7 The Mu-Law PNLMS and Other PNLMS-Type Algorithms
7.1 The Mu-Law PNLMS Algorithms
7.2 The Sparseness-Controlled PNLMS Algorithms
7.3 The PNLMS Algorithm with Individual Activation Factors
8 Variable Step-Size PNLMS Algorithms
8.1 Considerations on the Convergence of the NLMS Algorithm
8.2 A Variable Step-Size PNLMS Algorithm
9 Proportionate Affine Projection Algorithms
9.1 Classical Derivation
9.2 A Novel Derivation
9.3 A Variable Step-Size Version
10 Experimental Study
10.1 Experimental Conditions
10.2 IPNLMS Versus PNLMS
10.3 MPNLMS, SC-PNLMS, and IAF-PNLMS
10.4 VSS-IPNLMS
10.5 PAPAs
Bibliography
Index
Authors' Biographies