Adaptive Filtering: Fundamentals of Least Mean Squares with MATLAB® [Paperback]

Alexander D. Poularikas (The University of Alabama in Huntsville, USA)
  • Format: Paperback / softback, 364 pages, height x width: 234x156 mm, weight: 544 g, 19 tables, black and white; 129 illustrations, black and white
  • Publication date: 26-Sep-2014
  • Publisher: CRC Press Inc
  • ISBN-10: 1482253356
  • ISBN-13: 9781482253351

Adaptive filters are used in many diverse applications, appearing in everything from military instruments to cellphones and home appliances. Adaptive Filtering: Fundamentals of Least Mean Squares with MATLAB® covers the core concepts of this important field, focusing on a vital part of the statistical signal processing area—the least mean square (LMS) adaptive filter.

This largely self-contained text:

  • Discusses random variables, stochastic processes, vectors, matrices, determinants, discrete random signals, and probability distributions
  • Explains how to find the eigenvalues and eigenvectors of a matrix and the properties of the error surfaces
  • Explores the Wiener filter and its practical uses, details the steepest descent method, and develops Newton's algorithm
  • Addresses the basics of the LMS adaptive filter algorithm, considers LMS adaptive filter variants, and provides numerous examples
  • Delivers a concise introduction to MATLAB®, supplying problems, computer experiments, and more than 110 functions and script files

Featuring robust appendices complete with mathematical tables and formulas, Adaptive Filtering: Fundamentals of Least Mean Squares with MATLAB® clearly describes the key principles of adaptive filtering and effectively demonstrates how to apply them to solve real-world problems.
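The LMS adaptive filter at the heart of the book updates its weights by a step proportional to the instantaneous error. The book's own examples use MATLAB; the following is only a minimal, illustrative Python sketch of the same update rule, w(n+1) = w(n) + μ·e(n)·x(n), applied to identifying a hypothetical noise-free 2-tap FIR system (the taps [0.5, -0.3], step size, and signal lengths are illustrative choices, not taken from the book).

```python
# Minimal LMS adaptive filter sketch (pure Python; the book's code is MATLAB).
# Identifies a hypothetical unknown 2-tap FIR system h = [0.5, -0.3].
import random

def lms(x, d, num_taps, mu):
    """LMS weight update: w(n+1) = w(n) + mu * e(n) * x(n)."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]               # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, xn))          # filter output
        e = d[n] - y                                       # error signal
        w = [wi + mu * e * xi for wi, xi in zip(w, xn)]    # weight update
    return w

random.seed(0)
h = [0.5, -0.3]                                            # unknown system to identify
x = [random.gauss(0, 1) for _ in range(2000)]              # white Gaussian input
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]                               # desired (system output)
w = lms(x, d, num_taps=2, mu=0.05)
print([round(wi, 2) for wi in w])                          # converges toward [0.5, -0.3]
```

With white input and no measurement noise, the error decays toward zero and the weights converge to the true taps; the step size μ trades convergence speed against stability, a tradeoff the book analyzes in Chapter 8.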

Preface xi
Author xiii
Abbreviations xv
MATLAB® Functions xvii
Chapter 1 Vectors 1(16)
1.1 Introduction 1(10)
1.1.1 Multiplication by a Constant and Addition and Subtraction 1(2)
1.1.1.1 Multiplication by a Constant 1(1)
1.1.1.2 Addition and Subtraction 2(1)
1.1.2 Unit Coordinate Vectors 3(1)
1.1.3 Inner Product 3(2)
1.1.4 Distance between Two Vectors 5(1)
1.1.5 Mean Value of a Vector 5(2)
1.1.6 Direction Cosines 7(2)
1.1.7 The Projection of a Vector 9(1)
1.1.8 Linear Transformations 10(1)
1.2 Linear Independence, Vector Spaces, and Basis Vectors 11(2)
1.2.1 Orthogonal Basis Vectors 13(1)
Problems 13(1)
Hints—Suggestions—Solutions 14(3)
Chapter 2 Matrices 17(24)
2.1 Introduction 17(1)
2.2 General Types of Matrices 17(1)
2.2.1 Diagonal, Identity, and Scalar Matrices 17(1)
2.2.2 Upper and Lower Triangular Matrices 17(1)
2.2.3 Symmetric and Exchange Matrices 18(1)
2.2.4 Toeplitz Matrix 18(1)
2.2.5 Hankel and Hermitian 18(1)
2.3 Matrix Operations 18(3)
2.4 Determinant of a Matrix 21(3)
2.4.1 Definition and Expansion of a Matrix 21(1)
2.4.2 Trace of a Matrix 22(1)
2.4.3 Inverse of a Matrix 22(2)
2.5 Linear Equations 24(7)
2.5.1 Square Matrices (n x n) 24(2)
2.5.2 Rectangular Matrices (n < m) 26(1)
2.5.3 Rectangular Matrices (m < n) 27(2)
2.5.4 Quadratic and Hermitian Forms 29(2)
2.6 Eigenvalues and Eigenvectors 31(5)
2.6.1 Eigenvectors 32(1)
2.6.2 Properties of Eigenvalues and Eigenvectors 33(3)
Problems 36(1)
Hints—Suggestions—Solutions 37(4)
Chapter 3 Processing of Discrete Deterministic Signals: Discrete Systems 41(22)
3.1 Discrete-Time Signals 41(1)
3.1.1 Time-Domain Representation of Basic Continuous and Discrete Signals 41(1)
3.2 Transform-Domain Representation of Discrete Signals 42(6)
3.2.1 Discrete-Time Fourier Transform 42(2)
3.2.2 The Discrete FT 44(2)
3.2.3 Properties of DFT 46(2)
3.3 The z-Transform 48(4)
3.4 Discrete-Time Systems 52(8)
3.4.1 Linearity and Shift Invariant 52(1)
3.4.2 Causality 52(1)
3.4.3 Stability 52(5)
3.4.4 Transform-Domain Representation 57(3)
Problems 60(1)
Hints—Suggestions—Solutions 61(2)
Chapter 4 Discrete-Time Random Processes 63(58)
4.1 Discrete Random Signals, Probability Distributions, and Averages of Random Variables 63(8)
4.1.1 Stationary and Ergodic Processes 65(1)
4.1.2 Averages of RV 66(5)
4.1.2.1 Mean Value 66(1)
4.1.2.2 Correlation 67(2)
4.1.2.3 Covariance 69(2)
4.2 Stationary Processes 71(4)
4.2.1 Autocorrelation Matrix 71(3)
4.2.2 Purely Random Process (White Noise) 74(1)
4.2.3 Random Walk 74(1)
4.3 Special Random Signals and pdf's 75(5)
4.3.1 White Noise 75(1)
4.3.2 Gaussian Distribution (Normal Distribution) 75(3)
4.3.3 Exponential Distribution 78(1)
4.3.4 Lognormal Distribution 79(1)
4.3.5 Chi-Square Distribution 80(1)
4.4 Wiener—Khinchin Relations 80(3)
4.5 Filtering Random Processes 83(2)
4.6 Special Types of Random Processes 85(3)
4.6.1 Autoregressive Process 85(3)
4.7 Nonparametric Spectra Estimation 88(25)
4.7.1 Periodogram 88(2)
4.7.2 Correlogram 90(1)
4.7.3 Computation of Periodogram and Correlogram Using FFT 90(1)
4.7.4 General Remarks on the Periodogram 91(4)
4.7.4.1 Windowed Periodogram 93(2)
4.7.5 Proposed Book Modified Method for Better Frequency Resolution 95(5)
4.7.5.1 Using Transformation of the rv's 95(1)
4.7.5.2 Blackman—Tukey Method 96(4)
4.7.6 Bartlett Periodogram 100(6)
4.7.7 The Welch Method 106(3)
4.7.8 Proposed Modified Welch Methods 109(12)
4.7.8.1 Modified Method Using Different Types of Overlapping 109(2)
4.7.8.2 Modified Welch Method Using Transformation of rv's 111(2)
Problems 113(1)
Hints—Solutions—Suggestions 114(7)
Chapter 5 The Wiener Filter 121(50)
5.1 Introduction 121(1)
5.2 The LS Technique 121(19)
5.2.1 Linear LS 122(3)
5.2.2 LS Formulation 125(5)
5.2.3 Statistical Properties of LSEs 130(2)
5.2.4 The LS Approach 132(3)
5.2.5 Orthogonality Principle 135(1)
5.2.6 Corollary 135(1)
5.2.7 Projection Operator 136(2)
5.2.8 LS Finite Impulse Response Filter 138(2)
5.3 The Mean-Square Error 140(6)
5.3.1 The FIR Wiener Filter 142(4)
5.4 The Wiener Solution 146(5)
5.4.1 Orthogonality Condition 148(1)
5.4.2 Normalized Performance Equation 149(1)
5.4.3 Canonical Form of the Error-Performance Surface 150(1)
5.5 Wiener Filtering Examples 151(11)
5.5.1 Minimum MSE 154(1)
5.5.2 Optimum Filter (w°) 154(7)
5.5.3 Linear Prediction 161(1)
Problems 162(2)
Additional Problems 164(1)
Hints—Solutions—Suggestions 164(2)
Additional Problems 166(5)
Chapter 6 Eigenvalues of Rx: Properties of the Error Surface 171(12)
6.1 The Eigenvalues of the Correlation Matrix 171(3)
6.1.1 Karhunen—Loeve Transformation 172(2)
6.2 Geometrical Properties of the Error Surface 174(4)
Problems 178(1)
Hints—Solutions—Suggestions 178(5)
Chapter 7 Newton's and Steepest Descent Methods 183(20)
7.1 One-Dimensional Gradient Search Method 183(3)
7.1.1 Gradient Search Algorithm 183(2)
7.1.2 Newton's Method in Gradient Search 185(1)
7.2 Steepest Descent Algorithm 186(6)
7.2.1 Steepest Descent Algorithm Applied to Wiener Filter 187(1)
7.2.2 Stability (Convergence) of the Algorithm 188(2)
7.2.3 Transient Behavior of MSE 190(1)
7.2.4 Learning Curve 191(1)
7.3 Newton's Method 192(2)
7.4 Solution of the Vector Difference Equation 194(3)
Problems 197(1)
Additional Problems 197(1)
Hints—Solutions—Suggestions 198(2)
Additional Problems 200(3)
Chapter 8 The Least Mean-Square Algorithm 203(36)
8.1 Introduction 203(1)
8.2 The LMS Algorithm 203(3)
8.3 Examples Using the LMS Algorithm 206(13)
8.4 Performance Analysis of the LMS Algorithm 219(9)
8.4.1 Learning Curve 221(3)
8.4.2 The Coefficient-Error or Weighted-Error Correlation Matrix 224(1)
8.4.3 Excess MSE and Misadjustment 225(2)
8.4.4 Stability 227(1)
8.4.5 The LMS and Steepest Descent Methods 228(1)
8.5 Complex Representation of the LMS Algorithm 228(3)
Problems 231(1)
Hints—Solutions—Suggestions 232(7)
Chapter 9 Variants of Least Mean-Square Algorithm 239(62)
9.1 The Normalized Least Mean-Square Algorithm 239(5)
9.2 Power Normalized LMS 244(4)
9.3 Self-Correcting LMS Filter 248(2)
9.4 The Sign-Error LMS Algorithm 250(1)
9.5 The NLMS Sign-Error Algorithm 250(2)
9.6 The Sign-Regressor LMS Algorithm 252(1)
9.7 Self-Correcting Sign-Regressor LMS Algorithm 253(1)
9.8 The Normalized Sign-Regressor LMS Algorithm 253(1)
9.9 The Sign—Sign LMS Algorithm 254(1)
9.10 The Normalized Sign—Sign LMS Algorithm 255(2)
9.11 Variable Step-Size LMS 257(2)
9.12 The Leaky LMS Algorithm 259(3)
9.13 The Linearly Constrained LMS Algorithm 262(2)
9.14 The Least Mean Fourth Algorithm 264(1)
9.15 The Least Mean Mixed Norm LMS Algorithm 265(1)
9.16 Short-Length Signal of the LMS Algorithm 266(1)
9.17 The Transform Domain LMS Algorithm 267(5)
9.17.1 Convergence 271(1)
9.18 The Error Normalized Step-Size LMS Algorithm 272(4)
9.19 The Robust Variable Step-Size LMS Algorithm 276(6)
9.20 The Modified LMS Algorithm 282(1)
9.21 Momentum LMS 283(2)
9.22 The Block LMS Algorithm 285(1)
9.23 The Complex LMS Algorithm 286(2)
9.24 The Affine LMS Algorithm 288(2)
9.25 The Complex Affine LMS Algorithm 290(1)
Problems 291(2)
Hints—Solutions—Suggestions 293(8)
Appendix 1: Suggestions and Explanations for MATLAB Use 301(16)
A1.1 Suggestions and Explanations for MATLAB Use 301(8)
A1.1.1 Creating a Directory 301(1)
A1.1.2 Help 301(1)
A1.1.3 Save and Load 302(1)
A1.1.4 MATLAB as Calculator 302(1)
A1.1.5 Variable Names 302(1)
A1.1.6 Complex Numbers 302(1)
A1.1.7 Array Indexing 302(1)
A1.1.8 Extracting and Inserting Numbers in Arrays 303(1)
A1.1.9 Vectorization 303(1)
A1.1.10 Windowing 304(1)
A1.1.11 Matrices 304(1)
A1.1.12 Producing a Periodic Function 305(1)
A1.1.13 Script Files 305(1)
A1.1.14 Functions 305(1)
A1.1.15 Complex Expressions 306(1)
A1.1.16 Axes 306(1)
A1.1.17 2D Graphics 306(2)
A1.1.18 3D Plots 308(1)
A1.1.18.1 Mesh-Type Figures 308(1)
A1.2 General Purpose Commands 309(2)
A1.2.1 Managing Commands and Functions 309(1)
A1.2.2 Managing Variables and Workplace 309(1)
A1.2.3 Operators and Special Characters 309(1)
A1.2.4 Control Flow 310(1)
A1.3 Elementary Matrices and Matrix Manipulation 311(1)
A1.3.1 Elementary Matrices and Arrays 311(1)
A1.3.2 Matrix Manipulation 311(1)
A1.4 Elementary Mathematical Functions 312(1)
A1.4.1 Elementary Functions 312(1)
A1.5 Numerical Linear Algebra 313(1)
A1.5.1 Matrix Analysis 313(1)
A1.6 Data Analysis 313(1)
A1.6.1 Basic Operations 313(1)
A1.6.2 Filtering and Convolution 313(1)
A1.6.3 Fourier Transforms 314(1)
A1.7 2D Plotting 314(3)
A1.7.1 2D Plots 314(3)
Appendix 2: Matrix Analysis 317(12)
A2.1 Definitions 317(2)
A2.2 Special Matrices 319(3)
A2.3 Matrix Operation and Formulas 322(3)
A2.4 Eigendecomposition of Matrices 325(1)
A2.5 Matrix Expectations 326(1)
A2.6 Differentiation of a Scalar Function with respect to a Vector 327(2)
Appendix 3: Mathematical Formulas 329(6)
A3.1 Trigonometric Identities 329(1)
A3.2 Orthogonality 330(1)
A3.3 Summation of Trigonometric Forms 331(1)
A3.4 Summation Formulas 331(1)
A3.4.1 Finite Summation Formulas 331(1)
A3.4.2 Infinite Summation Formulas 331(1)
A3.5 Series Expansions 332(1)
A3.6 Logarithms 332(1)
A3.7 Some Definite Integrals 332(3)
Appendix 4: Lagrange Multiplier Method 335(2)
Bibliography 337(2)
Index 339
Alexander D. Poularikas is chairman of the electrical and computer engineering department at the University of Alabama in Huntsville, USA. He previously held positions at the University of Rhode Island, Kingston, USA, and the University of Denver, Colorado, USA. He has published, coauthored, and edited 14 books and served as editor-in-chief of numerous book series. A Fulbright scholar, life senior member of the IEEE, and member of Tau Beta Pi, Sigma Nu, and Sigma Pi, he received the IEEE Outstanding Educators Award, Huntsville Section, in 1990 and 1996. Dr. Poularikas holds a Ph.D. from the University of Arkansas, Fayetteville, USA.