Preface  xiii
Acknowledgements  xvii
|
The Magic of Complex Numbers  1
    History of Complex Numbers  2
    History of Mathematical Notation  8
    Development of Complex Valued Adaptive Signal Processing  9
|
Why Signal Processing in the Complex Domain?  13
    Some Examples of Complex Valued Signal Processing  13
    Duality Between Signal Representations in R and C  18
    Modelling in C is Not Only Convenient But Also Natural  19
    Why Complex Modelling of Real Valued Processes?  20
    Phase Information in Imaging  20
    Modelling of Directional Processes  22
    Exploiting the Phase Information  23
    Synchronisation of Real Valued Processes  24
    Adaptive Filtering by Incorporating Phase Information  25
    Other Applications of Complex Domain Processing of Real Valued Signals  26
    Additional Benefits of Complex Domain Processing  29
|
Adaptive Filtering Architectures  33
    Linear and Nonlinear Stochastic Models  34
    Linear and Nonlinear Adaptive Filtering Architectures  35
    Feedforward Neural Networks  36
    Recurrent Neural Networks  37
    Neural Networks and Polynomial Filters  38
    State Space Representation and Canonical Forms  39
|
Complex Nonlinear Activation Functions  43
    Properties of Complex Functions  43
    Singularities of Complex Functions  45
    Universal Function Approximation  46
    Universal Approximation in R  47
    Nonlinear Activation Functions for Complex Neural Networks  48
    Fully Complex Nonlinear Activation Functions  51
    Generalised Splitting Activation Functions (GSAF)  53
    Summary: Choice of the Complex Activation Function  54
|
|
    Continuous Complex Functions  56
    The Cauchy-Riemann Equations  56
    Generalised Derivatives of Functions of Complex Variable  57
    Link between R- and C-derivatives  60
    CR-derivatives of Cost Functions  62
    The Complex Jacobian and Complex Differential  64
    Gradient of a Cost Function  65
|
Complex Valued Adaptive Filters  69
    Adaptive Filtering Configurations  70
    The Complex Least Mean Square Algorithm  73
    Convergence of the CLMS Algorithm  75
    Nonlinear Feedforward Complex Adaptive Filters  80
    Fully Complex Nonlinear Adaptive Filters  80
    Derivation of CNGD using CR calculus  82
    Dual Univariate Adaptive Filtering Approach (DUAF)  84
    Normalisation of Learning Algorithms  85
    Performance of Feedforward Nonlinear Adaptive Filters  87
    Summary: Choice of a Nonlinear Adaptive Filter  89
|
Adaptive Filters with Feedback  91
    Training of IIR Adaptive Filters  92
    Coefficient Update for Linear Adaptive IIR Filters  93
    Training of IIR Filters with Reduced Computational Complexity  96
    Nonlinear Adaptive IIR Filters: Recurrent Perceptron  97
    Training of Recurrent Neural Networks  99
    Other Learning Algorithms and Computational Complexity  102
|
Filters with an Adaptive Stepsize  107
    Benveniste Type Variable Stepsize Algorithms  108
    Complex Valued GNGD Algorithms  110
    Complex GNGD for Nonlinear Filters (CFANNGD)  112
|
Filters with an Adaptive Amplitude of Nonlinearity  119
    Dynamical Range Reduction  119
    FIR Adaptive Filters with an Adaptive Nonlinearity  121
    Recurrent Neural Networks with Trainable Amplitude of Activation Functions  122
|
Data-reusing Algorithms for Complex Valued Adaptive Filters  129
    The Data-reusing Complex Valued Least Mean Square (DRCLMS) Algorithm  129
    Data-reusing Complex Nonlinear Adaptive Filters  131
    Data-reusing Algorithms for Complex RNNs  134
|
Complex Mappings and Möbius Transformations  137
    Matrix Representation of a Complex Number  137
    The Möbius Transformation  140
    Activation Functions and Möbius Transformations  142
    All-pass Systems as Möbius Transformations  146
|
Augmented Complex Statistics  151
    Complex Random Variables (CRV)  152
    The Multivariate Complex Normal Distribution  154
    Moments of Complex Random Variables (CRV)  157
    Complex Circular Random Variables  158
    Wide Sense Stationarity, Multicorrelations, and Multispectra  160
    Strict Circularity and Higher-order Statistics  161
    Second-order Characterisation of Complex Signals  161
    Augmented Statistics of Complex Signals  161
    Second-order Complex Circularity  164
|
Widely Linear Estimation and Augmented CLMS (ACLMS)  169
    Minimum Mean Square Error (MMSE) Estimation in C  169
    Widely Linear Modelling in C  171
    Autoregressive Modelling in C  173
    Widely Linear Autoregressive Modelling in C  174
    Quantifying Benefits of Widely Linear Estimation  174
    The Augmented Complex LMS (ACLMS) Algorithm  175
    Adaptive Prediction Based on ACLMS  178
    Wind Forecasting Using Augmented Statistics  180
|
Duality Between Complex Valued and Real Valued Filters  183
    A Dual Channel Real Valued Adaptive Filter  184
    Duality Between Real and Complex Valued Filters  186
    Operation of Standard Complex Adaptive Filters  186
    Operation of Widely Linear Complex Filters  187
|
Widely Linear Filters with Feedback  191
    The Widely Linear ARMA (WL-ARMA) Model  192
    Widely Linear Adaptive Filters with Feedback  192
    Widely Linear Adaptive IIR Filters  195
    Augmented Recurrent Perceptron Learning Rule  196
    The Augmented Complex Valued RTRL (ACRTRL) Algorithm  197
    The Augmented Kalman Filter Algorithm for RNNs  198
    EKF Based Training of Complex RNNs  200
    Augmented Complex Unscented Kalman Filter (ACUKF)  200
    State Space Equations for the Complex Unscented Kalman Filter  201
    ACUKF Based Training of Complex RNNs  202
|
Collaborative Adaptive Filtering  207
    Parametric Signal Modality Characterisation  207
    Standard Hybrid Filtering in R  209
    Tracking the Linear/Nonlinear Nature of Complex Valued Signals  210
    Signal Modality Characterisation in C  211
    Split vs Fully Complex Signal Natures  214
    Online Assessment of the Nature of Wind Signal  216
    Effects of Averaging on Signal Nonlinearity  216
    Collaborative Filters for General Complex Signals  217
    Hybrid Filters for Noncircular Signals  218
    Online Test for Complex Circularity  220
|
Adaptive Filtering Based on EMD  221
    The Empirical Mode Decomposition Algorithm  222
    Empirical Mode Decomposition as a Fixed Point Iteration  223
    Applications of Real Valued EMD  224
    Uniqueness of the Decomposition  225
    Complex Extensions of Empirical Mode Decomposition  226
    Complex Empirical Mode Decomposition  227
    Rotation Invariant Empirical Mode Decomposition (RIEMD)  228
    Bivariate Empirical Mode Decomposition (BEMD)  228
    Addressing the Problem of Uniqueness  230
    Applications of Complex Extensions of EMD  230
|
Validation of Complex Representations - Is This Worthwhile?  233
    Signal Modality Characterisation in R  234
    Test Statistics: The DVV Method  237
    Testing for the Validity of Complex Representation  239
    Complex Delay Vector Variance Method (CDVV)  240
    Quantifying Benefits of Complex Valued Representation  243
    Pros and Cons of the Complex DVV Method  244
|
Appendix A: Some Distinctive Properties of Calculus in C  245
Appendix B: Liouville's Theorem  251
Appendix C: Hypercomplex and Clifford Algebras  253
    Definitions of Algebraic Notions of Group, Ring and Field  253
    Definition of a Vector Space  254
    Higher Dimension Algebras  254
    The Algebra of Quaternions  255
Appendix D: Real Valued Activation Functions  257
    Logistic Sigmoid Activation Function  257
    Hyperbolic Tangent Activation Function  258
Appendix E: Elementary Transcendental Functions (ETF)  259
Appendix F: The Notation and Standard Vector and Matrix Differentiation  263
    Standard Vector and Matrix Differentiation  263
Appendix G: Notions from Learning Theory  265
    The Bias-Variance Dilemma  266
    Recursive and Iterative Gradient Estimation Techniques  267
    Transformation of Input Data  267
Appendix H: Notions from Approximation Theory  269
Appendix I: Terminology Used in the Field of Neural Networks  273
Appendix J: Complex Valued Pipelined Recurrent Neural Network (CPRNN)  275
    The Complex RTRL Algorithm (CRTRL) for CPRNN  275
    Linear Subsection Within the PRNN  277
Appendix K: Gradient Adaptive Step Size (GASS) Algorithms in R  279
    Gradient Adaptive Stepsize Algorithms Based on ∂J/∂μ  280
    Variable Stepsize Algorithms Based on ∂J/∂ε  281
Appendix L: Derivation of Partial Derivatives from Chapter 8  283
    Derivation of ∂e(k)/∂n(k)  283
    Derivation of ∂e*(k)/∂ε(k - 1)  284
    Derivation of ∂w(k)/∂ε(k - 1)  286
Appendix M: A Posteriori Learning  287
    A Posteriori Strategies in Adaptive Learning  288
Appendix N: Notions from Stability Theory  291
Appendix O: Linear Relaxation  293
    Relaxation in Linear Systems  294
    Convergence in the Norm or State Space?  297
Appendix P: Contraction Mappings, Fixed Point Iteration and Fractals  299
    More on Convergence: Modified Contraction Mapping  305
    Fractals and Mandelbrot Set  308
References  309
Index  321