
E-book: Sampling Theory: Beyond Bandlimited Systems

Yonina C. Eldar (Weizmann Institute of Science, Israel)
  • Format: PDF+DRM
  • Publication date: 09-Apr-2015
  • Publisher: Cambridge University Press
  • Language: English
  • ISBN-13: 9781316055854
  • Price: 102.49 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you need to install dedicated software to read it. You also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on mobile devices (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Covering the fundamental mathematical underpinnings together with engineering principles and applications, this is a comprehensive guide to the theory and practice of sampling. Written from an engineering perspective, it focuses on uniform sampling in shift-invariant spaces and deterministic signals, and includes a wealth of worked examples and end-of-chapter exercises.

Covering the fundamental mathematical underpinnings together with key principles and applications, this book provides a comprehensive guide to the theory and practice of sampling from an engineering perspective. Beginning with traditional ideas such as uniform sampling in shift-invariant spaces and working through to the more recent fields of compressed sensing and sub-Nyquist sampling, the key concepts are addressed in a unified and coherent way. Emphasis is given to applications in signal processing and communications, as well as hardware considerations, throughout. With 200 worked examples and over 200 end-of-chapter problems, this is an ideal course textbook for senior undergraduate and graduate students. It is also an invaluable reference or self-study guide for engineers and students across industry and academia.
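As a rough illustration of the classical starting point described above (this sketch is not taken from the book; the bandwidth, test tone, and time window are arbitrary choices for the demo), the following Python snippet samples a bandlimited signal uniformly at the Nyquist rate and reconstructs it by sinc interpolation, the baseline against which the book's shift-invariant and sub-Nyquist methods are developed:

import numpy as np

# Illustrative sketch (not from the book): uniform sampling of a bandlimited
# signal at the Nyquist rate, followed by sinc (ideal low-pass) interpolation.
B = 5.0                       # assumed one-sided bandwidth in Hz
T = 1.0 / (2.0 * B)           # Nyquist sampling interval (0.1 s here)

def x(t):
    return np.cos(2 * np.pi * 3.0 * t)   # a 3 Hz tone, well inside the band

t_n = np.arange(0.0, 1.0, T)             # uniform sample instants over 1 s
samples = x(t_n)

# Shannon interpolation: x_hat(t) = sum_n x(nT) * sinc((t - nT) / T)
t = np.linspace(0.0, 1.0, 1001)
x_hat = sum(s * np.sinc((t - tn) / T) for s, tn in zip(samples, t_n))

# The error is nonzero only because the sum is truncated to a finite window.
print("max reconstruction error:", np.max(np.abs(x_hat - x(t))))

With an ideal (infinite) interpolation sum the reconstruction would be exact for any signal bandlimited to B; the printed residual reflects only the truncation to ten samples.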

Reviews

'I must say that this is really a unique book on sampling theory. The introduction of vector space terminology right from the beginning is a great idea. Starting from classical sampling, the book goes all the way to the most recent breakthroughs including compressive sensing, the union-of-subspaces setting, and the CoSaMP algorithm. Eldar has the right combination of mathematics and practical sense, and she has very good command of the "art of writing". This, combined with the archival nature of the topic (which has seen seven decades of history), makes the book an invaluable addition to the Cambridge collection of advanced texts in signal processing.' P. P. Vaidyanathan, California Institute of Technology

'The observation that a bandlimited signal is completely specified by uniform sampling at the Nyquist rate might well go back to Cauchy, and the idea of approaching signal recovery as parameter estimation certainly goes back to the 1950s. These ideas provided the theoretical foundation for digitization of telephone networks, and in turn the challenge of digital communication inspired new developments in signal analysis. Today new applications from A/D conversion to medical imaging are inspiring a new sampling theory, and this book takes us to terra incognita beyond bandlimited systems.' Robert Calderbank, Duke University

Other information

A comprehensive guide to sampling for engineers, covering the fundamental mathematical underpinnings together with practical engineering principles and applications.
Preface xvii
List of abbreviations xxiv
1 Introduction 1(8)
1.1 Standard sampling 2(3)
1.2 Beyond bandlimited signals 5(1)
1.3 Outline and outlook 6(3)
2 Introduction to linear algebra 9(58)
2.1 Signal expansions: some examples 9(4)
2.2 Vector spaces 13(2)
2.2.1 Subspaces 13(1)
2.2.2 Properties of subspaces 14(1)
2.3 Inner product spaces 15(6)
2.3.1 The inner product 16(1)
2.3.2 Orthogonality 17(2)
2.3.3 Calculus in inner product spaces 19(1)
2.3.4 Hilbert spaces 20(1)
2.4 Linear transformations 21(11)
2.4.1 Subspaces associated with a linear transformation 22(2)
2.4.2 Invertibility 24(1)
2.4.3 Direct-sum decompositions 25(4)
2.4.4 The adjoint 29(3)
2.5 Basis expansions 32(12)
2.5.1 Set transformations 33(2)
2.5.2 Bases 35(1)
2.5.3 Riesz bases 36(4)
2.5.4 Riesz basis expansions 40(4)
2.6 Projection operators 44(7)
2.6.1 Orthogonal projection operators 46(2)
2.6.2 Oblique projection operators 48(3)
2.7 Pseudoinverse of a transformation 51(4)
2.7.1 Definition and properties 52(2)
2.7.2 Matrices 54(1)
2.8 Frames 55(8)
2.8.1 Definition of frames 56(2)
2.8.2 Frame expansions 58(1)
2.8.3 The canonical dual 59(4)
2.9 Exercises 63(4)
3 Fourier analysis 67(28)
3.1 Linear time-invariant systems 68(7)
3.1.1 Linearity and time-invariance 68(3)
3.1.2 The impulse response 71(2)
3.1.3 Causality and stability 73(2)
3.1.4 Eigenfunctions of LTI systems 75(1)
3.2 The continuous-time Fourier transform 75(5)
3.2.1 Definition of the CTFT 75(1)
3.2.2 Properties of the CTFT 76(1)
3.2.3 Examples of the CTFT 77(2)
3.2.4 Fubini's theorem 79(1)
3.3 Discrete-time systems 80(5)
3.3.1 Discrete-time impulse response 80(1)
3.3.2 Discrete-time Fourier transform 81(1)
3.3.3 Properties of the DTFT 82(3)
3.4 Continuous-discrete representations 85(5)
3.4.1 Poisson-sum formula 87(1)
3.4.2 Sampled correlation sequences 88(2)
3.5 Exercises 90(5)
4 Signal spaces 95(51)
4.1 Structured bases 95(3)
4.1.1 Sampling and reconstruction spaces 95(1)
4.1.2 Practical sampling theorems 96(2)
4.2 Bandlimited sampling 98(12)
4.2.1 The Shannon-Nyquist theorem 98(2)
4.2.2 Sampling by modulation 100(2)
4.2.3 Aliasing 102(3)
4.2.4 Orthonormal basis interpretation 105(4)
4.2.5 Towards more general sampling spaces 109(1)
4.3 Sampling in shift-invariant spaces 110(12)
4.3.1 Shift-invariant spaces 110(2)
4.3.2 Spline functions 112(2)
4.3.3 Digital communication signals 114(3)
4.3.4 Multiple generators 117(4)
4.3.5 Refinable functions 121(1)
4.4 Gabor and wavelet expansions 122(10)
4.4.1 Gabor spaces 122(4)
4.4.2 Wavelet expansions 126(6)
4.5 Union of subspaces 132(6)
4.5.1 Signal model 133(3)
4.5.2 Union classes 136(2)
4.6 Stochastic and smoothness priors 138(4)
4.7 Exercises 142(4)
5 Shift-invariant spaces 146(32)
5.1 Riesz basis in SI spaces 146(6)
5.1.1 Riesz basis condition 147(2)
5.1.2 Examples 149(3)
5.2 Riesz basis expansions 152(9)
5.2.1 Biorthogonal basis 152(3)
5.2.2 Expansion coefficients 155(1)
5.2.3 Alternative basis expansions 156(5)
5.3 Partition of unity 161(2)
5.4 Redundant sampling in SI spaces 163(6)
5.4.1 Redundant bandlimited sampling 165(3)
5.4.2 Missing samples 168(1)
5.5 Multiple generators 169(6)
5.5.1 Riesz condition 170(1)
5.5.2 Biorthogonal basis 171(4)
5.6 Exercises 175(3)
6 Subspace priors 178(60)
6.1 Sampling and reconstruction processes 178(8)
6.1.1 Sampling setups 178(1)
6.1.2 Sampling process 179(2)
6.1.3 Unconstrained recovery 181(1)
6.1.4 Predefined recovery kernel 182(1)
6.1.5 Design objectives 183(3)
6.2 Unconstrained reconstruction 186(5)
6.2.1 Geometric interpretation 186(2)
6.2.2 Equal sampling and prior spaces 188(3)
6.3 Sampling in general spaces 191(14)
6.3.1 The direct-sum condition 192(2)
6.3.2 Unique recovery 194(4)
6.3.3 Computing the oblique projection operator 198(4)
6.3.4 Oblique biorthogonal basis 202(3)
6.4 Summary: unique unconstrained recovery 205(6)
6.4.1 Consistent recovery 205(3)
6.4.2 Recovery error 208(3)
6.5 Nonunique recovery 211(4)
6.5.1 Least squares recovery 211(2)
6.5.2 Minimax recovery 213(2)
6.6 Constrained recovery 215(9)
6.6.1 Minimal-error recovery 216(3)
6.6.2 Least squares recovery 219(3)
6.6.3 Minimax recovery 222(2)
6.7 Unified formulation of recovery techniques 224(2)
6.8 Multichannel sampling 226(9)
6.8.1 Recovery methods 226(1)
6.8.2 Papoulis' generalized sampling 227(8)
6.9 Exercises 235(3)
7 Smoothness priors 238(46)
7.1 Unconstrained recovery 238(11)
7.1.1 Smoothness prior 238(1)
7.1.2 Least squares solution 239(3)
7.1.3 Minimax solution 242(1)
7.1.4 Examples 243(4)
7.1.5 Multichannel sampling 247(2)
7.2 Constrained recovery 249(10)
7.2.1 Least squares solution 249(2)
7.2.2 Minimax-regret solution 251(5)
7.2.3 Comparison between least squares and minimax 256(3)
7.3 Stochastic priors 259(6)
7.3.1 The hybrid Wiener filter 261(2)
7.3.2 Constrained reconstruction 263(2)
7.4 Summary of sampling methods 265(4)
7.4.1 Summary of methods 265(3)
7.4.2 Unified view 268(1)
7.5 Sampling with noise 269(12)
7.5.1 Constrained reconstruction problem 270(2)
7.5.2 Least squares solution 272(1)
7.5.3 Regularized least squares 273(1)
7.5.4 Minimax MSE filters 273(2)
7.5.5 Hybrid Wiener filter 275(1)
7.5.6 Summary of the different filters 275(2)
7.5.7 Bandlimited interpolation 277(2)
7.5.8 Unconstrained recovery 279(2)
7.6 Exercises 281(3)
8 Nonlinear sampling 284(41)
8.1 Sampling with nonlinearities 285(3)
8.1.1 Nonlinear model 285(1)
8.1.2 Wiener-Hammerstein systems 286(2)
8.2 Pointwise sampling 288(6)
8.2.1 Bandlimited signals 288(2)
8.2.2 Reproducing kernel Hilbert spaces 290(4)
8.3 Subspace-preserving nonlinearities 294(1)
8.4 Equal prior and sampling spaces 295(17)
8.4.1 Iterative recovery 297(5)
8.4.2 Linearization approach 302(3)
8.4.3 Conditions for invertibility 305(1)
8.4.4 Newton algorithm 306(4)
8.4.5 Comparison between algorithms 310(2)
8.5 Arbitrary sampling filters 312(10)
8.5.1 Recovery algorithms 312(2)
8.5.2 Uniqueness conditions 314(3)
8.5.3 Algorithm convergence 317(2)
8.5.4 Examples 319(3)
8.6 Exercises 322(3)
9 Resampling 325(45)
9.1 Bandlimited sampling rate conversion 326(11)
9.1.1 Interpolation by an integer factor I 327(2)
9.1.2 Decimation by an integer factor D 329(3)
9.1.3 Rate conversion by a rational factor I/D 332(2)
9.1.4 Rate conversion by arbitrary factors 334(3)
9.2 Spline interpolation 337(4)
9.2.1 Interpolation formula 337(3)
9.2.2 Comparison with bandlimited interpolation 340(1)
9.3 Dense-grid interpolation 341(9)
9.3.1 Subspace prior 342(6)
9.3.2 Smoothness prior 348(1)
9.3.3 Stochastic prior 349(1)
9.4 Projection-based resampling 350(15)
9.4.1 Orthogonal projection resampling 351(6)
9.4.2 Oblique projection resampling 357(8)
9.5 Summary of conversion methods 365(1)
9.5.1 Computational aspects 365(1)
9.5.2 Anti-aliasing aspects 366(1)
9.6 Exercises 366(4)
10 Union of subspaces 370(22)
10.1 Motivating examples 371(4)
10.1.1 Multiband sampling 371(2)
10.1.2 Time-delay estimation 373(2)
10.2 Union model 375(7)
10.2.1 Definition and properties 375(3)
10.2.2 Classes of unions 378(4)
10.3 Sampling over unions 382(7)
10.3.1 Unique and stable sampling 382(4)
10.3.2 Rate requirements 386(1)
10.3.3 Xampling: compressed sampling methods 387(2)
10.4 Exercises 389(3)
11 Compressed sensing 392(83)
11.1 Motivation for compressed sensing 392(2)
11.2 Sparsity models 394(9)
11.2.1 Normed vector spaces 395(2)
11.2.2 Sparse signal models 397(6)
11.2.3 Low-rank matrix models 403(1)
11.3 Sensing matrices 403(28)
11.3.1 Null space conditions 404(6)
11.3.2 The restricted isometry property 410(7)
11.3.3 Coherence 417(5)
11.3.4 Uncertainty relations 422(6)
11.3.5 Sensing matrix constructions 428(3)
11.4 Recovery algorithms 431(11)
11.4.1 l1 recovery 432(4)
11.4.2 Greedy algorithms 436(4)
11.4.3 Combinatorial algorithms 440(1)
11.4.4 Analysis versus synthesis methods 441(1)
11.5 Recovery guarantees 442(15)
11.5.1 l1 recovery: RIP-based results 443(7)
11.5.2 l1 recovery: coherence-based results 450(1)
11.5.3 Instance-optimal guarantees 451(2)
11.5.4 The cross-polytope and phase transitions 453(2)
11.5.5 Guarantees on greedy methods 455(2)
11.6 Multiple measurement vectors 457(13)
11.6.1 Signal model 457(2)
11.6.2 Recovery algorithms 459(6)
11.6.3 Performance guarantees 465(1)
11.6.4 Infinite measurement vectors 466(4)
11.7 Summary and extensions 470(1)
11.8 Exercises 471(4)
12 Sampling over finite unions 475(64)
12.1 Finite unions 475(7)
12.1.1 Signal model 475(3)
12.1.2 Problem formulation 478(1)
12.1.3 Connection with block sparsity 479(3)
12.2 Uniqueness and stability 482(6)
12.2.1 Block RIP 483(2)
12.2.2 Block coherence and subcoherence 485(3)
12.3 Signal recovery algorithms 488(5)
12.3.1 Exponential recovery algorithm 488(1)
12.3.2 Convex recovery algorithm 489(1)
12.3.3 Greedy algorithms 490(3)
12.4 RIP-based recovery results 493(7)
12.4.1 Block basis pursuit recovery 493(6)
12.4.2 Random matrices and block RIP 499(1)
12.5 Coherence-based recovery results 500(13)
12.5.1 Recovery conditions 500(4)
12.5.2 Extensions 504(3)
12.5.3 Proofs of theorems 507(6)
12.6 Dictionary and subspace learning 513(9)
12.6.1 Dictionary learning 514(3)
12.6.2 Subspace learning 517(5)
12.7 Blind compressed sensing 522(12)
12.7.1 BCS problem formulation 522(1)
12.7.2 BCS with a constrained dictionary 523(8)
12.7.3 BCS with multiple measurement matrices 531(3)
12.8 Exercises 534(5)
13 Sampling over shift-invariant unions 539(35)
13.1 Union model 539(4)
13.1.1 Sparse union of SI subspaces 539(2)
13.1.2 Sub-Nyquist sampling 541(2)
13.2 Compressed sensing in sparse unions 543(10)
13.2.1 Union of discrete sequences 543(2)
13.2.2 Reduced-rate sampling 545(8)
13.3 Application to detection 553(10)
13.3.1 Matched-filter receiver 554(2)
13.3.2 Maximum-likelihood detector 556(1)
13.3.3 Compressed-sensing receiver 557(6)
13.4 Multiuser detection 563(8)
13.4.1 Conventional multiuser detectors 564(1)
13.4.2 Reduced-dimension MUD (RD-MUD) 565(3)
13.4.3 Performance of RD-MUD 568(3)
13.5 Exercises 571(3)
14 Multiband sampling 574(75)
14.1 Sampling of multiband signals 574(3)
14.2 Multiband signals with known carriers 577(10)
14.2.1 I/Q demodulation 577(2)
14.2.2 Landau rate 579(3)
14.2.3 Direct undersampling of bandpass signals 582(5)
14.3 Interleaved ADCs 587(21)
14.3.1 Bandpass sampling 587(5)
14.3.2 Multiband sampling 592(10)
14.3.3 Universal sampling patterns 602(4)
14.3.4 Hardware considerations 606(2)
14.4 Modulated wideband converter 608(16)
14.4.1 MWC operation 610(1)
14.4.2 MWC signal recovery 611(3)
14.4.3 Collapsing channels 614(6)
14.4.4 Sign-alternating sequences 620(4)
14.5 Blind sampling of multiband signals 624(9)
14.5.1 Minimal sampling rate 625(2)
14.5.2 Blind recovery 627(2)
14.5.3 Multicoset sampling and the sparse SI framework 629(2)
14.5.4 Sub-Nyquist baseband processing 631(1)
14.5.5 Noise folding 632(1)
14.6 Hardware prototype of sub-Nyquist multiband sensing 633(3)
14.7 Simulations 636(8)
14.7.1 MWC designs 636(2)
14.7.2 Sign-alternating sequences 638(1)
14.7.3 Effect of CTF length 639(1)
14.7.4 Parameter limits 640(4)
14.8 Exercises 644(5)
15 Finite rate of innovation sampling 649(106)
15.1 Finite rate of innovation signals 649(7)
15.1.1 Shift-invariant spaces 650(1)
15.1.2 Channel sounding 651(3)
15.1.3 Other examples 654(2)
15.2 Periodic pulse streams 656(36)
15.2.1 Time-domain formulation 657(3)
15.2.2 Frequency-domain formulation 660(4)
15.2.3 Prony's method 664(3)
15.2.4 Noisy samples 667(5)
15.2.5 Matrix pencil 672(5)
15.2.6 Subspace methods 677(5)
15.2.7 Covariance-based methods 682(4)
15.2.8 Compressed sensing formulation 686(2)
15.2.9 Sub-Nyquist sampling 688(4)
15.3 Sub-Nyquist sampling with a single channel 692(13)
15.3.1 Coset sampling 692(3)
15.3.2 Sum-of-sincs filter 695(3)
15.3.3 Noise effects 698(3)
15.3.4 Finite and infinite pulse streams 701(4)
15.4 Multichannel sampling 705(12)
15.4.1 Modulation-based multichannel systems 706(8)
15.4.2 Filterbank sampling 714(3)
15.5 Noisy FRI recovery 717(6)
15.5.1 MSE bounds 718(3)
15.5.2 Periodic versus semiperiodic FRI signals 721(2)
15.5.3 Choosing the sampling kernels 723(1)
15.6 General FRI sampling 723(10)
15.6.1 Sampling method 724(1)
15.6.2 Minimal sampling rate 725(2)
15.6.3 Least squares recovery 727(1)
15.6.4 Iterative recovery 728(5)
15.7 Applications of FRI 733(17)
15.7.1 Sub-Nyquist radar 733(10)
15.7.2 Time-varying system identification 743(1)
15.7.3 Ultrasound imaging 744(6)
15.8 Exercises 750(5)
Appendix A Finite linear algebra 755(13)
A.1 Matrices 755(5)
A.1.1 Matrix operations 755(1)
A.1.2 Matrix properties 756(2)
A.1.3 Special classes of matrices 758(2)
A.2 Eigendecomposition of matrices 760(4)
A.2.1 Eigenvalues and eigenvectors 760(3)
A.2.2 Singular value decomposition 763(1)
A.3 Linear equations 764(1)
A.4 Matrix norms 765(3)
A.4.1 Induced norms 766(1)
A.4.2 Entrywise norms 767(1)
A.4.3 Schatten norms 767(1)
Appendix B Stochastic signals 768(7)
B.1 Random variables 768(2)
B.1.1 Probability density function 768(1)
B.1.2 Jointly random variables 769(1)
B.2 Random vectors 770(1)
B.3 Random processes 770(3)
B.3.1 Continuous-time random processes 770(2)
B.3.2 Discrete-time random processes 772(1)
B.4 Sampling of bandlimited processes 773(2)
References 775(24)
Index 799

Yonina C. Eldar is a Professor in the Department of Electrical Engineering at the Technion - Israel Institute of Technology (where she holds the Edwards Chair in Engineering), a Research Affiliate with the Research Laboratory of Electronics at the Massachusetts Institute of Technology, and a Visiting Professor at Stanford University. She has received numerous awards for excellence in research and teaching, including the Wolf Foundation Krill Prize for Excellence in Scientific Research, the Hershel Rich Innovation Award, the Michael Bruno Memorial Award from the Rothschild Foundation, the Weizmann Prize for Exact Sciences, and the Muriel and David Jacknow Award for Excellence in Teaching. She is the Editor-in-Chief of Foundations and Trends in Signal Processing and an Associate Editor for several journals in the areas of signal processing and mathematics. She is a Signal Processing Society Distinguished Lecturer, an IEEE Fellow, and a member of the Young Israel Academy of Science and the Israel Committee for Higher Education.