E-book: Tensors for Data Processing: Theory, Methods, and Applications

Edited by Yipeng Liu (Associate Professor, UESTC, Chengdu, China)
  • Format: EPUB+DRM
  • Publication date: 21-Oct-2021
  • Publisher: Academic Press Inc
  • Language: English
  • ISBN-13: 9780323859653
  • Format: EPUB+DRM
  • Price: 135,07 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, you must install Adobe Digital Editions. (This is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Tensors for Data Processing: Theory, Methods and Applications presents both classical and state-of-the-art methods of tensor computation for data processing, covering computation theory, processing methods, and computing and engineering applications, with an emphasis on techniques for data processing. This reference is ideal for students, researchers, and industry developers who want to understand and use tensor-based data processing theories and methods.

As a higher-order generalization of a matrix, tensor-based processing can avoid the loss of multilinear data structure that occurs in classical matrix-based data processing methods. This move from matrices to tensors benefits many diverse application areas, including signal processing, computer science, acoustics, neuroscience, communication, medical engineering, seismology, psychometrics, chemometrics, biometrics, quantum physics, and quantum chemistry.
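To make the claim concrete, here is a minimal sketch of the structure loss in question (assuming Python with NumPy as the illustration medium, which is our choice and not the book's): flattening a third-order data tensor into a matrix merges two of its modes, so their multilinear interplay is no longer explicit in the data layout.

```python
import numpy as np

# A small third-order data tensor, e.g. rows x columns x spectral bands.
X = np.arange(24).reshape(2, 3, 4)

# One simple matricization: merge the last two modes into a single axis.
X_flat = X.reshape(2, -1)

print(X.shape)       # (2, 3, 4) -- all three modes kept separate
print(X_flat.shape)  # (2, 12)   -- the 3 x 4 mode structure is now implicit
```

Tensor-based methods work on `X` directly, so the per-mode structure remains available to the decomposition; matrix-based methods see only `X_flat`.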

  • Provides a complete reference on classical and state-of-the-art tensor-based methods for data processing
  • Includes a wide range of applications from different disciplines
  • Gives guidance on applying these methods in practice
List of contributors xiii
Preface xix

Chapter 1 Tensor decompositions: computations, applications, and challenges 1(30)
Yingyue Bi, Yingcong Lu, Zhen Long, Ce Zhu, Yipeng Liu
1.1 Introduction 1(2)
1.1.1 What is a tensor? 1(1)
1.1.2 Why do we need tensors? 2(1)
1.2 Tensor operations 3(10)
1.2.1 Tensor notations 3(1)
1.2.2 Matrix operators 4(2)
1.2.3 Tensor transformations 6(1)
1.2.4 Tensor products 7(4)
1.2.5 Structural tensors 11(2)
1.2.6 Summary 13(1)
1.3 Tensor decompositions 13(11)
1.3.1 Tucker decomposition 13(1)
1.3.2 Canonical polyadic decomposition 14(2)
1.3.3 Block term decomposition 16(2)
1.3.4 Tensor singular value decomposition 18(1)
1.3.5 Tensor network 19(5)
1.4 Tensor processing techniques 24(1)
1.5 Challenges 25(6)
References 26(5)

Chapter 2 Transform-based tensor singular value decomposition in multidimensional image recovery 31(30)
Tai-Xiang Jiang, Michael K. Ng, Xi-Le Zhao
2.1 Introduction 32(2)
2.2 Recent advances of the tensor singular value decomposition 34(10)
2.2.1 Preliminaries and basic tensor notations 34(1)
2.2.2 The t-SVD framework 35(3)
2.2.3 Tensor nuclear norm and tensor recovery 38(3)
2.2.4 Extensions 41(3)
2.2.5 Summary 44(1)
2.3 Transform-based t-SVD 44(5)
2.3.1 Linear invertible transform-based t-SVD 45(2)
2.3.2 Beyond invertibility and data adaptivity 47(2)
2.4 Numerical experiments 49(4)
2.4.1 Examples within the t-SVD framework 49(2)
2.4.2 Examples of the transform-based t-SVD 51(2)
2.5 Conclusions and new guidelines 53(8)
References 55(6)

Chapter 3 Partensor 61(30)
Paris A. Karakasis, Christos Kolomvakis, George Lourakis, George Lykoudis, Ioannis Marios Papagiannakos, Ioanna Siaminou, Christos Tsalidis, Athanasios P. Liavas
3.1 Introduction 62(2)
3.1.1 Related work 62(1)
3.1.2 Notation 63(1)
3.2 Tensor decomposition 64(6)
3.2.1 Matrix least-squares problems 65(4)
3.2.2 Alternating optimization for tensor decomposition 69(1)
3.3 Tensor decomposition with missing elements 70(5)
3.3.1 Matrix least-squares with missing elements 71(3)
3.3.2 Tensor decomposition with missing elements: the unconstrained case 74(1)
3.3.3 Tensor decomposition with missing elements: the nonnegative case 75(1)
3.3.4 Alternating optimization for tensor decomposition with missing elements 75(1)
3.4 Distributed memory implementations 75(8)
3.4.1 Some MPI preliminaries 75(2)
3.4.2 Variable partitioning and data allocation 77(2)
3.4.3 Tensor decomposition 79(2)
3.4.4 Tensor decomposition with missing elements 81(1)
3.4.5 Some implementation details 82(1)
3.5 Numerical experiments 83(4)
3.5.1 Tensor decomposition 83(1)
3.5.2 Tensor decomposition with missing elements 84(3)
3.6 Conclusion 87(4)
Acknowledgment 88(1)
References 88(3)

Chapter 4 A Riemannian approach to low-rank tensor learning 91(30)
Hiroyuki Kasai, Pratik Jawanpuria, Bamdev Mishra
4.1 Introduction 91(2)
4.2 A brief introduction to Riemannian optimization 93(4)
4.2.1 Riemannian manifolds 94(1)
4.2.2 Riemannian quotient manifolds 95(2)
4.3 Riemannian Tucker manifold geometry 97(7)
4.3.1 Riemannian metric and quotient manifold structure 97(3)
4.3.2 Characterization of the induced spaces 100(2)
4.3.3 Linear projectors 102(1)
4.3.4 Retraction 103(1)
4.3.5 Vector transport 104(1)
4.3.6 Computational cost 104(1)
4.4 Algorithms for tensor learning problems 104(3)
4.4.1 Tensor completion 105(1)
4.4.2 General tensor learning 106(1)
4.5 Experiments 107(9)
4.5.1 Choice of metric 108(1)
4.5.2 Low-rank tensor completion 109(4)
4.5.3 Low-rank tensor regression 113(2)
4.5.4 Multilinear multitask learning 115(1)
4.6 Conclusion 116(5)
References 117(4)

Chapter 5 Generalized thresholding for low-rank tensor recovery: approaches based on model and learning 121(32)
Fei Wen, Zhonghao Zhang, Yipeng Liu
5.1 Introduction 121(2)
5.2 Tensor singular value thresholding 123(8)
5.2.1 Proximity operator and generalized thresholding 123(3)
5.2.2 Tensor singular value decomposition 126(2)
5.2.3 Generalized matrix singular value thresholding 128(1)
5.2.4 Generalized tensor singular value thresholding 129(2)
5.3 Thresholding based low-rank tensor recovery 131(5)
5.3.1 Thresholding algorithms for low-rank tensor recovery 132(2)
5.3.2 Generalized thresholding algorithms for low-rank tensor recovery 134(2)
5.4 Generalized thresholding algorithms with learning 136(5)
5.4.1 Deep unrolling 137(3)
5.4.2 Deep plug-and-play 140(1)
5.5 Numerical examples 141(4)
5.6 Conclusion 145(8)
References 147(6)

Chapter 6 Tensor principal component analysis 153(62)
Pan Zhou, Canyi Lu, Zhouchen Lin
6.1 Introduction 153(2)
6.2 Notations and preliminaries 155(6)
6.2.1 Notations 156(1)
6.2.2 Discrete Fourier transform 157(2)
6.2.3 T-product 159(1)
6.2.4 Summary 160(1)
6.3 Tensor PCA for Gaussian-noisy data 161(5)
6.3.1 Tensor rank and tensor nuclear norm 161(4)
6.3.2 Analysis of tensor PCA on Gaussian-noisy data 165(1)
6.3.3 Summary 166(1)
6.4 Tensor PCA for sparsely corrupted data 166(25)
6.4.1 Robust tensor PCA 167(5)
6.4.2 Tensor low-rank representation 172(14)
6.4.3 Applications 186(5)
6.4.4 Summary 191(1)
6.5 Tensor PCA for outlier-corrupted data 191(16)
6.5.1 Outlier robust tensor PCA 192(4)
6.5.2 The fast OR-TPCA algorithm 196(2)
6.5.3 Applications 198(8)
6.5.4 Summary 206(1)
6.6 Other tensor PCA methods 207(1)
6.7 Future work 208(1)
6.8 Summary 208(7)
References 209(6)

Chapter 7 Tensors for deep learning theory 215(34)
Yoav Levine, Noam Wies, Or Sharir, Nadav Cohen, Amnon Shashua
7.1 Introduction 215(2)
7.2 Bounding a function's expressivity via tensorization 217(6)
7.2.1 A measure of capacity for modeling input dependencies 218(2)
7.2.2 Bounding correlations with tensor matricization ranks 220(3)
7.3 A case study: self-attention networks 223(19)
7.3.1 The self-attention mechanism 223(4)
7.3.2 Self-attention architecture expressivity questions 227(3)
7.3.3 Results on the operation of self-attention 230(5)
7.3.4 Bounding the separation rank of self-attention 235(7)
7.4 Convolutional and recurrent networks 242(3)
7.4.1 The operation of convolutional and recurrent networks 243(1)
7.4.2 Addressed architecture expressivity questions 243(2)
7.5 Conclusion 245(4)
References 245(4)

Chapter 8 Tensor network algorithms for image classification 249(44)
Cong Chen, Kim Batselier, Ngai Wong
8.1 Introduction 249(2)
8.2 Background 251(7)
8.2.1 Tensor basics 251(2)
8.2.2 Tensor decompositions 253(3)
8.2.3 Support vector machines 256(1)
8.2.4 Logistic regression 257(1)
8.3 Tensorial extensions of support vector machine 258(26)
8.3.1 Supervised tensor learning 258(2)
8.3.2 Support tensor machines 260(3)
8.3.3 Higher-rank support tensor machines 263(2)
8.3.4 Support Tucker machines 265(4)
8.3.5 Support tensor train machines 269(6)
8.3.6 Kernelized support tensor train machines 275(9)
8.4 Tensorial extension of logistic regression 284(4)
8.4.1 Rank-1 logistic regression 285(1)
8.4.2 Logistic tensor regression 286(2)
8.5 Conclusion 288(5)
References 289(4)

Chapter 9 High-performance tensor decompositions for compressing and accelerating deep neural networks 293(48)
Xiao-Yang Liu, Yiming Fang, Liuqing Yang, Zechu Li, Anwar Walid
9.1 Introduction and motivation 294(1)
9.2 Deep neural networks 295(10)
9.2.1 Notations 295(1)
9.2.2 Linear layer 295(3)
9.2.3 Fully connected neural networks 298(2)
9.2.4 Convolutional neural networks 300(3)
9.2.5 Backpropagation 303(2)
9.3 Tensor networks and their decompositions 305(16)
9.3.1 Tensor networks 305(3)
9.3.2 CP tensor decomposition 308(2)
9.3.3 Tucker decomposition 310(3)
9.3.4 Hierarchical Tucker decomposition 313(2)
9.3.5 Tensor train and tensor ring decomposition 315(3)
9.3.6 Transform-based tensor decomposition 318(3)
9.4 Compressing deep neural networks 321(12)
9.4.1 Compressing fully connected layers 321(1)
9.4.2 Compressing the convolutional layer via CP decomposition 322(3)
9.4.3 Compressing the convolutional layer via Tucker decomposition 325(2)
9.4.4 Compressing the convolutional layer via TT/TR decompositions 327(3)
9.4.5 Compressing neural networks via transform-based decomposition 330(3)
9.5 Experiments and future directions 333(8)
9.5.1 Performance evaluations using the MNIST dataset 333(3)
9.5.2 Performance evaluations using the CIFAR10 dataset 336(1)
9.5.3 Future research directions 337(1)
References 338(3)

Chapter 10 Coupled tensor decompositions for data fusion 341(30)
Christos Chatzichristos, Simon Van Eyndhoven, Eleftherios Kofidis, Sabine Van Huffel
10.1 Introduction 341(1)
10.2 What is data fusion? 342(6)
10.2.1 Context and definition 342(1)
10.2.2 Challenges of data fusion 343(4)
10.2.3 Types of fusion and data fusion strategies 347(1)
10.3 Decompositions in data fusion 348(7)
10.3.1 Matrix decompositions and statistical models 350(1)
10.3.2 Tensor decompositions 351(1)
10.3.3 Coupled tensor decompositions 352(3)
10.4 Applications of tensor-based data fusion 355(3)
10.4.1 Biomedical applications 355(2)
10.4.2 Image fusion 357(1)
10.5 Fusion of EEG and fMRI: a case study 358(3)
10.6 Data fusion demos 361(2)
10.6.1 SDF demo - approximate coupling 361(2)
10.7 Conclusion and prospects 363(8)
Acknowledgments 364(1)
References 364(7)

Chapter 11 Tensor methods for low-level vision 371(56)
Tatsuya Yokota, Cesar F. Caiafa, Qibin Zhao
11.1 Low-level vision and signal reconstruction 371(7)
11.1.1 Observation models 372(2)
11.1.2 Inverse problems 374(4)
11.2 Methods using raw tensor structure 378(31)
11.2.1 Penalty-based tensor reconstruction 379(14)
11.2.2 Tensor decomposition and reconstruction 393(16)
11.3 Methods using tensorization 409(6)
11.3.1 Higher-order tensorization 411(2)
11.3.2 Delay embedding/Hankelization 413(2)
11.4 Examples of low-level vision applications 415(4)
11.4.1 Image inpainting with raw tensor structure 415(1)
11.4.2 Image inpainting using tensorization 416(1)
11.4.3 Denoising, deblurring, and superresolution 417(2)
11.5 Remarks 419(8)
Acknowledgments 420(1)
References 420(7)

Chapter 12 Tensors for neuroimaging 427(56)
Aybuke Erol, Borbala Hunyadi
12.1 Introduction 427(2)
12.2 Neuroimaging modalities 429(2)
12.3 Multidimensionality of the brain 431(2)
12.4 Tensor decomposition structures 433(4)
12.4.1 Product operations for tensors 434(1)
12.4.2 Canonical polyadic decomposition 435(1)
12.4.3 Tucker decomposition 435(2)
12.4.4 Block term decomposition 437(1)
12.5 Applications of tensors in neuroimaging 437(34)
12.5.1 Filling in missing data 438(3)
12.5.2 Denoising, artifact removal, and dimensionality reduction 441(3)
12.5.3 Segmentation 444(1)
12.5.4 Registration and longitudinal analysis 445(2)
12.5.5 Source separation 447(4)
12.5.6 Activity recognition and source localization 451(5)
12.5.7 Connectivity analysis 456(6)
12.5.8 Regression 462(1)
12.5.9 Feature extraction and classification 463(5)
12.5.10 Summary and practical considerations 468(3)
12.6 Future challenges 471(1)
12.7 Conclusion 472(11)
References 473(10)

Chapter 13 Tensor representation for remote sensing images 483(54)
Yang Xu, Fei Ye, Bo Ren, Liangfu Lu, Xudong Cui, Jocelyn Chanussot, Zebin Wu
13.1 Introduction 483(5)
13.2 Optical remote sensing: HSI and MSI fusion 488(29)
13.2.1 Tensor notations and preliminaries 488(1)
13.2.2 Nonlocal patch tensor sparse representation for HSI-MSI fusion 488(8)
13.2.3 High-order coupled tensor ring representation for HSI-MSI fusion 496(8)
13.2.4 Joint tensor factorization for HSI-MSI fusion 504(13)
13.3 Polarimetric synthetic aperture radar: feature extraction 517(20)
13.3.1 Brief description of PolSAR data 518(1)
13.3.2 The tensorial embedding framework 519(3)
13.3.3 Experiment and analysis 522(10)
References 532(5)

Chapter 14 Structured tensor train decomposition for speeding up kernel-based learning 537(28)
Yassine Zniyed, Ouafae Karmouda, Remy Boyer, Jeremie Boulanger, Andre L.F. de Almeida, Gerard Favier
14.1 Introduction 538(2)
14.2 Notations and algebraic background 540(1)
14.3 Standard tensor decompositions 541(4)
14.3.1 Tucker decomposition 542(1)
14.3.2 HOSVD 542(1)
14.3.3 Tensor networks and TT decomposition 543(2)
14.4 Dimensionality reduction based on a train of low-order tensors 545(3)
14.4.1 TD-train model: equivalence between a high-order TD and a train of low-order TDs 546(2)
14.5 Tensor train algorithm 548(3)
14.5.1 Description of the TT-HSVD algorithm 548(1)
14.5.2 Comparison of the sequential and the hierarchical schemes 549(2)
14.6 Kernel-based classification of high-order tensors 551(4)
14.6.1 Formulation of SVMs 552(1)
14.6.2 Polynomial and Euclidean tensor-based kernel 553(1)
14.6.3 Kernel on a Grassmann manifold 553(1)
14.6.4 The fast kernel subspace estimation based on tensor train decomposition (FAKSETT) method 554(1)
14.7 Experiments 555(3)
14.7.1 Datasets 555(2)
14.7.2 Classification performance 557(1)
14.8 Conclusion 558(7)
References 560(5)

Index 565
Yipeng Liu received the BSc degree in biomedical engineering and the PhD degree in information and communication engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, in 2006 and 2011, respectively. From 2011 to 2014, he was a postdoctoral research fellow at the University of Leuven, Leuven, Belgium. Since 2014, he has been an associate professor at UESTC, Chengdu, China. His research interest is tensor signal processing. He has authored or co-authored over 70 publications, including a series of papers on sparse tensors, tensor completion, tensor PCA, and tensor regression.

He has served as an associate editor for IEEE Signal Processing Letters (2019-present), an editorial board member for Heliyon (2018-2019), and the managing guest editor for the special issue "Tensor Image Processing" in Signal Processing: Image Communication. He has served on technical or program committees for five international conferences. He is an IEEE senior member, a member of the Multimedia Technology Technical Committee of the China Computer Federation, and a member of the Youth Working Committee of the China Society of Image and Graphics.

He has given tutorials at several international conferences, including the 2019 IEEE International Symposium on Circuits and Systems (ISCAS), the 2019 IEEE International Workshop on Signal Processing Systems (SiPS), and the 11th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), and will give tutorials at the 27th IEEE International Conference on Image Processing (ICIP 2020) and the 2020 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2020). He has taught optimization theory and applications to graduate students since 2015 and received the 8th University Teaching Achievement Award in 2016.