
Handbook of Chemometrics and Qualimetrics: Part B, Volume 20B [Hardcover]

Volume editor
  • Format: Hardback, 876 pages, height x width: 240x165 mm, weight: 1310 g
  • Series: Data Handling in Science and Technology
  • Publication date: 04-Dec-1998
  • Publisher: Elsevier Science Ltd
  • ISBN-10: 0444828532
  • ISBN-13: 9780444828538
  • Hardcover
  • Price: 234,10 €*
  • * We will send you an offer for a used copy; its price may differ from the price shown on the website.
  • This book is out of print, but we will send you an offer for a used copy.

Reviews

From D.B. Hibbert, University of New South Wales, Sydney: "I have caused my university library to purchase a copy of Handbook of Chemometrics and Qualimetrics, and I believe that all serious groups working in chemometrics will feel the need to own the two volumes... For those long ensconced in chemometrics, browsing these books will not fail to impress with the breadth and complexity of the subject." --Chemometrics & Intelligent Laboratory Systems, Vol. 49, 1999

Contents

Preface  v
Introduction to Part B  1(6)
  References  5(2)
Vectors, Matrices and Operations on Matrices  7(50)
  Vector space  8(2)
  Geometrical properties of vectors  10(5)
  Matrices  15(4)
  Matrix product  19(8)
  Dimension and rank  27(3)
  Eigenvectors and eigenvalues  30(12)
  Statistical interpretation of matrices  42(9)
  Geometrical interpretation of matrix products  51(6)
  References  56(1)
Cluster Analysis  57(30)
  Clusters  57(3)
  Measures of (dis)similarity  60(9)
  Similarity and distance  60(1)
  Measures of (dis)similarity for continuous variables  60(1)
  Distances  60(2)
  Correlation coefficient  62(2)
  Scaling  64(1)
  Measures of (dis)similarity for other variables  65(1)
  Binary variables  65(1)
  Ordinal variables  66(1)
  Mixed variables  67(1)
  Similarity matrix  68(1)
  Clustering algorithms  69(18)
  Hierarchical methods  69(7)
  Non-hierarchical methods  76(3)
  Other methods  79(3)
  Selecting clusters  82(1)
  Measures for clustering tendency  82(1)
  How many clusters?  83(1)
  Conclusion  84(1)
  References  85(2)
Analysis of Measurement Tables  87(74)
  Introduction  87(1)
  Principal components analysis  88(16)
  Singular vectors and singular values  89(2)
  Eigenvectors and eigenvalues  91(4)
  Latent vectors and latent values  95(1)
  Scores and loadings  95(1)
  Principal components  96(4)
  Transition formulae  100(1)
  Reconstructions  100(4)
  Geometrical interpretation  104(11)
  Line of closest fit  104(4)
  Distances  108(4)
  Unipolar axes  112(1)
  Bipolar axes  113(2)
  Preprocessing  115(19)
  No transformation  118(1)
  Column-centering  119(3)
  Column-standardization  122(1)
  Log column-centering  123(2)
  Log double-centering  125(5)
  Double-closure  130(4)
  Algorithms  134(6)
  Singular value decomposition  134(4)
  Eigenvalue decomposition  138(2)
  Validation  140(6)
  Scree-plot  142(1)
  Malinowski's F-test  143(1)
  Cross-validation  144(2)
  Principal coordinates analysis  146(3)
  Distances defined from data  146(2)
  Distances derived from comparisons of pairs  148(1)
  Eigenvalue decomposition  148(1)
  Non-linear principal components analysis  149(4)
  Extensions of the data by higher order terms  149(1)
  Non-linear transformations of the data  149(1)
  Non-linear PCA biplot  150(3)
  Three-way principal components analysis  153(3)
  Unfolding  153(1)
  The Tucker3 model  154(2)
  The PARAFAC model  156(1)
  PCA and cluster analysis  156(5)
  References  158(3)
Analysis of Contingency Tables  161(46)
  Contingency table  161(5)
  Chi-square statistic  166(1)
  Closure  167(3)
  Row-closure  168(1)
  Column-closure  168(1)
  Double-closure  169(1)
  Weighted metric  170(5)
  Distance of chi-square  175(7)
  Row-closure  175(1)
  Column-closure  176(1)
  Double-closure  177(5)
  Correspondence factor analysis  182(19)
  Historical background  182(1)
  Generalized singular value decomposition  183(4)
  Biplots  187(6)
  Application  193(8)
  Log-linear model  201(6)
  Historical introduction  201(1)
  Algorithm  201(3)
  Application  204(1)
  References  205(2)
Supervised Pattern Recognition  207(36)
  Supervised and unsupervised pattern recognition  207(1)
  Derivation of classification rules  208(28)
  Types of classification rules  208(5)
  Canonical variates and linear discriminant analysis  213(7)
  Quadratic discriminant analysis and related methods  220(3)
  The k-nearest neighbour method  223(2)
  Density methods  225(2)
  Classification trees  227(1)
  UNEQ, SIMCA and related methods  228(4)
  Partial least squares  232(1)
  Neural networks  233(3)
  Feature selection and reduction  236(2)
  Validation of classification rules  238(5)
  References  239(4)
Curve and Mixture Resolution by Factor Analysis and Related Techniques  243(64)
  Abstract and true factors  243(8)
  Full-rank methods  251(23)
  A qualitative approach  251(1)
  Factor rotations  252(2)
  The Varimax rotation  254(2)
  Factor rotation by target transformation factor analysis (TTFA)  256(4)
  Curve resolution based methods  260(1)
  Curve resolution of two-factor systems  260(7)
  Curve resolution of three-factor systems  267(1)
  Factor rotation by iterative target transformation factor analysis (ITTFA)  268(6)
  Evolutionary and local rank methods  274(12)
  Evolving factor analysis (EFA)  274(4)
  Fixed-size window evolving factor analysis (FSWEFA)  278(2)
  Heuristic evolving latent projections (HELP)  280(6)
  Pure column (or row) techniques  286(12)
  The variance diagram (VARDIA) technique  286(6)
  Simplisma  292(3)
  Orthogonal projection approach (OPA)  295(3)
  Quantitative methods for factor analysis  298(3)
  Generalized rank annihilation factor analysis (GRAFA)  298(2)
  Residual bilinearization (RBL)  300(1)
  Discussion  301(1)
  Application of factor analysis for peak purity check in HPLC  301(1)
  Guidance for the selection of a factor analysis method  302(5)
  References  303(4)
Relations between Measurement Tables  307(42)
  Introduction  307(3)
  Procrustes analysis  310(7)
  Introduction  310(4)
  Algorithm  314(1)
  Discussion  314(3)
  Canonical correlation analysis  317(6)
  Introduction  317(3)
  Algorithm  320(1)
  Discussion  321(2)
  Multivariate least squares regression  323(1)
  Introduction  323(1)
  Algorithm  324(1)
  Discussion  324(1)
  Reduced rank regression  324(5)
  Introduction  324(1)
  Algorithm  325(1)
  Discussion  326(1)
  Example  326(3)
  Principal components regression  329(2)
  Introduction  329(1)
  Algorithm  329(1)
  Discussion  330(1)
  Partial least squares regression  331(11)
  NIPALS-PLS Algorithm  336(1)
  Discussion  337(3)
  Alternative PLS algorithms  340(2)
  Continuum regression methods  342(3)
  Concluding remarks  345(4)
  References  346(3)
Multivariate Calibration  349(34)
  Introduction  349(2)
  Calibration methods  351(17)
  Classical least squares  353(4)
  Inverse least squares  357(1)
  Principal components regression  358(8)
  Partial least squares regression  366(1)
  Other linear methods  367(1)
  Validation  368(3)
  Other aspects  371(4)
  Calibration design  371(1)
  Data pretreatment  372(2)
  Outliers  374(1)
  New developments  375(8)
  Feature selection  375(1)
  Transfer of calibration models  376(2)
  Non-linear methods  378(1)
  References  379(4)
Quantitative Structure-Activity Relationships (QSAR)  383(38)
  Extrathermodynamic methods  383(14)
  Hansch analysis  388(5)
  Free-Wilson analysis  393(4)
  Principal components models  397(11)
  Principal components analysis  398(4)
  Spectral map analysis  402(3)
  Correspondence factor analysis  405(3)
  Canonical variate models  408(1)
  Linear discriminant analysis  408(1)
  Canonical correlation analysis  409(1)
  Partial least squares models  409(7)
  PLS regression and CoMFA  409(2)
  Two-block PLS and indirect QSAR  411(5)
  Other approaches  416(5)
  References  417(4)
Analysis of Sensory Data  421(28)
  Introduction  421(1)
  Difference tests  421(6)
  Triangle test  421(1)
  Duo-trio test  422(3)
  Paired comparisons  425(2)
  Multidimensional scaling  427(4)
  The analysis of Quantitative Descriptive Analysis profile data  431(2)
  Comparison of two or more sensory data sets  433(4)
  Linking sensory data to instrumental data  437(3)
  Temporal aspects of perception  440(4)
  Product formulation  444(5)
  References  446(3)
Pharmacokinetic Models  449(58)
  Introduction  449(2)
  Compartmental analysis  451(42)
  One-compartment open model for intravenous administration  455(6)
  Two-compartment catenary model for extravascular administration  461(8)
  Two-compartment catenary model for extravascular administration with incomplete absorption  469(1)
  One-compartment open model for continuous intravenous infusion  470(3)
  One-compartment open model for repeated intravenous administration  473(3)
  Two-compartment mammillary model for intravenous administration using Laplace transform  476(11)
  Multi-compartment models  487(1)
  The convolution method  487(3)
  The γ-method  490(3)
  Non-compartmental analysis  493(7)
  Compartment models versus non-compartmental analysis  500(2)
  Linearization of non-linear models  502(5)
  References  505(2)
Signal Processing  507(68)
  Signal domains  507(2)
  Types of signal processing  509(1)
  The Fourier transform  510(20)
  Time and frequency domain  510(3)
  The Fourier transform of a continuous signal  513(5)
  Derivation of the Fourier transform of a sine  518(1)
  The discrete Fourier transformation  519(1)
  Frequency range and resolution  520(4)
  Sampling  524(2)
  Zero filling and resolution  526(1)
  Periodicity and symmetry  527(1)
  Shift and phase  528(1)
  Distributivity and scaling  529(1)
  The fast Fourier transform  530(1)
  Convolution  530(5)
  Signal processing  535(18)
  Characterization of noise  535(1)
  Signal enhancement in the time domain  536(2)
  Time averaging  538(1)
  Smoothing by moving average  538(4)
  Polynomial smoothing  542(2)
  Exponential smoothing  544(3)
  Signal enhancement in the frequency domain  547(2)
  Smoothing and filtering: a comparison  549(1)
  The derivative of a signal  550(1)
  Data compression by a Fourier transform  550(3)
  Deconvolution by Fourier transform  553(3)
  Other deconvolution methods  556(6)
  Maximum Likelihood  557(1)
  Maximum Entropy  558(4)
  Other transforms  562(13)
  The Hadamard transform  562(2)
  The time-frequency Fourier transform  564(2)
  The wavelet transform  566(7)
  References  573(2)
Kalman Filtering  575(30)
  Introduction  575(2)
  Recursive regression of a straight line  577(8)
  Recursive multicomponent analysis  585(4)
  System equations  589(5)
  System equation for a kinetics experiment  592(1)
  System equation of a calibration line with drift  593(1)
  The Kalman filter  594(4)
  Theory  594(2)
  Kalman filter of a kinetics model  596(2)
  Kalman filtering of a calibration line with drift  598(1)
  Adaptive Kalman filtering  598(3)
  Evaluation of the innovation  599(1)
  The adaptive Kalman filter model  599(2)
  Applications  601(4)
  References  603(2)
Applications of Operations Research  605(22)
  An overview  605(1)
  Linear programming  605(4)
  Queuing problems  609(1)
  Queuing and waiting  610(8)
  Application in analytical laboratory management  617(1)
  Discrete event simulation  618(3)
  A shortest path problem  621(6)
  References  625(2)
Artificial Intelligence: Expert and Knowledge Based Systems  627(22)
  Artificial intelligence and expert systems  627(1)
  Expert systems  628(1)
  Structure of expert systems  629(1)
  Knowledge representation  630(3)
  Rule-based knowledge representation  631(1)
  Frame-based knowledge representation  632(1)
  The inference engine  633(7)
  Rule-based inferencing  633(4)
  Frame-based inferencing  637(1)
  Inheritance  637(1)
  Object-oriented programming techniques  638(1)
  Reasoning with uncertainty  639(1)
  The interaction module  640(1)
  Tools  641(1)
  Development of an expert system  642(3)
  Analysis of the application area  642(1)
  Definition of knowledge domain, sources and tools  643(1)
  Knowledge acquisition  643(1)
  Implementation  644(1)
  Testing, validation and evaluation  644(1)
  Maintenance  645(1)
  Conclusion  645(4)
  References  646(3)
Artificial Neural Networks  649(52)
  Introduction  649(1)
  Historical overview  650(1)
  The basic unit: the neuron  650(3)
  The linear learning machine and the perceptron network  653(9)
  Principle  653(3)
  Learning strategy  656(3)
  Limitations  659(3)
  Multilayer feed forward (MLF) networks  662(19)
  Introduction  662(1)
  Structure  662(2)
  Signal propagation  664(1)
  The transfer function  665(1)
  Role of the transfer function  665(1)
  Transfer function of the output units  666(1)
  Transfer function in the hidden units  666(4)
  Learning rule  670(3)
  Learning rate and momentum term  673(1)
  Training and testing an MLF network  674(1)
  Network performance  674(3)
  Local minima  677(1)
  Determining the number of hidden units  677(2)
  Data preprocessing  679(1)
  Scaling  679(1)
  Variable selection and reduction  679(1)
  Validation of MLF networks  679(1)
  Aspects of use  680(1)
  Chemical applications  680(1)
  Radial basis function networks  681(6)
  Structure  681(1)
  Training  682(1)
  An example  683(1)
  Applications  684(3)
  Kohonen networks  687(5)
  Structure  687(1)
  Training  688(2)
  Interpretation of the Kohonen map  690(1)
  Applications  691(1)
  Adaptive resonance theory networks  692(9)
  Introduction  692(1)
  Structure  693(1)
  Training  693(1)
  Application  694(1)
  References  695(6)
Index  701