Non-Standard Parameter Adaptation for Exploratory Data Analysis, 2009 ed. [Hardback]

  • Format: Hardback, XI + 223 pages, height x width: 235x155 mm, weight: 1140 g
  • Series: Studies in Computational Intelligence 249
  • Publication date: 28-Sep-2009
  • Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
  • ISBN-10: 3642040047
  • ISBN-13: 9783642040047
  • Hardback
  • Price: 95.02 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 111.79 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks

A review of standard algorithms provides the basis for more complex data mining techniques in this overview of exploratory data analysis. Recent reinforcement learning research is presented, as well as novel methods of parameter adaptation in machine learning.

Exploratory data analysis, also known as data mining or knowledge discovery from databases, is typically based on the optimisation of a specific function of a dataset. Such optimisation is often performed with gradient descent or variations thereof. In this book, we first lay the groundwork by reviewing some standard clustering and projection algorithms before presenting various non-standard criteria for clustering. The family of algorithms developed is shown to perform better than the standard clustering algorithms on a variety of datasets.
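
To make the criterion-optimisation view concrete, here is a minimal sketch of standard K-means (Lloyd's algorithm), which minimises the within-cluster sum of squares by alternating assignment and centroid-update steps. The function name and toy data are illustrative only, not taken from the book:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimise J = sum_i ||x_i - m_{c(i)}||^2 by alternating
    assignment and centroid update (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    # Initialise centroids with k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid moves to the mean of its cluster.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Toy usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
centroids, labels = kmeans(X, k=2)
```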

We then consider extensions of the basic mappings that maintain some of the topology of the original data space. Finally, we show how reinforcement learning can be used as a clustering mechanism before turning to projection methods.
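
To make the idea of clustering by reinforcement concrete, the following hypothetical sketch treats each prototype as an action of a stochastic policy and reinforces the sampled prototype toward the data point in proportion to an immediate reward. It is an assumption-laden illustration of the general principle, not the book's RL1-RL3 algorithms:

```python
import numpy as np

def rl_clustering(X, k, epochs=50, lr=0.1, seed=0):
    """Hypothetical immediate-reward clustering sketch: prototypes are
    actions of a stochastic policy; the sampled prototype receives a
    reward that grows with closeness and moves toward the sample."""
    rng = np.random.default_rng(seed)
    prototypes = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(X):
            d = np.linalg.norm(prototypes - x, axis=1)
            w = np.exp(-(d - d.min()))      # stabilised Boltzmann weights
            p = w / w.sum()                 # policy: P(choose prototype j)
            j = rng.choice(k, p=p)          # sample an action (a prototype)
            r = np.exp(-d[j])               # immediate reward: closeness
            prototypes[j] += lr * r * (x - prototypes[j])
    return prototypes
```

On well-separated clusters this behaves much like soft competitive learning; the reward shaping is the assumption doing the work here.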

We show that several varieties of reinforcement learning may also be used to define optimal projections, for example for principal component analysis, exploratory projection pursuit, and canonical correlation analysis. The new method of cross entropy adaptation is then introduced and used as a means of optimising projections. Finally, an artificial immune system is used to create optimal projections, and combinations of these three methods are shown to outperform the individual methods of optimisation.
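
The cross entropy method can be illustrated on the simplest of these projections: searching for a unit direction that maximises projected variance, i.e. the first principal component. The sketch below follows the generic CE recipe (sample candidates from a Gaussian, refit its parameters to the elite set); the function name and parameters are illustrative, not necessarily the book's exact formulation:

```python
import numpy as np

def ce_first_pc(X, n_iter=50, pop=200, n_elite=20, seed=0):
    """Cross-entropy search for a unit vector w maximising the variance
    of the centred data projected onto w (the first principal component)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    mu, sigma = np.zeros(d), np.ones(d)
    Xc = X - X.mean(axis=0)                            # centre the data once
    for _ in range(n_iter):
        W = rng.normal(mu, sigma, size=(pop, d))       # sample candidates
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit directions
        scores = ((Xc @ W.T) ** 2).mean(axis=0)        # projected variance
        elite = W[np.argsort(scores)[-n_elite:]]       # keep the best
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu / np.linalg.norm(mu)
```

A quick sanity check: the result should match, up to sign, the leading eigenvector of np.cov(X, rowvar=False).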

Introduction 1(6)
Unsupervised Exploratory Data Analysis 1(2)
Projection Methods 3(1)
Clustering 3(1)
Structure of the Book 4(3)
Review of Clustering Algorithms 7(22)
Distance Measures 8(1)
Hierarchical Clustering 9(5)
Single-Linkage versus Complete-Linkage 11(3)
Partitioning Clustering 14(8)
K-Means Clustering Algorithm 14(1)
K-Means++ Clustering Algorithm 15(1)
K-Medoids (or Partition Around Medoids - PAM) Clustering Algorithm 16(1)
Fuzzy C-Means Clustering Algorithm (FCM) 17(1)
Soft K-Means Clustering Algorithm 18(1)
K-Harmonic Means Clustering Algorithm (KHM) 18(3)
Kernel K-Means Clustering Algorithm 21(1)
Topology Preserving Mapping 22(6)
Self-Organizing Map (SOM) 22(2)
Generative Topographic Map (GTM) 24(2)
Topographic Product of Experts (ToPoE) 26(1)
Harmonic Topographic Mapping (HaToM) 27(1)
Conclusion 28(1)
Review of Linear Projection Methods 29(20)
Linear Projection Methods 29(11)
Principal Component Analysis 29(2)
Exploratory Projection Pursuit 31(3)
Independent Component Analysis 34(4)
Canonical Correlation Analysis 38(1)
Deflationary Orthogonalization Methods 39(1)
Kernel Methods 40(4)
Kernel Principal Component Analysis 40(3)
Kernel Canonical Correlation Analysis 43(1)
Latent Variable Models 44(4)
Density Modeling and Latent Variables 45(2)
Probabilistic Principal Component Analysis 47(1)
Conclusions 48(1)
Non-standard Clustering Criteria 49(24)
A Family of New Algorithms 49(21)
Weighted K-Means Algorithm (WK) 50(2)
Inverse Weighted K-Means Algorithm (IWK) 52(4)
The Inverse Weighted Clustering Algorithm 56(8)
Inverse Exponential K-Means Algorithm 1 (IEK1) 64(3)
Inverse Exponential K-Means Algorithm 2 (IEK2) 67(1)
Simulations 67(2)
Summary 69(1)
Spectral Clustering Algorithm 70(2)
Conclusion 72(1)
Topographic Mappings and Kernel Clustering 73(12)
A Topology Preserving Mapping 73(6)
Simulations 74(5)
Kernel Clustering Algorithms 79(5)
Kernel Inverse Weighted Clustering Algorithm (KIWC) 80(1)
Kernel K-Harmonic Means Algorithm (KKHM) 80(1)
Kernel Inverse Weighted K-Means Algorithm (KIWK) 81(1)
Simulations 81(3)
Conclusion 84(1)
Online Clustering Algorithms and Reinforcement Learning 85(24)
Online Clustering Algorithms 85(7)
Online K-Means Algorithm 85(1)
IWK Online Algorithm v1 (IWKO1) 86(1)
IWK Online Algorithm v2 (IWKO2) 87(2)
K-Harmonic Means - Online Mode Algorithm (KHMO) 89(2)
Inverse-Weighted K-Means (Online) Topology-Preserving Mapping (IKoToM) 91(1)
Reinforcement Learning 92(9)
Immediate Reward Reinforcement Learning 93(2)
Global Reinforcement Learning in Neural Networks with Stochastic Synapses 95(2)
Temporal Difference Learning 97(2)
Evolutionary Algorithms for Reinforcement Learning 99(2)
Clustering with Reinforcement Learning 101(7)
New Algorithm RL1 102(1)
New Algorithm RL2 103(1)
New Algorithm RL3 103(1)
Simulations 104(2)
Topology Preserving Mapping 106(2)
Conclusion 108(1)
Connectivity Graphs and Clustering with Similarity Functions 109(14)
Different Similarity Graphs (or Connectivity Graphs) 109(3)
The ε-Neighborhood Graph 109(1)
k-Nearest Neighbor Graphs 109(1)
New Similarity Graph 110(2)
Simulations 112(1)
Clustering with Similarity Functions 113(9)
Exponential Function as Similarity Function 115(2)
Simulations 117(2)
Inverse Weighted Clustering with Similarity Function Topology Preserving Mapping (IWCSFToM) 119(3)
Conclusion 122(1)
Reinforcement Learning of Projections 123(28)
Projection with Immediate Reward Learning 123(14)
An Example: Independent Component Analysis 124(6)
Multiple Components with Immediate Reward Reinforcement Learning - PCA 130(3)
Simulation: Canonical Correlation Analysis 133(2)
Deflationary Orthogonalization for Kernel Methods - Kernel PCA 135(2)
Projections with Stochastic Synapses 137(5)
Linear Projection Methods with Stochastic Weights 138(2)
Kernel Methods with Stochastic Weights 140(2)
Projection with Temporal Difference Learning 142(6)
Linear Projection with Q-Learning 143(1)
Non-linear Projection with Sarsa Learning 144(4)
Conclusion 148(3)
Cross Entropy Methods 151(24)
The Cross Entropy Method 151(5)
Rare-Event Simulation via Cross Entropy 151(3)
Combinatorial Optimization via Cross Entropy 154(2)
ICA as Associated Stochastic Problem 156(3)
Linear Projection with Cross Entropy Method 159(5)
Principal Component Analysis 160(1)
Exploratory Projection Pursuit 161(1)
Canonical Correlation Analysis 162(2)
Cross Entropy Latent Variable Models 164(7)
Probabilistic Principal Component Analysis 164(2)
Independent Component Analysis 166(3)
Topology Preserving Manifolds 169(2)
Deep Architectures in Unsupervised Data Exploration 171(2)
Multilayer Topology Preserving Manifolds 171(2)
Conclusion 173(2)
Artificial Immune Systems 175(24)
Clonal Selection Algorithm 176(4)
Artificial Immune Network 178(2)
Projection with Immune-Inspired Algorithms 180(1)
Linear Projections with the Modified CLONALG Algorithm 180(8)
Multiple Components 185(3)
Combining Adaptation Methods 188(1)
Artificial Immune System with Cross Entropy 188(6)
TD Learning with Artificial Immune Systems 190(3)
Ensembles of the Non-standard Adaptation Methods 193(1)
Bootstrapping and Bagging 194(2)
Non-standard Adaptation Methods with Bagging 194(2)
Conclusion 196(3)
Conclusions 199(8)
Rationale 199(2)
Summary and Remarks 201(3)
Further Research 204(3)
References 207(14)
Index 221