
E-book: Information-Theoretic Approach to Neural Computing

  • Format: PDF+DRM
  • Price: €110.53*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on mobile devices (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Neural networks provide a powerful new technology to model and control nonlinear and complex systems. In this book, the authors present a detailed formulation of neural networks from the information-theoretic viewpoint. They show how this perspective provides new insights into the design theory of neural networks. In particular, they show how these methods may be applied to the topics of supervised and unsupervised learning, including feature extraction, linear and nonlinear independent component analysis, and Boltzmann machines.
Readers are assumed to have a basic understanding of neural networks, but all of the relevant concepts from information theory are carefully introduced and explained. Consequently, readers from several different scientific disciplines - notably, cognitive scientists, engineers, physicists, statisticians, and computer scientists - will find this book to be a very valuable contribution to this topic.

Contents:
1 Introduction
2 Preliminaries of Information Theory and Neural Networks
  2.1 Elements of Information Theory
  2.2 Elements of the Theory of Neural Networks
Part I: Unsupervised Learning
3 Linear Feature Extraction: Infomax Principle
4 Independent Component Analysis: General Formulation and Linear Case
5 Nonlinear Feature Extraction: Boolean Stochastic Networks
6 Nonlinear Feature Extraction: Deterministic Neural Networks
Part II: Supervised Learning
7 Supervised Learning and Statistical Estimation
8 Statistical Physics Theory of Supervised Learning and Generalization
9 Composite Networks
10 Information Theory Based Regularizing Methods
References