  • Format - PDF+DRM
  • Price: 110,53 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed
  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Exploration of Visual Data presents the latest research efforts in content-based exploration of image and video data. The main objective is to bridge the semantic gap between high-level concepts in the human mind and the low-level features that machines can extract.

The two key issues emphasized are "content-awareness" and "user-in-the-loop". The authors provide a comprehensive review of algorithms for visual feature extraction based on color, texture, shape, and structure, and of techniques for incorporating such information to aid the browsing, exploration, search, and streaming of image and video data. They also discuss issues related to the mixed use of textual and low-level visual features to facilitate more effective access to multimedia data.
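
As a concrete point of reference for the kind of low-level color feature surveyed in the book, the short Python sketch below computes a global RGB color histogram. It is an illustrative baseline only: the bin count, the toy random image, and the function name color_histogram are assumptions made for this example, not the authors' specific descriptor.

    # Illustrative sketch: a global color histogram, one of the simplest
    # low-level color features. Bin count and toy image are arbitrary.
    import numpy as np

    def color_histogram(image, bins_per_channel=8):
        """Return a normalized joint RGB histogram as a 1-D feature vector.

        image: H x W x 3 uint8 array.
        """
        # Quantize each channel into bins_per_channel levels (0..bins-1).
        quantized = (image.astype(np.uint32) * bins_per_channel) // 256
        # Combine the three channel indices into a single joint bin index.
        idx = (quantized[..., 0] * bins_per_channel + quantized[..., 1]) \
            * bins_per_channel + quantized[..., 2]
        hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
        # Normalize so histograms of images of different sizes are comparable.
        return hist / hist.sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
        feat = color_histogram(img)
        print(feat.shape)  # (512,) for 8 bins per channel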

Exploration of Visual Data provides state-of-the-art material on content-based description of visual data, content-based low-bitrate video streaming, and the latest asymmetric and nonlinear relevance feedback algorithms, which were previously unpublished.
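
The asymmetric and nonlinear algorithms are the book's own contribution and are not reproduced here. For orientation, the sketch below implements the classical Rocchio update, a much simpler relevance-feedback baseline that moves the query vector toward positively judged examples and away from negative ones; the weights alpha, beta, gamma and the random toy database are arbitrary choices for the example.

    # Illustrative sketch: classical Rocchio-style relevance feedback over
    # feature vectors. This is a simple baseline for "user-in-the-loop"
    # retrieval, not the book's asymmetric/nonlinear schemes (e.g. biased
    # discriminant analysis).
    import numpy as np

    def rocchio_update(query, positives, negatives,
                       alpha=1.0, beta=0.75, gamma=0.15):
        """Move the query toward positive examples, away from negatives."""
        q = alpha * query
        if len(positives):
            q += beta * np.mean(positives, axis=0)
        if len(negatives):
            q -= gamma * np.mean(negatives, axis=0)
        return q

    def rank(query, database):
        """Indices of database rows sorted by distance to the query."""
        dists = np.linalg.norm(database - query, axis=1)
        return np.argsort(dists)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        db = rng.random((100, 512))               # e.g. 512-D color histograms
        q = db[0]                                 # start from an example image
        order = rank(q, db)
        pos, neg = db[order[:3]], db[order[-3:]]  # simulated user feedback
        q2 = rocchio_update(q, pos, neg)
        print(rank(q2, db)[:5])                   # re-ranked top results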

Other information

Springer Book Archives
1. Introduction.- 1.1 Challenges.- 1.2 Research Scope.- 1.3 State-of-the-Art.- 1.4 Outline of Book.-
2. Overview of Visual Information Representation.- 2.1 Color.- 2.2 Texture.- 2.3 Shape.- 2.4 Spatial Layout.- 2.5 Interest Points.- 2.6 Image Segmentation.- 2.7 Summary.-
3. Edge-Based Structural Features.- 3.1 Visual Feature Representation.- 3.2 Edge-Based Structural Features.- 3.3 Experiments and Analysis.-
4. Probabilistic Local Structure Models.- 4.1 Introduction.- 4.2 The Proposed Modeling Scheme.- 4.3 Implementation Issues.- 4.4 Experiments and Discussion.- 4.5 Summary and Discussion.-
5. Constructing Table-of-Content for Videos.- 5.1 Introduction.- 5.2 Related Work.- 5.3 The Proposed Approach.- 5.4 Determination of the Parameters.- 5.5 Experimental Results.- 5.6 Conclusions.-
6. Nonlinearly Sampled Video Streaming.- 6.1 Introduction.- 6.2 Problem Statement.- 6.3 Frame Saliency Scoring.- 6.4 Scenario and Assumptions.- 6.5 Minimum Buffer Formulation.- 6.6 Limited-Buffer Formulation.- 6.7 Extensions and Analysis.- 6.8 Experimental Evaluation.- 6.9 Discussion.-
7. Relevance Feedback for Visual Data Retrieval.- 7.1 The Need for User-in-the-Loop.- 7.2 Problem Statement.- 7.3 Overview of Existing Techniques.- 7.4 Learning from Positive Feedbacks.- 7.5 Adding Negative Feedbacks: Discriminant Analysis?.- 7.6 Biased Discriminant Analysis.- 7.7 Nonlinear Extensions Using Kernel and Boosting.- 7.8 Comparisons and Analysis.- 7.9 Relevance Feedback on Image Tiles.-
8. Toward Unification of Keywords and Low-Level Contents.- 8.1 Introduction.- 8.2 Joint Querying and Relevance Feedback.- 8.3 Learning Semantic Relations between Keywords.- 8.4 Discussion.-
9. Future Research Directions.- 9.1 Low-level and intermediate-level visual descriptors.- 9.2 Learning from user interactions.- 9.3 Unsupervised detection of patterns/events.- 9.4 Domain-specific applications.- References.