[untitled]    xi
[untitled]    xv
Series Foreword    xvii
Preface    xix
Contributing Authors    xxi
Bridging the Semantic Gap in Content Management Systems    1 (10)
    Computational Media Aesthetics    3 (2)
    Primitive Feature Extraction    4 (1)
    Higher Order Semantic Construct Extraction    5 (1)
    [untitled]    5 (6)
    [untitled]    9 (2)
Essentials of Applied Media Aesthetics    11 (28)
    Applied Media Aesthetics: Definition and Method    12 (1)
    [untitled]    13 (1)
    The First Aesthetic Field: Light    13 (5)
        Attached and Cast Shadows    14 (1)
        Above- and Below-Eye-Level Lighting    15 (2)
        [untitled]    17 (1)
    The Extended First Aesthetic Field: Color    18 (1)
        [untitled]    18 (1)
        [untitled]    18 (1)
        [untitled]    19 (1)
        [untitled]    19 (1)
    The Two-Dimensional Field: Area    19 (6)
        [untitled]    20 (1)
        [untitled]    20 (1)
        [untitled]    21 (1)
        [untitled]    21 (2)
        [untitled]    23 (1)
        [untitled]    24 (1)
    The Three-Dimensional Field: Depth and Volume    25 (3)
        [untitled]    25 (1)
        Z-Axis Articulation and Lenses    26 (1)
        [untitled]    27 (1)
    The Four-Dimensional Field: Time-Motion    28 (4)
        [untitled]    28 (1)
        Time in Television and Film Presentations    29 (1)
        [untitled]    30 (2)
    The Five-Dimensional Field: Sound    32 (2)
        [untitled]    32 (1)
        [untitled]    33 (1)
    [untitled]    34 (1)
    [untitled]    34 (5)
    [untitled]    37 (2)
Space-Time Mappings as Database Browsing Tools    39 (18)
    The Need to Segment and the Narrative Map    40 (1)
    The Shortcomings of Common Database Search Practices as They Apply to Moving Image Databases    41 (1)
    The Cartesian Grid as the Spatio-Temporal Mapping for Browsing    42 (3)
        From the Frame to the Shot    42 (1)
        Self-Generating Segmentations    43 (1)
        [untitled]    44 (1)
    Embedded Linkages and Taggability    45 (1)
    [untitled]    45 (2)
    Conclusion: Generalizing the Notion of Segmentation    47 (10)
    [untitled]    55 (2)
[untitled]    57 (28)
    The Need for a Framework: Computational Media Aesthetics    59 (4)
        A Short History of Automatic Content Management    59 (2)
        Approaches to Film Content Management    61 (2)
    The Solution: The Framework of Film Grammar    63 (3)
        [untitled]    63 (1)
        How Do We Use Film Grammar?    64 (2)
    Using the Framework: Extracting and Analyzing Film Tempo    66 (12)
        [untitled]    67 (1)
        [untitled]    68 (1)
        Computational Aspects of Tempo    69 (1)
        Extracting the Components of Tempo    69 (1)
        [untitled]    70 (2)
        [untitled]    72 (2)
        An Example from the Movie The Matrix    74 (1)
        Building on the Tempo Function    75 (3)
    [untitled]    78 (7)
    [untitled]    81 (4)
Modeling Color Dynamics for the Semantics of Commercials    85 (20)
    Semantics of Color and Motion in Commercials    87 (2)
    Modeling Arrangements of Entities Extended over Time and Space    89 (6)
        Absolute Dynamics of a Single Entity    89 (2)
            Properties and Derivation    91 (1)
            [untitled]    92 (1)
        Relative Dynamics of Two Entities    93 (1)
            Properties and Derivation    94 (1)
            Distance Based on 3D Weighted Walkthroughs    94 (1)
    Extraction and Representation of Color Dynamics    95 (2)
        [untitled]    95 (2)
        [untitled]    97 (1)
    Video Retrieval by Color Dynamics    97 (5)
        [untitled]    98 (1)
        Evaluating Absolute Dynamics    99 (2)
        Evaluating Relative Dynamics    101 (1)
    [untitled]    102 (3)
    [untitled]    103 (2)
Scene Determination Using Auditive Segmentation    105 (26)
    [untitled]    106 (4)
    Audio Editing Practices for Scenes    110 (4)
    Automatic Extraction of Auditive Scenes    114 (5)
        Scenes Created by Narration    114 (1)
        Scenes Created by Editing    115 (1)
        [untitled]    115 (3)
        [untitled]    118 (1)
    [untitled]    119 (4)
        Scenes Determined by Linguistic Analysis    119 (2)
        Scenes Determined by Sound Classification    121 (1)
        Scenes Determined by Feature Patterns    122 (1)
    [untitled]    123 (8)
    [untitled]    125 (6)
Determining Affective Events Through Film Audio    131 (28)
    [untitled]    133 (1)
    [untitled]    134 (3)
        Matching the Visual Event via Sound Energy    135 (1)
        [untitled]    135 (1)
        [untitled]    136 (1)
        Predictive Reinforcing Syncopation    136 (1)
        Counterpoint via Sound Energy    137 (1)
    Computing Affective Events in Motion Pictures    137 (10)
        [untitled]    138 (1)
        Sound Energy Envelope Characteristics    138 (1)
        Sound Energy Event Composition and Affect    139 (2)
        Sound Energy Patterns without Affect    141 (1)
        Location and Semantics of Sound Energy Events    142 (1)
        Sound Energy Event Occurrence Classification    142 (1)
        Intra Sound Energy Pattern and Affect    142 (1)
        [untitled]    143 (1)
        [untitled]    143 (1)
    Sound Energy Event Detection Algorithm    144 (1)
        Computing Sound Energy Dynamics    144 (2)
        Detecting Sound Energy Events    146 (1)
    [untitled]    147 (6)
        Accuracy of Event Detection    147 (1)
        Accuracy of Affect Detection    148 (2)
        Data Support for Affect Events    150 (2)
        [untitled]    152 (1)
    [untitled]    153 (6)
    [untitled]    157 (2)
The Future of Media Computing    159
    The Structure of a Semantic and Semiotic Continuum    162
        [untitled]    162
        [untitled]    163
        [untitled]    164
        [untitled]    166
        [untitled]    167
    Digital Production: Environment and Tools    168
        [untitled]    169
        [untitled]    171
        [untitled]    175
        [untitled]    180
        Information Space Editing Environment (ISEE)    182
        Dynamic Presentation Environment (DPE)    184
    [untitled]    186
    [untitled]    189