E-book: Virtual Reality and Light Field Immersive Video Technologies for Real-World Applications

Gauthier Lafruit (Université Libre de Bruxelles, Laboratory of Image Synthesis and Analysis (LISA), Belgium), Mehrdad Teratani (Université Libre de Bruxelles, Laboratory of Image Synthesis and Analysis (LISA), Belgium)
  • Format: EPUB+DRM
  • Series: Computing and Networks
  • Publication date: 21-Jan-2022
  • Publisher: Institution of Engineering and Technology
  • Language: English
  • ISBN-13: 9781785615795
  • Price: 214,50 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install dedicated software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorised with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free application: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Inspired by the MPEG-I and JPEG-PLENO standardization activities, this book is for readers who want to understand 3D representations and multi-camera video processing for novel immersive media applications. The authors address new challenges that reach beyond compression alone, such as depth acquisition and 3D rendering.

Virtual reality (VR) refers to technologies that use headsets to generate realistic images, sounds and other sensations that replicate a real-world environment or create an imaginary setting. VR also simulates the user's physical presence in this environment. With six degrees of freedom (6DoF), users can not only look around, but also move through the virtual world and view objects from above, below or behind. A true VR experience therefore requires hardware that provides all six degrees of freedom, combining orientation (rotational) tracking with positional (translational) tracking.
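
To make the rotational/translational distinction concrete, a 6DoF pose can be modelled as a unit quaternion (the three orientation degrees of freedom) plus a translation vector (the three positional degrees of freedom). The sketch below is purely illustrative and not taken from the book; the names Pose6DoF and quat_to_matrix are hypothetical, and NumPy is assumed to be available.

import numpy as np

def quat_to_matrix(q):
    # Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix.
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

class Pose6DoF:
    # 3 rotational DoF (orientation quaternion) + 3 translational DoF (position).
    def __init__(self, orientation, position):
        self.orientation = np.asarray(orientation, dtype=float)  # (w, x, y, z)
        self.position = np.asarray(position, dtype=float)        # (x, y, z), metres

    def to_view(self, point_world):
        # Map a world-space point into the headset's local frame: undo the
        # translation, then undo the rotation (R is orthonormal, so its
        # inverse is its transpose).
        R = quat_to_matrix(self.orientation)
        return R.T @ (np.asarray(point_world, dtype=float) - self.position)

# Example: a head 1.7 m above the floor origin, looking straight ahead
# (identity rotation); a point 2 m in front of it lands at (0, 0, -2).
head = Pose6DoF(orientation=(1.0, 0.0, 0.0, 0.0), position=(0.0, 1.7, 0.0))
print(head.to_view((0.0, 1.7, -2.0)))  # -> [ 0.  0. -2.]

A 3DoF-only headset would track just the quaternion and ignore the position vector, which is exactly why such devices let users look around but not move around.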

This book is addressed to video experts who want to understand the basics of 3D representations and multi-camera video processing for new immersive media applications. Unlike single-camera video coding, future VR technologies raise new challenges beyond compression alone, including pre- and post-processing steps such as depth acquisition and 3D rendering. The book is inspired by the MPEG-I (immersive media) and JPEG-PLENO (plenoptic media) standardization activities, and offers a glimpse of their underlying technologies.

About the authors xi
1 Immersive video introduction 1(8)
References 7(2)
2 Virtual reality 9(14)
2.1 Introduction/history 9(5)
2.2 The challenge of three to six degrees of freedom 14(4)
2.3 The challenge of stereoscopic to holographic vision 18(5)
References 19(4)
3 3D gaming and VR 23(18)
3.1 OpenGL in VR 23(1)
3.2 3D data representations 24(9)
3.2.1 Triangular meshes 24(4)
3.2.2 Subdivision surfaces and Bézier curves 28(3)
3.2.3 Textures and cubemaps 31(2)
3.3 OpenGL pipeline 33(8)
References 38(3)
4 Camera and projection models 41(16)
4.1 Mathematical preliminaries 41(4)
4.2 The pinhole camera model 45(3)
4.3 Intrinsics of the pinhole camera 48(1)
4.4 Projection matrices 49(8)
4.4.1 Mathematical derivation of projection matrices 51(4)
4.4.2 Characteristics of the projection matrices 55(1)
References 56(1)
5 Light equations 57(20)
5.1 Light contributions 59(8)
5.1.1 Emissive light source 60(1)
5.1.2 Ambient light 60(1)
5.1.3 Diffuse light 61(2)
5.1.4 Specular light 63(4)
5.2 Physically correct light models 67(1)
5.3 Light models for transparent materials 68(2)
5.4 Shadows rendering 70(1)
5.5 Mesh-based 3D rendering with light equations 71(6)
5.5.1 Gouraud shading 72(1)
5.5.2 Phong shading 72(1)
5.5.3 Bump mapping 73(1)
5.5.4 3D file formats 73(2)
References 75(2)
6 Kinematics 77(26)
6.1 Rigid body animations 77(6)
6.1.1 Rotations with Euler angles 78(2)
6.1.2 Rotations around an arbitrary axis 80(2)
6.1.3 ModelView transformation 82(1)
6.2 Quaternions 83(2)
6.2.1 Spherical linear interpolation 85(1)
6.3 Deformable body animations 85(5)
6.3.1 Keyframes and inverse kinematics 85(1)
6.3.2 Clothes animation 86(3)
6.3.3 Particle systems 89(1)
6.4 Collisions in the physics engine 90(13)
6.4.1 Collision of a triangle with a plane 91(2)
6.4.2 Collision between two spheres, only one moving 93(3)
6.4.3 Collision of two moving spheres 96(1)
6.4.4 Collision of a sphere with a plane 96(1)
6.4.5 Collision of a sphere with a cube 97(1)
6.4.6 Separating axes theorem and bounding boxes 98(3)
References 101(2)
7 Raytracing 103(14)
7.1 Raytracing complexity 109(3)
7.2 Raytracing with analytical objects 112(3)
7.3 VR challenges 115(2)
References 115(2)
8 2D transforms for VR with natural content 117(18)
8.1 The affine transform 117(2)
8.2 The homography 119(3)
8.3 Homography estimation 122(1)
8.4 Feature points and RANSAC outliers for panoramic stitching 123(3)
8.5 Homography and affine transform revisited 126(2)
8.6 Pose estimation for AR 128(7)
References 131(4)
9 3DoF VR with natural content 135(14)
9.1 Stereoscopic viewing 135(2)
9.2 360° panoramas 137(12)
9.2.1 360° panoramas with planar reprojections 137(4)
9.2.2 Cylindrical and spherical 360° panoramas 141(3)
9.2.3 360° panoramas with equirectangular projection images 144(3)
References 147(2)
10 VR goggles 149(18)
10.1 Wide angle lens distortion 149(8)
10.1.1 Wide angle lens model 149(2)
10.1.2 Radial distortion model 151(4)
10.1.3 VR goggles pre-distortion 155(2)
10.2 Asynchronous high frame rate rendering 157(2)
10.3 Stereoscopic time warping 159(1)
10.4 Advanced HMD rendering 159(8)
10.4.1 Optical systems 159(2)
10.4.2 Eye accommodation 161(2)
References 163(4)
11 6DoF navigation 167(38)
11.1 6DoF with point clouds 167(1)
11.2 Active depth sensing 167(1)
11.3 Time of flight 168(7)
11.3.1 Phase from a modulated light source 169(3)
11.3.2 Structured light 172(2)
11.3.3 Phase from interferometry 174(1)
11.4 Point cloud registration and densification 175(20)
11.4.1 Photogrammetry 181(12)
11.4.2 SLAM navigational applications 193(2)
11.5 3D rendering of point clouds 195(10)
11.5.1 Poisson reconstruction 195(1)
11.5.2 Splatting 196(1)
References 196(9)
12 Towards 6DoF with image-based rendering 205(80)
12.1 Introduction 205(3)
12.2 Finding relative camera positions 208(30)
12.2.1 Epipolar geometry 208(2)
12.2.2 Rotation and translation from the essential and fundamental matrices 210(3)
12.2.3 Epipolar line equation 213(1)
12.2.4 Extrinsics with checkerboard calibration 213(2)
12.2.5 Extrinsics with sparse bundle adjustment 215(1)
12.2.6 Depth estimation 215(1)
12.2.7 Stereo matching 215(2)
12.2.8 Depth quantization 217(1)
12.2.9 Stereo matching and cost volumes 218(4)
12.2.10 Occlusions 222(1)
12.2.11 Stereo matching with adaptive windows around depth discontinuities 222(1)
12.2.12 Stereo matching with priors 223(3)
12.2.13 Uniform texture regions 226(6)
12.2.14 Epipolar plane image with multiple images 232(2)
12.2.15 Plane sweeping 234(4)
12.3 Graph cut 238(6)
12.3.1 The binary graph cut 243(1)
12.4 MPEG reference depth estimation 244(1)
12.5 Depth estimation challenges 245(1)
12.6 6DoF view synthesis with depth image-based rendering 246(19)
12.6.1 Morphing without depth 247(1)
12.6.2 Nyquist-Whittaker-Shannon and Petersen-Middleton in DIBR view synthesis 247(6)
12.6.3 Depth-based 2D pixel to 3D point reprojections 253(4)
12.6.4 Splatting and hole filling 257(1)
12.6.5 Super-pixels and hole filling 257(2)
12.6.6 Depth reliability in view synthesis 259(1)
12.6.7 MPEG-I view synthesis with estimated depth maps 260(2)
12.6.8 MPEG-I view synthesis with sensed depth maps 262(1)
12.6.9 Depth layered images (Google) 262(3)
12.7 Use case I: view synthesis in holographic stereograms 265(4)
12.8 Use case II: view synthesis in integral photography 269(1)
12.9 Difference between PCC and DIBR 270(15)
References 271(14)
13 Multi-camera acquisition systems 285(16)
13.1 Stereo vision 285(1)
13.2 Multiview vision 286(5)
13.2.1 Geometry correction for camera array 286(2)
13.2.2 Colour correction for camera array 288(3)
13.3 Plenoptic imaging 291(10)
13.3.1 Processing tools for plenoptic camera 292(1)
13.3.2 Conversion from lenslet to multiview images for plenoptic camera 1.0 293(5)
References 298(3)
14 3D light field displays 301(30)
14.1 3D TV 301(1)
14.2 Eye vision 301(2)
14.3 Surface light field system 303(1)
14.4 1D-II 3D display system 303(2)
14.5 Integral photography 305(1)
14.6 Real-time free viewpoint television 305(1)
14.7 SMV256 306(1)
14.8 Light field video camera system 307(1)
14.9 Multipoint camera and microphone system 308(1)
14.10 Walk-through system 308(1)
14.11 Ray emergent imaging (REI) 308(1)
14.12 Holografika 308(2)
14.13 Light field 3D display 310(1)
14.14 Aktina Vision 310(2)
14.15 IP by 3D VIVANT 312(2)
14.16 Projection type IP 314(1)
14.17 Tensor display 314(1)
14.18 Multi-, plenoptic-, coded-aperture-, multi-focus-camera to tensor display system 314(2)
14.19 360° light field display 316(2)
14.20 360° mirror scan 318(1)
14.21 Seelinder 318(1)
14.22 Holo Table 319(1)
14.23 fVisiOn 320(1)
14.24 Use cases of virtual reality systems 320(11)
14.24.1 Public use cases 320(1)
14.24.2 Professional use cases 320(4)
14.24.3 Scientific use cases 324(1)
References 324(7)
15 Visual media compression 331(36)
15.1 3D video compression 334(4)
15.1.1 Image and video compression 335(3)
15.2 MPEG standardization and compression with 2D video codecs 338(8)
15.2.1 Cubemap video 339(2)
15.2.2 Multiview video and depth compression (3D-HEVC) 341(2)
15.2.3 Dense light field compression 343(3)
15.3 Future challenges in 2D video compression 346(2)
15.4 MPEG codecs for 3D immersion 348(19)
15.4.1 Point cloud coding with 2D video codecs 348(3)
15.4.2 MPEG immersive video compression 351(2)
15.4.3 Visual volumetric video coding 353(1)
15.4.4 Compression for light field displays 353(2)
References 355(12)
16 Conclusion and future perspectives 367(2)
Index 369

Gauthier Lafruit is professor of multimedia at the Laboratory of Image Synthesis and Analysis (LISA), Université Libre de Bruxelles, Belgium. His work focuses on multi-camera 3D acquisition, steadily evolving towards photo-realistic free-viewpoint navigation modalities and glasses-free 3D viewing. He is an IEEE Senior Member, active in the IEEE Circuits and Systems and Signal Processing Societies.

Mehrdad Teratani is professor of Light Field Video Engineering at the Laboratory of Image Synthesis and Analysis (LISA), Université Libre de Bruxelles, Belgium. His research interests include 3D imaging systems, with a focus on 3D image processing and compression, 3D media integration and immersive communication, robotics, intelligent video systems, and computer vision. He is an IEEE Senior Member, active in the IEEE Circuits and Systems and Signal Processing Societies.