
Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXIV, 2024 ed. [Paperback]

  • Format: Paperback / softback, 492 pages, height x width: 235x155 mm, LXXXV, 492 p., 167 illustrations (163 in color, 4 in black and white)
  • Series: Lecture Notes in Computer Science 15122
  • Publication date: 31-Oct-2024
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3031730380
  • ISBN-13: 9783031730382
  • Binding: Paperback
  • Price: €67.23*
  • * the price is final, i.e. no further discounts apply
  • Regular price: €79.09
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
The multi-volume set of LNCS books with volume numbers 15059 up to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29–October 4, 2024.

The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. They deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.
  • Depth-guided NeRF Training via Earth Mover's Distance
  • INTRA: Interaction Relationship-aware Weakly Supervised Affordance Grounding
  • DEPICT: Diffusion-Enabled Permutation Importance for Image Classification Tasks
  • Meerkat: Audio-Visual Large Language Model for Grounding in Space and Time
  • Diagnosing and Re-learning for Balanced Multimodal Learning
  • Contribution-based Low-Rank Adaptation with Pre-training Model for Real Image Restoration
  • Elucidating the Hierarchical Nature of Behavior with Masked Autoencoders
  • BeyondScene: Higher-Resolution Human-Centric Scene Generation With Pretrained Diffusion
  • SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views
  • MMEarth: Exploring Multi-Modal Pretext Tasks For Geospatial Representation Learning
  • Discovering Unwritten Visual Classifiers with Large Language Models
  • LITA: Language Instructed Temporal-Localization Assistant
  • MARs: Multi-view Attention Regularizations for Patch-based Feature Recognition of Space Terrain
  • Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs
  • Bridging the Pathology Domain Gap: Efficiently Adapting CLIP for Pathology Image Analysis with Limited Labeled Data
  • AugUndo: Scaling Up Augmentations for Monocular Depth Completion and Estimation
  • CARB-Net: Camera-Assisted Radar-Based Network for Vulnerable Road User Detection
  • SAH-SCI: Self-Supervised Adapter for Efficient Hyperspectral Snapshot Compressive Imaging
  • Minimalist Vision with Freeform Pixels
  • All You Need is Your Voice: Emotional Face Representation with Audio Perspective for Emotional Talking Face Generation
  • LatentEditor: Text Driven Local Editing of 3D Scenes
  • Single-Photon 3D Imaging with Equi-Depth Photon Histograms
  • Asynchronous Bioplausible Neuron for Spiking Neural Networks for Event-Based Vision
  • Viewpoint Textual Inversion: Discovering Scene Representations and 3D View Control in 2D Diffusion Models
  • POET: Prompt Offset Tuning for Continual Human Action Adaptation
  • Domain Generalization of 3D Object Detection by Density-Resampling
  • IG Captioner: Information Gain Captioners are Strong Zero-shot Classifiers