
E-book: Computer Vision - ECCV 2024: 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XXIII

  • Format - EPUB+DRM
  • Price: 80,26 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The multi-volume set of LNCS books with volume numbers 15059 to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, from September 29 to October 4, 2024.





The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. They deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; motion estimation.
  • Weak-to-Strong Compositional Learning from Generative Models for Language-based Object Detection
  • Domesticating SAM for Breast Ultrasound Image Segmentation via Spatial-frequency Fusion and Uncertainty Correction
  • CanonicalFusion: Generating Drivable 3D Human Avatars from Multiple Images
  • Camera Height Doesn't Change: Unsupervised Training for Metric Monocular Road-Scene Depth Estimation
  • Uni3DL: A Unified Model for 3D Vision-Language Understanding
  • Object-Aware NIR-to-Visible Translation
  • PaPr: Training-Free One-Step Patch Pruning with Lightweight ConvNets for Faster Inference
  • GENIXER: Empowering Multimodal Large Language Models as a Powerful Data Generator
  • BLINK: Multimodal Large Language Models Can See but Not Perceive
  • AFF-ttention! Affordances and Attention models for Short-Term Object Interaction Anticipation
  • PreLAR: World Model Pre-training with Learnable Action Representation
  • Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot
  • De-confounded Gaze Estimation
  • Diffusion Models for Monocular Depth Estimation: Overcoming Challenging Conditions
  • FreestyleRet: Retrieving Images from Style-Diversified Queries
  • ReGround: Improving Textual and Spatial Grounding at No Cost
  • CardiacNet: Learning to Reconstruct Abnormalities for Cardiac Disease Assessment from Echocardiogram Videos
  • LaMI-DETR: Open-Vocabulary Detection with Language Model Instruction
  • Unrolled Decomposed Unpaired Learning for Controllable Low-Light Video Enhancement
  • Efficient Image Pre-Training with Siamese Cropped Masked Autoencoders
  • VP-SAM: Taming Segment Anything Model for Video Polyp Segmentation via Disentanglement and Spatio-temporal Side Network
  • Dataset Enhancement with Instance-Level Augmentations
  • FreeMotion: MoCap-Free Human Motion Synthesis with Multimodal Large Language Models
  • Chameleon: A Data-Efficient Generalist for Dense Visual Prediction in the Wild
  • Reliability in Semantic Segmentation: Can We Use Synthetic Data?
  • SCPNet: Unsupervised Cross-modal Homography Estimation via Intra-modal Self-supervised Learning
  • SCAPE: A Simple and Strong Category-Agnostic Pose Estimator