E-book: Computer Vision - ECCV 2024: 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XIV

  • Format: PDF+DRM
  • Price: 80,26 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The multi-volume set of LNCS books with volume numbers 15059 up to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29-October 4, 2024.

The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; motion estimation.
 
  • ProMerge: Prompt and Merge for Unsupervised Instance Segmentation
  • M2D2M: Multi-Motion Generation from Text with Discrete Diffusion Models
  • The Hard Positive Truth about Vision-Language Compositionality
  • GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
  • Shapefusion: 3D localized human diffusion models
  • Eta Inversion: Designing an Optimal Eta Function for Diffusion-based Real Image Editing
  • Prompting Language-Informed Distribution for Compositional Zero-Shot Learning
  • Wear-Any-Way: Manipulable Virtual Try-on via Sparse Correspondence Alignment
  • 3iGS: Factorised Tensorial Illumination for 3D Gaussian Splatting
  • Distribution-Aware Robust Learning from Long-Tailed Data with Noisy Labels
  • Free-Viewpoint Video of Outdoor Sports Using a Drone
  • Wavelength-Embedding-guided Filter-Array Transformer for Spectral Demosaicing
  • ConGeo: Robust Cross-view Geo-localization across Ground View Variations
  • Generalizable Facial Expression Recognition
  • GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views
  • Self-Supervised Any-Point Tracking by Contrastive Random Walks
  • MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization
  • Siamese Vision Transformers are Scalable Audio-visual Learners
  • LCM-Lookahead for Encoder-based Text-to-Image Personalization
  • Towards Architecture-Agnostic Untrained Networks Priors for Image Reconstruction with Frequency Regularization
  • Towards Open-Ended Visual Recognition with Large Language Models
  • Ray-Distance Volume Rendering for Neural Scene Reconstruction
  • ReNoise: Real Image Inversion Through Iterative Noising
  • Attention Decomposition for Cross-Domain Semantic Segmentation
  • Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation
  • Handling The Non-Smooth Challenge in Tensor SVD: A Multi-Objective Tensor Recovery Framework
  • RodinHD: High-Fidelity 3D Avatar Generation with Diffusion Models