
E-book: Computer Vision - ECCV 2024: 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part LIX

  • Format - PDF+DRM
  • Price: 80.26 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not permitted
  • Printing: not permitted

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The multi-volume set of LNCS books with volume numbers 15059 up to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29–October 4, 2024.

The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. They deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; motion estimation.

  • Kernel Diffusion: An Alternate Approach to Blind Deconvolution
  • MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty
  • Discovering Novel Actions from Open World Egocentric Videos with Object-Grounded Visual Commonsense Reasoning
  • Bidirectional Progressive Transformer for Interaction Intention Anticipation
  • Reinforcement Learning Meets Visual Odometry
  • Bucketed Ranking-based Losses for Efficient Training of Object Detectors
  • Robustness Tokens: Towards Adversarial Robustness of Transformers
  • RSL-BA: Rolling Shutter Line Bundle Adjustment
  • DecentNeRFs: Decentralized Neural Radiance Fields from Crowdsourced Images
  • DreamMesh: Jointly Manipulating and Texturing Triangle Meshes for Text-to-3D Generation
  • Unveiling Typographic Deceptions: Insights of the Typographic Vulnerability in Large Vision-Language Models
  • N2F2: Hierarchical Scene Understanding with Nested Neural Feature Fields
  • ConceptExpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction
  • PairingNet: A Learning-based Pair-searching and -matching Network for Image Fragments
  • Skeleton-based Group Activity Recognition via Spatial-Temporal Panoramic Graph
  • Towards Multimodal Open-Set Domain Generalization and Adaptation through Self-supervision
  • ReCON: Training-Free Acceleration for Text-to-Image Synthesis with Retrieval of Concept Prompt Trajectories
  • AMES: Asymmetric and Memory-Efficient Similarity Estimation for Instance-level Retrieval
  • TCAN: Animating Human Images with Temporally Consistent Pose Guidance using Diffusion Models
  • 3D Hand Sequence Recovery from Real Blurry Images and Event Stream
  • GlobalPointer: Large-Scale Plane Adjustment with Bi-Convex Relaxation
  • Dissolving Is Amplifying: Towards Fine-Grained Anomaly Detection
  • StyleCity: Large-Scale 3D Urban Scenes Stylization
  • ViG-Bias: Visually Grounded Bias Discovery and Mitigation
  • DiffBIR: Toward Blind Image Restoration with Generative Diffusion Prior
  • Assessing Sample Quality via the Latent Space of Generative Models
  • Relightable Neural Actor with Intrinsic Decomposition and Pose Control