
Computer Vision ECCV 2024 Workshops: Milan, Italy, September 29 – October 4, 2024, Proceedings, Part XIX [Paperback]

  • Format: Paperback / softback, LIV + 270 pages, height x width: 235x155 mm, 77 illustrations (73 in color, 4 black and white)
  • Series: Lecture Notes in Computer Science 15641
  • Publication date: 01-Jun-2025
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3031938054
  • ISBN-13: 9783031938054
  • Price: 78,34 €*
  • * final price, i.e. no further discounts apply
  • Regular price: 92,17 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
  • Free shipping
The multi-volume set LNCS 15623 to LNCS 15646 constitutes the proceedings of the workshops held in conjunction with the 18th European Conference on Computer Vision, ECCV 2024, which took place in Milan, Italy, during September 29 – October 4, 2024.



These LNCS volumes contain 574 accepted papers from 53 of the 73 workshops. The list of workshops and distribution of the workshop papers in the LNCS volumes can be found in the preface that is freely accessible online.
  • Practical Dataset Distillation Based on Deep Support Vectors
  • Leveraging FINCH and K-means for Enhanced Cluster-Based Instance Selection
  • GSTAM: Efficient Graph Distillation with Structural Attention-Matching
  • DiM: Distilling Dataset into Generative Model
  • Generative Dataset Distillation using Min-Max Diffusion Model
  • Data-Efficient Generation for Dataset Distillation
  • Generative Dataset Distillation Based on Diffusion Model
  • Optimizing Dataset Distillation Using DATM: Adjusting Learning Rate and Upper Bound
  • Well Begun is Half Done: The Importance of Initialization in Dataset Distillation
  • Enhancing Dataset Distillation via Label Inconsistency Elimination and Learning Pattern Refinement
  • A Spitting Image: Modular Superpixel Tokenization in Vision Transformers
  • NIGHT - Non-Line-of-Sight Imaging from indirect Time of Flight data
  • Self-accumulative Vision Transformer for Bone Age Assessment using the Sauvegrain Method
  • FastTalker: Jointly Generating Speech and Conversational Gestures from Text
  • Attend-Fusion: Efficient Audio-Visual Fusion for Video Classification
  • CMMD: Contrastive Multi-Modal Diffusion for Video-Audio Conditional Modeling
  • Unveiling Visual Biases in Audio-Visual Localization Benchmarks
  • AV-CPL: Continuous Pseudo-Labeling for Audio-Visual Speech Recognition
  • Towards Multimodal In-Context Learning for Vision & Language Models