How to Train the Teacher Model for Effective Knowledge Distillation
Tight and Efficient Upper Bound on Spectral Norm of Convolutional Layers
Deciphering the Role of Representation Disentanglement: Investigating Compositional Generalization in CLIP Models
Modality Translation for Object Detection Adaptation Without Forgetting Prior Knowledge
FroSSL: Frobenius Norm Minimization for Efficient Multiview Self-Supervised Learning
Learning Multimodal Latent Generative Models with Energy-Based Prior
On Learning Discriminative Features from Synthesized Data for Self-Supervised Fine-Grained Visual Recognition
LaWa: Using Latent Space for In-Generation Image Watermarking
Hierarchical Conditioning of Diffusion Models Using Tree-of-Life for Studying Species Evolution
Markov Knowledge Distillation: Make Nasty Teachers Trained by Self-undermining Knowledge Distillation Fully Distillable
Co-speech Gesture Video Generation with 3D Human Meshes
When and How Do Negative Prompts Take Effect?
GS2Mesh: Surface Reconstruction from Gaussian Splatting via Novel Stereo Views
CARFF: Conditional Auto-encoded Radiance Field for 3D Scene Forecasting
Snuffy: Efficient Whole Slide Image Classifier
Learning to Build by Building Your Own Instructions
Exploring Active Learning in Meta-Learning: Enhancing Context Set Labeling
BlenderAlchemy: Editing 3D Graphics with Vision-Language Models
DepS: Delayed e-Shrinking for Faster Once-For-All Training
Customize-A-Video: One-Shot Motion Customization of Text-to-Video Diffusion Models