LSD: Object-Centric Slot Diffusion

NeurIPS 2023 🌟Spotlight🌟

Rutgers University, KAIST

LSD achieves unsupervised learning of object-centric representations and compositional scene generation.

By integrating pre-trained diffusion models, LSD extends object-centric learning and conditional generation to unconstrained real-world objects.


Abstract

The recent success of transformer-based image generative models in object-centric learning highlights the importance of powerful image generators for handling complex scenes. However, despite the high expressiveness of diffusion models in image generation, their integration into object-centric learning remains largely unexplored. In this paper, we explore the feasibility and potential of integrating diffusion models into object-centric learning and investigate the pros and cons of this approach. We introduce Latent Slot Diffusion (LSD), a novel model that serves dual purposes: it is the first object-centric learning model to replace conventional slot decoders with a latent diffusion model conditioned on object slots, and it is also the first unsupervised compositional conditional diffusion model that operates without the need for supervised annotations like text. Through experiments on various object-centric tasks, including the first application of the FFHQ dataset in this field, we demonstrate that LSD significantly outperforms state-of-the-art transformer-based decoders, particularly in more complex scenes, and exhibits superior unsupervised compositional generation quality. In addition, we conduct a preliminary investigation into the integration of pre-trained diffusion models in LSD and demonstrate its effectiveness in real-world image segmentation and generation.


Method

Left: During training, we encode the given image both as a VQGAN latent and as a set of slots. We then add noise to the VQGAN latent and train a denoising network to predict that noise given the noisy latent and the slots. Right: With the trained model, we can generate novel images by composing a slot-based concept prompt and decoding it with the trained latent slot diffusion decoder.
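The training step above can be sketched as follows. This is a minimal NumPy sketch with dummy stand-ins: the shapes, the zero "prediction", and the function names are assumptions for illustration only; the real model uses a Slot Attention encoder, a VQGAN, and a slot-conditioned UNet denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(z0, eps, alpha_bar_t):
    # Standard DDPM forward process: z_t = sqrt(a) * z0 + sqrt(1 - a) * eps.
    return np.sqrt(alpha_bar_t) * z0 + np.sqrt(1.0 - alpha_bar_t) * eps

def epsilon_mse(eps_pred, eps):
    # Simple epsilon-prediction MSE objective for the slot-conditioned denoiser.
    return float(np.mean((eps_pred - eps) ** 2))

# Dummy stand-ins: a 16x16x4 VQGAN latent and K=7 slots of dimension 64.
z0 = rng.standard_normal((16, 16, 4))
slots = rng.standard_normal((7, 64))
eps = rng.standard_normal(z0.shape)
alpha_bar_t = 0.5  # noise level for the sampled timestep

z_t = add_noise(z0, eps, alpha_bar_t)
# A real denoiser is a UNet cross-attending to `slots`; here we only
# check that the objective is well-formed with a zero prediction.
loss = epsilon_mse(np.zeros_like(eps), eps)
```

In the actual model, the denoiser receives `z_t`, the timestep, and the slots, and the loss is backpropagated through both the denoiser and the slot encoder.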


Unsupervised Object-Centric Learning

LSD achieves tighter object boundaries, less object splitting, and cleaner background segmentation compared to other state-of-the-art techniques. These advantages are especially noticeable in more complex datasets.


Compositional Generation

LSD generates images with significantly higher fidelity and clearer details than the other methods.


Slot-Based Image Editing - Background Replacement

In LSD, replacing the background of an image can be achieved by replacing the background slot.
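The edit above amounts to a simple operation on the slot sets. A hypothetical sketch, assuming the background slot index has already been identified (e.g. from the attention masks); the edited slot set would then be decoded by the trained LSD decoder:

```python
import numpy as np

rng = np.random.default_rng(1)
K, D = 7, 64  # assumed number of slots and slot dimension
slots_src = rng.standard_normal((K, D))  # slots of the image to edit
slots_bg = rng.standard_normal((K, D))   # slots of the image supplying the background
bg_index = 0  # hypothetical: background slot located via its attention mask

# Replace only the background slot; all object slots are kept unchanged.
edited = slots_src.copy()
edited[bg_index] = slots_bg[bg_index]
```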


Slot-Based Image Editing - Object Insertion

In LSD, we can insert new objects by adding the corresponding object slot to the existing set of slots.
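In slot space, insertion is just appending a slot before decoding. A minimal sketch with assumed shapes; the inserted slot would come from encoding another image:

```python
import numpy as np

rng = np.random.default_rng(2)
slots = rng.standard_normal((7, 64))        # slots of the target scene
object_slot = rng.standard_normal((1, 64))  # slot of the object to insert

# Append the new object's slot; the enlarged set conditions the decoder.
slots_edited = np.concatenate([slots, object_slot], axis=0)
```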


Slot-Based Image Editing - Face Replacement

In LSD, we can also achieve face replacement by replacing the face-associated slot.


LSD with Pre-Trained Diffusion Models - Real-World Unsupervised Object-Centric Learning and Conditional Generation

With the strong generative capabilities of pre-trained diffusion models, LSD can scale object-centric learning up to real-world images and perform conditional generation. We show generation results with different values of classifier-free guidance.
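Classifier-free guidance combines an unconditional and a slot-conditioned noise prediction at sampling time. A sketch of the standard combination rule (the guidance scale `w` and the toy arrays are illustrative assumptions):

```python
import numpy as np

def cfg_epsilon(eps_uncond, eps_cond, w):
    # w = 1 recovers the plain slot-conditioned prediction;
    # larger w pushes samples further toward the slot condition.
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy predictions standing in for UNet outputs at one denoising step.
eps_u = np.zeros((4, 4))
eps_c = np.ones((4, 4))
guided = cfg_epsilon(eps_u, eps_c, 2.0)
```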


LSD with Pre-Trained Diffusion Models - Slot-Conditioned Generations with Unconstrained Real-World Objects

The strong generative ability of pre-trained diffusion models enables LSD to capture semantic abstractions of real-world objects, allowing the model to maintain semantic consistency while generating diverse variations across samples.


BibTeX

@inproceedings{jiang2023object,
  title = {Object-Centric Slot Diffusion},
  author = {Jiang, Jindong and Deng, Fei and Singh, Gautam and Ahn, Sungjin},
  booktitle = {Advances in Neural Information Processing Systems},
  volume = {36},
  pages = {8563--8601},
  url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/1b3ceb8a495a63ced4a48f8429ccdcd8-Paper-Conference.pdf},
  year = {2023}
}