
DiT Diffusion Transformer for Face Generation

This project implements a Diffusion Transformer (DiT) model trained on CelebA latents.

Generated Samples

(Sample image: generated_epoch_50)

The model was trained for 15 epochs using a DiT architecture with latent diffusion on compressed VAE representations of CelebA images.
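
The sketch below illustrates the general approach described above: a small DiT that patchifies VAE latents, conditions each block on the timestep via adaptive layer norm, and is trained to predict the noise added by a DDPM-style forward process. The latent shape (4x32x32), model sizes, and noise schedule are assumptions for illustration and are not taken from this repository's code.

```python
# Minimal sketch of DiT-style latent diffusion training (assumptions: PyTorch,
# CelebA images already encoded to 4x32x32 VAE latents; module names, sizes,
# and the noise schedule are illustrative, not from this repository).
import math
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """Transformer block with adaptive layer norm conditioned on the timestep."""
    def __init__(self, dim, heads):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # adaLN: per-block scale/shift/gate predicted from the timestep embedding
        self.ada = nn.Linear(dim, 6 * dim)

    def forward(self, x, t_emb):
        s1, b1, g1, s2, b2, g2 = self.ada(t_emb).unsqueeze(1).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + s1) + b1
        x = x + g1 * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + s2) + b2
        return x + g2 * self.mlp(h)

class DiT(nn.Module):
    """Patchify VAE latents, run DiT blocks, predict the added noise."""
    def __init__(self, in_ch=4, latent_size=32, patch=2, dim=384, depth=6, heads=6):
        super().__init__()
        self.patchify = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        n_tokens = (latent_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        self.t_mlp = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.blocks = nn.ModuleList(DiTBlock(dim, heads) for _ in range(depth))
        self.head = nn.Linear(dim, patch * patch * in_ch)
        self.patch, self.in_ch, self.dim = patch, in_ch, dim

    def time_embed(self, t):
        # Sinusoidal timestep embedding
        half = self.dim // 2
        freqs = torch.exp(-math.log(10000) * torch.arange(half, device=t.device) / half)
        ang = t.float()[:, None] * freqs[None]
        return torch.cat([ang.sin(), ang.cos()], dim=-1)

    def forward(self, z_t, t):
        B, C, H, W = z_t.shape
        x = self.patchify(z_t).flatten(2).transpose(1, 2) + self.pos
        t_emb = self.t_mlp(self.time_embed(t))
        for blk in self.blocks:
            x = blk(x, t_emb)
        x = self.head(x)  # (B, tokens, patch*patch*C)
        h = H // self.patch
        x = x.view(B, h, h, self.patch, self.patch, C).permute(0, 5, 1, 3, 2, 4)
        return x.reshape(B, C, H, W)  # un-patchify back to latent shape

# One training step: noise the latents at a random timestep, predict the noise.
def train_step(model, optimizer, latents, alphas_cumprod):
    t = torch.randint(0, len(alphas_cumprod), (latents.size(0),), device=latents.device)
    noise = torch.randn_like(latents)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    z_t = a.sqrt() * latents + (1 - a).sqrt() * noise
    loss = nn.functional.mse_loss(model(z_t, t), noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the `latents` batch would come from encoding CelebA images with a pretrained VAE and the trained DiT would be sampled with a standard DDPM/DDIM reverse loop; those pieces are omitted here for brevity.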
