
Deforum Stable Diffusion (V0.5) Local Version

This is a local implementation of Deforum Stable Diffusion V0.5 that supports a JSON settings file. It works with all Stable Diffusion models, including v1-5-pruned.ckpt.

Example animated videos: these example videos were generated using Deforum 0.5 and the Stable Diffusion 1.5 checkpoint (v1-5-pruned.ckpt). The settings for these examples are available in the "examples" folder.

Videos generated using this script: watch these videos on YouTube; they were generated using Deforum Stable Diffusion V0.5. I built this script primarily to generate these kinds of videos.

This script is based on the Colab code by deforum (v0.5). I have tested it on Ubuntu 22.04 with an NVIDIA RTX 3090 Ti.

Installation

You can use an Anaconda environment to run this on your local machine:

conda create --name dsdv0.5 python=3.8.5 -y
conda activate dsdv0.5

Then cd into the cloned folder and run the setup script:

python setup.py

Manually Download the 3 Model Files

Most of these files will be automatically downloaded during your first run. If not, you can download them manually.

  • You need to get the v1-5-pruned.ckpt file and place it in the ./models folder. It can be downloaded from Hugging Face.
  • Additionally, you should place dpt_large-midas-2f21e586.pt in the ./models folder as well; the download link is here.
  • One more file, AdaBins_nyu.pt, should be downloaded into the ./pretrained folder; the download link is here.
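If you want to confirm that all three files landed in the right place before running, a quick check like the following can help. This is only a sketch; the filenames and folders are taken from the list above:

```python
from pathlib import Path

# The three model files described above, in their expected locations.
REQUIRED_FILES = [
    Path("./models/v1-5-pruned.ckpt"),
    Path("./models/dpt_large-midas-2f21e586.pt"),
    Path("./pretrained/AdaBins_nyu.pt"),
]

def missing_models(paths=REQUIRED_FILES):
    """Return the subset of paths that do not exist on disk."""
    return [p for p in paths if not p.is_file()]

if __name__ == "__main__":
    for p in missing_models():
        print(f"missing: {p} -- download it manually before running")
```

If the script prints nothing, all three files are in place.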

How to use it?

The run command should look like this:

python run.py --settings "./settings/animation_settings.json" --generate_video true

The output will be available in the ./output folder.

Required variables and prompts for Deforum Stable Diffusion are set in the JSON file found in the settings folder. I have also provided the settings for the example videos in the "examples" folder.
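For orientation, a settings file is a plain JSON object mapping setting names to values. The fragment below is only an illustrative sketch; the key names shown here are assumptions, so treat the files shipped in the "examples" folder as the authoritative reference for the actual keys and their expected values:

```json
{
  "W": 512,
  "H": 512,
  "animation_mode": "3D",
  "max_frames": 200,
  "prompts": ["a beautiful forest, trending on artstation"]
}
```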

Thanks to

Enjoy!