---
title: Seine
emoji: 😊
colorFrom: pink
colorTo: pink
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
---


# SEINE
This repository is the official implementation of [SEINE](https://arxiv.org/abs/2310.20700).

**[SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction](https://arxiv.org/abs/2310.20700)**

[arXiv Report](https://arxiv.org/abs/2310.20700) | [Project Page](https://vchitect.github.io/SEINE-project/)

<img src="seine.gif" width="800">


## Setup for Inference

### Prepare Environment
```bash
conda env create -f env.yaml
conda activate seine
```
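Once the environment resolves, a quick sanity check (a sketch, not a repo script; it assumes `env.yaml` installs PyTorch, which the inference scripts require) confirms that PyTorch imports and a GPU is visible:
```bash
# A sketch: verify PyTorch is installed in the env and CUDA is available.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```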

### Download our model and the T2I base model
Download our model checkpoint from [Google Drive](https://drive.google.com/drive/folders/1cWfeDzKJhpb0m6HA5DoMOH0_ItuUY95b?usp=sharing) and save it to the `pre-trained` directory.
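If you prefer the command line, the shared folder can be fetched with the third-party `gdown` tool (an assumption, not part of this repo; the folder ID comes from the link above):
```bash
# gdown is a third-party downloader for Google Drive shares; install it first.
pip install gdown
# Download the shared folder into ./pre-trained.
gdown --folder "https://drive.google.com/drive/folders/1cWfeDzKJhpb0m6HA5DoMOH0_ItuUY95b" -O pre-trained
```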


Our model is based on Stable Diffusion v1.4; download [Stable Diffusion v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) into the same `pre-trained` directory.
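One way to fetch those weights is to clone the model repo (a sketch; it assumes `git-lfs` is installed and that you have accepted the model license on the hub if required):
```bash
# Clone the Stable Diffusion v1.4 weights into the pre-trained directory.
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 pre-trained/stable-diffusion-v1-4
```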

Now under `./pre-trained`, you should see the following layout:
```
pre-trained
├── seine.pt
└── stable-diffusion-v1-4
    ├── ...
    └── ...
```
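Before running inference, you can check that the checkpoint is where the configs expect it and loads cleanly (again a sketch, not a repo script):
```bash
# A sketch: load the checkpoint on CPU just to confirm the file is intact.
python -c "import torch; ckpt = torch.load('pre-trained/seine.pt', map_location='cpu'); print(type(ckpt))"
```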

#### Inference for I2V 
```bash
python sample_scripts/with_mask_sample.py --config configs/sample_i2v.yaml
```
The generated video will be saved in `./results/i2v`.

#### Inference for Transition
```bash
python sample_scripts/with_mask_sample.py --config configs/sample_transition.yaml
```
The generated video will be saved in `./results/transition`.



#### More Details
You can modify `./configs/sample_mask.yaml` to change the generation conditions. For example:
- `ckpt` specifies the model checkpoint to load.
- `text_prompt` describes the content of the video.
- `input_path` specifies the path to the input image.
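For illustration, a minimal sketch of such a config: the three keys come from this README, while the values are hypothetical placeholders (the shipped configs define the full schema):
```yaml
ckpt: "pre-trained/seine.pt"                      # checkpoint to load
text_prompt: "a red panda walking through snow"   # hypothetical prompt
input_path: "inputs/example.png"                  # hypothetical conditioning image
```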


## BibTeX
```bibtex
@article{chen2023seine,
  title={SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction},
  author={Chen, Xinyuan and Wang, Yaohui and Zhang, Lingjun and Zhuang, Shaobin and Ma, Xin and Yu, Jiashuo and Wang, Yali and Lin, Dahua and Qiao, Yu and Liu, Ziwei},
  journal={arXiv preprint arXiv:2310.20700},
  year={2023}
}
```