---
pipeline_tag: text-to-video
license: apache-2.0
---
# LongAnimateDiff

[Sapir Weissbuch](https://github.com/SapirW), [Naomi Ken Korem](https://github.com/Naomi-Ken-Korem), [Daniel Shalem](https://github.com/dshalem), [Yoav HaCohen](https://github.com/yoavhacohen) | Lightricks Research
We are pleased to release the "LongAnimateDiff" model, which has been trained to generate videos with a variable frame count, ranging from 16 to 64 frames.
This model is compatible with the original AnimateDiff model.
We release two models:
1. The LongAnimateDiff model, capable of generating videos with frame counts ranging from 16 to 64. For optimal results, we recommend using a motion scale of 1.28.
2. A specialized model designed to generate 32-frame videos. This model typically produces higher-quality videos than the LongAnimateDiff model that supports 16-64 frames. For best results, use a motion scale of 1.15.
The original AnimateDiff model can be found here: https://huggingface.co/guoyww/animatediff
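
Below is a minimal inference sketch using the Diffusers `AnimateDiffPipeline`, not an official recipe from this repository. The repository path `Lightricks/LongAnimateDiff`, the base checkpoint `runwayml/stable-diffusion-v1-5`, and loading the motion module via `MotionAdapter.from_pretrained` are assumptions: the weights must be available in Diffusers motion-adapter format, and how the recommended motion scale (1.28 or 1.15) is applied depends on your inference framework and is not shown here.

```python
# Hypothetical usage sketch: running LongAnimateDiff through Diffusers.
# Repo path and base model below are placeholders / assumptions, not confirmed names.
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the long-range motion module (assumes Diffusers-format weights).
adapter = MotionAdapter.from_pretrained(
    "Lightricks/LongAnimateDiff", torch_dtype=torch.float16
)

# Any Stable Diffusion 1.5-based checkpoint compatible with AnimateDiff.
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
pipe.scheduler = DDIMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="scheduler",
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
)
pipe.enable_vae_slicing()
pipe.to("cuda")

# Request up to 64 frames; with the specialized 32-frame model, use num_frames=32.
output = pipe(
    prompt="a corgi running on the beach, golden hour, cinematic",
    negative_prompt="low quality, blurry",
    num_frames=64,
    guidance_scale=7.5,
    num_inference_steps=25,
)
export_to_gif(output.frames[0], "long_animatediff.gif")
```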