AutoencoderKLCogVideoX
The 3D variational autoencoder (VAE) model with KL loss used in CogVideoX was introduced in CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer by Tsinghua University & ZhipuAI.
The model can be loaded with the following code snippet.
import torch
from diffusers import AutoencoderKLCogVideoX

vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX-2b", subfolder="vae", torch_dtype=torch.float16).to("cuda")
AutoencoderKLCogVideoX
class diffusers.AutoencoderKLCogVideoX
< source >( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('CogVideoXDownBlock3D', 'CogVideoXDownBlock3D', 'CogVideoXDownBlock3D', 'CogVideoXDownBlock3D') up_block_types: Tuple = ('CogVideoXUpBlock3D', 'CogVideoXUpBlock3D', 'CogVideoXUpBlock3D', 'CogVideoXUpBlock3D') block_out_channels: Tuple = (128, 256, 256, 512) latent_channels: int = 16 layers_per_block: int = 3 act_fn: str = 'silu' norm_eps: float = 1e-06 norm_num_groups: int = 32 temporal_compression_ratio: float = 4 sample_height: int = 480 sample_width: int = 720 scaling_factor: float = 1.15258426 shift_factor: Optional = None latents_mean: Optional = None latents_std: Optional = None force_upcast: float = True use_quant_conv: bool = False use_post_quant_conv: bool = False )
Parameters
- in_channels (int, optional, defaults to 3) — Number of channels in the input image.
- out_channels (int, optional, defaults to 3) — Number of channels in the output.
- down_block_types (Tuple[str], optional, defaults to ("CogVideoXDownBlock3D", "CogVideoXDownBlock3D", "CogVideoXDownBlock3D", "CogVideoXDownBlock3D")) — Tuple of downsample block types.
- up_block_types (Tuple[str], optional, defaults to ("CogVideoXUpBlock3D", "CogVideoXUpBlock3D", "CogVideoXUpBlock3D", "CogVideoXUpBlock3D")) — Tuple of upsample block types.
- block_out_channels (Tuple[int], optional, defaults to (128, 256, 256, 512)) — Tuple of block output channels.
- act_fn (str, optional, defaults to "silu") — The activation function to use.
- sample_height (int, optional, defaults to 480) — Sample input height.
- sample_width (int, optional, defaults to 720) — Sample input width.
- scaling_factor (float, optional, defaults to 1.15258426) — The component-wise standard deviation of the trained latent space, computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula z = z * scaling_factor before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula z = 1 / scaling_factor * z (a short sketch follows this list). For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper.
- force_upcast (bool, optional, defaults to True) — If enabled, the VAE is forced to run in float32 for high-resolution pipelines, such as SD-XL. The VAE can be fine-tuned or trained to a lower range without losing too much precision, in which case force_upcast can be set to False; see https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
A VAE model with KL loss for encoding images into latents and decoding latent representations into images. Used in CogVideoX.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
disable_slicing
Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.
disable_tiling
Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.
enable_slicing
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
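A small usage sketch for slicing, assuming latents is a latent batch produced elsewhere (for example by a CogVideoX pipeline):

# Decode the batch one sample at a time to lower peak memory
vae.enable_slicing()
frames = vae.decode(latents).sample

# Go back to decoding the whole batch in one step
vae.disable_slicing()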
enable_tiling
< source >( tile_sample_min_height: Optional = None tile_sample_min_width: Optional = None tile_overlap_factor_height: Optional = None tile_overlap_factor_width: Optional = None )
Parameters
- tile_sample_min_height (int, optional) — The minimum height required for a sample to be separated into tiles across the height dimension.
- tile_sample_min_width (int, optional) — The minimum width required for a sample to be separated into tiles across the width dimension.
- tile_overlap_factor_height (float, optional) — The minimum amount of overlap between two consecutive vertical tiles. This ensures that no tiling artifacts are produced across the height dimension. Must be between 0 and 1. Setting a higher value might cause more tiles to be processed, which can slow down decoding.
- tile_overlap_factor_width (float, optional) — The minimum amount of overlap between two consecutive horizontal tiles. This ensures that no tiling artifacts are produced across the width dimension. Must be between 0 and 1. Setting a higher value might cause more tiles to be processed, which can slow down decoding.
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
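A hedged usage sketch for tiling; the tile sizes and overlap factors below are illustrative values, not recommended settings:

# Split large frames into overlapping tiles during decoding
vae.enable_tiling(
    tile_sample_min_height=256,
    tile_sample_min_width=256,
    tile_overlap_factor_height=0.25,
    tile_overlap_factor_width=0.25,
)
frames = vae.decode(latents).sample

# Or fall back to the defaults, or turn tiling off again
vae.enable_tiling()
vae.disable_tiling()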
tiled_decode
< source >( z: Tensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple
Parameters
- z (torch.Tensor) — Input batch of latent vectors.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple.
Returns
~models.vae.DecoderOutput or tuple
If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is returned.
Decode a batch of images using a tiled decoder.
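Tiled decoding is normally reached through decode() once enable_tiling() has been called, but the method can also be invoked directly. A minimal sketch, with z standing in for a batch of latents:

vae.enable_tiling()

# Returns a ~models.vae.DecoderOutput by default
frames = vae.tiled_decode(z, return_dict=True).sample

# With return_dict=False a plain tuple is returned instead
(frames,) = vae.tiled_decode(z, return_dict=False)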
tiled_encode
< source >( x: Tensor ) → torch.Tensor
Encode a batch of images using a tiled encoder.
When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several steps. This is useful to keep memory use constant regardless of image size. The result of tiled encoding differs from non-tiled encoding because each tile is encoded independently. To avoid tiling artifacts, the tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the output, but they should be much less noticeable.
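Likewise, tiled encoding is normally reached through encode() after enable_tiling(); the direct call returns a plain tensor, per the signature above. A sketch, with video standing in for a 5D pixel-space batch:

vae.enable_tiling()

# Usual route: encode() typically switches to tiled encoding for inputs larger than the tile size
posterior = vae.encode(video).latent_dist

# Direct route: returns the raw encoder output as a torch.Tensor
enc = vae.tiled_encode(video)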
AutoencoderKLOutput
class diffusers.models.modeling_outputs.AutoencoderKLOutput
< source >( latent_dist: DiagonalGaussianDistribution )
Output of AutoencoderKL encoding method.
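A brief, hedged sketch of how the encode output is consumed (video again stands in for a pixel-space batch):

out = vae.encode(video)        # AutoencoderKLOutput
posterior = out.latent_dist    # DiagonalGaussianDistribution
z = posterior.sample()         # stochastic latents
z_mode = posterior.mode()      # deterministic latents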
DecoderOutput
class diffusers.models.autoencoders.vae.DecoderOutput
< source >( sample: Tensor commit_loss: Optional = None )
Output of decoding method.
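And the decoding counterpart, with z standing in for a latent batch. Note that commit_loss is populated by quantized autoencoders and is typically None for this KL model (an assumption based on the KL formulation, not stated above):

out = vae.decode(z)     # DecoderOutput
frames = out.sample     # decoded video tensor
print(out.commit_loss)  # typically None here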