runtime error

13/15 [00:45<00:07, 3.50s/it]
Fetching 15 files: 100%|██████████| 15/15 [00:45<00:00, 3.05s/it]
Traceback (most recent call last):
  File "app.py", line 20, in <module>
    pipe.enable_xformers_memory_efficient_attention()
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 1293, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 1318, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 1309, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/models/modeling_utils.py", line 219, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/models/modeling_utils.py", line 215, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/models/modeling_utils.py", line 212, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/home/user/.local/lib/python3.8/site-packages/diffusers/models/attention.py", line 104, in set_use_memory_efficient_attention_xformers
    raise ValueError(
ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU
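The traceback shows `app.py` calling `pipe.enable_xformers_memory_efficient_attention()` on a host where `torch.cuda.is_available()` is False; diffusers raises `ValueError` because xformers attention requires a CUDA GPU. One way to avoid the crash on CPU-only hardware is to gate the call on CUDA availability. A minimal sketch (the helper name `maybe_enable_xformers` and the `cuda_available` override are illustrative, not from the log):

```python
def maybe_enable_xformers(pipe, cuda_available=None):
    """Enable xformers memory-efficient attention only when a CUDA GPU
    is usable; on CPU-only hosts the call would raise ValueError.

    `cuda_available` can be passed explicitly for testing; by default
    it is read from torch at call time.
    """
    if cuda_available is None:
        import torch  # deferred import: torch is only needed for the runtime check
        cuda_available = torch.cuda.is_available()
    if cuda_available:
        pipe.enable_xformers_memory_efficient_attention()
        return True
    # Fall through silently: the pipeline still works on CPU,
    # just without memory-efficient attention.
    return False
```

In `app.py`, replacing the bare call at line 20 with `maybe_enable_xformers(pipe)` would let the Space start on CPU hardware instead of crashing, at the cost of slower, more memory-hungry attention.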
