MAGNeT with CPU?

#2
by mahimairaja - opened

Hi, I tried to run this on a CPU, but I got these error messages:
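For context, this is roughly the cell I ran, reconstructed from the traceback below (the checkpoint name is my assumption; it may have been a different MAGNeT variant):

```python
# Reconstructed from my notebook cell. I was on a CPU-only runtime,
# so the model ends up on the CPU (device=cpu in the error below).
from audiocraft.models import MAGNeT

model = MAGNeT.get_pretrained("facebook/magnet-small-10secs")  # checkpoint name is a guess
wav = model.generate("happy rock")

# for idx, one_wav in enumerate(wav):
#     ...
```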

----> 3 wav = model.generate("happy rock")
      4 
      5 # for idx, one_wav in enumerate(wav):

26 frames
/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py in _run_priority_list(name, priority_list, inp)
     61     for op, not_supported in zip(priority_list, not_supported_reasons):
     62         msg += "\n" + _format_not_supported_reasons(op, not_supported)
---> 63     raise NotImplementedError(msg)
     64 
     65 

NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(20, 498, 16, 64) (torch.float32)
     key         : shape=(20, 498, 16, 64) (torch.float32)
     value       : shape=(20, 498, 16, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0
`decoderF` is not supported because:
    device=cpu (supported: {'cuda'})
    attn_bias type is <class 'NoneType'>
`flshattF@…` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    device=cpu (supported: {'cuda'})
    unsupported embed per head: 64

If anyone knows how to run the model on a CPU, please help.
