
TobDeBer/arco-Q4_K_M-GGUF

This model was converted to big-endian Q4_K_M GGUF format from appvoid/arco using llama.cpp, via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
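A quick way to sanity-check a converted file is to look at the GGUF header. The 4-byte magic `GGUF` and the uint32 version field come from the GGUF specification; the file path below is an arbitrary example, and the header is a synthetic sketch, not a real model:

```shell
# Minimal sketch of a GGUF header: 4-byte magic "GGUF" followed by a
# uint32 version. A little-endian file stores version 3 as 03 00 00 00;
# a big-endian conversion (as in this repo) byte-swaps every integer,
# so the same field would read 00 00 00 03.
printf 'GGUF\003\000\000\000' > /tmp/fake.gguf
head -c 4 /tmp/fake.gguf && echo
```

Real GGUF files continue with tensor and key-value counts after the version field; only the magic check is shown here.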

Container repository for CPU adaptations of inference code

Variants for Inference

Slim container

  • run standard binaries

CPUdiffusion

  • run diffusion model inference on CPU
  • include the CUDAonCPU stack

Diffusion container

  • run diffusion app.py variants
  • support CPU and CUDA
  • include Flux
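Since this container supports both CPU and CUDA, its entrypoint presumably has to pick a device at startup. A hypothetical sketch of that selection logic (the card does not describe the actual mechanism; checking for `nvidia-smi` is just one common heuristic):

```shell
# Hypothetical launcher logic for a CPU+CUDA container: fall back to CPU
# when no NVIDIA driver is visible (nvidia-smi not on PATH).
if command -v nvidia-smi >/dev/null 2>&1; then
  DEVICE=cuda
else
  DEVICE=cpu
fi
echo "running app.py on $DEVICE"
# A real entrypoint might then exec: python app.py --device "$DEVICE"
```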

Slim CUDA container

  • run CUDA binaries

Variants for Build

Llama.cpp build container

  • build llama-cli-static
  • build llama-server-static
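The static build such a container performs might look like the following sketch. The repository URL and CMake flags are assumptions based on upstream llama.cpp, not taken from this card:

```shell
# Sketch of the steps a llama.cpp build container might run.
# BUILD_SHARED_LIBS=OFF and GGML_STATIC=ON are upstream llama.cpp
# CMake options for producing statically linked binaries.
build_llama_static() {
  git clone --depth 1 https://github.com/ggml-org/llama.cpp &&
  cmake -S llama.cpp -B build -DBUILD_SHARED_LIBS=OFF -DGGML_STATIC=ON &&
  cmake --build build --config Release --target llama-cli llama-server
}
# Not invoked here; a build container would call build_llama_static at build time.
```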

sd build container

  • build sd
  • optional: build sd-server
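A comparable sketch for the sd build, assuming the `sd` binary comes from the leejet/stable-diffusion.cpp project (the card does not name the upstream repo, and the optional sd-server target is inferred from its wording):

```shell
# Sketch of an sd build step; repo URL and layout are assumptions.
# stable-diffusion.cpp uses git submodules, hence --recursive.
build_sd() {
  git clone --recursive --depth 1 https://github.com/leejet/stable-diffusion.cpp &&
  cmake -S stable-diffusion.cpp -B sd-build &&
  cmake --build sd-build --config Release
}
# Not invoked here; a build container would call build_sd at build time.
```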

CUDA build container

  • build CUDA binaries
  • support sd_cuda
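For the CUDA variant, the same llama.cpp build can be repeated with the CUDA backend switched on. `GGML_CUDA` is an upstream llama.cpp CMake option; its use here is an assumption about how this container is built:

```shell
# Sketch: llama.cpp build with the CUDA backend enabled.
# Assumes the llama.cpp sources were already cloned (see above) and
# that the CUDA toolkit is present in the build container.
build_llama_cuda() {
  cmake -S llama.cpp -B build-cuda -DGGML_CUDA=ON &&
  cmake --build build-cuda --config Release --target llama-cli llama-server
}
# Not invoked here; a CUDA build container would call build_llama_cuda.
```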
Model size: 514M params
Architecture: llama
