
dolphin-2.9.4-llama-3.1-8b-ov

dolphin-2.9.4-llama-3.1-8b-ov is an OpenVINO int4 quantized version of dolphin-2.9.4-llama-3.1-8b, providing a fast inference implementation optimized for AI PCs using Intel GPU, CPU and NPU.

dolphin-2.9.4-llama-3.1-8b is a leading open-source Dolphin fine-tune built on top of the Llama 3.1 base model.
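
Example Usage

For reference, below is a minimal usage sketch using the optimum-intel OpenVINO integration. The repository id, device choice, chat template, and prompt are illustrative assumptions and may need adjusting for your setup.

```python
# Minimal sketch, assuming the model is published as an OpenVINO IR package
# under the repository id below (an assumption for illustration).
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "llmware/dolphin-2.9.4-llama3.1-8b-ov"  # assumed repository id

# Load the int4 OpenVINO model; defaults to CPU execution.
model = OVModelForCausalLM.from_pretrained(model_id)
# model.to("GPU")  # optionally target an Intel GPU instead of the CPU
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Dolphin models are chat-tuned, so format the prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain int4 quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```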

Model Description

  • Developed by: Cognitive Computations
  • Quantized by: llmware
  • Model type: llama-3.1
  • Parameters: 8 billion
  • Model Parent: cognitivecomputations/dolphin-2.9.4-llama-3.1-8b
  • Language(s) (NLP): English
  • License: Llama 3.1 Community License
  • Uses: General chat use cases
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

Model Card Contact

llmware on github

llmware on hf

llmware website
