LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct

Repository Overview

This repository hosts LLaVA v1.5 trained with Meta-Llama-3-8B-Instruct as its language model. The released checkpoint is fully fine-tuned (8.35B parameters, stored as BF16 safetensors) and combines LLaVA's visual instruction tuning with Llama 3's instruction-following ability to offer advanced vision-language understanding.
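
Standard Hugging Face pipelines do not understand the custom "llava" architecture, so loading typically goes through the LLaVA codebase. Below is a minimal loading sketch in Python, assuming the LLaVA++ code (https://github.com/mbzuai-oryx/LLaVA-pp, which builds on the original LLaVA package) is installed; the repository id is the only value taken from this card.

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT"
# Returns the tokenizer, the multimodal model, the CLIP image processor,
# and the maximum context length in one call.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,  # the FT checkpoint is self-contained; no base model needed
    model_name=get_model_name_from_path(model_path),
)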

Training Strategy

  • Pretraining: only the vision-to-language projector is trained; the rest of the model is frozen.
  • Fine-tuning: all model parameters, including the LLM, are fine-tuned; only the vision backbone (CLIP) is kept frozen (see the sketch after this list).
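
The two stages differ only in which parameters receive gradients. The Python sketch below illustrates the scheme; the attribute names vision_tower and mm_projector mirror the LLaVA codebase but are used here as illustrative assumptions, not a guaranteed API.

import torch.nn as nn

def set_trainable(model: nn.Module, stage: str) -> None:
    if stage == "pretrain":
        # Stage 1: freeze everything, then unfreeze only the projector.
        for p in model.parameters():
            p.requires_grad = False
        for p in model.mm_projector.parameters():
            p.requires_grad = True
    elif stage == "finetune":
        # Stage 2: train everything except the CLIP vision backbone.
        for p in model.parameters():
            p.requires_grad = True
        for p in model.vision_tower.parameters():
            p.requires_grad = False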

Key Components

Training Data

Download the Model

git lfs install
git clone https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT
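
If git-lfs is not available, the huggingface_hub Python package offers an equivalent download path (this assumes pip install huggingface_hub; only the repository id comes from this card):

from huggingface_hub import snapshot_download

# Downloads all files into the local Hugging Face cache and returns the path.
local_dir = snapshot_download(repo_id="MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT")
print(local_dir)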

Contributions

Contributions are welcome! Please 🌟 our repository LLaVA++ if you find this model useful.

