The snippet below shrinks the LLaVA-1.5 configuration and pushes a randomly initialized model to the Hub:

```python
from transformers import AutoProcessor, AutoConfig, AutoModelForVision2Seq

# Start from the full LLaVA-1.5 configuration.
config = AutoConfig.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Shrink the language model.
config.text_config.num_hidden_layers = 2
config.text_config.intermediate_size = 16
config.text_config.hidden_size = 64
config.text_config.max_position_embeddings = 64

# Shrink the vision tower.
config.vision_config.num_hidden_layers = 4
config.vision_config.intermediate_size = 16
config.vision_config.hidden_size = 64
config.vision_config.num_attention_heads = 4

# Randomly initialize a model from the tiny config (no pretrained weights),
# and reuse the original processor (tokenizer + image processor).
model = AutoModelForVision2Seq.from_config(config)
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

model_id = "trl-internal-testing/tiny-random-llava-1.5"
model.push_to_hub(model_id)
processor.push_to_hub(model_id)
```
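A back-of-the-envelope parameter count shows why the shrunken config lands at a few million parameters. This is a sketch, not a dump of the real model: the vocabulary size (32000, untied `lm_head`), the CLIP-L/14 vision tower at 336×336 (577 position slots), and the two-layer multimodal projector are assumptions taken from the LLaVA-1.5 defaults, not from the snippet above.

```python
def text_params(h=64, inter=16, layers=2, vocab=32000):
    """Rough LLaMA-style decoder count for the shrunken text_config."""
    embed = vocab * h                    # input embeddings
    attn = 4 * h * h                     # q, k, v, o projections (no biases)
    mlp = 3 * h * inter                  # gate, up, down projections
    norms = 2 * h                        # two RMSNorms per layer
    # + final norm + untied lm_head
    return embed + layers * (attn + mlp + norms) + h + vocab * h

def vision_params(h=64, inter=16, layers=4, positions=577):
    """Rough CLIP ViT encoder count for the shrunken vision_config."""
    patch_embed = 3 * 14 * 14 * h        # conv patch embedding (no bias)
    embeds = positions * h + h           # position + class embeddings
    outer_norms = 2 * 2 * h              # pre- and post-layernorm
    attn = 4 * (h * h + h)               # q, k, v, o with biases
    mlp = (h * inter + inter) + (inter * h + h)  # fc1 + fc2
    norms = 2 * 2 * h                    # two LayerNorms per layer
    return patch_embed + embeds + outer_norms + layers * (attn + mlp + norms)

def projector_params(vision_h=64, text_h=64):
    """Two bias-full linear layers mapping vision features to text space."""
    return 2 * (vision_h * text_h + text_h)

total = text_params() + vision_params() + projector_params()
print(f"~{total / 1e6:.1f}M parameters")  # roughly matches the reported 4.3M
```

Almost all of the budget sits in the two 32000×64 embedding matrices; the transformer layers themselves contribute only tens of thousands of parameters each, which is what makes the checkpoint cheap enough for unit tests.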
Downloads last month: 12,427
Model size: 4.3M params (Safetensors, F32 tensors)
Note: the serverless Inference API does not yet support transformers models for this pipeline type.