DinoV2-SigLIP-Phi3(LoRA) VLM
- Vision Encoder - DinoV2 + SigLIP @384px resolution. Why 2 vision encoders? DinoV2 provides strong spatial and geometric visual features, while SigLIP provides language-aligned semantic features; combining the two gives the language model a richer visual representation than either encoder alone.
- Connector - MLP (DinoV2 and SigLIP features are concatenated and then projected into the Phi3 representation space)
- Language Model - Phi3 + LoRA
- Pre-train (Align) Dataset - LLaVA-CC3M-Pretrain-595K
- Fine-tune (Instruction) Dataset - LLaVA-v1.5-Instruct + LRV-Instruct
Scripts to build and train the models are available at NMS05/DinoV2-SigLIP-Phi3-LoRA-VLM.
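The connector described above can be sketched as a small PyTorch module. This is an illustrative sketch, not the exact implementation from the repo: the class name `MLPConnector`, the two-layer MLP depth, and the feature dimensions (1024 for DinoV2-L, 1152 for SigLIP-SO400M, 3072 for Phi-3-mini) are assumptions chosen to show the concat-then-project idea.

```python
import torch
import torch.nn as nn


class MLPConnector(nn.Module):
    """Hypothetical sketch: fuse DinoV2 and SigLIP patch features,
    then project them into the Phi3 token embedding space."""

    def __init__(self, dino_dim=1024, siglip_dim=1152, phi3_dim=3072):
        super().__init__()
        # Project the concatenated vision features to the LM hidden size.
        self.proj = nn.Sequential(
            nn.Linear(dino_dim + siglip_dim, phi3_dim),
            nn.GELU(),
            nn.Linear(phi3_dim, phi3_dim),
        )

    def forward(self, dino_feats, siglip_feats):
        # dino_feats: (B, N, dino_dim), siglip_feats: (B, N, siglip_dim)
        # Concatenate along the channel dimension, then project.
        fused = torch.cat([dino_feats, siglip_feats], dim=-1)
        return self.proj(fused)  # (B, N, phi3_dim) visual tokens for Phi3


connector = MLPConnector()
tokens = connector(torch.randn(2, 576, 1024), torch.randn(2, 576, 1152))
print(tokens.shape)
```

The resulting `(B, N, 3072)` visual tokens can be prepended to the Phi3 text embeddings, so the (LoRA-adapted) language model attends over image and text jointly.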