Introducing nanoLLaVA, a compact multimodal AI model that packs the capabilities of a 1B-parameter vision language model into just 5 GB of VRAM. That small footprint makes it an ideal choice for edge devices, bringing cutting-edge image understanding and text generation to consumer hardware. A minimal loading sketch follows the links below.
Model: qnguyen3/nanoLLaVA
Spaces: qnguyen3/nanoLLaVA (thanks to @merve)
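To ground the VRAM claim, here is a minimal loading sketch (an assumption-laden example, not the repo's official snippet): it loads the model in half precision through transformers with trust_remote_code enabled, since the repo ships custom multimodal code, and prints the allocated VRAM.

```python
# Minimal sketch, assuming a CUDA GPU and that the repo's remote code
# defines the multimodal model class (standard for LLaVA-style repos).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qnguyen3/nanoLLaVA"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision keeps the footprint small
    device_map="auto",
    trust_remote_code=True,      # the repo ships custom multimodal code
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Rough check of the resident weight footprint on the GPU.
print(f"allocated VRAM: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
```

As a sanity check on the numbers: a ~1B-parameter model in float16 is roughly 2 GB of weights, so a 5 GB budget leaves headroom for the vision tower, activations, and KV cache.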
Under the hood, nanoLLaVA pairs vilm/Quyen-SE-v0.1 (my Qwen1.5-0.5B finetune) as the language backbone with Google's google/siglip-so400m-patch14-384 as the vision encoder. The model is trained with a data-centric approach, prioritizing training-data quality to get the most out of a small parameter budget. A hedged inference sketch follows below.
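Since the vision encoder and language backbone are wired together LLaVA-style, image-plus-text inference looks roughly like the sketch below. This is a hedged sketch, not the official usage snippet: the `<image>` placeholder, the -200 sentinel token id, and the `process_images` helper follow the LLaVA convention and are assumptions that may differ from nanoLLaVA's actual remote-code interface.

```python
# Hedged inference sketch. Assumes `model` and `tokenizer` from the
# loading snippet above, plus a LLaVA-style remote-code interface:
# an `<image>` placeholder spliced into the prompt as a sentinel id,
# and a `process_images` helper that runs the SigLIP preprocessing.
# These names are assumptions and may differ in the actual repo.
import torch
from PIL import Image

IMAGE_TOKEN_INDEX = -200  # LLaVA-convention sentinel for the image slot

messages = [{"role": "user", "content": "<image>\nDescribe this image."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Tokenize around the <image> placeholder, then splice in the sentinel id
# where the vision features will be injected.
chunks = [tokenizer(c).input_ids for c in prompt.split("<image>")]
input_ids = torch.tensor(
    chunks[0] + [IMAGE_TOKEN_INDEX] + chunks[1], dtype=torch.long
).unsqueeze(0).to(model.device)

image = Image.open("example.jpg")
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype)

output_ids = model.generate(
    input_ids, images=image_tensor, max_new_tokens=256, use_cache=True
)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True))
```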
In the spirit of transparency and collaboration, all code and model weights are open-sourced under the Apache 2.0 license.