
MalayaLLM: Gemma [മലയാളം/Malayalam]

Baby MalayaLLM

Introducing the Developer:

Discover the mind behind this model and stay updated on their contributions to the field: https://www.linkedin.com/in/vishnu-prasad-j/

Model description

The MalayaLLM models have been improved and customized for Malayalam, expanding upon the groundwork laid by the original Gemma model.

Model Update

The latest Gemma-2-9B trained model is available here: MalayaLLM:Gemma-2-9B

How to run GGUF

  • llama.cpp Web Server

    • The web server is a lightweight HTTP server that can be used to serve local models and easily connect them to existing clients.
  • Building llama.cpp

    • Clone and build llama.cpp from source, for example:
      git clone https://github.com/ggerganov/llama.cpp
      cd llama.cpp
      cmake -B build
      cmake --build build --config Release

  • Running llama.cpp as a Web Server

    • Once you have built llama.cpp, you can run it as a web server. Below is an example of how to start the server:
      llama-server.exe -m gemma_7b_instruction.Q4_K_M.gguf -ngl 42 -c 128 -n 100
    • Here -m selects the GGUF model file, -ngl sets the number of layers to offload to the GPU, -c sets the context size, and -n limits the number of tokens to predict.

  • Accessing the Web UI

    • After starting the server, you can access the basic web UI via your browser at the following address: http://localhost:8080
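
The server also exposes a JSON API alongside the web UI. As a minimal sketch, assuming the server started above is listening on localhost:8080 and using llama.cpp's /completion endpoint, a stdlib-only Python client could look like this (the prompt and parameter values are illustrative):

```python
import json
import urllib.request

# Assumed server address from the example above.
SERVER_URL = "http://localhost:8080/completion"


def build_payload(prompt: str, n_predict: int = 100) -> dict:
    """Build the JSON body the llama.cpp /completion endpoint expects."""
    return {"prompt": prompt, "n_predict": n_predict}


def complete(prompt: str, n_predict: int = 100) -> str:
    """POST the prompt to the server and return the generated text."""
    body = json.dumps(build_payload(prompt, n_predict)).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The server returns the generated text in the "content" field.
        return json.loads(resp.read())["content"]


# Example usage (requires the server to be running):
#   print(complete("മലയാളത്തിൽ ഒരു വാചകം എഴുതുക."))
```

This avoids any extra dependencies; the same request can of course be sent with curl or any HTTP library instead.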

Made Using UNSLOTH

Thanks to Unsloth, the process of fine-tuning large language models (LLMs) has become much easier and more efficient.

🌟Happy coding💻🌟

Model details

  • Format: GGUF (4-bit)
  • Model size: 8.54B params
  • Architecture: gemma

