
This repository provides GGUF v2 quantizations of the model llmware/bling-sheared-llama-1.3b-0.1.

bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct-trained on top of a Sheared-LLaMA-1.3B base model.

BLING models are fine-tuned with distilled, high-quality custom instruct datasets, targeted at a specific subset of instruct tasks. The objective is a high-quality instruct model that is 'inference-ready' on a CPU laptop, even without any advanced quantization optimizations.
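
As a quick illustration of CPU-only inference with these files, here is a minimal sketch using llama-cpp-python; the .gguf file name and thread count are placeholder assumptions, so substitute the actual quantization file you downloaded:

from llama_cpp import Llama

# Load a quantized GGUF file for CPU-only inference.
# The file name below is illustrative, not an exact file in this repo.
llm = Llama(
    model_path="bling-sheared-llama-1.3b-0.1.Q4_K_M.gguf",
    n_ctx=2048,    # context window in tokens
    n_threads=4,   # CPU threads; tune for your machine
)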

Model Description

  • Developed by: llmware
  • Model type: Instruct-trained decoder
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model: princeton-nlp/Sheared-LLaMA-1.3B

Uses

The intended use of BLING models is two-fold:

  1. Provide high-quality Instruct models that can run on a laptop for local testing. We have found it extremely useful when building a proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.

  2. Push the state of the art for smaller Instruct-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.

Prompt Format

<human>: Anything that you want to say
<bot>:

or

<human>: Context
Instruction/Question
<bot>:
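
A minimal helper that assembles this format (make_prompt is an illustrative name, not part of any library):

def make_prompt(question, context=None):
    # Wrap a question, with an optional context passage, in the
    # <human>: ... <bot>: turn format expected by BLING models.
    if context:
        return "<human>: " + context + "\n" + question + "\n" + "<bot>:"
    return "<human>: " + question + "\n" + "<bot>:"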

Direct Use

BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and legal and regulatory services. Rather than trying to be "all things to all people," BLING models focus on a narrower set of instructions better suited to a ~1B parameter GPT model.

BLING is ideal for rapid prototyping and testing, and for running an end-to-end workflow locally on a laptop without sending sensitive information over an internet-based API.

The first BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for complex instruction verbiage: provide a text passage as context, ask questions, and get clear, fact-based responses.
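
As an end-to-end sketch of that workflow, reusing the llm object and make_prompt helper from the sketches above (the passage and question are invented examples):

passage = ("Acme Corp reported total revenue of $12.4 million in fiscal "
           "year 2023, up 8% from the prior year.")

response = llm(
    make_prompt("What was Acme Corp's total revenue in fiscal 2023?",
                context=passage),
    max_tokens=100,
    stop=["<human>:"],  # stop before the model starts a new turn
)
print(response["choices"][0]["text"].strip())

Stopping on "<human>:" keeps the model from generating a follow-on turn, which small instruct models will otherwise happily do.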

Bias, Risks, and Limitations

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

GGUF Files

  • Model size: 1.35B params
  • Architecture: llama
  • Quantizations provided: 4-bit, 5-bit, 8-bit, 16-bit
