---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---

## Model Summary

MobiLlama-05B is a Small Language Model with **0.5 billion** parameters. It was trained on the Amber data sources ([Amber-Dataset](https://huggingface.co/datasets/LLM360/AmberDatasets)).

## How to Use

MobiLlama-05B has been integrated into the development version (4.37.0.dev) of `transformers`. Until the official version is released through `pip`, ensure that you do one of the following:

* When loading the model, pass `trust_remote_code=True` as an argument to the `from_pretrained()` function.

* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. This command is an alternative to cloning and installing from source.

You can verify the installed `transformers` version with `pip list | grep transformers`.

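The loading steps above can be sketched as follows. This is a minimal sketch, not an official snippet: the repository id `MBZUAI/MobiLlama-05B` is an assumption, so verify the exact name on the Hugging Face Hub before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; confirm the exact name on the Hugging Face Hub.
MODEL_ID = "MBZUAI/MobiLlama-05B"


def generate(prompt: str, max_new_tokens: int = 50) -> str:
    # trust_remote_code=True lets transformers load the custom model code
    # shipped inside the repository, as required by the note above.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("What is a Small Language Model?"))
```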
## Intended Uses

Given the nature of the training data, MobiLlama-05B is best suited for prompts using the QA format, the chat format, and the code format.
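As a rough illustration, the three formats might look like the templates below. The exact delimiters (`Instruct:`/`Output:`, speaker names) are assumptions borrowed from similar model cards, not specified in this document.

```python
# Hypothetical prompt templates for the three formats named above.
# Delimiters are assumptions, not taken from this model card.

qa_prompt = "Instruct: What is a Small Language Model?\nOutput:"

chat_prompt = (
    "Alice: I struggle to stay focused while studying. Any tips?\n"
    "Bob:"
)

code_prompt = '''def fibonacci(n):
    """Return the n-th Fibonacci number."""
'''
```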