Model Card for sujitvasanth/TheBloke-openchat-3.5-0106-GPTQ-PEFTadapterJsonSear
Model Details
Model Description
- Developed by: Dr Sujit Vasanth
- Model type: QLoRA PEFT adapter
- Language(s) (NLP): English, JSON
- License: [More Information Needed]
- Finetuned from model: TheBloke/openchat-3.5-0106-GPTQ
Model Sources [optional]
- Repository: https://github.com/sujitvasanth/GPTQ-finetune
- Demo [optional]: https://github.com/sujitvasanth/GPTQ-finetune/blob/main/GPTQ-finetune.py
How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "TheBloke/openchat-3.5-0106-GPTQ"
adapter_id = "sujitvasanth/TheBloke-openchat-3.5-0106-GPTQ-PEFTadapterJsonSear"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=GPTQConfig(bits=4, disable_exllama=False),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model.load_adapter(adapter_id)  # is_trainable=True to continue fine-tuning
```
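Once the adapter is loaded, prompts should follow the base model's chat format. As a hedged sketch (the "GPT4 Correct ..." role tags and `<|end_of_turn|>` separator follow the upstream openchat-3.5 convention; confirm against the tokenizer's own chat template before relying on it):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a single user turn in the OpenChat-3.5 prompt format."""
    return (
        f"GPT4 Correct User: {user_message}<|end_of_turn|>"
        f"GPT4 Correct Assistant:"
    )

# Example prompt for the JSON-search task this adapter was trained on
prompt = build_prompt('find the entry where "name" is "desk"')
print(prompt)
```

The resulting string can be tokenized and passed to `model.generate()` as usual.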
Training Details
Training Data
https://huggingface.co/datasets/sujitvasanth/jsonsearch2 — User/Assistant example pairs of JSON search queries.
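To illustrate the task the dataset targets (the exact record schema below is hypothetical, not taken from the dataset): the user supplies a JSON array plus a query, and the assistant returns the matching records. In plain Python, the intended behaviour looks like:

```python
import json

# Hypothetical JSON payload of the kind a user turn might contain
records = json.loads('[{"name": "lamp", "price": 10}, {"name": "desk", "price": 40}]')

def search(records: list, key: str, value) -> list:
    """Return every record whose `key` field equals `value`."""
    return [r for r in records if r.get(key) == value]

print(search(records, "name", "desk"))  # → [{'name': 'desk', 'price': 40}]
```

The fine-tuned adapter is trained to produce this kind of filtered-record answer directly from a natural-language query.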
Training Procedure
QLoRA PEFT fine-tuning on the custom JSON-search dataset above.
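A minimal sketch of a QLoRA-style PEFT configuration for a GPTQ-quantized base model. All hyperparameters (`r`, `lora_alpha`, dropout, target modules) are illustrative assumptions, not the values used to train this adapter:

```python
from peft import LoraConfig, get_peft_model

# Illustrative LoRA settings; the real run's hyperparameters are not documented here
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# model = get_peft_model(model, lora_config)  # wrap the quantized base model
# ...then train with transformers.Trainer (or trl's SFTTrainer) on the dataset
```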
Preprocessing [optional]
[More Information Needed]
Training Hyperparameters
- Training regime: [More Information Needed]
Speeds, Sizes, Times [optional]
[More Information Needed]
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
Hardware
[More Information Needed]
Software
[More Information Needed]
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Model Card Authors [optional]
[More Information Needed]
Model Card Contact
[More Information Needed]
Framework versions
- PEFT 0.8.2
Model tree for sujitvasanth/TheBloke-openchat-3.5-0106-GPTQ-PEFTadapterJsonSear
Base model: mistralai/Mistral-7B-v0.1