
Model Card for MistralSQL-7B

Model Information

  • Model Name: MistralSQL-7B
  • Base Model Name: mistralai/Mistral-7B-Instruct-v0.1
  • Base Model URL: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
  • Dataset Name: bugdaryan/sql-create-context-instruction
  • Dataset URL: https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction
  • Dataset Description: This dataset builds on SQL Create Context, which is sourced from WikiSQL and Spider. It provides 78,577 examples, each pairing a natural-language question with SQL CREATE TABLE statements and the SQL query that answers the question using those CREATE statements as context.

Model Parameters

  • LoRA Attention Dimension: 64
  • LoRA Alpha Parameter: 16
  • LoRA Dropout Probability: 0.1
  • Bitsandbytes Parameters:
    • Activate 4-bit precision base model loading: True
    • Compute dtype for 4-bit base models: float16
    • Quantization type (fp4 or nf4): nf4
    • Activate nested quantization for 4-bit base models: False
  • TrainingArguments Parameters:
    • Output directory: "./results"
    • Number of training epochs: 1
    • Enable fp16/bf16 training: fp16 = False, bf16 = True
    • Batch size per GPU for training: 80
    • Batch size per GPU for evaluation: 4
    • Gradient accumulation steps: 1
    • Enable gradient checkpointing: True
    • Maximum gradient norm (gradient clipping): 0.3
    • Initial learning rate (AdamW optimizer): 2e-4
    • Weight decay: 0.001
    • Optimizer: paged_adamw_32bit
    • Learning rate schedule: cosine
    • Number of training steps (overrides num_train_epochs): -1
    • Ratio of steps for a linear warmup: 0.03
    • Group sequences into batches with the same length: True
    • Save checkpoint every X update steps: 0
    • Log every X update steps: 10
  • SFT Parameters:
    • Maximum sequence length: 500
    • Packing: False
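The hyperparameters listed above can be collected into the standard `transformers`, `bitsandbytes`, and `peft` configuration objects. The sketch below is illustrative, reconstructed from the parameter list rather than taken from the released training script (in particular, the LoRA `target_modules` are left at the library default, since the card does not specify them):

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization settings (Bitsandbytes Parameters above)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)

# LoRA settings (attention dimension, alpha, dropout)
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)

# TrainingArguments Parameters above, mapped to their argument names
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    fp16=False,
    bf16=True,
    per_device_train_batch_size=80,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    max_steps=-1,            # -1 defers to num_train_epochs
    warmup_ratio=0.03,
    group_by_length=True,
    save_steps=0,
    logging_steps=10,
)
```

These objects would then be passed to `trl`'s `SFTTrainer` together with the SFT parameters (`max_seq_length=500`, `packing=False`).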

Inference Parameters

  • Temperature: 0.7

Hardware and Software

  • Training Hardware: 2 RTX A6000 48GB GPUs

License

  • Apache-2.0

Instruction Format

To leverage instruction fine-tuning, prompts should be wrapped in [INST] and [/INST] tokens. The first instruction should begin with the begin-of-sentence (BOS) token id; subsequent instructions should not. Each assistant generation is terminated by the end-of-sentence (EOS) token id.
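This convention can be sketched as a small helper. The `build_prompt` function below is hypothetical (not part of the released code); note that the BOS token is not written into the string because the tokenizer inserts it automatically when encoding with special tokens enabled:

```python
def build_prompt(turns):
    """Wrap each (instruction, answer) turn in [INST] ... [/INST] tokens.

    The tokenizer adds the begin-of-sentence token once at the start,
    so it is not written here. Each completed assistant answer ends
    with the EOS token </s>; the final turn's answer is None so the
    model continues generating from the open [/INST].
    """
    parts = []
    for instruction, answer in turns:
        parts.append(f"[INST] {instruction} [/INST]")
        if answer is not None:
            parts.append(f" {answer}</s>")
    return "".join(parts)

prompt = build_prompt([("Write a query counting rows in sales.", None)])
```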

For example:

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    pipeline,
)

model_name = 'bugdaryan/MistralSQL-7b'

# Load the model and tokenizer, sharding the model across available GPUs
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_name)

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)

table = """
CREATE TABLE sales (
    sale_id number PRIMARY KEY,
    product_id number,
    customer_id number,
    salesperson_id number,
    sale_date DATE,
    quantity number,
    FOREIGN KEY (product_id) REFERENCES products(product_id),
    FOREIGN KEY (customer_id) REFERENCES customers(customer_id),
    FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id)
);
CREATE TABLE product_suppliers (
    supplier_id number PRIMARY KEY,
    product_id number,
    supply_price number,
    FOREIGN KEY (product_id) REFERENCES products(product_id)
);
CREATE TABLE customers (
    customer_id number PRIMARY KEY,
    name text,
    address text
);
CREATE TABLE salespeople (
    salesperson_id number PRIMARY KEY,
    name text,
    region text
);
"""

question = 'Find the salesperson who made the most sales.'

prompt = f"[INST] Write SQLite query to answer the following question given the database schema. Please wrap your code answer using ```: Schema: {table} Question: {question} [/INST] Here is the SQLite query to answer to the question: {question}: ``` "

ans = pipe(prompt, max_new_tokens=100)
print(ans[0]['generated_text'].split('```')[2])
