"killed" message when loading Mistral-7B-Instruct-v0.1
I use this code:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Pick the device and report it
if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"Using GPU {torch.cuda.get_device_name(0)}")
    memory_allocated = torch.cuda.memory_allocated()
    print(f"GPU memory_allocated: {memory_allocated}")
else:
    device = torch.device("cpu")
    print("Using CPU")  # torch.cuda.get_device_name() would fail here without CUDA

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

# Build the prompt with the model's chat template, then move inputs and model to the device
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
But when I run this, I get:
Using GPU NVIDIA GeForce GTX 1050 Ti
GPU memory_allocated: 0
killed
I checked the video card's load using
nvidia-smi
It shows no more than 500 MB in use when I try to run the code.
How much memory do I need to run Mistral-7B-Instruct-v0.1?
For the unquantized model, you need about 16 GB of memory just while loading the model shards. By default, from_pretrained materializes the weights in float32, so a 7B-parameter model needs roughly 28 GB of CPU RAM (7B parameters × 4 bytes); even in float16 it is about 14 GB. The lowercase "killed" message is the Linux out-of-memory (OOM) killer terminating the Python process when system RAM runs out during loading; you can usually confirm this in the kernel log (dmesg typically shows a line like "Out of memory: Killed process ..."). That is also why nvidia-smi only ever shows ~500 MB: the process dies before model.to(device) runs, so the weights never reach the GPU. And note that a GTX 1050 Ti has only 4 GB of VRAM, so even a float16 copy of the model would not fit on the GPU anyway.
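If you still want to try it on this machine, one common workaround is to quantize the weights as they are loaded. The sketch below is not from this thread; it is an example assuming bitsandbytes and accelerate are installed (pip install bitsandbytes accelerate) and that bitsandbytes supports your GPU, which is not guaranteed for a card as old as the 1050 Ti. Even at 4-bit, a 7B model takes roughly 4 GB, so the whole model may still not fit on a 4 GB card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.1"

# Quantize weights to 4-bit on load; do the compute in float16
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",       # let accelerate decide device placement
    low_cpu_mem_usage=True,  # stream shards instead of building a full float32 copy first
)

With device_map="auto" you should not call model.to(device) afterwards; accelerate handles placement. If your machine has 16 GB or more of RAM, simply passing torch_dtype=torch.float16 to from_pretrained (without quantization) may already be enough to avoid the OOM kill, but generation will then run mostly on the CPU, since the model cannot fit in 4 GB of VRAM.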