Daryl A
daryl149
AI & ML interests
Co-Founder Mirage-Studio.io, home of MirageGPT: the private ChatGPT alternative.
daryl149's activity
Do post speed and accuracy benchmarks if you are able to run this · #2 opened 12 months ago by daryl149
Is this the TRUE llama2? · 1 · #1 opened about 1 year ago by Suprit
tokenizer.model_max_length for llama-2-7b-chat-hf · 3 · #3 opened over 1 year ago by huggingFace1108
`model_kwargs` are not used by the model: ['token_type_ids'] · 2 · #2 opened over 1 year ago by vinnitu
Issue with multi GPU inference. · 10 · #1 opened over 1 year ago by eastwind
Are y'all doing ok? · #3 opened over 1 year ago by daryl149
ValueError: the following model_kwargs are not used by model · 3 · #2 opened over 1 year ago by daryl149
What text input pattern does this model expect? · 2 · #1 opened over 1 year ago by daryl149
can't run inference on multi GPU · 1 · #8 opened over 1 year ago by daryl149
Fix for llama delta path · 1 · #4 opened over 1 year ago by daryl149
Trying to convert LlaMa weights to HF and running out of RAM, but don't want to buy more RAM? · 14 · #4 opened over 1 year ago by daryl149
Windows conversion almost successful · 3 · #17 opened over 1 year ago by DontLike
Converting to ggml and quantizing with llama.cpp · 5 · #2 opened over 1 year ago by akiselev