The previous models were really great. Looking forward to seeing Nous Hermes 13b on GPT4all.
I do hope it is not slower than GPT4-X-Vicuna; that was a great, solid, consistent model.
Nous-Hermes looks very promising and exciting. I mainly use these to have fun with world creation and interact with it through text-based RPG dialogue.
GPT4-X-Vicuna was already very capable.
I think it is a bit faster. GGML implementations are now available as well on eachadea's HF page.
Personally we are enjoying this over GPT4-X-Vicuna atm; it feels a bit cleaner. Thank you for the compliments.
Being the same base model, it should run at identical speed, but that obviously depends on the implementation: GPTQ/GGML/HF Transformers, fp16/8-bit, etc.
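For anyone curious what the fp16 vs 8-bit distinction means in practice, here is a minimal sketch of loading the same checkpoint with HF Transformers two ways. The model id is assumed for illustration, and the 8-bit path requires the bitsandbytes and accelerate packages to be installed:

```python
# Hedged sketch: loading one model in fp16 vs 8-bit with HF Transformers.
# Model id is an assumption for illustration; adjust to the repo you use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Hermes-13b"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)

# fp16: full 16-bit weights, typically fastest if the GPU has enough VRAM
model_fp16 = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# 8-bit: roughly half the VRAM, usually a bit slower per token
# (requires bitsandbytes to be installed)
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id, load_in_8bit=True, device_map="auto"
)
```

So two people can run "the same" 13B model and see quite different tokens/sec purely from the loading path and precision they chose.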