
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Aika-7B - GGUF

| Name | Quant method | Size |
|------|--------------|-----:|
| Aika-7B.Q2_K.gguf | Q2_K | 2.53GB |
| Aika-7B.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| Aika-7B.IQ3_S.gguf | IQ3_S | 2.96GB |
| Aika-7B.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| Aika-7B.IQ3_M.gguf | IQ3_M | 3.06GB |
| Aika-7B.Q3_K.gguf | Q3_K | 3.28GB |
| Aika-7B.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| Aika-7B.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| Aika-7B.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| Aika-7B.Q4_0.gguf | Q4_0 | 3.83GB |
| Aika-7B.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| Aika-7B.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| Aika-7B.Q4_K.gguf | Q4_K | 4.07GB |
| Aika-7B.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| Aika-7B.Q4_1.gguf | Q4_1 | 4.24GB |
| Aika-7B.Q5_0.gguf | Q5_0 | 4.65GB |
| Aika-7B.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| Aika-7B.Q5_K.gguf | Q5_K | 4.78GB |
| Aika-7B.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| Aika-7B.Q5_1.gguf | Q5_1 | 5.07GB |
| Aika-7B.Q6_K.gguf | Q6_K | 5.53GB |
| Aika-7B.Q8_0.gguf | Q8_0 | 7.17GB |
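A practical question the table raises is which file to download for a given machine. As a minimal sketch (the helper name, the deduplicated size map, and the 1 GB headroom default are illustrative, not part of this card), you could pick the largest quant that fits your memory budget:

```python
# Hypothetical helper: choose the largest quantized file that fits a memory
# budget. Sizes (GB) come from the table above, with duplicate entries
# (Q3_K/Q3_K_M, Q4_K/Q4_K_M, Q5_K/Q5_K_M) listed once.
from typing import Optional

QUANT_SIZES_GB = {
    "Q2_K": 2.53, "IQ3_XS": 2.81, "Q3_K_S": 2.95, "IQ3_S": 2.96,
    "IQ3_M": 3.06, "Q3_K_M": 3.28, "Q3_K_L": 3.56, "IQ4_XS": 3.67,
    "Q4_0": 3.83, "Q4_K_S": 3.86, "IQ4_NL": 3.87, "Q4_K_M": 4.07,
    "Q4_1": 4.24, "Q5_0": 4.65, "Q5_K_S": 4.65, "Q5_K_M": 4.78,
    "Q5_1": 5.07, "Q6_K": 5.53, "Q8_0": 7.17,
}

def pick_quant(budget_gb: float, headroom_gb: float = 1.0) -> Optional[str]:
    """Return the largest quant whose file size plus headroom fits the budget."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size + headroom_gb <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # largest quant that leaves ~1 GB free on an 8 GB machine
```

The headroom accounts for the fact that the loaded model needs more memory than the file size alone (KV cache, runtime buffers), so treat the budget as total available RAM/VRAM, not file size.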

Original model description:

```yaml
language:
- en
license: cc
library_name: transformers
tags:
- mergekit
- merge
datasets:
- Anthropic/hh-rlhf
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- Guilherme34/Samantha-v2
- jan-hq/stealth-v1.3
- mitultiwari/mistral-7B-instruct-dpo
- senseable/WestLake-7B-v2
model-index:
- name: sethuiyer/Aika-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.36
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 81.49
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.91
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 51.22
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.78
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
```

Aika-7B


Aika is a language model built with the DARE TIES merge method, using mitultiwari/mistral-7B-instruct-dpo as the base. Aika is designed to interact with users in a way that feels natural and human-like, to solve problems and answer questions with a high degree of accuracy and truthfulness, and to handle creative and logical tasks with proficiency.

Models Merged

The following models were included in the merge:

  • SanjiWatsuki/Silicon-Maid-7B
  • Guilherme34/Samantha-v2
  • jan-hq/stealth-v1.3
  • senseable/WestLake-7B-v2

The base model (mitultiwari/mistral-7B-instruct-dpo) is Mistral-7B-v0.1 fine-tuned on Anthropic/hh-rlhf.

Why?

  • Base model tuned on the Anthropic RLHF dataset: a safety-aligned foundation that balances the less restricted models below.
  • Silicon-Maid-7B: Boasts excellent multi-turn conversational skills and logical coherence, ensuring smooth interactions.
  • Samantha-V2: Offers empathy and human-like responses, equipped with programmed "self-awareness" for a more personalized experience.
  • Stealth-V1.3: Known for enhancing performance in merges when integrated as a component, optimizing Aika's functionality.
  • WestLake-7B-V2: Sets a high benchmark for emotional intelligence (EQ) and excels in creative writing, enhancing Aika's ability to understand and respond to your needs.

Combine them all, and you get Aika: a considerate, personal digital assistant.

Configuration

Please check mergekit_config.yml for the merge config.
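The authoritative settings live in mergekit_config.yml; as a rough orientation only, a DARE TIES merge over these models would be laid out along these lines in mergekit's config format (the density and weight values below are illustrative placeholders, not the card's actual settings):

```yaml
# Illustrative sketch only. The real values are in mergekit_config.yml.
merge_method: dare_ties
base_model: mitultiwari/mistral-7B-instruct-dpo
models:
  - model: SanjiWatsuki/Silicon-Maid-7B
    parameters:
      density: 0.5   # placeholder
      weight: 0.3    # placeholder
  - model: Guilherme34/Samantha-v2
    parameters:
      density: 0.5   # placeholder
      weight: 0.2    # placeholder
  - model: jan-hq/stealth-v1.3
    parameters:
      density: 0.5   # placeholder
      weight: 0.2    # placeholder
  - model: senseable/WestLake-7B-v2
    parameters:
      density: 0.5   # placeholder
      weight: 0.3    # placeholder
dtype: bfloat16
```

In DARE TIES, `density` controls what fraction of each model's delta from the base is kept after random pruning, and `weight` scales its contribution to the merged parameters.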

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|--------|------:|
| Avg. | 59.25 |
| AI2 Reasoning Challenge (25-Shot) | 65.36 |
| HellaSwag (10-Shot) | 81.49 |
| MMLU (5-Shot) | 53.91 |
| TruthfulQA (0-shot) | 51.22 |
| Winogrande (5-shot) | 77.74 |
| GSM8k (5-shot) | 25.78 |
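The leaderboard's headline number is the unweighted mean of the six per-task scores, which can be checked directly:

```python
# Reproduce the leaderboard average from the six per-task scores above.
scores = {
    "ARC (25-shot)": 65.36,
    "HellaSwag (10-shot)": 81.49,
    "MMLU (5-shot)": 53.91,
    "TruthfulQA (0-shot)": 51.22,
    "Winogrande (5-shot)": 77.74,
    "GSM8k (5-shot)": 25.78,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 59.25, matching the Avg. row in the table
```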