# Noodlz_DolphLundgren

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with D:\AI\mergekit\Noodlz\Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest as the base model.
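
DARE TIES combines two ideas from the model-merging literature: DARE sparsifies each task vector (a fine-tuned model's weights minus the base model's weights) by randomly dropping entries and rescaling the survivors, while TIES resolves sign conflicts between competing task vectors by majority vote before summing them onto the base. The PyTorch sketch below illustrates the per-tensor idea only; it is not mergekit's implementation, and the function name and signature are illustrative.

```python
import torch

def dare_ties_merge(base, task_deltas, weights, density):
    """Toy per-tensor DARE-TIES merge (illustrative, not mergekit's code).

    base:        base model tensor
    task_deltas: list of (finetuned - base) tensors, one per merged model
    weights:     per-model merge weights (e.g. 0.7 in the config below)
    density:     fraction of each delta to keep (e.g. 0.58 in the config below)
    """
    sparsified = []
    for delta, w in zip(task_deltas, weights):
        # DARE: randomly drop (1 - density) of the entries, rescale the rest
        keep = (torch.rand_like(delta) < density).to(delta.dtype)
        sparsified.append(w * keep * delta / density)
    stacked = torch.stack(sparsified)
    # TIES: elect a majority sign per parameter, drop disagreeing contributions
    elected = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected).to(stacked.dtype)
    return base + (stacked * agree).sum(dim=0)
```

With a single non-base model, as in this merge, the sign election is a no-op and the result reduces to the base weights plus a weighted, randomly sparsified delta.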

### Models Merged

The following models were included in the merge:

* D:\AI\mergekit\Noodlz\Noodlz_HermeStar-DARE_TIE_SLERP-startoken

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties

parameters:
  int8_mask: true
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
  embed_slerp: true

# Merging DolphinLake and HermeStar. 4 models with 2xChatML and 2 others
models:
  - model: D:\AI\mergekit\Noodlz\Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
    # No parameters necessary for base model
  - model: D:\AI\mergekit\Noodlz\Noodlz_HermeStar-DARE_TIE_SLERP-startoken
    parameters:
      density: 0.58
      weight: 0.7

base_model: D:\AI\mergekit\Noodlz\Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
tokenizer_source: model:D:\AI\mergekit\Noodlz\Noodlz_HermeStar-DARE_TIE_SLERP-startoken

dtype: bfloat16
```
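
With mergekit installed (e.g. `pip install mergekit`), a merge like this is produced by saving the configuration to a file and running `mergekit-yaml config.yml ./output-model-directory` (optionally with `--cuda`). Note that the `D:\AI\mergekit\Noodlz\...` entries are local Windows paths to the author's intermediate merges, so reproducing this merge requires those intermediate models (or equivalents) in their place.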

The merged model has 7.24B parameters and is stored as safetensors with BF16 and F32 tensors.

## Inference Examples
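
Once downloaded, the merge loads like any other 7B causal LM with transformers. A minimal generation sketch, assuming the Hub ID Noodlz/IvanDrogo-7B listed on the model page; the prompt format is an assumption based on the config comment about ChatML ancestors, so check the shipped tokenizer's chat template before relying on it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Noodlz/IvanDrogo-7B"  # Hub ID shown on the model page

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",           # requires the `accelerate` package
)

# ChatML-style prompt (assumption: two of the merged ancestors use ChatML)
prompt = (
    "<|im_start|>user\n"
    "Explain DARE-TIES merging in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```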
