
This is a test merge of several Gemma-7b fine-tunes using the task_arithmetic merge method. After testing, the merge is confirmed to work properly.

Merge config:

```yaml
models:
  - model: gemma-7b-it-fp16
    parameters:
      weight: 1
  - model: CorticalStack_gemma-7b-ultrachat-sft
    parameters:
      weight: 1
  - model: cloudyu_google-gemma-7b-it-dpo-v1
    parameters:
      weight: 1
  - model: abideen_gemma-7b-openhermes
    parameters:
      weight: 1
merge_method: task_arithmetic
base_model: gemma-7b-base
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
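For intuition, task arithmetic builds each model's "task vector" (fine-tune weights minus base weights) and adds the weighted sum back onto the base; with `normalize: true`, the weights are rescaled to sum to 1. The numpy sketch below is a minimal illustration of that idea, not mergekit's actual implementation — the function name and signature are hypothetical.

```python
import numpy as np

def task_arithmetic(base, finetunes, weights, normalize=True):
    """Hypothetical sketch of task-arithmetic merging on a single tensor.

    Each fine-tune contributes a task vector (finetune - base); the merge
    is base plus the weighted sum of task vectors.
    """
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = base.copy()
    for ft, w in zip(finetunes, weights):
        merged += w * (ft - base)  # add this model's weighted task vector
    return merged

# Toy example: two fine-tunes of a 2-parameter "model", equal weights.
base = np.array([1.0, 1.0])
finetunes = [np.array([2.0, 1.0]), np.array([1.0, 3.0])]
merged = task_arithmetic(base, finetunes, weights=[1, 1])
print(merged)  # -> [1.5 2. ]
```

With equal weights and normalization, each fine-tune's change is halved, so the merged tensor lands midway between the two fine-tunes' deltas from base.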