update

compression_app.py  CHANGED  (+6 -7)
````diff
@@ -36,17 +36,16 @@ The encoding and decoding process can be formulated as
 ```
 
 - **Lossless** <br>
-Lossless tokenization preserves the exact original text, i.e. `decoded_text = input_text`.
+Lossless tokenization preserves the exact original text, i.e. `decoded_text = input_text`. There are mainly two causes of compression loss.
 
-
-
+1. `OOV`: Most lossy tokenizers get many out-of-vocabulary(OOV) words. 👉 Check the OOV and
+tokenization loss of [bert](https://huggingface.co/spaces/eson/tokenizer-arena/blob/main/stats/compression_rate/google-bert.bert-base-cased%20%40%20cc100.zh-Hans.diff.json) and
 [t5](https://huggingface.co/spaces/eson/tokenizer-arena/blob/main/stats/compression_rate/google-t5.t5-large%20%40%20cc100.es.diff.json).
-
+2. `Normalization`: Even if a tokenizer has no OOV, it can be lossy due to text normalization. For example, qwen performs [unicode normalization](https://github.com/huggingface/transformers/blob/v4.42.3/src/transformers/models/qwen2/tokenization_qwen2.py#L338) in encoding process,
 llama performs [clean_up_tokenization_spaces](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/tokenizer_config.json#L2053) in decoding process,
-which may bring some slight differences to the reconstructed text. 👉 Check the
+which may bring some slight differences to the reconstructed text. 👉 Check the tokenization loss of
 [qwen](https://huggingface.co/spaces/eson/tokenizer-arena/raw/main/stats/compression_rate/Qwen.Qwen1.5-1.8B%20@%20cc100.ja.diff.json) and
 [llama](https://huggingface.co/spaces/eson/tokenizer-arena/raw/main/stats/compression_rate/meta-llama.Meta-Llama-3.1-405B%20@%20cc100.en.diff.json).
-
 
 
 
````
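The added list above boils the lossless check down to a single comparison: `decoded_text == input_text`. Below is a minimal sketch of that round trip, assuming the Hugging Face `transformers` API; the helper name `roundtrip_diff` and the sample model identifiers are illustrative and are not part of this commit or of the app's own code.

```python
from transformers import AutoTokenizer


def roundtrip_diff(model_name: str, text: str) -> dict:
    """Encode then decode `text` and report whether the round trip is lossless."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    token_ids = tokenizer.encode(text, add_special_tokens=False)
    decoded = tokenizer.decode(token_ids, clean_up_tokenization_spaces=False)
    return {
        "model": model_name,
        "lossless": decoded == text,  # True only if decoded_text == input_text
        "n_tokens": len(token_ids),
        "decoded": decoded,
    }


if __name__ == "__main__":
    sample = "Hello, 世界!  multiple  spaces and emoji 🙂"
    for name in ("google-bert/bert-base-cased", "Qwen/Qwen1.5-1.8B"):
        print(roundtrip_diff(name, sample))
```

Tokenizers that fail the comparison do so for one of the two reasons named in the diff: OOV (e.g. WordPiece replacing unseen characters with `[UNK]`) or encode/decode normalization.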
```diff
@@ -146,7 +145,7 @@ with gr.Blocks(theme=theme) as demo:
 # "- `g_bytes/b_tokens` measures how many gigabytes corpus per billion tokens.\n"
 # "- `t_bytes/t_tokens` measures how many terabytes corpus per trillion tokens.\n"
 " - `char/token` measures how many chars per token on the tokenized corpus.\n"
-" - `oov_ratio`: out-of-vocabulary ratio on the selected corpus, 👉
+" - `oov_ratio`: out-of-vocabulary ratio on the selected corpus, 👉 check [OOV charset](https://huggingface.co/spaces/eson/tokenizer-arena/raw/main/stats/compression_rate.json)\n\n"
 "You can reproduce this procedure with [compression_util.py](https://huggingface.co/spaces/eson/tokenizer-arena/blob/main/compression_util.py)."
 )
 
```
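The `char/token` and `oov_ratio` lines changed here are only described informally; the canonical computation lives in the linked compression_util.py. The sketch below is a rough approximation under two assumptions: `char/token` is total characters divided by total tokens, and `oov_ratio` is the share of input characters lost in the decode round trip. The function name `compression_stats` is hypothetical.

```python
from collections import Counter

from transformers import AutoTokenizer


def compression_stats(model_name: str, corpus: list[str]) -> dict:
    """Approximate char/token and oov_ratio over a small corpus."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    n_chars = n_tokens = n_lost_chars = 0
    for text in corpus:
        ids = tokenizer.encode(text, add_special_tokens=False)
        decoded = tokenizer.decode(ids, clean_up_tokenization_spaces=False)
        n_chars += len(text)
        n_tokens += len(ids)
        # characters that appear in the input but are missing after decoding
        n_lost_chars += sum((Counter(text) - Counter(decoded)).values())
    return {
        "char/token": n_chars / n_tokens,
        "oov_ratio": n_lost_chars / n_chars,
    }


# Example usage (model and corpus are placeholders):
# print(compression_stats("google-bert/bert-base-cased", ["你好，世界", "¡Hola!"]))
```

The commented-out `g_bytes/b_tokens` and `t_bytes/t_tokens` lines follow the same pattern, dividing corpus bytes by token count at gigabyte/billion and terabyte/trillion scale respectively.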
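As a footnote to the `Normalization` item in the first hunk, the snippet below illustrates the decode-side cleanup the diff refers to. It assumes the `clean_up_tokenization_spaces` argument of `transformers` decoding and uses `gpt2` as a freely downloadable stand-in, since the referenced Llama repository is gated; the expected results in the comments are indicative, not verified against this app.

```python
# Toy illustration (assumption: a byte-level BPE tokenizer round-trips exactly
# when cleanup is disabled). Decoding the same ids with cleanup enabled collapses
# spaces before punctuation, so the reconstructed text can differ from the input
# even though no token was out-of-vocabulary.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in model, not from this repo
text = "spaces before punctuation , like these ."

ids = tok.encode(text)
raw = tok.decode(ids, clean_up_tokenization_spaces=False)
cleaned = tok.decode(ids, clean_up_tokenization_spaces=True)

print(raw == text)      # expected True: the raw round trip is lossless
print(cleaned == text)  # expected False: " ," and " ." become "," and "."
```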