350BT sample is much smaller than advertised

#53
by DavidNemeskey - opened

Hi,

I downloaded the 350BT sample to experiment with it, and found that it is actually much smaller than advertised. The exact token count depends on the tokenizer, of course, but most tokenizers I experimented with (including GPT-2) return 140B tokens or thereabouts. Even the "tokens" field in the metadata backs this up, summing to a little over 141B.
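
Something along these lines reproduces both counts (a minimal sketch, not an exact script; it assumes the sample-350BT config name and a per-document token_count column, and it is slow since it streams every document):

from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
ds = load_dataset("HuggingFaceFW/fineweb", "sample-350BT", split="train", streaming=True)

gpt2_total = meta_total = 0
for doc in ds:
    gpt2_total += len(tok(doc["text"]).input_ids)  # re-tokenize with GPT-2
    meta_total += doc["token_count"]               # sum the precomputed per-document counts
print(gpt2_total, meta_total)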

On the dataset page, there is even a graph comparing how FineWeb performs against other datasets, which runs up to 350B tokens. So I assume a proper 350BT sample does exist, then?

@guipenedo would it be possible to upload the real 350B sample instead of the current, much smaller sample-350BT? Thank you!

HuggingFaceFW org
edited Sep 25

I imagine something went wrong with your download, as I just counted the values of the tokens column and got 362,000,915,768 (362BT):


from datatrove.executor.slurm import SlurmPipelineExecutor
from datatrove.pipeline.readers import ParquetReader

SlurmPipelineExecutor(
    job_name="count-fw-ext",
    pipeline=[
        # read every parquet shard of the 350BT sample directly from the Hub
        ParquetReader("hf://datasets/HuggingFaceFW/fineweb/sample/350BT", glob_pattern="*.parquet")
    ],
    tasks=250,  # spread the shards over 250 Slurm tasks
    logging_dir="/fsx/guilherme/logs/count-toks/fwv1-350",
    partition="hopper-cpu",
    time="02:00:00",
).run()
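
If you don't have a Slurm cluster to rerun this on, a rough local equivalent is sketched below (assumptions: the per-document counts live in a token_count column, and huggingface_hub's hf:// fsspec filesystem plus pyarrow are available):

import pyarrow.compute as pc
import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
total = 0
for path in fs.glob("datasets/HuggingFaceFW/fineweb/sample/350BT/*.parquet"):
    # fetch only the token_count column, not the document text
    table = pq.read_table(path, columns=["token_count"], filesystem=fs)
    total += pc.sum(table["token_count"]).as_py()
print(total)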
