Datasets:
350BT sample is much smaller than advertised
Hi,
I downloaded the 350BT sample to experiment with it, and found that it is actually much smaller. The exact token count depends on the tokenizer, of course, but most tokenizers I tried (including GPT-2) return roughly 140B tokens. Even the "tokens" field in the metadata backs this up, summing to a little over 141B.
On the dataset page, there is even a graph comparing FineWeb's performance against other datasets, which is capped at 350B tokens. So I assume a proper 350B sample does exist?
@guipenedo would it be possible to upload the real 350B sample instead of the current, much smaller sample-350BT? Thank you!
I imagine something went wrong with your download, as I just counted the values of the tokens column and got 362000915768 (362BT):
```python
from datatrove.executor import SlurmPipelineExecutor
from datatrove.pipeline.readers import ParquetReader

SlurmPipelineExecutor(
    job_name="count-fw-ext",
    pipeline=[
        # Read the 350BT sample directly from the Hub; the reader's stats
        # report the summed token counts across all tasks.
        ParquetReader(
            "hf://datasets/HuggingFaceFW/fineweb/sample/350BT",
            glob_pattern="*.parquet",
        )
    ],
    tasks=250,
    logging_dir="/fsx/guilherme/logs/count-toks/fwv1-350",
    partition="hopper-cpu",
    time="02:00:00",
).run()
```