Datasets: tiny-fineweb
Hello! I'd like to begin curating a 100M-, 300M-, and 1B-token series of FineWeb subsets as an analog to "MiniPile". Before I begin running embeddings, are there any folks already doing this?
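To make the target sizes concrete, here is a minimal sketch of how a token-budgeted slice could be pulled with the `datasets` streaming API. The GPT-2 tokenizer and the 100M budget are just placeholder assumptions, not a settled choice for this project:

```python
# Rough sketch: stream FineWeb and stop once a fixed token budget is reached.
# Tokenizer (gpt2) and TOKEN_BUDGET are illustrative assumptions only.
from datasets import load_dataset
from transformers import AutoTokenizer

TOKEN_BUDGET = 100_000_000  # 100M-token subset; swap in 300M / 1B as needed

tokenizer = AutoTokenizer.from_pretrained("gpt2")
stream = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

subset, total_tokens = [], 0
for doc in stream:
    n_tokens = len(tokenizer(doc["text"]).input_ids)
    if total_tokens + n_tokens > TOKEN_BUDGET:
        break
    subset.append(doc)
    total_tokens += n_tokens

print(f"Collected {len(subset)} documents, ~{total_tokens:,} tokens")
```

(A MiniPile-style pipeline would embed and cluster these documents before sampling, rather than taking the head of the stream; this is only the budget-counting piece.)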
I'm not doing this (100M, 300M, 1B tokens), but here it is: https://huggingface.co/datasets/nampdn-ai/mini-fineweb. Still a work in progress.
Hi,
It might be worth checking out a dataset I just started curating: https://huggingface.co/datasets/reflex-ai/fineweb-ultra-mini. I only have about 14,000 rows at the moment, a few days after starting, but I am using Nvidia GPUs to classify the texts by their educational value, with the goal of shrinking the dataset to 2-3% of its original size by the end of the project while keeping only the highest-quality examples. My hardware gets through thousands of examples each day, which I then upload. A rough sketch of what that kind of scoring loop can look like is below. Hope this helps!
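In case it helps anyone reproduce something similar, here is a minimal sketch of GPU-based educational-value filtering. The HuggingFaceFW/fineweb-edu-classifier checkpoint and the score threshold of 3 are assumptions for illustration, not necessarily what fineweb-ultra-mini uses:

```python
# Sketch: score streamed FineWeb documents with an educational-value classifier
# and keep only high-scoring ones. Checkpoint and threshold are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).to(device).eval()

stream = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

kept = []
for doc in stream.take(1000):  # small demo batch
    inputs = tokenizer(doc["text"], truncation=True, max_length=512,
                       return_tensors="pt").to(device)
    with torch.no_grad():
        score = model(**inputs).logits.squeeze().item()  # regression head, roughly 0-5 scale
    if score >= 3:  # keep only documents scored as educational
        kept.append({**doc, "edu_score": score})

print(f"Kept {len(kept)} of 1000 sampled documents")
```

In practice you would batch the documents before calling the model to actually keep the GPU busy; the per-document loop here is just to keep the example short.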