Datasets:
Cannot load dataset from hf hub
My code gets stuck when running the following:
from datasets import load_dataset
traindata = load_dataset(
    'allenai/c4', 'allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train'
)
valdata = load_dataset(
    'allenai/c4', 'allenai--c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation'
)
I see a lot of "Resolving data files" lines and then nothing happens. This started happening earlier today, and multiple people have reproduced it on different machines, so it is probably an issue on the HF side.
Downloading readme: 100%|██████████| 41.1k/41.1k [00:00<00:00, 6.04MB/s]
Resolving data files: 100%|██████████| 1024/1024 [00:07<00:00, 138.61it/s]
Resolving data files: 100%|██████████| 1024/1024 [00:00<00:00, 462969.42it/s]
Resolving data files: 100%|██████████| 7168/7168 [00:00<00:00, 529188.23it/s]
Resolving data files: 100%|██████████| 64/64 [00:00<00:00, 476794.77it/s]
Resolving data files: 100%|██████████| 512/512 [00:00<00:00, 424403.88it/s]
Hi! The "allenai--c4" configuration doesn't exist for this dataset (it's a legacy scheme from old versions of the datasets library).
You can try this instead:
from datasets import load_dataset
traindata = load_dataset(
    'allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train'
)
valdata = load_dataset(
    'allenai/c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation'
)
Also make sure to use the latest versions of datasets and huggingface_hub:
pip install -U datasets huggingface_hub
By the way, we recently added some examples of how to use the dataset on the dataset page: https://huggingface.co/datasets/allenai/c4
Thanks, this does fix the problem. I had to upgrade from datasets 2.14.6 to 2.16.1, but 2.14.6 doesn't seem very old to me. Do you know what change on HF's servers caused this problem? How can I be notified of these changes in the future so I don't have to open a ticket and can proactively upgrade my code?
Hi! This change happened in the allenai/c4 repo itself. We recently added a list of configurations users can select to load any variant of c4 (e.g. "en", "multilingual", etc.).
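For instance, here is a minimal sketch of loading one of those named configurations (the 'en' config name comes from the dataset page; streaming is used here only to avoid downloading the full corpus):

from datasets import load_dataset

# Select the 'en' configuration by name; stream so only a few records are fetched
# instead of the whole (very large) corpus.
en_data = load_dataset('allenai/c4', 'en', split='train', streaming=True)
print(next(iter(en_data))['text'][:200])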
You can get notifications when this dataset is updated by watching the repo (or watching all of allenai's repositories, for example). Note that dataset updates are pretty rare, though.
By the way, it's also possible to pin the git revision of the dataset you load with load_dataset and ignore future updates.
FYI, this change broke some tools which loaded c4 using an older version of datasets.
Yes, unfortunately :/ I'm happy to open PRs to fix the impacted repos on GitHub though, if you have some links to share.
Note that it's still possible to use this repository as it was before the change by pinning the old revision: revision="607bd4c8450a42878aa9ddc051a65a055450ef87"
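A minimal sketch of what that pinning looks like, assuming the same data_files path used earlier in this thread:

from datasets import load_dataset

# Pin allenai/c4 to the commit from before the configuration change so future
# updates to the repo don't affect this code (hash taken from the comment above).
traindata = load_dataset(
    'allenai/c4',
    data_files={'train': 'en/c4-train.00000-of-01024.json.gz'},
    split='train',
    revision='607bd4c8450a42878aa9ddc051a65a055450ef87',
)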
ValueError: There are multiple 'allenai/c4' configurations in the cache: default-b04fc8a0b8562884, default-c7bc8b0aefc5e48f
Please specify which configuration to reload from the cache, e.g.
My code:
cache_dir = '/mini/others/allenai'
traindata = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)
valdata = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation', cache_dir=cache_dir)
Which version of datasets are you using? With 3.0.1 it works fine on my side.
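If upgrading doesn't resolve it, one possible workaround (my assumption, not something confirmed in this thread) is to force a fresh download so the two conflicting cached configurations are ignored:

from datasets import load_dataset

# Hypothetical workaround: force a re-download so the stale cached configurations
# (default-b04fc8a0b8562884 / default-c7bc8b0aefc5e48f) are not reused.
cache_dir = '/mini/others/allenai'
traindata = load_dataset(
    'allenai/c4',
    data_files={'train': 'en/c4-train.00000-of-01024.json.gz'},
    split='train',
    cache_dir=cache_dir,
    download_mode='force_redownload',
)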