It seems the latest updates break code :(
Hi @albertvillanova -- I see you made changes to the wikitext dataset a few hours ago.
After your changes, I'm getting an error similar to the one here: #215 (https://github.com/huggingface/datasets/issues/215).
I deleted the original dataset cache after encountering that error and then reran with the following command:
traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train', ignore_verifications=True, download_mode='force_redownload')
testdata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='test', ignore_verifications=True)
But this is currently hanging. Any ideas? Thanks
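(For context, on newer datasets releases I believe the equivalent call is spelled with verification_mode and the DownloadMode enum instead of ignore_verifications; an untested sketch:)
# Untested sketch of the same call with the newer keyword arguments.
from datasets import load_dataset, DownloadMode

traindata = load_dataset(
    'wikitext', 'wikitext-2-raw-v1', split='train',
    verification_mode='no_checks',
    download_mode=DownloadMode.FORCE_REDOWNLOAD,
)
testdata = load_dataset(
    'wikitext', 'wikitext-2-raw-v1', split='test',
    verification_mode='no_checks',
)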
Hi, I have encountered the same issue.
When I run the following command to download the wikitext-2-raw-v1 dataset, the script still tries to automatically download the wikitext-103-raw-v1 dataset and fails:
traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train')
Thanks for reporting. We are investigating it.
I am sorry, but I can't reproduce the issue:
from datasets import load_dataset
ds = load_dataset("wikitext", "wikitext-2-raw-v1")
ds
DatasetDict({
    test: Dataset({
        features: ['text'],
        num_rows: 4358
    })
    train: Dataset({
        features: ['text'],
        num_rows: 36718
    })
    validation: Dataset({
        features: ['text'],
        num_rows: 3760
    })
})
I guess there is an issue with your local cache...
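If it is a cache problem, deleting the dataset's cache entry and forcing a fresh download usually gives a clean run. A minimal sketch (assuming the default cache location ~/.cache/huggingface/datasets; adjust if you set HF_DATASETS_CACHE):
# Minimal sketch: remove the cached wikitext builder data and re-download from scratch.
import shutil
from pathlib import Path
from datasets import load_dataset

cache_dir = Path.home() / ".cache" / "huggingface" / "datasets"
shutil.rmtree(cache_dir / "wikitext", ignore_errors=True)  # drop only the wikitext cache

ds = load_dataset("wikitext", "wikitext-2-raw-v1", download_mode="force_redownload")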
Hi,
My code is also broken after the update. I think I have the same issue as @JiangTu.
When calling load_dataset with 'wikitext-2-raw-v1', it starts downloading and preparing 'wikitext-103-raw-v1' and ends up failing with the following error:
NonMatchingSplitsSizesError:
[{'expected': SplitInfo(name='test', num_bytes=1305088, num_examples=4358, shard_lengths=None, dataset_name=None),
'recorded': SplitInfo(name='test', num_bytes=5176698, num_examples=17432, shard_lengths=None, dataset_name='parquet')},
{'expected': SplitInfo(name='train', num_bytes=546500949, num_examples=1801350, shard_lengths=None, dataset_name=None),
'recorded': SplitInfo(name='train', num_bytes=1113622699, num_examples=3676136, shard_lengths=[1650675, 1661350, 364111], dataset_name='parquet')},
{'expected': SplitInfo(name='validation', num_bytes=1159288, num_examples=3760, shard_lengths=None, dataset_name=None),
'recorded': SplitInfo(name='validation', num_bytes=4607450, num_examples=15040, shard_lengths=None, dataset_name='parquet')}]
Hope this helps in finding the issue.
In the meantime, is there a way to prevent re-downloading and use the cached dataset I have from before the update?
Many thanks for your support.
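(For reference, one way to make load_dataset reuse only the local cache is the HF_DATASETS_OFFLINE environment variable; a sketch I have not verified against this update:)
# Unverified sketch: force offline mode so only the existing local cache is used.
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before datasets is imported

from datasets import load_dataset
traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train')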
EDIT:
@JiangTu it now works after updating datasets in my virtual environment:
pip install --upgrade datasets
Updating the datasets library worked for me too -- but for reference, the change broke loading on datasets version 2.11.0.
Hi @JiangTu.
As explained by @A-bao and @luciodery, you need to update your datasets library:
pip install -U datasets
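After upgrading, you can confirm from Python that the new version is the one being imported, for example:
# Sanity check that the interpreter picks up the upgraded release.
import datasets
print(datasets.__version__)

from datasets import load_dataset
ds = load_dataset("wikitext", "wikitext-2-raw-v1")
print(ds)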
== Fixed, but leaving up for posterity: ==
Hi! I deleted my ~/.cache/huggingface/datasets folder, updated datasets (with conda), and still can't load wikitext:
Python 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
>>> datasets.__version__
'2.12.0'
>>> d= datasets.load_dataset('wikitext','wikitext-103-v1')
Downloading readme: 100%|██████████████████| 10.5k/10.5k [00:00<00:00, 11.1MB/s]
Downloading and preparing dataset None/wikitext-103-raw-v1 to
file:///Users/me/.cache/huggingface/datasets/parquet/wikitext-103-raw-v1-56fa33b81059af9d/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data: 100%|██████████████████████| 157M/157M [00:10<00:00, 14.5MB/s]
Downloading data: 100%|██████████████████████| 157M/157M [00:11<00:00, 14.0MB/s]
Downloading data: 100%|██████████████████████| 156M/156M [00:10<00:00, 14.3MB/s]
Downloading data: 100%|██████████████████████| 156M/156M [00:10<00:00, 14.4MB/s]
Downloading data: 100%|████████████████████| 6.36M/6.36M [00:00<00:00, 14.1MB/s]
Downloading data: 100%|████████████████████| 6.07M/6.07M [00:00<00:00, 15.6MB/s]
Downloading data: 100%|██████████████████████| 657k/657k [00:00<00:00, 8.95MB/s]
Downloading data: 100%|██████████████████████| 655k/655k [00:00<00:00, 11.8MB/s]
Downloading data: 100%|██████████████████████| 657k/657k [00:00<00:00, 8.84MB/s]
Downloading data: 100%|██████████████████████| 618k/618k [00:00<00:00, 10.5MB/s]
Downloading data: 100%|██████████████████████| 733k/733k [00:00<00:00, 12.8MB/s]
Downloading data: 100%|██████████████████████| 722k/722k [00:00<00:00, 12.8MB/s]
Downloading data: 100%|██████████████████████| 733k/733k [00:00<00:00, 8.90MB/s]
Downloading data: 100%|██████████████████████| 685k/685k [00:00<00:00, 11.9MB/s]
Downloading data files: 100%|█████████████████████| 3/3 [00:53<00:00, 17.70s/it]
Extracting data files: 100%|█████████████████████| 3/3 [00:00<00:00, 354.31it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/me/miniconda3/lib/python3.11/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/Users/me/miniconda3/lib/python3.11/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/Users/me/miniconda3/lib/python3.11/site-packages/datasets/builder.py", line 1003, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/Users/me/miniconda3/lib/python3.11/site-packages/datasets/utils/info_utils.py", line 100, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError:
[{'expected': SplitInfo(name='test', num_bytes=1305088, num_examples=4358, shard_lengths=None, dataset_name=None),
'recorded': SplitInfo(name='test', num_bytes=5176698, num_examples=17432, shard_lengths=None, dataset_name='parquet')},
{'expected': SplitInfo(name='train', num_bytes=546500949, num_examples=1801350, shard_lengths=None, dataset_name=None),
'recorded': SplitInfo(name='train', num_bytes=1113622699, num_examples=3676136, shard_lengths=[1650675, 1661350, 364111], dataset_name='parquet')},
{'expected': SplitInfo(name='validation', num_bytes=1159288, num_examples=3760, shard_lengths=None, dataset_name=None),
'recorded': SplitInfo(name='validation', num_bytes=4607450, num_examples=15040, shard_lengths=None, dataset_name='parquet')}]
Not sure what I'm doing wrong; I'd appreciate any advice!
UPDATE: what I was doing wrong: I tried to update datasets with conda instead of pip. After pip install -U datasets, I now have datasets 2.16.1 and load_dataset works :)
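In case it helps anyone with the same conda/pip mix-up, checking which installation the interpreter actually imports can save time (a quick sketch):
# Quick sketch: see which datasets installation is being imported
# (useful when a stale conda package shadows the pip upgrade).
import datasets
print(datasets.__version__)
print(datasets.__file__)  # the path shows which site-packages it comes from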