Dataset Card for Wikipedia
This repo is a fork of the olm/wikipedia repo, which is itself a fork of the original Hugging Face Wikipedia repo here.
This fork modifies olm/wikipedia to enable running on lower-resourced machines. These changes have been proposed as a PR to the olm/wikipedia project.
Dataset Summary
Wikipedia dataset containing cleaned articles in all languages. The datasets are built from the Wikipedia dumps (https://dumps.wikimedia.org/) with one split per language. Each example contains the content of one full Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.).
The articles are parsed using the mwparserfromhell tool.
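For illustration, this is roughly how mwparserfromhell strips wiki markup (a minimal sketch, not the exact cleaning pipeline used to build this dataset):

import mwparserfromhell

# A small snippet of raw wikitext with bold markup and a wikilink
wikitext = "'''April''' is the fourth [[month]] of the year."

# parse() builds a wikicode tree; strip_code() drops templates, links and formatting
parsed = mwparserfromhell.parse(wikitext)
print(parsed.strip_code())  # April is the fourth month of the year.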
To load this dataset you need to install the following dependencies:
pip install mwparserfromhell datasets
Then, you can load any subset of Wikipedia per language and per date this way:
from datasets import load_dataset
load_dataset("neuml/wikipedia", language="en", date="20240101")
You can find the full list of languages and dates here.
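Once loaded, the articles can be inspected like any other datasets object (a minimal sketch; the "train" split name follows the usual datasets convention and is an assumption here):

from datasets import load_dataset

dataset = load_dataset("neuml/wikipedia", language="en", date="20240101")

# Print the title and the first 200 characters of the first article
article = dataset["train"][0]
print(article["title"])
print(article["text"][:200])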
Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
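As a rough sketch of that use case (the transformers library and the gpt2 tokenizer are assumptions for illustration, not part of this dataset), the text field can be tokenized for language modeling like this:

from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("neuml/wikipedia", language="en", date="20240101")
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # arbitrary tokenizer chosen for illustration

def tokenize(batch):
    # Truncation keeps the example short; a real LM pipeline would chunk long articles instead
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["id", "url", "title", "text"])
print(tokenized[0]["input_ids"][:10])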
Languages
You can find the list of languages here.
Dataset Structure
Data Instances
An example looks as follows:
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
Data Fields
The data fields are the same among all configurations:
id (str): ID of the article.
url (str): URL of the article.
title (str): Title of the article.
text (str): Text content of the article.
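The schema can be confirmed directly from the loaded dataset (a minimal sketch, assuming the English 20240101 configuration used above):

from datasets import load_dataset

dataset = load_dataset("neuml/wikipedia", language="en", date="20240101")

# All four fields are plain strings
print(dataset["train"].features)
# e.g. {'id': Value(dtype='string', ...), 'url': Value(dtype='string', ...),
#       'title': Value(dtype='string', ...), 'text': Value(dtype='string', ...)}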
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA) and the GNU Free Documentation License (GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA or a CC BY-SA-compatible license and cannot be reused under GFDL; such text is identified on the page footer, in the page history, or on the discussion page of the article that uses the text.
Citation Information
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}