Dataset Card for OPUS
Dataset Description
Disclaimer. Loading this dataset is slow, so it may not be feasible at scale. If you only need a specific corpus, consider using one of the other OPUS datasets on Huggingface instead, each of which loads a single corpus.
Loads OPUS as a HuggingFace dataset. OPUS is an open parallel corpus covering 700+ languages and 1100+ datasets. Given a src and tgt language, this repository loads all available parallel corpora for that pair. To my knowledge, the other OPUS datasets on Huggingface each load a specific corpus.
Requirements.
pip install pandas
# install my fork of `opustools`
git clone https://github.com/larrylawl/OpusTools.git
pip install -e OpusTools/opustools_pkg
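As an optional sanity check, you can confirm that both dependencies import cleanly in the environment you will load the dataset from (a minimal sketch, not part of the original instructions):
# Quick sanity check that the required packages are importable.
# Run this in the same environment used to call `load_dataset`.
import pandas
import opustools

print("pandas:", pandas.__version__)
print("opustools imported from:", opustools.__file__)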
Example Usage.
# args follow `opustools`: https://pypi.org/project/opustools/
from datasets import load_dataset

src = "en"
tgt = "id"
download_dir = "data"  # dir to save downloaded files
corpus = "bible-uedin"  # corpus name. Leave as `None` to download all available corpora for the src-tgt pair.

dataset = load_dataset(
    "larrylawl/opus",
    src=src,
    tgt=tgt,
    download_dir=download_dir,
    corpus=corpus,
)
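To pull every corpus available for the language pair rather than a single one, pass `corpus=None`, as noted in the comment above. The sketch below simply reuses the same arguments with that change; expect a much larger and slower download:
# Sketch: load all available corpora for the en-id pair (slow, large download).
dataset_all = load_dataset(
    "larrylawl/opus",
    src="en",
    tgt="id",
    download_dir="data",
    corpus=None,
)
print(dataset_all)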
Disclaimer.
This repository is still in active development. Do open a PR if there are any issues!
Dataset Summary
[More Information Needed]
Supported Tasks and Leaderboards
[More Information Needed]
Languages
Available languages can be viewed via the OPUS API.
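For programmatic access, something along the following lines can list languages from the OPUS API. This is only a sketch: the endpoint URL, the `languages`/`source` query parameters, and the `opus_languages` helper are assumptions based on the API that `opustools` talks to, so check the OPUS API documentation for the authoritative interface.
# Sketch: query the OPUS API for available languages.
# NOTE: the URL and the `languages`/`source` parameters are assumptions;
# verify them against the OPUS API documentation.
import json
import urllib.parse
import urllib.request

def opus_languages(source=None, base_url="https://opus.nlpl.eu/opusapi/"):
    params = {"languages": "True"}
    if source is not None:
        # With a source language set, the API is expected to return
        # the target languages paired with it.
        params["source"] = source
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

print(opus_languages())      # all languages
print(opus_languages("en"))  # languages paired with English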
Dataset Structure
Data Instances
{'src': 'In the beginning God created the heavens and the earth .',
'tgt': 'Pada mulanya , waktu Allah mulai menciptakan alam semesta'}
Data Fields
features = {
"src": datasets.Value("string"),
"tgt": datasets.Value("string"),
}
Data Splits
All data is merged into a single train split.
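Assuming the `dataset` object from the example usage above, all examples are therefore accessed under the "train" key:
# All examples live under the single "train" split.
train = dataset["train"]
print(len(train))  # total number of parallel sentence pairs
print(train[0]["src"], "->", train[0]["tgt"])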
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
[More Information Needed]
Contributions
Thanks to @larrylawl for adding this dataset.