---
annotations_creators:
- no-annotation
language:
- de
- fr
- el
- et
- fi
- hr
- ji
- pl
- ru
- sr
- sv
- uk
language_creators:
- machine-generated
multilinguality:
- multilingual
pretty_name: Europeana Newspapers
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- newspapers
- lam
- OCR
task_categories:
- text-generation
task_ids:
- language-modeling
---
Dataset Card for Europeana Newspapers
This dataset contains historic newspapers from Europeana. In total, the collection contains approximately 32 billion tokens. Documentation for this dataset is a work in progress.
Dataset Details
Dataset Description
- Curated by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Language(s) (NLP): German, French, Greek, Estonian, Finnish, Croatian, Yiddish, Polish, Russian, Serbian, Swedish, and Ukrainian (see the language codes in the metadata above); some files are tagged as multi-language or as having no language detected.
- License: [More Information Needed]
Dataset Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
To download the full dataset using the Datasets library, you can do the following:

```python
from datasets import load_dataset

dataset = load_dataset("biglam/europeana_newspapers")
```
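
If you prefer not to download all of the parquet shards up front, you can stream the dataset instead. The snippet below is a minimal sketch using the `streaming=True` option of `load_dataset`; it assumes the default train split and only inspects the first record to show which columns are available.

```python
from datasets import load_dataset

# Stream the dataset instead of downloading every parquet shard locally
dataset = load_dataset("biglam/europeana_newspapers", split="train", streaming=True)

# Peek at the first record to see the available columns
first_example = next(iter(dataset))
print(first_example.keys())
```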
You can also download a subset based on language and/or decade range using the following helper function.
```python
from typing import List, Optional, Literal, Union

from huggingface_hub import hf_hub_url, list_repo_files

LanguageOption = Literal[
    "et",
    "pl",
    "sr",
    "ru",
    "sv",
    "no_language_found",
    "ji",
    "hr",
    "el",
    "uk",
    "fr",
    "fi",
    "de",
    "multi_language",
]


def get_files_for_lang_and_years(
    languages: Union[None, List[LanguageOption]] = None,
    min_year: Optional[int] = None,
    max_year: Optional[int] = None,
):
    # List every file in the dataset repository and keep only the parquet shards
    files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
    parquet_files = [f for f in files if f.endswith(".parquet")]
    # Optionally keep only the shards for the requested languages
    if languages:
        parquet_files = [
            f for f in parquet_files if any(lang in f for lang in languages)
        ]
    # Optionally filter by the decade encoded in the file name (e.g. "fr-1870.parquet")
    filtered_files = [
        f
        for f in parquet_files
        if (min_year is None or min_year <= int(f.split("-")[1].split(".")[0]))
        and (max_year is None or int(f.split("-")[1].split(".")[0]) <= max_year)
    ]
    # Return full download URLs for the selected files
    return [
        hf_hub_url("biglam/europeana_newspapers", f, repo_type="dataset")
        for f in filtered_files
    ]
```
This function takes a list of language codes and optional minimum and maximum decades to include. You can use it to get the URLs for the files you want to download from the Hub:
```python
ds = load_dataset("parquet", data_files=get_files_for_lang_and_years(["fr"]), num_proc=4)
```
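
For example, to restrict the download to Ukrainian and French newspapers from decades between 1900 and 1950 (an illustrative filter; adjust the language codes and years to your needs):

```python
from datasets import load_dataset

# Only Ukrainian and French shards for decades between 1900 and 1950
files = get_files_for_lang_and_years(["uk", "fr"], min_year=1900, max_year=1950)
ds = load_dataset("parquet", data_files=files, num_proc=4)
```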
Out-of-Scope Use
[More Information Needed]
Dataset Structure
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Data Collection and Processing
[More Information Needed]
Who are the source data producers?
[More Information Needed]
Annotations [optional]
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Dataset Card Authors [optional]
[More Information Needed]
Dataset Card Contact
[More Information Needed]