Arxiver consists of 63,357 [arXiv](https://arxiv.org/) papers converted to multi-markdown format.
We hope our dataset will be useful for various applications such as semantic search, domain-specific language modeling, question answering, and summarization.

## Curation

The Arxiver dataset is created with a neural OCR model, [Nougat](https://facebookresearch.github.io/nougat/). After OCR processing, we apply custom text processing steps to refine the data, including extracting author information, removing reference sections, and performing additional cleaning and formatting. Please refer to our GitHub [repo](https://github.com/neuralwork/arxiver) for details.
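To give a feel for the reference-removal step, here is a minimal sketch of stripping a trailing references section from a converted markdown document. This is an illustration only, not the actual Arxiver pipeline (which lives in the repo above); the heading patterns matched are assumptions:

```python
import re

def strip_references(markdown: str) -> str:
    """Drop everything from a References/Bibliography heading onward.

    Simplified stand-in for one Arxiver cleaning step; the real
    pipeline is in the neuralwork/arxiver repository.
    """
    # Match a markdown heading whose title is References or Bibliography.
    pattern = re.compile(
        r"^#{1,6}\s*(references|bibliography)\s*$",
        re.IGNORECASE | re.MULTILINE,
    )
    match = pattern.search(markdown)
    # Cut at the heading if found, trimming trailing whitespace.
    return markdown[: match.start()].rstrip() if match else markdown
```

In practice the cut point matters because reference lists dominate token counts in scientific papers while adding little for language modeling.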

## Using Arxiver

You can easily download and use the Arxiver dataset with Hugging Face's [datasets](https://huggingface.co/datasets) library.