Renaming dataset to pd12m.

Files changed:
- README.md (+6 -6)
- tutorials/images.md (+4 -4)
- tutorials/metadata.md (+1 -1)
README.md
CHANGED
@@ -1,24 +1,24 @@
 ---
 language:
 - en
-pretty_name: "
+pretty_name: "PD12M"
 license: "cdla-permissive-2.0"
 tags:
 - image

 ---

-#
+# PD12M

-![
+![PD12M](logo.png)

 # Summary
-**
+**PD12M** is a collection of about 12 million CC0/public-domain image-caption pairs for training generative image models.

 # About
 Training a state-of-the-art generative image model typically requires vast amounts of images from across the internet. Training with images from across the web introduces several data quality issues: the presence of copyrighted material, low-quality images and captions, violent or NSFW content, PII, decaying dataset quality via broken links, etc. Additionally, downloading from the original image hosts places an undue burden on those hosts, impacting services for legitimate users.

-The
+PD12M aims to resolve these issues by collecting only public-domain and CC0-licensed images, automatically recaptioning the image data, applying quality and safety filtering, and hosting the images on dedicated cloud storage separate from the original image hosts. These innovations make PD12M the largest safe and reliable public image dataset available.

 Built and curated with [Source.Plus](https://source.plus).

@@ -38,7 +38,7 @@ The metadata is made available through a series of parquet files with the following attributes:
 - `license_type`: The URL of the license.

 ## Images
-The image files are all hosted in the AWS S3 bucket `
+The image files are all hosted in the AWS S3 bucket `pd12m`. The URLs to the image files are maintained in the metadata files.

 # Tutorials
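Given an S3 file key from the metadata, the public HTTPS URL can be derived from the bucket name. A minimal sketch, assuming the bucket (`pd12m`) and region (`us-west-2`) that appear in the tutorial URLs:

```python
# Sketch: derive the public HTTPS URL for an image from its S3 key.
# Bucket name and region are assumptions taken from the tutorial examples.
BUCKET = "pd12m"
REGION = "us-west-2"

def s3_https_url(key: str) -> str:
    """Build the virtual-hosted-style HTTPS URL for an S3 object key."""
    return f"https://{BUCKET}.s3.{REGION}.amazonaws.com/{key}"

url = s3_https_url("image.png")
# → "https://pd12m.s3.us-west-2.amazonaws.com/image.png"
```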
tutorials/images.md
CHANGED
@@ -4,14 +4,14 @@ Once you have the URLs or S3 file keys from the metadata files, you can download the images.
 #### cURL
 Download an image from a URL to a local image file with the name `image.png`:
 ```bash
-curl -O image.png https://
+curl -o image.png https://pd12m.s3.us-west-2.amazonaws.com/image.png
 ```
 #### Python
 Download an image from a URL to a local image file with the name `image.png`:
 ```python
 import requests

-url = "https://
+url = "https://pd12m.s3.us-west-2.amazonaws.com/image.png"
 response = requests.get(url)
 with open('image.png', 'wb') as f:
     f.write(response.content)
@@ -19,11 +19,11 @@ with open('image.png', 'wb') as f:
 #### img2dataset
 You can also use the `img2dataset` tool to quickly download images from a metadata file. The tool is available [here](https://github.com/rom1504/img2dataset). The example below downloads all the images to a local `images` directory.
 ```bash
-img2dataset download --url_list
+img2dataset --url_list pd12m-metadata.001.parquet --input_format parquet --url_col url --caption_col caption --output_folder images/
 ```

 #### S3 CLI
 Download an image from an S3 bucket to a local image file with the name `image.png`:
 ```bash
-aws s3 cp s3://
+aws s3 cp s3://pd12m/image.png image.png
 ```
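The metadata and download steps above can be combined into one loop. A minimal sketch with `pandas` and `requests`, assuming a `url` column as in the img2dataset example (the metadata filename is illustrative):

```python
# Sketch: download every image listed in a metadata parquet file.
# Assumes a `url` column, as in the img2dataset example above.
import os
from urllib.parse import urlparse

import pandas as pd
import requests

def local_name(url: str) -> str:
    """Derive a local filename from the last path segment of a URL."""
    return os.path.basename(urlparse(url).path)

def download_all(metadata_path: str, out_dir: str = "images") -> None:
    os.makedirs(out_dir, exist_ok=True)
    df = pd.read_parquet(metadata_path)
    for url in df["url"]:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()  # skip nothing silently; fail on broken links
        with open(os.path.join(out_dir, local_name(url)), "wb") as f:
            f.write(resp.content)

# Usage (hypothetical filename):
# download_all("pd12m-metadata.001.parquet")
```

For bulk downloads, `img2dataset` (above) is the better fit: it parallelizes, resizes, and resumes; this sketch is only meant to show the moving parts.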
tutorials/metadata.md
CHANGED
@@ -14,7 +14,7 @@ The metadata files are in parquet format, and contain the following attributes:
 The files are in parquet format, and can be opened with a tool like `pandas` in Python.
 ```python
 import pandas as pd
-df = pd.read_parquet('
+df = pd.read_parquet('pd12m-metadata.001.parquet')
 ```

 #### Get URLs from metadata
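Once the parquet file is loaded into a DataFrame, pulling out the URLs is a column access. A minimal sketch, using an in-memory stand-in for the real file and assuming `url`/`caption` column names from the img2dataset example:

```python
# Sketch: extract image URLs (and captions) from a loaded metadata DataFrame.
# Column names `url` and `caption` are assumptions from the img2dataset example.
import pandas as pd

# Stand-in for: df = pd.read_parquet('pd12m-metadata.001.parquet')
df = pd.DataFrame({
    "url": ["https://pd12m.s3.us-west-2.amazonaws.com/image.png"],
    "caption": ["An example caption"],
})

urls = df["url"].tolist()                 # list of image URLs
pairs = list(zip(df["url"], df["caption"]))  # (url, caption) pairs
```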