How to download the dataset in bulk?

#7
by Chinglin - opened

The the-stack-v2-train-full-ids dataset only provides the blob_id; to get the actual content, each file has to be downloaded individually from AWS S3 object storage.
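
For context, this is roughly the per-file access pattern (just a sketch: read_blob is my own helper name, and I'm assuming the objects stored under content/<blob_id> in the softwareheritage bucket are gzip-compressed):

import gzip
import os

import boto3

session = boto3.Session(
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"])
s3 = session.client("s3")

def read_blob(blob_id):
    # one GET request per file: s3://softwareheritage/content/<blob_id>
    obj = s3.get_object(Bucket="softwareheritage", Key=f"content/{blob_id}")
    return gzip.decompress(obj["Body"].read())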

Is there any way to download the data in bulk instead of downloading it file by file?

Thanks a lot. πŸ™πŸ™

Same question. I'm currently downloading it with the following script:

import os

import boto3
from datasets import load_dataset

ds = load_dataset("bigcode/the-stack-v2-train-full-ids", streaming=True, split="train")

session = boto3.Session(
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"])
s3 = session.client("s3")

def download_contents(files):
    # fetch each blob from the Software Heritage bucket; SOME_PATH is a placeholder
    # for a local output directory
    for file in files:
        local_file = f"{SOME_PATH}/{file['blob_id']}.gz"
        s3.download_file('softwareheritage', f"content/{file['blob_id']}", local_file)
        file["local_file"] = local_file
    return {"files": files}

# the map is lazy on a streaming dataset, so the downloads happen while iterating below
ds = ds.map(lambda row: download_contents(row["files"]))

for row in ds:
    for file in row["files"]:
        ...

But it takes forever, since it downloads the files one by one.
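
Since the per-file GETs are I/O-bound, downloading a row's files with a thread pool instead of sequentially should at least overlap the requests. This is only a rough sketch on top of the script above (download_one, download_contents_parallel and max_workers=16 are my own names and an arbitrary value); low-level boto3 clients are thread-safe, so reusing the shared s3 client across threads should be fine:

from concurrent.futures import ThreadPoolExecutor

def download_one(file):
    local_file = f"{SOME_PATH}/{file['blob_id']}.gz"
    s3.download_file('softwareheritage', f"content/{file['blob_id']}", local_file)
    file["local_file"] = local_file
    return file

def download_contents_parallel(files, max_workers=16):
    # overlap the per-file requests; the bottleneck is network latency, not CPU
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        files = list(pool.map(download_one, files))
    return {"files": files}

ds = ds.map(lambda row: download_contents_parallel(row["files"]))

That only parallelizes within a single row, though, so rows with few files still go slowly.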

Separately, I'm also trying to parallelize across rows with a PyTorch DataLoader, but it does not seem to improve much. I'm new to both datasets and PyTorch, so correct me if I'm doing it wrong:

from torch.utils.data import DataLoader

def collate_fn(batch):
    """Return the rows untouched: the default collate chokes on the datetime
    columns, and we only need the dataloader to drive the downloads."""
    return batch

dataloader = DataLoader(
    ds,
    batch_size=32,
    num_workers=64,
    collate_fn=collate_fn,
)

# iterating runs the lazy map (and thus the S3 downloads) inside the worker processes
for batch in dataloader:
    for row in batch:
        for file in row["files"]:
            ...

Even with 64 workers the download is still very slow; after one night I had only downloaded 5.3 GB of files in total.
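
One more thing I'd double-check, although I'm not sure it explains the slowness: as far as I understand, boto3 sessions and clients are not meant to be shared across processes, and here the s3 client is created in the main process before the DataLoader forks its workers. A possible workaround (get_s3 and _s3 are my own names, and SOME_PATH is the same placeholder as above) is to create the client lazily inside each worker:

_s3 = None  # created lazily so that each DataLoader worker process gets its own client

def get_s3():
    global _s3
    if _s3 is None:
        session = boto3.Session(
            aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
            aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"])
        _s3 = session.client("s3")
    return _s3

def download_contents(files):
    s3 = get_s3()  # per-process client instead of the module-level one
    for file in files:
        local_file = f"{SOME_PATH}/{file['blob_id']}.gz"
        s3.download_file('softwareheritage', f"content/{file['blob_id']}", local_file)
        file["local_file"] = local_file
    return {"files": files}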
