This is a clone of https://github.com/NNNNAI/VGGFace2-HQ
VGGFace2-HQ
Related paper: TPAMI
The first open-source high-resolution dataset for face swapping!
A high-resolution version of VGGFace2 for academic face-editing purposes. This project uses GFPGAN for image restoration and insightface for data preprocessing (cropping and alignment).
We provide a download link for users to download the data, and also provide guidance on how to generate the VGGFace2 dataset from scratch.
If you find this project useful, please star it; that is the greatest appreciation of our work.
Get the VGGFace2-HQ dataset from cloud!
We have uploaded the VGGFace2-HQ dataset to the cloud, and you can download it from the links below.
Google Drive
We are especially grateful to Kairui Feng, a PhD student at Princeton University.
Baidu Drive
[Baidu Drive] Password: sjtu
Generate the HQ dataset yourself (if you want to do so)
Preparation
Installation
We highly recommend that you use Anaconda for installation
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch
pip install insightface==0.2.1 onnxruntime
(optional) pip install onnxruntime-gpu==1.2.0
pip install basicsr
pip install facexlib
pip install -r requirements.txt
python setup.py develop
- The PyTorch and CUDA versions above are the most recommended; other versions may work.
- Using insightface with a different version is not recommended. Please use this specific version.
- These settings have been tested on both Windows and Ubuntu.
Pretrained model
- We use the face detection and alignment methods from insightface for image preprocessing. Please download the relevant files from this link and unzip them to ./insightface_func/models.
- Download GFPGANCleanv1-NoCE-C2.pth from the GFPGAN official repo and place "GFPGANCleanv1-NoCE-C2.pth" in ./experiments/pretrained_models.
Data preparation
- Download VGGFace2 Dataset from VGGFace2 Dataset for Face Recognition
Inference
- First, preprocess all photos in VGGFace2: detect faces and align them to the same alignment format as the FFHQ dataset.
python scripts/crop_align_vggface2_FFHQalign.py --input_dir $DATAPATH$/VGGface2/train --output_dir_ffhqalign $ALIGN_OUTDIR$ --mode ffhq --crop_size 256
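To illustrate the alignment step above: mapping detected 5-point facial landmarks onto a fixed template is typically done with a least-squares similarity transform (the Umeyama method). This is a minimal sketch of that idea; the template coordinates below are hypothetical stand-ins, and the actual template the script uses lives inside crop_align_vggface2_FFHQalign.py.

```python
import numpy as np

def estimate_similarity(src, dst):
    # Least-squares similarity transform (Umeyama, 1991) mapping src -> dst.
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)            # 2x2 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])                       # guard against reflections
    R = U @ D @ Vt                              # optimal rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * (R @ src_mean)
    return np.hstack([scale * R, t[:, None]])   # 2x3 affine matrix

# Hypothetical 5-point template (eyes, nose, mouth corners) for crop_size 256;
# NOT the exact coordinates the repo's script uses.
TEMPLATE = np.array([[ 87.5, 118.2], [168.1, 117.7], [128.1, 164.0],
                     [ 95.0, 211.1], [161.7, 210.7]])

# Simulate detected landmarks: the template rotated, scaled, and shifted.
theta = 0.25
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
landmarks = 1.4 * (TEMPLATE @ rot.T) + np.array([30.0, -12.0])

M = estimate_similarity(landmarks, TEMPLATE)
aligned = landmarks @ M[:, :2].T + M[:, 2]      # what a warpAffine call applies
```

The resulting 2x3 matrix is what an image-warping routine (e.g. cv2.warpAffine) would consume to produce the aligned crop.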
- Then run GFPGAN image restoration on the aligned photos.
python scripts/inference_gfpgan_forvggface2.py --input_path $ALIGN_OUTDIR$ --batchSize 8 --save_dir $HQ_OUTDIR$
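The --batchSize flag controls how many aligned crops are restored per forward pass. A minimal sketch of that batching logic, assuming a flat list of image files (the file names below are placeholders, not real dataset paths):

```python
from itertools import islice

def batched(items, batch_size=8):
    # Yield successive fixed-size batches; the final batch may be smaller.
    it = iter(items)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Placeholder file names standing in for the aligned VGGFace2 crops.
crops = [f"{i:06d}.png" for i in range(20)]
batches = list(batched(crops, batch_size=8))  # 3 batches of 8, 8, and 4 images
```

Each batch would then be stacked into a tensor and passed through the restoration model in one call, which is why larger batch sizes trade GPU memory for throughput.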
Citation
If you find our work useful in your research, please consider citing:
@Article{simswapplusplus,
author = {Xuanhong Chen and
Bingbing Ni and
Yutian Liu and
Naiyuan Liu and
Zhilin Zeng and
Hang Wang},
title = {SimSwap++: Towards Faster and High-Quality Identity Swapping},
journal = {{IEEE} Trans. Pattern Anal. Mach. Intell.},
volume = {46},
number = {1},
pages = {576--592},
year = {2024}
}
Related Projects
Please visit our popular face swapping project
Please visit our ACMMM 2020 high-quality style transfer project
Please visit our AAAI 2021 sketch-based rendering project
Learn about our other projects
Acknowledgements