---
license: apache-2.0
language:
  - en
tags:
  - Domain Generalization
  - Multimodal Learning
pretty_name: Human-Animal-Cartoon
size_categories:
  - 1K<n<10K
task_categories:
  - zero-shot-classification
---

# Human-Animal-Cartoon dataset

Our Human-Animal-Cartoon (HAC) dataset consists of seven actions (‘sleeping’, ‘watching tv’, ‘eating’, ‘drinking’, ‘swimming’, ‘running’, and ‘opening door’) performed by humans, animals, and cartoon figures, forming three different domains. We collected 3,381 video clips from the internet, with around 1,000 clips per domain, and provide three modalities: video, audio, and pre-computed optical flow.
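
The clips can be loaded with standard video tooling. As a rough illustration, the sketch below enumerates the clips of one domain and reads video plus audio with torchvision; the folder layout (`HAC/<domain>/<action>/*.mp4`), the domain folder names, and the action folder names are assumptions here, not the official structure:

```python
import os
import glob

from torch.utils.data import Dataset
from torchvision.io import read_video

# Assumed names -- check the actual release for the real folder layout.
DOMAINS = ["human", "animal", "cartoon"]
ACTIONS = ["sleeping", "watching tv", "eating", "drinking",
           "swimming", "running", "opening door"]


class HACDataset(Dataset):
    """Loads (video, audio, label) triples for a single HAC domain.

    Assumes a layout like root/<domain>/<action>/*.mp4; the pre-computed
    optical flow would be loaded analogously from its own files.
    """

    def __init__(self, root, domain):
        self.samples = []
        for label, action in enumerate(ACTIONS):
            pattern = os.path.join(root, domain, action, "*.mp4")
            self.samples += [(p, label) for p in sorted(glob.glob(pattern))]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        # read_video returns (frames [T, H, W, C], audio, info)
        video, audio, _ = read_video(path, pts_unit="sec")
        return video, audio, label
```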

The dataset can be used for Multi-modal Domain Generalization and Adaptation. More details can be found in the SimMMDG paper and code.
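
For domain generalization, a common protocol is leave-one-domain-out: train on two domains and evaluate on the held-out third. A minimal sketch, reusing the hypothetical `HACDataset` above (the official splits and training code are in the SimMMDG repository):

```python
from torch.utils.data import ConcatDataset

def leave_one_domain_out(root, target_domain):
    # Train on the two source domains, test on the held-out target domain.
    sources = [d for d in DOMAINS if d != target_domain]
    train_set = ConcatDataset([HACDataset(root, d) for d in sources])
    test_set = HACDataset(root, target_domain)
    return train_set, test_set

# e.g. train on human + animal clips, evaluate on cartoon clips
train_set, test_set = leave_one_domain_out("HAC", target_domain="cartoon")
```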

## Related Projects

- SimMMDG: A Simple and Effective Framework for Multi-modal Domain Generalization
- MOOSA: Towards Multimodal Open-Set Domain Generalization and Adaptation through Self-supervision
- MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities

## Citation

If you find our dataset useful in your research, please consider citing our paper:

```bibtex
@inproceedings{dong2023SimMMDG,
    title={Sim{MMDG}: A Simple and Effective Framework for Multi-modal Domain Generalization},
    author={Dong, Hao and Nejjar, Ismail and Sun, Han and Chatzi, Eleni and Fink, Olga},
    booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
    year={2023}
}
```