
Model Details

Arabic CLIP is an adaptation of Contrastive Language-Image Pre-training (CLIP) to the Arabic language. CLIP, developed by OpenAI, learns visual concepts from images and relates them to textual descriptions. This work aims to improve the model's understanding and interpretation of visual information in the context of Arabic. This version of Arabic CLIP is trained in two stages: in the first stage the vision encoder is frozen, and in the second stage it is unfrozen to allow fine-tuning.
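A minimal sketch of this two-stage schedule, assuming a standard PyTorch contrastive training loop (the optimizer choice, learning rates, and loop bodies here are illustrative, not the configuration actually used):

import torch
from transformers import VisionTextDualEncoderModel

model = VisionTextDualEncoderModel.from_pretrained("LinaAlhuri/clip-bert-lit-two-stages")

# Stage 1 (LiT-style): lock the vision encoder and train only the text tower
for p in model.vision_model.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
# ... run contrastive image-text training for stage 1 ...

# Stage 2: unfreeze the vision encoder and fine-tune the whole model
for p in model.vision_model.parameters():
    p.requires_grad = True
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
# ... continue contrastive training for stage 2 ...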

Model Use


# Load the two-stage Arabic CLIP dual encoder and the Arabic BERT tokenizer
from transformers import VisionTextDualEncoderModel, AutoTokenizer

model = VisionTextDualEncoderModel.from_pretrained("LinaAlhuri/clip-bert-lit-two-stages")
model.save_pretrained("arabic_clip")  # optional: keep a local copy

tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic", use_fast=True)
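Once both are loaded, the model can score Arabic captions against an image. The snippet below is a sketch that assumes the repository ships an image processor config loadable via AutoImageProcessor (if not, the preprocessor of the underlying vision encoder should be used instead); the file name cat.jpg is a placeholder:

import torch
from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("LinaAlhuri/clip-bert-lit-two-stages")

image = Image.open("cat.jpg")  # placeholder image path
texts = ["قطة تجلس على أريكة", "كلب يركض في حديقة"]  # "a cat sitting on a sofa", "a dog running in a garden"

inputs = tokenizer(texts, padding=True, return_tensors="pt")
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    outputs = model(
        input_ids=inputs.input_ids,
        attention_mask=inputs.attention_mask,
        pixel_values=pixel_values,
    )

# logits_per_image holds the image-to-text similarity scores (one per caption)
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)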

Data

Given the scarcity of Arabic resources, the aim was to build a comprehensive Arabic image-text dataset by combining multiple sources. The main challenges were the limited amount of genuine Arabic data and the variable quality of translated datasets. The approach therefore merges genuine datasets, which carry rich information, with translated datasets, which cover diverse domains, scenarios, and objects, balancing the pros and cons of each.

Dataset name                 Images
Arabic Conceptual Captions   1,427,210
Arabic COCO 2014             414,113
Arabic WIT                   109,366
Arabic Flickr8K              24,272
Proposed (WAP) dataset       151,252
Total                        2,126,213

Performance and Limitations

We tested the efficacy of Arabic CLIP on several benchmarks covering zero-shot learning, image retrieval, localization, and image search:

  • Conceptual Captions
  • COCO
  • ImageNet
  • Unsplash

Zero-shot Learning

Model                   Query               Top 1   Top 5    Top 10   Top 100
Multilingual CLIP       Short translation   10.10   21.99    26.70    47.57
Multilingual CLIP       Long translation     9.518  20.942   25.54    45.59
Two-stage Arabic CLIP   Short translation   17.28   36.97    45.43    73.85
Two-stage Arabic CLIP   Long translation    15.52   34.74    43.49    72.28
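For reference, zero-shot prediction works by encoding candidate class prompts with the text tower and ranking them by cosine similarity to the image embedding; Top-k accuracy then counts a prediction as correct when the true class appears among the k best-scoring prompts. A minimal sketch, reusing model, tokenizer, and pixel_values from the snippets above (the Arabic prompts are hypothetical examples, not the templates used for the table):

import torch

# Hypothetical Arabic prompts for three candidate classes
class_prompts = ["صورة قطة", "صورة كلب", "صورة سيارة"]  # "a photo of a cat/dog/car"

text_inputs = tokenizer(class_prompts, padding=True, return_tensors="pt")
with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)
    image_emb = model.get_image_features(pixel_values=pixel_values)

# Cosine similarity between the image and every class prompt
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
sims = image_emb @ text_emb.T  # shape: (num_images, num_classes)

# Top-k predictions: a Top-k hit means the true class index is among these
topk_classes = sims.topk(k=3, dim=-1).indices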

Image Retrieval

Conceptual Captions Evaluation

Metric   MCLIP   Two-stage Arabic CLIP
MRR@1    0.064   0.151
MRR@5    0.093   0.215
MRR@10   0.100   0.227

COCO Evaluation

Metric   MCLIP   Two-stage Arabic CLIP
MRR@1    0.043   0.062
MRR@5    0.068   0.097
MRR@10   0.074   0.106
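For context, MRR@k (mean reciprocal rank at k) averages 1/rank of the first correct image over all queries, counting 0 when the correct image is not ranked within the top k. A minimal sketch of the metric (the rank list is made-up example data):

def mrr_at_k(ranks, k):
    """Mean reciprocal rank truncated at k.

    ranks: 1-based rank of the ground-truth image for each query,
           e.g. 1 means the correct image was retrieved first.
    """
    return sum(1.0 / r if r <= k else 0.0 for r in ranks) / len(ranks)

# Example: three queries whose correct image was ranked 1st, 4th, and 12th
ranks = [1, 4, 12]
print(mrr_at_k(ranks, 1))   # 0.333...
print(mrr_at_k(ranks, 5))   # (1 + 1/4) / 3 = 0.4166...
print(mrr_at_k(ranks, 10))  # unchanged, since rank 12 > 10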

Limitations

The main limitations can be summarized as follows:

  • Arabic CLIP struggles to count beyond three.
  • Genuine Arabic image-text samples are scarce.
  • Various kinds of noise and bias may have been introduced into Arabic CLIP, since no studies have yet addressed these issues in the published Arabic datasets or in Arabic language models.

Bias

Regarding gender bias, note that Arabic uses a two-gender grammatical system in which every noun is classified as masculine or feminine, whereas English does not. Translating text from English to Arabic can therefore lose information and introduce gender bias.
