arxiv:2302.06555

Do Vision and Language Models Share Concepts? A Vector Space Alignment Study

Published on Feb 13, 2023 · Submitted by akhaliq on Jul 11
Authors:

Abstract

Large-scale pretrained language models (LMs) are said to "lack the ability to connect utterances to the world" (Bender and Koller, 2020), because they do not have "mental models of the world" (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT, and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).
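To make concrete what a vector-space alignment study can look like, here is a minimal, purely illustrative sketch (not the paper's exact protocol): it maps a set of language-model concept embeddings onto paired vision-model embeddings with an orthogonal Procrustes rotation and scores the mapping by nearest-neighbor retrieval. The arrays, dimensions, and the precision@1 metric below are assumptions for illustration; in the paper's setting the vectors would come from the listed LM and vision-model families.

```python
# Illustrative sketch of a cross-modal alignment check (assumed setup, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: rows are concepts, columns are embedding dimensions.
# In practice these would be pooled LM representations and pooled vision-model
# representations for the same set of concepts.
lm_vecs = rng.normal(size=(200, 64))    # language-side concept vectors
vis_vecs = rng.normal(size=(200, 64))   # vision-side concept vectors

def center_and_normalize(x):
    """Mean-center and L2-normalize rows so only the geometry of the space matters."""
    x = x - x.mean(axis=0, keepdims=True)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

A = center_and_normalize(lm_vecs)
B = center_and_normalize(vis_vecs)

# Orthogonal Procrustes: rotation R minimizing ||A @ R - B||_F.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

# Precision@1: does each mapped LM vector retrieve its paired vision vector?
sims = (A @ R) @ B.T  # cosine similarities (rows are unit-norm, R is orthogonal)
precision_at_1 = (sims.argmax(axis=1) == np.arange(len(B))).mean()
print(f"retrieval precision@1: {precision_at_1:.3f}")
```

With random vectors as above, the score sits at chance; above-chance retrieval on real paired embeddings is the kind of signal that would indicate partial isomorphism between the two representation spaces.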

Community

Paper submitter

Hi @jaagli, congrats on this work!

Are you planning to link the HF datasets to this paper? Here's how to do that: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper

Paper author

Thanks! Yes, I am planning to link the datasets to the paper. I appreciate you sharing the link on how to do that—it is very helpful.


Models citing this paper: 0

No model links this paper. Cite arxiv.org/abs/2302.06555 in a model README.md to link it from this page.

Datasets citing this paper: 2

Spaces citing this paper: 0

No Space links this paper. Cite arxiv.org/abs/2302.06555 in a Space README.md to link it from this page.

Collections including this paper: 4