Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm
Abstract
NLP tasks are often limited by a scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1246 million tweets containing one of 64 common emojis, we obtain state-of-the-art performance on 8 benchmark datasets within sentiment, emotion and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yields a performance improvement over previous distant supervision approaches.
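To illustrate the distant-supervision idea described in the abstract, here is a minimal sketch of the labeling step: the emoji a tweet contains is used as a noisy class label, the emoji is stripped from the text, and a classifier is trained to predict it. This is not the authors' DeepMoji model (which pretrains an attention-based LSTM on 1246 million tweets over 64 emojis); the tiny emoji set, toy tweets, and bag-of-words classifier below are illustrative assumptions chosen to keep the example short and runnable.

```python
# Minimal sketch of emoji-based distant supervision (illustrative only; not the
# authors' DeepMoji architecture). EMOJI_SET, the toy tweets, and the
# bag-of-words pipeline are placeholder assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny stand-in for the 64 common emojis used as noisy labels in the paper.
EMOJI_SET = ["😂", "😍", "😭", "🙄"]

def to_example(tweet: str):
    """Turn a raw tweet into (text, emoji-label) by removing the emoji it contains.

    Tweets containing none (or more than one distinct) of the tracked emojis
    are skipped, so the emoji itself serves as a distant, noisy label.
    """
    present = [e for e in EMOJI_SET if e in tweet]
    if len(set(present)) != 1:
        return None
    emoji = present[0]
    return tweet.replace(emoji, "").strip(), EMOJI_SET.index(emoji)

raw_tweets = [
    "this new album is everything 😍",
    "monday mornings 🙄",
    "I can't stop laughing 😂😂",
    "missing you so much 😭",
]

examples = [ex for ex in (to_example(t) for t in raw_tweets) if ex is not None]
texts, labels = zip(*examples)

# Any text classifier can serve as the pretraining objective; a bag-of-words
# logistic regression keeps the sketch self-contained.
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["laughing so hard right now"]))  # ideally maps to the 😂 class
```

In the paper, the same objective is scaled up and the pretrained representation is then fine-tuned on downstream sentiment, emotion, and sarcasm benchmarks with a single model.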
Community
Video demo by the authors of the paper:
https://www.youtube.com/watch?v=u_JwYxtjzUs
The online demo mentioned in the paper has been discontinued; check 🤗 Spaces instead.