Enhanced GRID Corpus with Lip Landmark Coordinates

Introduction

This enhanced version of the GRID audiovisual sentence corpus, originally available on Zenodo, adds new annotations for audio-visual speech recognition research. Building on the preprocessed data from LipNet-PyTorch, we have added lip landmark coordinates to the dataset, providing detailed positional information for key points around the lips. This addition makes the dataset considerably more useful for visual speech recognition and related fields. To make the data easy to access and to integrate into existing machine learning workflows, we have published the enriched dataset on the Hugging Face platform, where it is readily available to the research community.

Dataset Structure

This dataset is split into three directories (a short reading sketch follows the list):

  • lip_images: contains the images of the lips
    • speaker_id: contains one directory per video of a particular speaker
      • video_id: contains the frames of a particular video
        • frame_no.jpg: JPEG image of the lips in a particular frame
  • lip_coordinates: contains the lip landmark coordinates
    • speaker_id: contains the lip landmarks of a particular speaker
      • video_id.json: a JSON file containing the lip landmark coordinates of a particular video, where the keys are the frame numbers and the values are the x, y lip landmark coordinates
  • GRID_alignments: contains the word alignments of all the videos in the dataset
    • speaker_id: contains the alignments of a particular speaker
      • video_id.align: contains the alignment of a particular video, where each line gives a word together with its start and end times in the video
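
As a quick illustration of these formats, below is a minimal reading sketch. The paths, IDs, and helper names are hypothetical, and details such as whether the alignments sit under an align/ subdirectory or how frame numbers are keyed should be checked against a local copy.

import json

def read_alignment(path):
    """Parse a GRID .align file: one 'start end word' triple per line."""
    with open(path) as f:
        return [(int(start), int(end), word)
                for start, end, word in (line.split() for line in f if line.strip())]

def read_lip_coordinates(path):
    """Load a video's landmark JSON: frame number -> list of [x, y] points."""
    with open(path) as f:
        return json.load(f)

# Example paths following the layout above (IDs are illustrative):
words = read_alignment("GRID_alignments/s1/align/bbaf2n.align")
landmarks = read_lip_coordinates("lip_coordinates/s1/bbaf2n.json")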

Details

The lip landmark coordinates were extracted from the original videos of the GRID corpus with the dlib library, using the shape_predictor_68_face_landmarks_GTX.dat pretrained model. For each video, the coordinates are saved in a JSON file whose keys are the frame numbers and whose values are the x, y lip landmark coordinates, stored in the same order as the frames appear in the video.
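
For reference, the sketch below shows roughly how such coordinates can be produced with dlib. The video and output paths are placeholders, and the choice to keep points 48 to 67 of the 68-point model (the mouth region) is an assumption about this dataset's extraction pipeline, not a description of it.

import json

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks_GTX.dat")

cap = cv2.VideoCapture("bbaf2n.mpg")  # hypothetical source video
coords = {}
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if faces:
        shape = predictor(gray, faces[0])
        # Landmarks 48-67 of the 68-point model cover the mouth region (assumed subset).
        coords[str(frame_no)] = [(shape.part(i).x, shape.part(i).y)
                                 for i in range(48, 68)]
    frame_no += 1
cap.release()

with open("bbaf2n.json", "w") as f:
    json.dump(coords, f)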

Usage

The dataset can be downloaded by cloning this repository. Since the tar archives are large, you may need Git LFS installed for the clone to fetch them.

Cloning the repository

git clone https://huggingface.co/datasets/SilentSpeak/EGCLLC

Loading the dataset

After cloning the repository, you can load the dataset by unpacking the tar archives with the dataset_tar.py script.

Alternatively, and likely faster, you can un-tar the archives directly with the following commands:

tar -xvf lip_images.tar
tar -xvf lip_coordinates.tar
tar -xvf GRID_alignments.tar
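
Once the archives are extracted, loading one video's frames and landmarks might look like the sketch below. It assumes frames are named by frame number (frame_no.jpg, as described in the structure above) and that the JSON keys match those numbers as strings; the speaker and video IDs are examples.

import json
from pathlib import Path

import cv2

root = Path("EGCLLC")  # cloned repository, after un-tarring
speaker, video = "s1", "bbaf2n"

# Lip images for one video, ordered by frame number.
frame_dir = root / "lip_images" / speaker / video
frame_paths = sorted(frame_dir.glob("*.jpg"), key=lambda p: int(p.stem))
frames = [cv2.imread(str(p)) for p in frame_paths]

# Per-frame lip landmark coordinates for the same video.
with open(root / "lip_coordinates" / speaker / f"{video}.json") as f:
    coords = json.load(f)

# Pair each frame with its landmarks, skipping frames without a detection.
pairs = [(img, coords[p.stem]) for p, img in zip(frame_paths, frames)
         if p.stem in coords]
print(f"{video}: {len(frames)} frames, {len(pairs)} with landmarks")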

Acknowledgements

Alvarez Casado, C., & Bordallo Lopez, M. (2021). Real-time face alignment: Evaluation methods, training strategies and implementation optimization. Journal of Real-Time Image Processing.

Assael, Y., Shillingford, B., Whiteson, S., & de Freitas, N. (2017). LipNet: End-to-End Sentence-level Lipreading. GPU Technology Conference.

Cooke, M., Barker, J., Cunningham, S., & Shao, X. (2006). The Grid Audio-Visual Speech Corpus (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3625687
