Discrepancy between num_samples when fine-tuning and the number of samples in my training file
Hello,
I am trying to fine-tune ProtGPT2 using a training dataset of about 4000 sequences. However, when I run run_clm.py,
it reports that the number of training samples is 605 (see screenshot below), so it doesn't seem to be using all of my training data. I have tried adjusting the batch size and the number of epochs, and setting the --max_train_samples
argument to 10000, but none of these has any effect. Has anyone else run into this?
Thanks in advance!
Kathryn
Hi codev!
The run_clm.py script concatenates the sequences and splits them into blocks that fit the model's window size of 512 tokens, so the reported number of training samples counts these blocks, not your original sequences. The window size is expressed in tokens, and each token covers roughly 4 amino acids on average, so a single block can hold anywhere from 2 to 10 sequences depending on their length. Hence, 605 blocks can perfectly well comprise your 4000 sequences.
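If it helps to see the idea, here is a minimal sketch of that grouping step, mirroring the group_texts logic in run_clm.py. The block size of 512 and the toy token counts are illustrative assumptions, not your actual data:

```python
# Sketch of how run_clm.py groups tokenized sequences into fixed-size blocks.
# The reported "num_samples" counts these blocks, not the input sequences.
BLOCK_SIZE = 512  # assumed window size, as discussed above

def group_texts(tokenized_sequences, block_size=BLOCK_SIZE):
    """Concatenate all token lists and split them into block_size chunks."""
    concatenated = [tok for seq in tokenized_sequences for tok in seq]
    total_length = (len(concatenated) // block_size) * block_size  # drop remainder
    return [concatenated[i:i + block_size] for i in range(0, total_length, block_size)]

# Toy example: 4000 "sequences" of ~77 tokens each (~310 amino acids at ~4 aa/token)
toy_sequences = [[0] * 77 for _ in range(4000)]
blocks = group_texts(toy_sequences)
print(len(blocks))  # -> 601 blocks, in the same ballpark as the 605 in your logs
```

So the count you see is simply the number of 512-token blocks that your concatenated dataset produces, and all of your sequences are still being used.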
Hope this helps, let me know if it doesn't!