Title of Competition | Title of Writeup | User | Writeup |
---|---|---|---|
chaii - Hindi and Tamil Question Answering | 43rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We learned a lot from you :) Thank you for the congratulations!</p>
<h2>Thanks to</h2>
<p>First of all, we'd like to thank Kaggle and the Google Research team. Also, thanks to everyone who participated in this competition (especially @kishalmandal and <a href="https://www.kaggle.com/tkm2261" target="_blank">@tkm2261</a>).</p>
<h2>Solution</h2>
<p>Model : deepset/xlm-roberta-large-squad2 (single)</p>
<p>Data : </p>
<ul>
<li><code>Competition dataset + MLQA + XQUAD</code> (5 folds)</li>
<li><code>Competition dataset + MLQA + XQUAD + squad_translated_tamil</code> (5 folds)</li>
<li><code>Competition dataset + MLQA + XQUAD + squad_translated_tamil + Google translated SQuAD2.0 to Hindi and Tamil</code> (2 folds)</li>
</ul>
<h2>Strategies</h2>
<ol>
<li><p>Soft voting with many folds was all we did (a minimal sketch of soft voting follows this list). We assumed our 12 folds (Public LB: 0.778) would be better than the 5 folds (Public LB: 0.792) on the Private LB, since the 5-fold model's folds seemed to have been selected to perform well on the Public LB only. Fortunately, that was exactly right. (12 folds Private LB: 0.740, 5 folds Private LB: 0.727)</p></li>
<li><p>A couple of folds trained on the additional Google-translated SQuAD2.0 data worked well for the ensemble (LB 0.004 higher), although their individual scores were not that high (Public LB 0.685 for a single-fold submission). <br>
We guess this contributed to the diversity of ingredients in our final submission.</p></li>
<li><p>To hedge against the case where the 0.792 model also performed well on the Private LB, we added its 5 folds to our 12 folds and finalized our submission selection with both variants (12 folds and 17 folds). </p></li>
<li><p>We also tried a hard-voting ensemble, but its LB score was always 0.001-0.005 lower than soft voting.</p></li>
</ol>
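<p>For reference, a minimal sketch of the soft-voting step for extractive QA, assuming all folds share the same tokenizer so their start/end logits are aligned token by token; the function name and shapes are illustrative, not the team's actual code:</p>
<pre><code>import numpy as np

def soft_vote(fold_start_logits, fold_end_logits):
    # each argument: one array per fold, all of shape (num_features, seq_len),
    # produced with the same tokenization so positions line up across folds
    start = np.mean(np.stack(fold_start_logits), axis=0)
    end = np.mean(np.stack(fold_end_logits), axis=0)
    return start, end  # decode an answer span from the averaged logits as usual
</code></pre>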
<p>Thanks bro :D</p> |
chaii - Hindi and Tamil Question Answering | 4th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/mathurinache" target="_blank">@mathurinache</a> you are right, pp boosted much :)</p>
<p>Thank you!</p>
<p><a href="https://www.kaggle.com/lucamassaron" target="_blank">@lucamassaron</a> Because the annotation is noisy, I use the fraction of totally wrong (jaccard = 0) answers as a metric: <br>
$\text{metric} = 1 - \frac{\#\,\text{totally wrong}}{\#\,\text{samples}}$</p>
<p><a href="https://www.kaggle.com/vanle73" target="_blank">@vanle73</a> thanks for the comment.<br>
-1. I made a custom sampler. In each epoch, I use the whole chaii data and sample part of the external data.</p>
<pre><code>import torch
from torch.utils.data import Sampler

class ChaiiRandomSampler(Sampler):
    """Uses every chaii example each epoch plus a random `downsample`
    fraction of the external examples."""

    def __init__(self, data_source, downsample=1.0):
        self.data_source = data_source
        self.downsample = downsample
        self.chaii_indices = []
        self.ext_indices = []
        # items with src == 0 are chaii examples, everything else is external
        for i, item in enumerate(self.data_source):
            if item['src'] == 0:
                self.chaii_indices.append(i)
            else:
                self.ext_indices.append(i)
        self.chaii_indices = torch.tensor(self.chaii_indices, dtype=torch.long)
        self.ext_indices = torch.tensor(self.ext_indices, dtype=torch.long)
        self.n_chaii = len(self.chaii_indices)
        self.n_ext = int(len(self.ext_indices) * self.downsample)

    def __iter__(self):
        # draw a fresh random subset of the external data every epoch
        sampled_ext_indices_indices = torch.randperm(len(self.ext_indices))[:self.n_ext].long()
        sampled_ext_indices = self.ext_indices[sampled_ext_indices_indices]
        # shuffle the chaii indices together with the sampled external ones
        indices = torch.cat([self.chaii_indices, sampled_ext_indices])
        indices_indices = torch.randperm(self.n_chaii + self.n_ext).long()
        return iter(indices[indices_indices].tolist())

    def __len__(self):
        # note: reports the full dataset size, not the n_chaii + n_ext
        # indices actually yielded per epoch
        return len(self.data_source)
</code></pre>
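<p>For illustration, a hypothetical way to plug the sampler into a PyTorch DataLoader; <code>train_dataset</code>, <code>collate_fn</code> and the <code>downsample</code> value are placeholders, not taken from the write-up:</p>
<pre><code>from torch.utils.data import DataLoader

sampler = ChaiiRandomSampler(train_dataset, downsample=0.3)
loader = DataLoader(train_dataset, batch_size=8, sampler=sampler, collate_fn=collate_fn)
</code></pre>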
<p>-2. The character-level mapping brings the best single model from 0.796 to 0.817 on the public LB.</p>
<p>[UPDATE] source code is now available at <a href="https://github.com/Zacchaeus14/chaii-4th-place-solution" target="_blank">https://github.com/Zacchaeus14/chaii-4th-place-solution</a><br>
Thank Kaggle and Google Research for hosting this fantastic competition. The CV and LB are really hard and I cannot believe we moved into the money zone. Thank my teammate <a href="https://www.kaggle.com/zzy" target="_blank">@zzy</a>. Here I share our solution. Please feel free to leave comments!</p>
<h1>TLDR:</h1>
<ol>
<li>Ensembling models with different tokenizers</li>
<li>Finetuning on squad</li>
<li>Post-processing</li>
</ol>
<h1>Models:</h1>
<p>There are 5 models involved in the final submissions:</p>
<h2>Zacchaeus:</h2>
<p>model 1. Xlm-Roberta-large 5 folds<br>
model 2. Rembert 5 folds<br>
model 3. Xlm-Roberta-large 3 seeds<br>
model 4. Muril-large 4 seeds<br>
model 4'. Muril-large 5 folds</p>
<h2>zzy:</h2>
<p>model 5. Infoxlm 5 folds</p>
<p><strong>Our submission A is an ensemble of models 1, 2 and 5 (public 0.817, private 0.767) and submission B of models 2, 3, 4 and 5 (public 0.812, private 0.777). We find the best private submission would have been models 2, 4' and 5 (public 0.812, private 0.784).</strong></p>
<h1>Training:</h1>
<p>I tried pre-training on Hindi/Tamil Wikipedia with vanilla MLM, hoping this data would cover the hidden test set; it didn't work. I found that Infoxlm / Rembert / XLM-Roberta benefit from finetuning on the squad 2.0 dataset for 1-3 epochs. Then I translated the dataset into Tamil with the Google API, but the translated dataset did not bring significant improvements. After that, I trained the models on the chaii + mlqa + xquad dataset, with the latter two downsampled, for 3 epochs. The validation Jaccard score was quite unstable, so I adopted the "non-zero Jaccard percentage" as an early stopping criterion.</p>
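<p>As a rough sketch of what such an early-stopping criterion could look like (assuming <code>preds</code> and <code>truths</code> are lists of answer strings; this is an illustration, not the author's code):</p>
<pre><code>def jaccard(a, b):
    # word-level Jaccard similarity, the competition metric
    a, b = set(a.lower().split()), set(b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def nonzero_jaccard_pct(preds, truths):
    # share of validation answers that are not totally wrong (jaccard > 0)
    return sum(jaccard(p, t) > 0 for p, t in zip(preds, truths)) / len(truths)
</code></pre>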
<h1>Ensembling:</h1>
<p>Xlm-Roberta and Infoxlm share the same tokenizer, while Rembert and Muril each have their own. I ensemble models with different tokenizers by mapping the token-level logits back to the character level and then averaging with equal weights. After mapping and blending, I replace consecutive identical values with -1e5 and finally do the beam search.</p>
<h1>Post-processing:</h1>
<p>As the training data is inconsistent, I made various post-processing heuristics to mitigate the bias. Some major methods:</p>
<ol>
<li>Recursively strip left and right punctuation (a minimal sketch follows this list).</li>
<li>Add back some punctuations if necessary (inspired by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> 's notebook).</li>
<li>Extract years from mm/yyyy/dd answers using a regular expression if the question asks for a year specifically.<br>
As a non-native speaker, going through all the bad cases and copying/pasting into Google Translate can take forever.</li>
</ol>
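<p>A minimal sketch of heuristic 1 above. It only covers ASCII punctuation via <code>string.punctuation</code>; the actual solution may have used a wider, language-specific character set:</p>
<pre><code>import string

def strip_punct(answer):
    # repeatedly remove leading and trailing punctuation characters
    while answer and answer[0] in string.punctuation:
        answer = answer[1:]
    while answer and answer[-1] in string.punctuation:
        answer = answer[:-1]
    return answer.strip()
</code></pre>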
<h1>Other things interesting but not working:</h1>
<ol>
<li>Second-stage reranking model. Thanks <a href="https://www.kaggle.com/zzy" target="_blank">@zzy</a> for attempting this. </li>
<li>SWA.</li>
<li>KL loss (mentioned in the VAULT paper).</li>
<li>torchscript and ONNX</li>
</ol>
<h1>Acknowledgement</h1>
<p>Thank NYU Shanghai faculty for granting me the GPU resources.</p>
<p>Here is the inference kernel: <a href="https://www.kaggle.com/zacchaeus/chaii-infer-blend-postpro-4models" target="_blank">https://www.kaggle.com/zacchaeus/chaii-infer-blend-postpro-4models</a>. I will publish my github repository after cleaning up my code.</p> |
chaii - Hindi and Tamil Question Answering | 57th Place - Increasing batch size helps | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you again for your notebook</p>
<p>It is my first Kaggle competition (though I joined Kaggle 4 years ago 😣). And thanks to this community. 😍</p>
<p>I had really lost hope when I saw my ranking on the public LB decreasing day by day. So it was a huge jump from 557th to 57th place in the final standings.</p>
<p>I have tried the following models:</p>
<ul>
<li>xlm roberta squad2</li>
<li>Muril</li>
</ul>
<p>In my case, Muril did not perform well on the public leaderboard, so I ended up using only Xlm Roberta squad2.</p>
<p>When I increased the batch size from 4 to 16, I found the model performed better. Also, validation loss is not a good measure for picking the best model in each fold, so I used the Jaccard score instead.</p>
<p>Also, when I got stuck, I finetuned my current best model on all the external datasets available on Kaggle. </p>
<p>I started this competition 24 days ago. I wish I could have experimented with blending the models, using other models (Rembert, etc.), and increasing the number of folds.</p>
<p>Another thing: I wish I could have selected one of my other models for the final score, because that model performed really well on the private LB.</p>
<p>Thanks to <a href="https://www.kaggle.com/kishalmandal" target="_blank">@kishalmandal</a> for his post-processing and inference code. It really helped me a lot.</p> |
chaii - Hindi and Tamil Question Answering | 5th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all thanks to Kaggle and the organizers for this competition. </p>
<h3>Data Strategy</h3>
<p>The competition data was not up to the mark, and thus there was no way to build a good CV that correlates with the leaderboard. Unclean public training data and clean hidden private test data meant we could either use different models and combine them, or use a single strong model like muril-large and train it on a variety of datasets shared by others. We did both.</p>
<p>The first step was to divide the training data into 5 folds. After that, we trained our final models on a subset of SQuAD v2, the Google-translated SQuAD v2 shared by <a href="https://www.kaggle.com/tkm2261" target="_blank">@tkm2261</a>, all of TyDi QA, the Hindi parts of MLQA, XQUAD, and a 5-10x oversampled Chaii dataset for each fold. We found that oversampling the Chaii dataset helped improve both the CV and LB scores. Please note that for CV calculation we used only the original validation folds.</p>
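<p>A small illustration of how such a fold-wise training set with an oversampled Chaii portion could be assembled with pandas; the column layout and the <code>oversample=5</code> default are assumptions, not the team's exact code:</p>
<pre><code>import pandas as pd

def build_train_df(chaii_fold_train, external_dfs, oversample=5):
    # repeat the chaii training fold `oversample` times, append the external
    # QA data (SQuAD v2 subset, translated SQuAD, TyDi QA, MLQA, XQUAD), and
    # shuffle; all frames share the usual QA columns
    parts = [chaii_fold_train] * oversample + list(external_dfs)
    return pd.concat(parts, ignore_index=True).sample(frac=1.0, random_state=42)
</code></pre>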
<p>All Models were trained for 1-2 epochs with early stopping based on CV Jaccard score with max_length = 384 and doc_stride = 128. We did try other lengths such as 400, 512 and other strides like 192 but they didn’t perform very well with our cross-validation strategy.</p>
<h3>Models</h3>
<ul>
<li>XLM Roberta Large (Public LB - 0.800)</li>
<li>Muril Large (Public LB - 0.802)</li>
<li>RemBERT (Public LB - 0.803)</li>
</ul>
<h3>Ensemble</h3>
<p>Since different models have different tokenizations, simple average ensembling was not possible, so we tried using prediction scores to select the best predictions from the different models. This gave us a small improvement.</p>
<p>The next thing we tried was to normalize the scores of all models to the same scale. We saw a small improvement using this too, but we knew this was not the best way to combine models.</p>
<p>The vanilla normalization failed as different models had different scales for their logits. To overcome this, I came up with a custom softmax which normalizes the logits across the different context splits of an example. The base code for this is attached below:</p>
<pre><code>import collections
import numpy as np
from tqdm import tqdm

def CustomSoftmax(examples, features, raw_predictions):
    all_start_logits, all_end_logits = raw_predictions
    # Build a map from each example to its corresponding features.
    example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
    features_per_example = collections.defaultdict(list)
    for i, feature in enumerate(features):
        features_per_example[example_id_to_index[feature["example_id"]]].append(i)

    for example_index, example in enumerate(tqdm(examples)):
        feature_indices = features_per_example[example_index]
        start_logits_sum = 0
        end_logits_sum = 0
        logCs = None
        logCe = None
        # Find the maximum logit value across all context splits of the example
        for feature_index in feature_indices:
            if logCs is None:
                logCs = all_start_logits[feature_index].max()
            else:
                logCs = max(logCs, all_start_logits[feature_index].max())
            if logCe is None:
                logCe = all_end_logits[feature_index].max()
            else:
                logCe = max(logCe, all_end_logits[feature_index].max())
        # Subtract the maximum value from all logits before exponentiation for
        # numerical stability, and accumulate the sums across all context splits
        for feature_index in feature_indices:
            all_start_logits[feature_index] -= logCs
            all_end_logits[feature_index] -= logCe
            start_logits_sum += np.exp(all_start_logits[feature_index]).sum()
            end_logits_sum += np.exp(all_end_logits[feature_index]).sum()
        # Softmax: normalize every split of the example by the shared sums
        for feature_index in feature_indices:
            all_start_logits[feature_index] = np.exp(all_start_logits[feature_index])/start_logits_sum
            all_end_logits[feature_index] = np.exp(all_end_logits[feature_index])/end_logits_sum
    return all_start_logits, all_end_logits
</code></pre>
<p>This was the final idea we had in the last two days and it boosted our public score from 0.809 to 0.821!</p>
<h3>Post-processing</h3>
<p>We followed a similar preprocessing technique as used in public notebooks. However, we observed that removing a few of the symbols was hurting performance, so we decided to use a custom punctuation list (<code>'!"#$&\'()*+,-/:;<=>?@[\\]^_`{|}~'</code>). </p>
<p>We also tried to solve the issue with Tamil predictions, as indicated by <a href="https://www.kaggle.com/jdoesv" target="_blank">@jdoesv</a> in <a href="https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/discussion/264831" target="_blank">https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/discussion/264831</a>, by stemming Tamil words in the answers; this boosted Muril performance from 0.802 to 0.804 on the public LB.</p>
<p>In the end, we didn’t add many rule-based post-processing as we thought this might overfit the public leaderboard. </p>
<p>Our best private submission would have scored 0.779 but we did not select it. Funnily enough, the reason we didn't select it was a small bug. However, even with that bug, it was our best private submission :)</p>
<p>We also had a single 5 fold muril-large model that would have placed us “in the money”.</p>
<p>Our best inference kernel can be found here: <a href="https://www.kaggle.com/abhishek/chaii-xlm-roberta-x-muril-x-rembert-score-based" target="_blank">https://www.kaggle.com/abhishek/chaii-xlm-roberta-x-muril-x-rembert-score-based</a></p>
<p>You can also try one of our best muril-large models on Hugging Face Hub: <a href="https://huggingface.co/abhishek/muril-large-chaii" target="_blank">https://huggingface.co/abhishek/muril-large-chaii</a></p>
<p>We will also publish our training code soon.</p>
<h3>Final thoughts</h3>
<p>It was a fun competition but we feel that it could have been better if the provided training data was better or at least similar to the hidden test dataset. We gave up many times.</p>
<p>We also tried a meta-model to re-rank the answers generated by different models but it was quite late in the competition (on the final day) and we didn’t have enough time to experiment with it further. </p>
<p>P.S. <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> still feels that this competition was all about data collection from hundreds of participants.</p>
<p>P.S.S Training code can be found at: <a href="https://github.com/abhishekkrthakur/chaii-hindi-tamil-question-answering" target="_blank">https://github.com/abhishekkrthakur/chaii-hindi-tamil-question-answering</a></p>
<p>No, the output shape remains the same as the input; the only difference is that the values are now normalised between 0 and 1, so that we can use the scores from <code>postprocess_qa_predictions</code> to choose the best predictions across models.</p> |
chaii - Hindi and Tamil Question Answering | 67th position - Muril Large | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Google India for this amazing competition.</p>
<p>I am very happy about my score in my very first competition, though I forgot to pick my best model, which might have fetched me a silver medal [0.745 private LB]. </p>
<p>Ensemble of two models:</p>
<blockquote>
<p>Training procedure:<br>
First model<br>
* Finetuned on squad v2.0<br>
* Finetuned on CHAII, MLQA, XQUAD<br>
Second model<br>
* Finetuned on translated squad<br>
* Finetuned on CHAII, MLQA, XQUAD</p>
</blockquote>
<p>Special thanks to the Google team which developed the Google MuRIL BERT model, and to all Kagglers.</p> |
chaii - Hindi and Tamil Question Answering | 6th place solution - with a visualization of our Jaccard scores over time | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! :)</p>
<p>Thank you :)</p>
<p>We just used the publicly available additional data (MLQA, XQUAD and the 7.92 MB of additional Tamil data which was shared some weeks ago) and the regular training data.</p>
<p>First of all, I would like to thank Kaggle and the competition hosts for hosting this interesting NLP challenge. I like the trend that we are seeing more and more NLP competitions. I need a short break but will join Jigsaw for sure 😉.</p>
<p>We realized early in the competition <strong>that the local CV is not really meaningful</strong>. For the first time in a competition, we decided to focus on the public leaderboard instead of the CV, while <strong>keeping diversity all the time</strong> to avoid a shakedown. Diversity was already a key to our gold medal in the CommonLit challenge. Like all teams, we started with XLM Roberta Large. We pushed it to <strong>0.785</strong> but didn't manage to improve it further <strong>(1)</strong>.</p>
<p><a href="https://ibb.co/wdwXFCp"><img src="https://i.ibb.co/HKxsfYG/Bild1.png" alt="Bild1"></a></p>
<p>After that we had obviously a breakthrough. We managed to <strong>ensemble models of different architectures</strong>. We mapped the logits to the character level to combine outputs of different tokenizer sizes (link to the notebook the end of my post).</p>
<p>After that we ensembled our Roberta Large with Muril Base (0.732) and scored <strong>0.792 (2)</strong>. We implemented a normalization of the logits to take different scales into account and scored <strong>0.795 (3)</strong>.</p>
<p>Now a link to Muril Large was posted in the forums. We struggled to finetune it but finally created a model with a 0.784 public score. We ensembled it with our Roberta Large and had a big jump to <strong>0.808 (4)</strong>. An interesting sidenote is that this model would already have scored 0.776 on the private LB. <br>
Now we added Rembert (0.774 single score) (5) and XLM Info Large (6). In the last days of the competition, we spent a lot of time finetuning the weights of our ensemble (7). We also added the Wechsel-models with a small share in the end (8) and <strong>ended up with 0.821 public lb score</strong>. </p>
<p>Our final ensemble was:<br>
Rembert Large - 16.8%<br>
Muril Large - 29.4%<br>
Roberta Large - 44.1%<br>
Xlm Info Large - 5.7%<br>
Wechsel - 4%</p>
<p>I'm happy with the result, even though we were out of the top 5 in the end. Because of the small dataset and the huge gap between CV and LB, <strong>it felt right to focus on the public LB</strong>. I expected a much higher correlation between private LB and public LB than between private LB and CV. I checked this for our submissions and feel vindicated:</p>
<p><a href="https://imgbb.com/"><img src="https://i.ibb.co/7SXN57d/Bild2.png" alt="Bild2"></a></p>
<p>It was a pleasure to tackle this problem with you guys. I will try to be more active in the forums next time and wish you all a good time.</p>
<p>Thanks to my awesome teammate <a href="https://www.kaggle.com/chamecall" target="_blank">@chamecall</a> </p>
<p>Link to our notebook:<br>
<a href="https://www.kaggle.com/benbla/fork-of-chaiiqa-inference-modelsensembling-eba980" target="_blank">https://www.kaggle.com/benbla/fork-of-chaiiqa-inference-modelsensembling-eba980</a></p>
<p>Thank you :)</p>
<p>Yeah, architecture diversity always seems to be one of the success factors. That said, there was a 10-fold Muril ensemble in the top 5 as well, but I think it generally makes sense to use different architectures. My feeling was that the XLM Roberta Large models in my ensemble were quite similar to each other. I mean, even the 0.732 Muril base already improved the LB score.</p> |
chaii - Hindi and Tamil Question Answering | 79th place| My first competition medal | A jump of 501 places in the private leaderboard | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. 😄 </p>
<p>Thank you. 😄</p>
<p>Thank you. 😄</p>
<p>Thanks for the nice words. I am trying to learn and improve my self. <br>
Best wishes to you too.</p>
<p>In this topic , I would like to share my happiness regarding the competition.</p>
<blockquote>
<h2>I just got my first bronze medal in a competition. :")</h2>
</blockquote>
<p>On the public leaderboard, I was in the <strong>582nd</strong> position. But, after the competition ended, I directly moved to <strong>79th</strong> place.<br>
I would like to thank the Kaggle community in this competition for sharing their strategies and resources. </p>
<h3>Especially, <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> , <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a>, and <a href="https://www.kaggle.com/mihtw1" target="_blank">@mihtw1</a> for their guidelines and recommendations.</h3>
<p>I tried to learn a lot from the public notebooks and discussions and <strong>tried to understand</strong> each line of code. Then, I tried to modify them as per my understanding to get better results. The publicly available notebooks gave me a head start.</p>
<p>Thank you. 😄</p>
<p>Thank you. ❤️</p>
<p>Thank you bro. ❤️</p>
<p>Thanks .😄</p> |
chaii - Hindi and Tamil Question Answering | 7th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and the Google Research team for hosting this competition. Through this competition, I was able to gain a lot of experience and learn a lot. I am grateful to have moved into the gold zone. Feel free to leave comments.</p>
<h3>TLDR</h3>
<ol>
<li>finetuning on squad2</li>
<li>training with early stopping</li>
<li>hard voting ensemble</li>
</ol>
<h4>Models</h4>
<p>I use 5 xlm-roberta-large, 5 muril, 4 rembert.</p>
<h4>finetuning on squad2</h4>
<p>deepset/xlm-roberta-large-squad2 is already finetuned on squad2, and at this point I thought xlm-roberta-large-squad2 had good performance. So I also finetuned muril and rembert on squad2.</p>
<h4>train</h4>
<p>xlm-roberta-large-squad2: train 5 models with different seeds (no folds held out)</p>
<p>muril: train 5 models with different folds</p>
<p>rembert: train 4 models with different folds</p>
<p>xlm-roberta-large is trained for 2 epochs with a linear warm-up scheduler.</p>
<p>muril and rembert are trained for 10 epochs with a linear warm-up scheduler and early stopping.</p>
<h4>hard voting ensemble</h4>
<p>We use a hard-voting ensemble across all models (a minimal sketch is shown below).</p>
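<p>A minimal sketch of hard voting over decoded answer strings; the tie-breaking rule here is an assumption, not necessarily what the author used:</p>
<pre><code>from collections import Counter

def hard_vote(answers_per_model):
    # majority vote over the decoded answer strings for one question;
    # if no string wins outright, fall back to the first model's answer
    counts = Counter(answers_per_model)
    best, n = counts.most_common(1)[0]
    return best if n > 1 else answers_per_model[0]
</code></pre>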
<p>xlm-roberta-large hard voting score: 0.757 / 0.728</p>
<p>muril hard voting score: 0.764 / 0.754</p>
<p>rembert hard voting score: 0.755 / 0.731</p>
<p>all models hard voting score: 0.801 / 0.772</p>
<p>Then I tuned hyperparameters and used other datasets (XQA_tamil, translated_tamil, etc.).<br>
This improved the score, and my final submissions are as follows.</p>
<p>all models hard voting score: 0.808 / 0.770</p>
<p>all models + xlm-roberta-large(score base ensemble) + muril (score base ensemble): 0.808 / 0.770</p>
<h4>did not work</h4>
<ol>
<li><p>score base ensemble</p></li>
<li><p>muril, rembert seed ensemble</p></li>
<li><p>google translate augmentation</p></li>
<li><p>use more xlm-roberta-large models</p></li>
</ol>
<p>Congratulations on finishing the competition and see you next competitions.</p>
<p>I set the patience to 2 epochs and take the model from the epoch with the best validation jaccard.</p>
<p>Thanks. Congratulations on 1st place. Voting is powerful. And I saw your solution; the architecture and other techniques are amazing!</p>
<p>Thanks a lot. The XLM models share the same tokenizer, so I add each model's logits together.</p>
<p>Thank you. I use early stopping for muril and rembert, so I take the model at the best validation <strong>epoch</strong>.</p>
<p>Thank you 👍👍👍</p>
<p>Yes i will share soon.</p> |
chaii - Hindi and Tamil Question Answering | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Google for hosting this exciting competition. It gave me opportunities to get more familiar with multilingual NLP and to explore various techniques for accurate reading comprehension. </p>
<h2>CV strategy</h2>
<p>I used local 5-fold CV to make my decisions, as we usually do in other competitions, since I assumed it would make my pipeline simpler. <br>
In hindsight, I think this choice was not good because the public LB dataset was big enough, and the "trust CV" style didn't let me get into the medal zone during the entire period of this competition (it made me really nervous!!!). </p>
<h2>Modeling</h2>
<p>I trained <code>xlm-roberta-large-squad2</code> and <code>muril-large-cased</code> with Adam, as most other participants did. The main hyperparameters for my <code>xlm-roberta-large-squad2</code> are the following: </p>
<ul>
<li>dataset<ul>
<li>max_seq_length: 384</li>
<li>doc_stride: 128</li></ul></li>
<li>model <ul>
<li>dropout: 0.0</li>
<li>num_layers_to_reinit: 2<ul>
<li>I came to like weight re-initialization after reading <a href="https://arxiv.org/abs/2006.05987" target="_blank">Revisiting Few-sample BERT Fine-tuning</a> and <a href="https://arxiv.org/pdf/2006.04884.pdf" target="_blank">On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines</a> (a minimal sketch follows this list). I couldn't see a noticeable score improvement in this competition though :(. </li></ul></li></ul></li>
<li>training<ul>
<li>batch_size: 4</li>
<li>gradient_accumulation_steps: 4</li>
<li>learning_rate: 0.000015</li>
<li>weight_decay: 0.01</li>
<li>warmup_step_ratio: 0.1</li>
<li>num_epochs: 2</li></ul></li>
</ul>
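<p>For illustration, a common way to re-initialize the top encoder layers of a Hugging Face model before fine-tuning; the <code>model.roberta.encoder.layer</code> attribute path assumes an XLM-R-style architecture and is not taken from the author's code:</p>
<pre><code>from transformers import AutoModelForQuestionAnswering

def reinit_top_layers(model, num_layers_to_reinit=2):
    # re-initialize the last encoder layers with the model's own init scheme
    for layer in model.roberta.encoder.layer[-num_layers_to_reinit:]:
        layer.apply(model._init_weights)
    return model

model = AutoModelForQuestionAnswering.from_pretrained("deepset/xlm-roberta-large-squad2")
model = reinit_top_layers(model, num_layers_to_reinit=2)
</code></pre>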
<h3>Minor changes in a training procedure</h3>
<h4>Preprocessing</h4>
<p>I often see tokens containing both a parenthesis and digits (e.g., "99)") after tokenizing with the <code>xlm-roberta-large-squad2</code> pretrained tokenizer. I tried to improve model performance by separating the parenthesis (e.g., "(") and the digits (e.g., "19") at the preprocessing stage. In addition, I replaced Hindi and Tamil digits like ०, १, ௦, and ௧ with the corresponding ASCII digits to make the training task easier. <br>
The score improvement from this preprocessing was negligible after post-processing. However, I decided to keep using it to make my solution more different from others' and maximize the chance of a huge shake-up. </p>
<h4>OOF-based relabeling</h4>
<p>Inspired by <a href="https://arxiv.org/pdf/1909.04849.pdf" target="_blank">Hard EM</a>, I tried to choose a better <code>answer_start</code> for <code>answer_text</code> with multiple occurrences. I replaced the original <code>answer_start</code> corresponding to <code>answer_text</code>'s first occurrence with another one if there existed other spans whose scores were larger than or equal to 0.8 * (the maximum score among all span candidates, including texts other than <code>answer_text</code>). I also modified <code>answer_start</code> when <code>answer_text</code> consists of digits and is surrounded by other digits. This relabeling increased CV and private LB by 0.005, if my memory is correct.<br>
Averaging 5-fold <code>xlm-roberta-large-squad2</code> models trained from the above procedure gave me 0.773 and 0.738 in public and private LB respectively when submitting with <a href="https://www.kaggle.com/nbroad" target="_blank">nbroad</a>'s post-processing (<a href="https://www.kaggle.com/nbroad/chaii-qa-torch-5-fold-with-post-processing-765" target="_blank">link-to-kernel</a>): </p>
<h2>Post-processing</h2>
<p>As other top teams mention in their solutions, replacing Hindi and Tamil digits with ASCII digits when there are no other types of characters worked pretty well. Combining the original post-processing, this Hindi/Tamil digit handling, and other minor modifications increased my LB scores by 0.01 (public LB = 0.779, private LB = 0.750). <br>
I suspect this successful result came from a somewhat broken dataset preparation process, because it's weird that all of the 133 digit-only answer texts consist of only ASCII digits whereas we often see Hindi and Tamil digits in other answer texts. I guess the conversion from Hindi/Tamil digits to ASCII digits happened when importing the dataset CSV into Google Spreadsheet at some point (surprisingly, Google Spreadsheet automatically does this conversion :o). </p>
<h2>Model ensembling</h2>
<p>I aggregated prediction results from the base models in a text-wise manner rather than a token-wise or character-wise manner, because I didn't have enough time to explore cool ensemble methods. Implementing text-wise aggregation is easy (a minimal sketch follows the list below). </p>
<ul>
<li>Collect top-k (k = 10) candidates from each base model. Each candidate should have some score (e.g., start-logit + end-logit). </li>
<li>Given weights of base models, compute weighted sum of scores for each text in every candidate from base models</li>
<li>Select one with the largest weighted sum as a final answer</li>
</ul>
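<p>A minimal sketch of this text-wise aggregation; the score definition (start-logit + end-logit of each top-k span) follows the convention mentioned above, and a text missing from one model's top-k simply contributes nothing:</p>
<pre><code>from collections import defaultdict

def ensemble_by_text(candidates_per_model, model_weights):
    # candidates_per_model: one list of (answer_text, score) pairs per model
    totals = defaultdict(float)
    for weight, candidates in zip(model_weights, candidates_per_model):
        for text, score in candidates:
            totals[text] += weight * score
    return max(totals, key=totals.get)
</code></pre>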
<h3>Final model</h3>
<p>I achieved my best private LB score (0.770) with the following weights obtained from hyper-parameter search with Optuna: </p>
<ul>
<li>xlm-roberta-large-squad2 (w/ preprocessing): 0.5709</li>
<li>muril-large-cased (w/ preprocessing): 0.2829</li>
<li>xlm-roberta-large-squad2 (wo/ preprocessing): 0.1461</li>
</ul>
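<p>For illustration, the skeleton of such an Optuna search for the weights listed above; <code>oof_predict</code> and <code>jaccard_cv</code> are placeholders for the author's out-of-fold ensembling and scoring code:</p>
<pre><code>import optuna

def objective(trial):
    w = [trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(3)]
    return jaccard_cv(oof_predict(model_weights=w))  # mean Jaccard to maximize

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)
</code></pre>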
<h2>Other things</h2>
<h3>Computation resources</h3>
<p>This is the first competition I participated in with Google Colab Pro+. A100 and V100 from Colab boosted the iteration speed of my experiments a lot :) </p>
<ul>
<li>GCP T4 (mainly for debugging)</li>
<li>Google Colab Pro+</li>
</ul>
<h3>Things that didn't work</h3>
<ul>
<li>Training with Tamil XQA</li>
<li>Upsampling of positive examples</li>
<li>Answer candidate reranking like <a href="https://aclanthology.org/P19-1611.pdf" target="_blank">RankQA</a></li>
<li>Minor tweaking of architecture and training method<ul>
<li>Dropout</li>
<li>Mixout</li>
<li>Custom head like avg-pooling & max-pooling</li>
<li>Label smoothing </li>
<li>Language-wise fine-tuning</li></ul></li>
</ul>
<p>Thanks for your comment! Congratulations on winning this competition:)</p> |
chaii - Hindi and Tamil Question Answering | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, and congrats on first place!</p>
<p>Hi, somewhat surprisingly we achieved 9th place. Our aim for this competition was to try models trained with WECHSEL (<a href="https://openreview.net/forum?id=JcfISE1-u4" target="_blank">https://openreview.net/forum?id=JcfISE1-u4</a>, you've probably seen the models <a href="https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/discussion/286603" target="_blank">here</a>) as kind of a sanity-check of the method when compared to SOTA multilingual models. To make our submission competitive we combined the WECHSEL models with some others:</p>
<ul>
<li><code>deepset/xlm-roberta-large-squad2</code> with <br>
max_len=384<br>
doc_stride=128<br>
batch_size=32<br>
learning_rate=3e-5<br>
epochs=2<br>
max_grad_norm=1.0<br>
and appending MLQA and XQUAD data to the train set.</li>
<li><code>muril-large-cased</code> with<br>
max_len=384<br>
doc_stride=128<br>
batch_size=32<br>
learning_rate=3e-5<br>
epochs=2<br>
max_grad_norm=0.5 (this helped stabilize training for Muril)<br>
and appending MLQA and XQUAD data to the train set.</li>
<li><code>roberta-large-wechsel-hindi</code> with<br>
max_len=384<br>
doc_stride=128<br>
batch_size=32<br>
learning_rate=3e-5<br>
epochs=2<br>
and appending MLQA and XQUAD data to the train set.</li>
</ul>
<h2>Tamil data</h2>
<p>We did not get <code>roberta-large-wechsel-tamil</code> to work properly because the Tamil dataset is very small. <a href="https://www.kaggle.com/cdminix" target="_blank">@cdminix</a> investigated adding Tamil data by scraping questions in Tamil from the web and automatically finding Wikipedia passages where the answer occurs. We ended up with ~200 extra Tamil samples this way. This improved the score of our Tamil WECHSEL model but it was still not on par with multilingual models, and we didn't really have time to investigate further.</p>
<h2>Preprocessing</h2>
<p>We removed references from the text (all square brackets with only numbers between the opening and closing bracket), since a lot of those occurred in the text.</p>
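<p>A one-line version of this reference removal, assuming plain "[123]"-style citation markers:</p>
<pre><code>import re

def remove_references(text):
    # drop citation markers like "[12]" (digits only between the brackets)
    return re.sub(r"\[\d+\]", "", text)
</code></pre>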
<h2>Stacking</h2>
<p>We combined models by converting logits to the character level: For every token, we simply repeated the logit for each character in the token. We subtracted <code>0.01 * character position</code> for the start logits and <code>0.01 * (token length - character position)</code> for the end logits to make sure the longest character span within a token is selected as the answer.</p>
<p>After getting logits to the character level we simply summed up the predictions of different models, without any weighting.</p>
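<p>A sketch of the token-to-character mapping described above, assuming a Hugging Face style offset_mapping of (start, end) character offsets per token; combining overlapping windows with a running maximum is an assumption on top of the write-up:</p>
<pre><code>import numpy as np

def token_to_char_logits(start_logits, end_logits, offset_mapping, context_len):
    # spread each token's logit over its characters, applying the small
    # positional penalties described above so the longest span inside a
    # token is preferred
    char_start = np.full(context_len, -1e9)
    char_end = np.full(context_len, -1e9)
    for (s, e), sl, el in zip(offset_mapping, start_logits, end_logits):
        if s == e:  # special tokens / padding have empty offsets
            continue
        length = e - s
        for pos in range(length):
            char_start[s + pos] = max(char_start[s + pos], sl - 0.01 * pos)
            char_end[s + pos] = max(char_end[s + pos], el - 0.01 * (length - pos))
    return char_start, char_end
</code></pre>
<p>The character-level arrays of the different models can then simply be summed, as described.</p>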
<h2>Submission</h2>
<p>After disclosing our WECHSEL models and making our submission more or less competitive by stacking with other models, we got frustrated with tuning hyperparameters and not being able to improve our public score much. So, thinking that we would need to tune more parameters and that there were probably some tricks necessary to achieve high scores (like fixing answer_start, or more external data), we decided five days ago not to put any more work into the competition and didn't choose any final submissions.</p>
<p>So our submission was automatically chosen to be the best submission on the public LB, which was 2x each model trained on the full dataset. Our last submission, which we would have expected to be the best, scored 0.787 on the public LB using 5-fold CV of each model. It scored 0.779 on the private LB but was not automatically selected as one of the final submissions.</p>
<p><img src="https://i.imgur.com/oSryczx.png" alt="Best model score"></p>
<p>That would have been 2nd or 3rd place so it's a bit frustrating that we didn't select anything, but overall it's still obviously much better than we expected so I can't complain.</p>
<p>My main takeaways are:</p>
<ul>
<li>It is really, really easy to overfit to the public LB when there is this little data, so it might have been a blessing in disguise that we didn't spend much time on this competition.</li>
<li>Select a final submission, even if you don't expect to do very well :)</li>
</ul> |
chaii - Hindi and Tamil Question Answering | Tentative 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We assigned the maximum logit of tokens for each character.</p>
<pre><code>Token [ 1 ][ 2 ][ 3 ]
Char. [11][222222][3333]
</code></pre>
<p>Then we just take the softmax at the character level, and finally average over all models.</p>
<p>Thanks a lot!</p>
<p>Thanks! This is in the cell [16] in <a href="https://www.kaggle.com/kanbehmw/tkm-tydi-rem-info-muril-top35?scriptVersionId=77966564" target="_blank">https://www.kaggle.com/kanbehmw/tkm-tydi-rem-info-muril-top35?scriptVersionId=77966564</a></p>
<pre><code># start_flgs / end_flgs are per-character arrays; offset_mapping, the token
# logits and softmax() are defined earlier in the linked notebook (cell [16]).
for i, offset in enumerate(offset_mapping):
    if offset is not None:
        # raise the character-level logits covered by this token to at least the
        # token's logit, i.e. take the max over all tokens touching a character
        start_flgs[offset[0]:offset[1]] = start_flgs[offset[0]:offset[1]].clip(start_logits[i], None)
        end_flgs[offset[0]:offset[1]] = end_flgs[offset[0]:offset[1]].clip(end_logits[i], None)
# softmax at the character level so different models become comparable
best_answer = {"start_logits": softmax(start_flgs), "end_logits": softmax(end_flgs)}
</code></pre>
<p>We assigned the maximum logit of tokens for each character. </p>
<pre><code>Token [ 1 ][ 2 ][ 3 ]
Char. [11][222222][3333]
</code></pre>
<p>Then we just take the softmax at the character level.</p>
<p>Thanks a lot!</p>
<p>Congratulations on the first! I think you more sincerely addressed how to learn and evaluate the limited datasets. We mainly just try many datasets and models as much as possible. After I saw your solution, I was able to understand what we should have done. Thanks! It was a very nice learning process.</p>
<p>Thanks a lot. Eventually, I made it!</p>
<p>Thanks a lot!</p>
<p>Thanks! My teammate <a href="https://www.kaggle.com/kanbehmw" target="_blank">@kanbehmw</a> is an NLP expert. He knew a lot. Besides, we just googled the datasets.</p>
<p>Thanks a lot!</p>
<p>Thanks a lot!</p>
<p>Thanks! I am quite happy. I want to get the GM foodie soon 😎</p>
<p>Thanks a lot!</p>
<p>Hi all,</p>
<p>Congratulations <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>! Seems like we trusted the LB too much. But we were able to remain in 2nd place. I am happy, and I can become a GM! Thanks a lot, <a href="https://www.kaggle.com/kanbehmw" target="_blank">@kanbehmw</a>. His knowledge of NLP boosted our score significantly.</p>
<p>Here is our tentative solution write-up. We will publish all our codes and datasets. Please post any questions in this thread if you have any.</p>
<h1>Resources</h1>
<ul>
<li>Our inference notebook is available here: <a href="https://www.kaggle.com/kanbehmw/tkm-tydi-rem-info-muril-top35" target="_blank">https://www.kaggle.com/kanbehmw/tkm-tydi-rem-info-muril-top35</a></li>
<li>Training code is in each dataset</li>
</ul>
<h1>External data</h1>
<p>We trained our models with the competition data, MLQA, and TyDi QA (only Bengali and Telugu). In particular, TyDi QA pushed our score up substantially (0.787 -> 0.799 on the public LB). We think most of the top teams achieving >0.81 use the TyDi QA dataset. It may be due to the language similarity among Bengali, Telugu, Hindi, and Tamil. However, we also suspect the possibility that some questions are shared, even though they are translated into Hindi or Tamil, since TyDi QA is owned by Google. This is just a guess.</p>
<h1>Model Training</h1>
<p>We just did 2 epochs of training with the whole data (no folds held out) using the huggingface trainer. We do not distinguish between Hindi and Tamil. For more details, the training code is included in each dataset. Please have a look.</p>
<h1>Loss function</h1>
<p>While the original cross-entropy loss has the best performance, the loss function described in <a href="https://github.com/heartkilla/kaggle_tweet" target="_blank">https://github.com/heartkilla/kaggle_tweet</a> (used in the previous competition) helped us obtain more diverse models. The kernel whose name ends with "exp33" is trained with the "Jaccard-based Soft Labels" loss from that repository.</p>
<h1>Local Evaluation (CV) Strategy</h1>
<p>Nothing. We tune everything based on Public LB. I did not even track local scores. This is the first time since I started Kaggle that I have not tracked local scores at all.</p>
<h1>Model Ensembling</h1>
<p>As in the CommonLit competition (<a href="https://www.kaggle.com/c/commonlitreadabilityprize/overview)" target="_blank">https://www.kaggle.com/c/commonlitreadabilityprize/overview)</a>, model ensembling seems to be the key technique to win NLP competitions. We ensemble the following models:</p>
<ul>
<li>XLM-R (7 models)<ul>
<li>deepset/xlm-roberta-large-squad2<ul>
<li>One model trained with Google-translated SQuAD<ul>
<li><a href="https://www.kaggle.com/tkm2261/google-translated-squad20-to-hindi-and-tamil" target="_blank">https://www.kaggle.com/tkm2261/google-translated-squad20-to-hindi-and-tamil</a> </li></ul></li></ul></li>
<li>AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru</li></ul></li>
<li>RemBert (3 models)</li>
<li>InfoXLM (3 models)</li>
<li>MuRIL (2 models)</li>
</ul>
<p>We don't know why the models fine-tuned in Russian had a good performance. We stacked the models gradually and our scores gradually improved. (0.799 -> 0.816 -> 0.821 -> 0.827 -> 0.829). The MuRIL models pushed our score from 0.827 to 0.829. </p>
<p>Misc.: </p>
<ul>
<li>To handle the differing numbers of tokens over a context across models, we took the softmax over the entire text before ensembling the different types of models.</li>
<li>I do not use the 0.792 models.</li>
<li>I do not use CV models. All models are trained with the entire data. I think changing seeds and ensembling them should be better than ensembling CV models.</li>
</ul>
<h1>Effect of max_seq_length and doc_stride</h1>
<p>As in my notebook <a href="https://www.kaggle.com/tkm2261/reproduction-of-0-792-notebook" target="_blank">https://www.kaggle.com/tkm2261/reproduction-of-0-792-notebook</a>, max_seq_length and doc_stride seem to have a large impact. Even though I explored a lot of values of these two parameters, it did not help to improve our score. So, we just use max_length = 384 and doc_stride = 128. Maybe, this part is not important since we ensemble models with different token sizes.</p>
<h1>Post-processing</h1>
<p>As shown in this kernel (<a href="https://www.kaggle.com/nbroad/chaii-qa-torch-5-fold-with-post-processing-765)" target="_blank">https://www.kaggle.com/nbroad/chaii-qa-torch-5-fold-with-post-processing-765)</a>, post-processing has a large impact on this competition. Since the test data is annotated by 4 people, they are not very likely to forget the trailing periods like “ई.पू.”, i.e., answers should be in the most formal format. Besides, we processed the Devanagari numbers (०१२३४५६७८९). We replace them with Arabic numerals (0-9) if the selected answer consists only of Devanagari numbers. It seems like the annotators agreed that they use Arabic numerals when the answer is a year. Without the postprocessing, our score dropped to 0.806. So, it pushed our score by around 0.02.</p>
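<p>A minimal sketch of that digit rule (shown for Devanagari only; Tamil digits ௦-௯ could be handled the same way):</p>
<pre><code>DEVANAGARI_DIGITS = "०१२३४५६७८९"
TO_ASCII = str.maketrans(DEVANAGARI_DIGITS, "0123456789")

def normalize_digits(answer):
    # convert to Arabic numerals only when the whole answer is made of
    # Devanagari digits (e.g. years)
    if answer and all(ch in DEVANAGARI_DIGITS for ch in answer):
        return answer.translate(TO_ASCII)
    return answer
</code></pre>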
<h3>Public LB score timeline</h3>
<ul>
<li>0.787: Teamed up</li>
<li>0.799: Trained with TyDi QA (only Bengali and Telugu)</li>
<li>0.816: Ensembled with Rembert and InfoXLM</li>
<li>0.821: Ensemble with more XLM-R, Rembert, and InfoXLM</li>
<li>0.827: Tune ensemble weights</li>
<li>0.829: Ensembled with MuRIL</li>
<li>0.831: Tune parameters in post-processing</li>
</ul>
<h1>Did not work</h1>
<p>After we found that TyDi QA helps a lot, we explored a lot of datasets. But almost none of them worked; in our case, only MLQA and TyDi QA worked. XQuAD was not bad, but I did not use it.</p>
<table>
<thead>
<tr>
<th>#</th>
<th>Name</th>
<th>URL</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Bengali Question Answering Dataset</td>
<td><a href="https://www.kaggle.com/mayeesha/bengali-question-answering-dataset" target="_blank">https://www.kaggle.com/mayeesha/bengali-question-answering-dataset</a></td>
</tr>
<tr>
<td>2</td>
<td>XQuad</td>
<td><a href="https://github.com/deepmind/xquad" target="_blank">https://github.com/deepmind/xquad</a></td>
</tr>
<tr>
<td>3</td>
<td>UQuAD</td>
<td><a href="https://t.co/OP6FYHbuju?amp=1" target="_blank">https://t.co/OP6FYHbuju?amp=1</a></td>
</tr>
<tr>
<td>4</td>
<td>TQuAD</td>
<td><a href="https://github.com/TQuad/turkish-nlp-qa-dataset" target="_blank">https://github.com/TQuad/turkish-nlp-qa-dataset</a></td>
</tr>
<tr>
<td>5</td>
<td>PersianQA</td>
<td><a href="https://github.com/sajjjadayobi/PersianQA" target="_blank">https://github.com/sajjjadayobi/PersianQA</a></td>
</tr>
<tr>
<td>6</td>
<td>MMQA dataset: Hindi QA dataset</td>
<td><a href="https://github.com/deepaknlp/MMQA" target="_blank">https://github.com/deepaknlp/MMQA</a></td>
</tr>
<tr>
<td>7</td>
<td>XQA: testset for Tamil QA</td>
<td><a href="https://github.com/thunlp/XQA" target="_blank">https://github.com/thunlp/XQA</a></td>
</tr>
<tr>
<td>8</td>
<td>bAbi 1.2 dataset: Has Hindi version of bAbi tasks in romanized Hindi.</td>
<td><a href="http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz" target="_blank">http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz</a></td>
</tr>
<tr>
<td>9</td>
<td>HiQuAD</td>
<td><a href="https://www.cse.iitb.ac.in/~ganesh/HiQuAD/clqg/" target="_blank">https://www.cse.iitb.ac.in/~ganesh/HiQuAD/clqg/</a></td>
</tr>
<tr>
<td>10</td>
<td>ARCD</td>
<td><a href="https://github.com/husseinmozannar/SOQAL/tree/master/data" target="_blank">https://github.com/husseinmozannar/SOQAL/tree/master/data</a></td>
</tr>
<tr>
<td>11</td>
<td>FQuAD</td>
<td><a href="https://fquad.illuin.tech/" target="_blank">https://fquad.illuin.tech/</a></td>
</tr>
<tr>
<td>12</td>
<td>Squad_Translated_to_Tamil for Chaii</td>
<td><a href="https://www.kaggle.com/msafi04/squad-translated-to-tamil-for-chaii" target="_blank">https://www.kaggle.com/msafi04/squad-translated-to-tamil-for-chaii</a></td>
</tr>
<tr>
<td>13</td>
<td>RuBQ: A Russian Knowledge Base Question Answering and Machine Reading Comprehension Data Set</td>
<td><a href="https://github.com/vladislavneon/RuBQ" target="_blank">https://github.com/vladislavneon/RuBQ</a></td>
</tr>
<tr>
<td>14</td>
<td>SberQuAD</td>
<td><a href="https://t.me/s/opendatascience?q=%23SberQuAD" target="_blank">https://t.me/s/opendatascience?q=%23SberQuAD</a></td>
</tr>
<tr>
<td>15</td>
<td>XOR-TyDi QA</td>
<td><a href="https://github.com/AkariAsai/XORQA#additional-resources" target="_blank">https://github.com/AkariAsai/XORQA#additional-resources</a></td>
</tr>
</tbody>
</table> |
chaii - Hindi and Tamil Question Answering | Would have been 35th place 🥲 [LB: 0.75 | PB: 0.745] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>:) sad for us. Best of luck for upcoming competitions</p>
<p>Thanks a lot man :)</p>
<p>Not really. I tried with 2 different seeds. The CV was in the 0.66-0.68 range and the public LB was 0.75 and 0.74-something, so I gave up on MuRIL.</p>
<h3>:(</h3>
<p>I was kind of expecting the shakeup because of the saturated leaderboard and no other models were released with a higher LB.</p>
<p>But to my surprise, MuRIL-Large outperformed xlm-roberta-large (not on the public LB, though). The xlm-roberta was overfitted to the public LB. </p>
<p>MuRIL-Large was introduced to competitors late in the competition, and thanks to <a href="https://www.kaggle.com/simrankhanuja" target="_blank">@simrankhanuja</a> for introducing the model.</p>
<h1>The Solution :</h1>
<p>A simple blend of 5 folds of MuRIL-Large trained on the chaii, mlqa and xquad datasets.</p>
<p>The public LB score was 0.75; I did some tweaking, and the score decreased. </p>
<p>I checked the CV, then lost track :)</p>
<p>No one retrained the MuRIL model, so no one else got the 0.745 score (my trained model was private; it is now public).</p>
<h3>This is the <strong>would have been</strong> 35th place solution:</h3>
<h2><a href="https://www.kaggle.com/kishalmandal/muril-large-mlqa-xquad-chaii-infer" target="_blank">MuRIL-Large | mlqa, xquad, chaii | INFER 💫</a></h2>
<p>The treasure was in front of my eyes, just covered with mud :(</p>
<p>And as <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> says, <strong>'Trust your CV'</strong></p>
<p>Since this was my first competition I will keep that in mind.</p>
<p>Anyways these were my kernels and all were public :)</p>
<h2>xlm-RoBERTa:</h2>
<ul>
<li><a href="https://www.kaggle.com/kishalmandal/5-folds-infer-combined-model-0-792" target="_blank">5 folds | INFER | combined model (0.792)</a> : The shakeup model</li>
<li><a href="https://www.kaggle.com/kishalmandal/chaii-fit-2-epochs-mlqa-xquad-chaii" target="_blank">chaii | FIT - 2 epochs | mlqa, xquad, chaii</a> : Training kernel 1</li>
<li><a href="https://www.kaggle.com/kishalmandal/chaii-fit-7-epochs-extra-tamil-data" target="_blank">chaii | FIT - 7 epochs | Extra Tamil Data</a> : Training kernel 2</li>
</ul>
<h2>MuRIL-Large:</h2>
<ul>
<li><a href="https://www.kaggle.com/kishalmandal/muril-large-mlqa-xquad-chaii-fit" target="_blank">MuRIL-Large | mlqa, xquad, chaii | FIT 💫</a> : Training kernel</li>
<li><a href="https://www.kaggle.com/kishalmandal/muril-large-mlqa-xquad-chaii-infer" target="_blank">MuRIL-Large | mlqa, xquad, chaii | INFER 💫</a> : Inference kernel</li>
</ul>
<h2>EDA of CV scores of different models (xlm-rob):</h2>
<ul>
<li><a href="https://www.kaggle.com/kishalmandal/eda-experimental-results" target="_blank">EDA + Experimental results 💫</a></li>
</ul> |
dunnhumby's Shopper Challenge | 2nd Place Methodology | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: dunnhumby's Shopper Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><div>
<p><span>Sorry it took me so long to finish the methodology explanation for the visit_date.<span> </span>In addition, I will upload my code at the end of this file, along with the documentation for the winning submission.</span></p>
<p><span> </span></p>
<p><span>This post will go through the evolution of my projections for visit_date. In the beginning of the competition, I focused entirely on the spend amount for each customer. Every customer was given a return date of April 1st, since April 1st had the highest probability in the training data. After I developed the spend methodology, I began to build a visit date methodology. I created estimates in pieces and slowly combined them.</span></p>
<p><span>First, I projected that a customer would return on their most probable day of the week (April 1-7). Next, I projected the most probable date of return based on their most common number of days between visits. If the projection was before April 1st, I said they would return on April 1st. Then I combined the weekday probabilities and the number-of-days-between-visits probabilities, and the date with the highest combined probability was selected. (The weekday probability was no longer limited to April 1-7.)</span></p>
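<p><span>For illustration, a minimal Python sketch of this combination step (the author worked in SAS; the dictionaries of empirical weekday and gap probabilities are assumed inputs, not the actual implementation):</span></p>
<pre><code>import datetime

def best_return_date(last_visit, weekday_prob, gap_prob, horizon=14):
    # weekday_prob: P(visit on weekday w), gap_prob: P(gap of g days between
    # visits), both estimated from the customer's history; candidate dates
    # start on April 1st, 2011
    april1 = datetime.date(2011, 4, 1)
    best_date, best_score = april1, -1.0
    for offset in range(horizon):
        date = april1 + datetime.timedelta(days=offset)
        gap = (date - last_visit).days
        score = weekday_prob.get(date.weekday(), 0.0) * gap_prob.get(gap, 0.0)
        if score > best_score:
            best_date, best_score = date, score
    return best_date
</code></pre>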
<p><span>Other variations included allowing the most common number of days between visits to be selected only from those greater than the number of days already passed between the last visit and April 1st. The most probable weekday was also front-weighted using a triangle weight damped by the cube root (April 1, 2010 = (1)^(1/3) and March 31, 2011 = (365)^(1/3)). The combination of date projections was also limited to dates in the first week (April 1-7). This method produced a score of 17.70.</span></p>
<p><span>Next, I included the prior information that the date of return follows the distribution of return dates in the training set. Including this with the customer's weekday and time-between-visits probabilities produced the submission on the public leaderboard of 17.97 (18.01 on the private board). If more than nine days had already passed since the last visit, I used the most probable weekday instead of the combination of probabilities. I attempted to optimize the splits between methods by choosing the best method given certain criteria. While some of these mixtures performed better on the private leaderboard, none of them scored better on the public board. I stalled here for a while.</span></p>
<p><span>In a discussion with a co-worker about the random forests, he mentioned boosting models and explained how they differed.<span> </span>I had used random forest model for other competitions and decided to use his advice and try a boosting model for
the date projection.<span> </span>When creating all the previous projections above I had gathered a large set of statistics describing a customer's shopping habits.<span> </span>I compiled these together and ran them through R's "gbm" package.<span> </span>These
variables are listed in the winning submission's documentation attached to this post.<span> </span>I felt the variables that would describe a visit on April 1st, would not be the same as those for a visit on April 2nd, and so on.<span> </span>So, I created
dummy variables for each day from April first through the ninth.<span> </span>(The ninth was chosen because there was a large drop in visits in the training data on the tenth.)<span> </span>A separate Bernoulli GBM model was created for each possible date
of return.<span> </span>Each model took 18-24 hours to run.<span> </span>This was because I ran the model with 5-fold cross validation and very low shrinkage with a large number of trees.<span> </span>After all the models were complete, I had a score for
each customer and each possible date of return.<span> </span>For the winning submission, I selected the date with the maximum score.<span> </span>This scored 17.77 on the public board and 18.67 on the private board.<span> </span>I also ran the gbm scores
through a nominal logistic regression.<span> </span><span> </span>This method scored better on the training data, but only scored a 17.43 on the public board.<span> </span>I was surprised and decided the regression was leading to overfitting.<span> </span>I
did not include this model for consideration at the end of the competition.<span> </span>It scored an 18.83 on the private board.<span> </span></span></p>
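<p>The structure of that per-date scoring can be sketched as below. The original models were built with R's gbm package; this is only a rough scikit-learn analogue with made-up feature data, showing one binary model per candidate return date (April 1-9) and the max-score selection used in the winning submission:</p>
<pre><code>import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_customers, n_features, n_dates = 500, 13, 9            # candidate dates: April 1..9

X = rng.normal(size=(n_customers, n_features))           # shopping-habit statistics
return_day = rng.integers(0, n_dates, size=n_customers)  # observed return-date index

# one Bernoulli-style model per candidate date: "did this customer return on day d?"
scores = np.zeros((n_customers, n_dates))
for d in range(n_dates):
    model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.01)
    model.fit(X, (return_day == d).astype(int))
    scores[:, d] = model.predict_proba(X)[:, 1]

predicted_day = scores.argmax(axis=1)   # select the date with the maximum score
</code></pre>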
<p>I enjoyed this competition and data. Developing a strategy for projecting future shopping habits based on such sparse historical data was challenging.</p>
<p>Let me know if you have any questions.</p>
<p> </p>
</div>
<p>I am gathering code and writing up my process for Kaggle. As I am doing that, I thought I would share findings and methodology as I go along.</p>
<p>I handled my data manipulations and spend predictions in SAS. I used R to run Generalized Boosted Regression Modeling (GBM package) to predict the visit date. I also used JMP to visualize the data along the way.</p>
<p> </p>
<p>First, I focused on predicting spend amounts. Testing was done on the actual next spend amounts in the training data, regardless of the next visit date. I tried a suite of median statistics: the entire year, the most recent 3 months, and the most recent 17 spends (based on roobs forum discussion).</p>
<p>Then I tried some time series projections. I used Croston's exponential smoothing for sparse data to develop projections. This is really projecting usage of groceries and produced strange results due to small purchases after a long time period and large purchases after a short time period. I modified my formulas to predict needed inventory levels, i.e. how much a customer needs to refill their pantry. None of these time series methods outperformed the median estimates, so I abandoned this line of reasoning.</p>
<p>Finally, after looking at the distribution of spend amounts, I realized that the range covered by the median did not cover as much as other $20 ranges could. The final methodology used in the spend prediction is written below. This is from the documentation I am preparing for Kaggle; I will discuss the date methodology in a later post.</p>
<p style="padding-left:30px">Visit_Spend Methodology</p>
<p style="padding-left:30px">All presented methods use the same spend amounts. The amounts will differ based on the projected day of the week for the shopper's return, but the methodology is the same. A member's next spend amount was developed from historical data only; no model was trained on data past March 31, 2011. Training data are used later to optimize method selection.</p>
<p style="padding-left:30px">The chosen method optimizes the results based on the testing statistic for this competition. The metric for determining if the projected visit spend amount is correct was being within $10 of the actual spend amount. Maximizing the number of spends within the $20 window was accomplished by empirically calculating the $20 range in which a customer most often spends. I termed this window the Modal Range. Typically, it is less than both the mean and the median of a customer's spending habits. Predictions were further enhanced by determining a modal range for each day of the week. In the final submissions, these values were also front weighted by triangle weighting the dates from April 1, 2010. (A spend on April 1, 2010 has a weight of one and a spend on March 31, 2011 has a weight of 365.)</p>
<p style="padding-left:30px">The projected visit spend was based off the day of the week of the projected visit date. In cases where the customer does not have enough experience on the return day of the week, their overall modal range is assumed. The training data were used to develop credibility thresholds for selecting a customer's daily modal range versus their overall modal range. The thresholds were hard cutoffs. If the customer did not have enough experience on a particular day of the week, the overall modal range was projected. The overall modal range was not front weighted like the daily ranges.</p>
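<p>As a rough illustration of the modal-range idea (again just a Python sketch, not the original SAS code; the toy spends and weights are invented), the function below slides a $20 window over a customer's historical spends, keeps the window covering the largest front-weighted share of visits, and predicts the middle of that window:</p>
<pre><code>import numpy as np

def modal_range_prediction(spends, weights=None, width=20.0, step=1.0):
    """Find the $20 window that captures the most (weighted) historical spends."""
    spends = np.asarray(spends, dtype=float)
    weights = np.ones_like(spends) if weights is None else np.asarray(weights, dtype=float)
    starts = np.arange(spends.min(), spends.max() + step, step)
    covered = [(weights[(spends >= lo) & (spends < lo + width)].sum(), lo) for lo in starts]
    _, best_lo = max(covered)
    return best_lo + width / 2.0   # predict the middle of the best window

# triangle front weighting: a spend n days after April 1, 2010 gets weight n + 1
spends = [22.5, 24.0, 31.0, 58.0, 26.5, 90.0, 28.0]
days_since_start = [10, 60, 120, 150, 200, 280, 350]
weights = [d + 1 for d in days_since_start]
print(modal_range_prediction(spends, weights))
</code></pre>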
<p>Future considerations would have included replacing the threshold cutoffs with a blending of the daily modal range and the overall modal range based on experience.</p>
<p style="padding-left:30px"> </p>
<p>EDIT: Added language that the fallback overall modal range was not weighted.</p> |
dunnhumby's Shopper Challenge | Great competition, but.. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: dunnhumby's Shopper Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I agree that this was a very enjoyable competition. Thank you.</p>
<p>But I am afraid it did not necessarily produce the best forecasting models for the shopper problem.</p>
<p>The problem as I see it is that the 30% test sample our submissions were evaluated against was apparently not very representative of the full dataset.</p>
<p>In my case I effectively dropped out of the competition in the beginning of September when my submissions seemed to become significantly worse. I made a submission on August 23 which produced an accuracy of 16.57% on the test sample (which as it turns out
was 17.44% on the full set) while I pretty much gave up after my improved methods submitted on September 5 only produced an accuracy of 16.13% on the test sample. In fact I now realise that the accuracy on the full dataset was much better, namely 17.89% which
at the time must have been one of the best entries. Had I known this I would have doubled my efforts on that approach instead of dedicating my spare time to my family.</p>
<p>So my family was happy, but the bottom line is that you may well have missed out on even better models because of the test sample not being representative of the full data set. I am not sure if the issue could have been avoided, but I thought I should point it out. </p>
iMaterialist (Fashion) 2019 at FGVC6 | 16-th Solution: Mask-RCNN on Steroids | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'll probably create a pull-request in mmdetection to add multi-scaled testing for HTC, but currently my code is way too dirty.....</p>
<p><em>(Also known as Alex doesn't know how to code)</em></p>
<p>After seeing first place's solution, I was shocked at how close I was to reaching the gold area... This competition has taught me a lot about hyper-parameter fine-tuning and architecture selection for instance segmentation, and the greatest (and most painful) lesson of them all is the threshold for non-maximum suppression, and I fell for it. This is my first serious attempt at aiming for grandmaster status, and though the outcome is not so satisfactory, for the most part I enjoyed the competition.</p>
<p>For my solution, it is based on mmdetection, an awesome codebase for detection related tasks. </p>
<h3>I started with Hybrid Task Cascade with ResNeXt-101-64x4d-FPN backbone, with the following modifications:</h3>
<ol>
<li><p>DCN-v2 for part of the backbone</p></li>
<li><p>Removing mask fusion branch (since it takes too much memory and semantic information in this task is tricky to define)</p></li>
<li><p>Multi-scale training with flipping and resizing range (512,512) -> (1333,1333)</p></li>
<li><p>Guided-anchoring RPN</p></li>
<li><p>Increasing initial mask branch output from 14 pixels to 28 pixels</p></li>
</ol>
<h3>Then I modified the training progress a bit.</h3>
<ol>
<li><p>Default training schedule, 1 image per GPU for 4/8 GPUs</p></li>
<li><p>Changing mask loss from BCE to symmetric lovasz:</p></li>
</ol>
<pre><code>def symmetric_lovasz(outputs, targets):
    return (lovasz_hinge(outputs, targets) + lovasz_hinge(-outputs, 1 - targets)) / 2
</code></pre>
<ol start="3">
<li><p>Changing smooth L1 loss to GHM-R loss</p></li>
<li><p>Changing sigmoid-based losses to Focal Loss and GHM-C loss</p></li>
</ol>
<h3>Then, for inference:</h3>
<ol>
<li><p>Used soft-nms (but I didn't fine-tune the threshold :( )</p></li>
<li><p>Multi-scale testing in mmdetection wasn't implemented, so I wrote one myself that merge predictions in multiple scales in proposals, bboxes and masks. The following scales are used (with flipping): [(1333,1333), (1333,800), (800,1333), (900,600)]</p></li>
<li><p>Deleting masks with low confidence & low pixels (with fine-tuned threshold)</p></li>
<li><p>Heuristic that makes sure that the same pixel is not assigned to two instances with the same ClassId</p></li>
<li><p>(I couldn't submit in the final minutes so I didn't have a chance to test this out) A binary classifier that predicts if an object is fine-grained or not. Then, in the final submission, if an instance is determined to be fine-grained, it is excluded. The rationale is as below:</p>
<p>a) An included object that is fine-grained but the mask generated is not: 1 FP + 1 FN
b) Simply not predicting that object: 1 FN</p></li>
</ol>
<p>And for this network, it's a se-resnext50 based image classifier with 4-channel input (image + binary instance mask) and one-hot category fusion in the fc-layer. Honestly I could be doing a DAE before the classifier (from dirty testing mask to imagined clean train mask) but I ran out of GPUs and time.</p>
<h3>What didn't work:</h3>
<ol>
<li><p>Un-freezing bn / gn / synchronized bn</p></li>
<li><p>Parallelized testing</p></li>
<li><p>After-nms ensemble (like the one used in Top3)</p></li>
<li><p>Attribute classification (my local F1 was < 0.1)</p></li>
</ol>
<h3>Machines</h3>
<p>I used a variety of rented cloud machines, from 1 * 2080 Ti, 1 * P6000 to 8 * 1080 Ti to 4 * P100. I've basically spent all of the prizes I earned in the Whales competition on this one, plus a few hundred bucks :(</p>
<p>For your reference:</p>
<ol>
<li><p>For 1-image per GPU, you can fit an (1333,1333) image into an RTX 2080 Ti w.o. the enlargement of mask branch or GA-RPN</p></li>
<li><p>But if you want to add GA-RPN, only a GTX 1080 Ti would suffice (the pitfall: <strong>the 1080 Ti has a little bit more memory than its RTX counterpart</strong>)</p></li>
<li><p>My full setting fits barely in an P100.</p></li>
</ol>
<h3>Final words</h3>
<p>I have to say goodbye to kaggle for the following months due to internships and school-related stuffs, so it's a little discouraging to know that I missed the last chance this year to getting a solo gold by such a small margin. But I guess the journey and the things I've learnt is more important than the recognition itself. So I will come back with full strength after a small break. Also congratulations to the winners, you guys did a fantastic job!</p>
<p>Sorry, but I'm pretty busy with my internship......</p> |
iMaterialist (Fashion) 2019 at FGVC6 | 18 place write up (short meditation on results) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I do not actually think that it is nice. But I think that our errors/approach is pretty funny. </p>
<p>What we were doing is like using a small dynamite bomb to destroy the mountain, and then applying a swarm of hammers in an attempt to finish the job. But <strong>we did not realize that larger dynamite bombs</strong> existed until it was too late to apply them. Yes, they were expensive, but they were affordable for our team if we had spent our computation budget more carefully. The nice thing is that a swarm of hammers works; the weird thing is that until the last week of the competition we thought that most of the top teams on the leaderboard were doing some magic with attributes.</p>
<p>Conclusion: always check if you can buy a larger bomb for your money.</p>
<p>Unfortunately not :-(. Hopefully next year and with better results.</p>
<p>This is our short meditation on our results at strategy:</p>
<ol>
<li><p>r101 mask rcnn 1024x1024 (1 fold, it took 5 days on our machine)</p></li>
<li><p>ensemble predictions from multiple snapshots of r101 mask rcnn (select the prediction with the highest confidence when multiple predictions overlap)</p></li>
<li><p>Train several multiclass classifiers (xception, resnext, densenet) to detect if particular object classes are present in an image. Find the optimal threshold for discarding predictions from mask rcnn. <a href="https://github.com/musket-ml/classification_training_pipeline">Our Classification Pipeline</a></p></li>
<li><p>Train a lot of segmentation networks at least one per class to refine masks from mask rcnn (<a href="https://github.com/musket-ml/segmentation_training_pipeline">Segmentation Pipeline</a>) -> This was our main source of improvements.</p></li>
</ol>
<p>At the same time we were desperately trying to find solution for attributes. Initially nothing worked. But later we have used following approach:</p>
<p>Take a mask crop and the full image, and feed them into two independent inputs of an image classifier, with multiple outputs per independent attribute group (we found the groups by analyzing attribute co-occurrences, and assuming that some co-occurrences were labeling errors).</p>
<p>Train a lot of such classifiers, then take only those objects for which 75% of the classifiers agree with the result of the blend, and only when most of them are making confident predictions.</p>
<p>This gave us very slight improvement of our score (~0.00500 in total) </p>
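<p>A minimal sketch of that agreement filter (array shapes and thresholds here are hypothetical, not our actual code): an attribute is kept for an object only when at least 75% of the classifiers agree with the blended prediction and their mean confidence is high enough.</p>
<pre><code>import numpy as np

def consensus_attributes(probs, blend, agree_frac=0.75, conf_thr=0.7, decision_thr=0.5):
    """probs: (n_classifiers, n_attributes) probabilities for one object,
    blend: (n_attributes,) blended probability for the same object."""
    votes = probs > decision_thr                  # each classifier's yes/no per attribute
    blend_vote = blend > decision_thr
    agreement = (votes == blend_vote).mean(axis=0)
    confident = probs.mean(axis=0) > conf_thr
    keep = blend_vote & (agreement >= agree_frac) & confident
    return np.where(keep)[0]                      # indices of attributes to submit

probs = np.random.rand(8, 92)   # e.g. 8 classifiers, 92 attributes
blend = probs.mean(axis=0)
print(consensus_attributes(probs, blend))
</code></pre>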
<p><strong>Errors:</strong></p>
<ul>
<li>Spending our pretty limited resources on attributes was a big error.</li>
<li>We did not actually realise and use the full potential of mask rcnn :-(. Our initially high place on the leaderboard gave us a false feeling that we had used most of its potential. </li>
</ul>
<p><strong>Resources</strong></p>
<p>We have used: 2x1080Ti on my machine, 2x1080Ti on Denis machine and 1080 on Konst machine. </p>
<p>Regards,
Pavel</p> |
iMaterialist (Fashion) 2019 at FGVC6 | Attributes classification solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all!</p>
<p>We've tried to do attributes classification. The first attempts were with a 92-way classifier, but we later switched to a technique from image captioning, which generates the next attribute using image features and knowledge about the previous attributes. We've used the implementation of the paper <a href="https://arxiv.org/pdf/1502.03044.pdf">"Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"</a> from <a href="https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning">this</a> repo. Below are some findings and visualizations.
<img src="http://i68.tinypic.com/rvfrtl.png" alt="t-shirt proper">
It works somehow. Above are the proper attributes for a t-shirt. And this one has a single incorrect attribute; however, it looks ambiguous (but I'm not an expert in this issue =)):
<img src="http://i66.tinypic.com/ta2emv.png" alt="t-short ambiguous">
Some annotations missed the length of the t-shirt, so if the model predicts it, this turns into a false positive:
<img src="http://i67.tinypic.com/33ngcio.jpg" alt="t-shirt missed length">
Same for shorts and other classes. The model can correctly predict the majority of attributes:
<img src="http://i68.tinypic.com/2r6ehc5.png" alt="shorts proper">
However, there are plenty of images with 1 wrongly predicted attribute, so the whole entry becomes a false positive. But they are ambiguous; compare the shorts length on the first two images and the style on the last two (frayed in annotations and no/washed in predictions):
<img src="http://i64.tinypic.com/2882ov5.png" alt="mislabeled shorts"></p>
<p>So with better metric on attributes this challenge can be more interesting.</p>
<p>For the train set, if an apparel label is just in the range [0-12], then attributes for that apparel were not labeled. So I have taken those which have attributes; they start from 0_ ... 12_.</p>
<p>It was trained from scratch. The training is really fast, about 1 minute per epoch (the dataset with attributes is not big, about 1k samples per class).</p> |
iMaterialist (Fashion) 2019 at FGVC6 | [Update] 1st place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The ensemble is made using the nms algorithm. In the simple case, let one model predict bboxes1 and the second model predict bboxes2. Next, we concatenate these answers and get bboxes = bboxes1 + bboxes2. Then we apply the nms algorithm. As a result, we get ensemble_bboxes = nms(bboxes). For more detailed information, see the scheme <img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/ensemble.png" alt="ensemble scheme"> and the code.</p>
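<p>A simplified single-class sketch of that merging step (using torchvision's nms; the real code in the repo also merges the corresponding masks and handles each class separately):</p>
<pre><code>import torch
from torchvision.ops import nms

def ensemble_detections(bboxes_list, scores_list, iou_thr=0.5):
    """Concatenate detections from several models (or TTA runs) and
    suppress duplicates with NMS, keeping the highest-scoring copies."""
    boxes = torch.cat(bboxes_list, dim=0)    # (N, 4) boxes in xyxy format
    scores = torch.cat(scores_list, dim=0)   # (N,)
    keep = nms(boxes, scores, iou_thr)
    return boxes[keep], scores[keep]

# two models predict almost the same box; NMS keeps the higher-scoring one
b1 = torch.tensor([[10., 10., 50., 50.]]); s1 = torch.tensor([0.9])
b2 = torch.tensor([[12., 11., 51., 49.]]); s2 = torch.tensor([0.8])
boxes, scores = ensemble_detections([b1, b2], [s1, s2])
</code></pre>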
<p>I think it depends on the post-processing algorithm to avoid intersections of masks of the same class.</p>
<p>By default, <code>score_threshold = 0.001.</code> My post-processing algorithm for masks did not delete the masks. The result was a lot of masks with small scores. This greatly degraded the metric.
I added this algorithm to the solution description.</p>
<p>Soft-nms didn't increase my score.</p>
<p>Hi Kagglers,</p>
<p>My solution is based on the COCO challenge 2018 winners article: <a href="https://arxiv.org/abs/1901.07518">https://arxiv.org/abs/1901.07518</a>. </p>
<h1>Code:</h1>
<p><a href="https://github.com/amirassov/kaggle-imaterialist">https://github.com/amirassov/kaggle-imaterialist</a></p>
<h1>Model:</h1>
<p><a href="https://github.com/open-mmlab/mmdetection/blob/master/configs/htc/htc_dconv_c3-c5_mstrain_400_1400_x101_64x4d_fpn_20e.py">Hybrid Task Cascade with ResNeXt-101-64x4d-FPN backbone</a>. This model has a metric Mask mAP = 43.9 on COCO dataset. This is SOTA for instance segmentation.</p>
<h1>Validation:</h1>
<p>For validation, I used 450 training samples split off using <a href="https://github.com/trent-b/iterative-stratification">https://github.com/trent-b/iterative-stratification</a>.</p>
<h1>Preprocessing:</h1>
<p>I applied light augmentations from the <a href="https://github.com/albu/albumentations">albumentations</a> library to the original image. Then I use multi-scale training: in each iteration, the scale of the short edge is randomly sampled from [600, 1200], and the scale of the long edge is fixed at 1900.
<img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/preproc.png" alt="preprocessing"></p>
<h1>Training details:</h1>
<ul>
<li>pre-train from COCO</li>
<li>optimizer: <code>SGD(lr=0.03, momentum=0.9, weight_decay=0.0001)</code></li>
<li>batch_size: 16 = 2 images per gpu x 8 gpus Tesla V100</li>
<li>learning rate scheduler:
<pre><code>if iterations &lt; 500:
    lr = warmup(warmup_ratio=1 / 3)
if epochs == 10:
    lr = lr * 0.1
if epochs == 18:
    lr = lr * 0.1
if epochs &gt; 20:
    stop
</code></pre></li>
<li>training time: ~3 days.</li>
</ul>
<h1>Parameter tuning:</h1>
<p>After the 12th epoch with the default parameters, the metric on LB was <strong>0.21913</strong>. Next, I tuned postprocessing thresholds using validation data:</p>
<ul>
<li><code>score_thr=0.5</code></li>
<li><code>nms={type: 'nms', iou_thr: 0.3}</code></li>
<li><code>max_per_img=100</code></li>
<li><code>mask_thr_binary=0.45</code></li>
</ul>
<p>This improved the metric on LB: <strong>0.21913 -> 0.30011.</strong></p>
<h1>Test time augmentation:</h1>
<p>I use 3 scales as well as horizontal flip at test time and ensemble the results. Testing scales are (1000, 1600), (1200, 1900), (1400, 2200). </p>
<p>I drew a TTA scheme for Mask R-CNN, which is implemented in mmdetection library. For Hybrid Task Cascade R-CNN, I rewrote this code. </p>
<p>This improved the metric on LB: <strong>0.30011 -> 0.31074.</strong>
<img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/tta.png" alt="TTA"></p>
<h1>Ensemble:</h1>
<p>I ensemble the 3 best checkpoints of my model. The ensemble scheme is similar to TTA. </p>
<p>This improved the metric on LB: <strong>0.31074 -> 0.31626.</strong>
<img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/ensemble.png" alt="ensemble"></p>
<h1>Attributes:</h1>
<p>I didn't use attributes at all: they were difficult to predict and the removal of classes with attributes greatly improved the metric. </p>
<p>During the whole competition, I deleted classes with attributes: <code>{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}</code> U <code>{27, 28, 33}</code>. But two days before the end I read <a href="https://www.kaggle.com/c/imaterialist-fashion-2019-FGVC6/discussion/94811#latest548137">the discussion</a> and added back classes <code>{27, 28, 33}</code>. </p>
<p>This improved the metric on LB: <strong>0.31626 -> 0.33511.</strong></p>
<h1>Postprocessing for masks</h1>
<p>My post-processing algorithm to avoid intersections of masks of the same class:</p>
<pre><code>def hard_overlaps_suppression(binary_mask, scores):
    not_overlap_mask = []
    for i in np.argsort(scores)[::-1]:
        current_mask = binary_mask[..., i].copy()
        for mask in not_overlap_mask:
            current_mask = np.bitwise_and(current_mask, np.invert(mask))
        not_overlap_mask.append(current_mask)
    return np.stack(not_overlap_mask, -1)
</code></pre>
<h1>Small postprocessing:</h1>
<p>I deleted objects with an area of less than 20 pixels. </p>
<p>This improved the metric on LB: <strong>0.33511 -> 0.33621.</strong></p> |
iMaterialist (Fashion) 2019 at FGVC6 | just a short tutorial to get 25th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>After removing classes with attributes and following <a href="https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html">this tutorial</a> with images that have a minimal edge length of 800, I got 25th place (Public: 0.22209/Private: 0.20699)</p> |
iMaterialist (Fashion) 2020 at FGVC7 | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2020 at FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you, Krish! I'll come back to the code and will open-source it right after I finish with the next Spotty release.</p>
<p>Thank you!</p>
<p>You're welcome! Thank you for using it! :)</p>
<h1>Model</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F853978%2F378129d2e9afb90bbd1f320858b9b73b%2Fimat2020_2.jpg?generation=1591441185889910&alt=media" alt=""></p>
<p>I used a single Mask R-CNN model with the <a href="https://arxiv.org/abs/1912.05027" target="_blank">SpineNet-143</a> + FPN backbone and added an extra head to classify attributes. The attributes head was trained with the focal loss. For the augmentations, I used one of the <a href="https://arxiv.org/abs/1906.11172" target="_blank">AutoAugment</a> policies (see details below). No TTAs during the inference.</p>
<p>All the changes were made on top of the <a href="https://github.com/tensorflow/tpu/tree/master/models/official/detection" target="_blank">TPU Object Detection and Segmentation Framework</a>. You can find the <strong>code and weights</strong> for the best model in <strong><a href="https://github.com/apls777/kaggle-imaterialist2020-model" target="_blank">this repo</a></strong>.</p>
<p>Switching from the ResNet-50 to the SpineNet-96 backbone improved my score on the LB by ~+0.07 (private), ~+0.05 (public). I'm not sure, though, how much better the SpineNet-143 backbone was, as I trained it with a slightly different configuration.</p>
<h1>Data</h1>
<p>I split the training data into training and validation datasets in a way that the validation dataset contains at least 10% of the images for each class and each attribute. I ended up with 39932 images in the training dataset and 5691 images in the validation dataset.</p>
<h1>Training</h1>
<ul>
<li>The model was trained on top of pre-trained on the COCO dataset weights.</li>
<li>It was trained on resolution 1280x1280.</li>
<li>The attributes head was trained with the focal loss. Switching to the focal loss improved the score by ~+0.012 (private), ~+0.018 (public)</li>
<li>For the augmentations I used random scaling (0.5 - 2.0) and <a href="https://github.com/tensorflow/tpu/blob/2d9507360e3712715c584e2c21c639b39efd6ad1/models/official/detection/utils/autoaugment_utils.py#L126" target="_blank">v3 policy</a> from the Google's <a href="https://arxiv.org/abs/1906.11172" target="_blank">AutoAugment</a> implementation. I modified the code to make it working with masks as it supports only object detection case at the moment.</li>
</ul>
<p>The model was trained with a batch size 64 for 91.6k steps on a v3-8 TPU for ~69 hours. Big thank you to <a href="https://www.tensorflow.org/tfrc" target="_blank">TensorFlow Research Cloud</a> for giving me free access to TPUs, it helped a lot!</p>
<h1>Predictions</h1>
<h2>Attributes</h2>
<p>For the attribute predictions, I used thresholds that maximize F1-score for each individual attribute within a category (so it's 294*46=13524 thresholds, but most of them actually >1 as there are no training examples).</p>
<h2>Best Predictions</h2>
<p>I don't think the metric used in this competition was good. Unfortunately, it's not taking into account false-negative predictions at all. That basically means that with just 1 prediction per image the score of 1.0 is still achievable (I first asked about FNs in <a href="https://www.kaggle.com/c/imaterialist-fashion-2020-fgvc7/discussion/141891" target="_blank">this</a> discussion and later I also contacted the organizer by email to make sure it's not a mistake, but they assured me that the metric is okay.).</p>
<p>So at first I just used one most confident class prediction per image. But high class confidence does not necessarily mean that the segmentation or attributes are good. So next I tried to compute a score for each prediction as an average of class confidence, category mask AP and category attributes F1-score, then I used it to select 1 best prediction per image. It improved my score on the LB by ~+0.041 (private), ~+0.036 (public).</p>
<p>In the end, I implemented <a href="https://www.kaggle.com/c/imaterialist-fashion-2020-fgvc7/overview/evaluation" target="_blank">the metric</a> used in the competition, computed AP for each model prediction (based on the validation dataset) and trained a regression model to predict APs for the Mask R-CNN predictions. Then, as usual, I just used 1 best prediction per image based on the predicted APs. It improved the score on the LB by ~+0.065 (private), ~+0.058 (public). For the regression model, I ended up using a random forest regressor from the scikit-learn package (100 estimators, max depth 8). It was trained on features like category ID, class confidence, mask area, number of predicted attributes, category mask AP, category attributes F1-score, and so on - 13 features in total.</p>
<p>> 1.How do you set 13524 thresholds?</p>
<p>I take prediction confidences and ground truth labels for each category and each attribute, compute precision-recall values at different thresholds and take one that corresponds to the best F1-score (see <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_curve.html">sklearn.metrics.precision_recall_curve</a>).</p>
<p>> 2.What is the effect of the random forest regressor ?</p>
<p>I use it to estimate Average Precision for each of the model predictions. Then I use the best predictions for the submission.</p>
<p>You can compute AP for each prediction on the validation dataset using the following function:</p>
<pre><code>def get_precision_value(mask_iou: float, f1_score: float) -> float:
    thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
    tps = 0
    for ti in thresholds:
        for tf in thresholds:
            if (mask_iou > ti) and (f1_score > tf):
                tps += 1
    return tps / (len(thresholds) ** 2)
</code></pre>
<p>Then you can use the computed APs together with predictions' features (i.e. category ID, mask area, number of predicted attributes) to train a random forest regressor model (see <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html">sklearn.ensemble.RandomForestRegressor</a>).</p>
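<p>A hedged sketch of that last stage (the feature matrix here is random and the column meanings are invented for illustration; the real model used 13 features such as category ID, class confidence, and mask area):</p>
<pre><code>import numpy as np
from sklearn.ensemble import RandomForestRegressor

# per-prediction features on the validation set, e.g.
# [category_id, class_confidence, mask_area, n_attributes, category_mask_ap, ...]
X_val = np.random.rand(5000, 13)
ap_val = np.random.rand(5000)            # per-prediction AP from get_precision_value

regressor = RandomForestRegressor(n_estimators=100, max_depth=8)
regressor.fit(X_val, ap_val)

# at test time: predict AP for every Mask R-CNN prediction and keep the best one per image
X_test = np.random.rand(300, 13)
predicted_ap = regressor.predict(X_test)
best_prediction_idx = predicted_ap.argmax()   # within one image's predictions
</code></pre>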
<p>Thank you! No, I actually didn't know they released the model. But anyway it's just an exported model without the actual code, it would be hard to do anything with it.
Did you try to use it in the competition?</p>
<p>Thank you! I'm going to release it, but a little bit later, need to groom it a little.</p> |
iMaterialist Challenge (Fashion) at FGVC5 | Third place: my approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist Challenge (Fashion) at FGVC5
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes. I have modified the ImageDataGenerator class of keras for the multi-label case. The result will be shared on my GitHub.</p>
<p>Thanks. I modified the flow_from_directory function for multi-label problems. <a href="https://github.com/tholor/keras/commit/29ceafca3c4792cb480829c5768510e4bdb489c5">https://github.com/tholor/keras/commit/29ceafca3c4792cb480829c5768510e4bdb489c5</a> </p>
<p>First of all I want to congratulate <a href="https://www.kaggle.com/radek1">radek</a> and <em>cybercore.co.jp</em> who have achieved first and second place respectively. I thank the sponsors for organizing the competition as well.</p>
<p>My approach can be grouped into 3 parts: CNN models, stacking and F1 score optimization.</p>
<p><strong>CNN models.</strong>
I have used the following models: 2 inception_resnet_v2, 3 Xception, 2 DenseNet201, NASNetMobile, ResNet50, DenseNet121, DenseNet169, VGG16, ResNet50, Inception. All with keras. In the training process I applied some image augmentation methods, using the ImageDataGenerator function of keras. Since the training set is more than 1 million images, instead of using all the images to train, I tried different values of steps_per_epoch, which significantly reduced the computation time (each epoch with all the images lasted approximately 9 hours); that way I could try different models. For some models, I started by freezing some layers, then I added a small regularization on each layer and continued to train with all the layers. For more information, please see the following example: <a href="https://www.kaggle.com/dingkun/xception-model-training-pipeline-lb-0-9798">https://www.kaggle.com/dingkun/xception-model-training-pipeline-lb-0-9798</a></p>
<p><strong>Stacking</strong>
Using the predictions of the CCN models, I trained in the validation set boosting models using cross validation. I used the LightGBM library.</p>
<p><strong>F1 score optimization</strong>
Finally, using the code from the Planet: Understanding the Amazon from Space competition, which you can see at the following link: <a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/32475">https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/32475</a>, my F1 score improved significantly.</p>
<p><em>They did not work for me:</em></p>
<ul>
<li>I tried to test TTA in validation and test, but they did not improve the score.</li>
<li>I tried changing the objective function, giving more weight to the less frequent classes, but the score did not improve either.</li>
</ul>
<p>Jahaziel Ponce Sánchez</p> |
iMaterialist Challenge (Fashion) at FGVC5 | [1st place solution] pretrained CNNs -> xgboost -> F1 optimization | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist Challenge (Fashion) at FGVC5
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>my own</p>
<p>Yes, this is what I used. It seems like a really nice project.</p>
<p>Each of the CNNs predicts all 228 labels. From each of them I get an array of probabilities that is N x 228 where N is the number of examples.</p>
<p>I concatenate the predictions along the vertical axis. If I have outputs from 2 CNNs, I end up with an array that is of shape N x 456.</p>
<p>For combining the outputs with xgboost, let's say I want to predict label with id 0. I run xgboost on the N x 456 array and provide just the information for label 0 as target. I then use the obtained model to predict on unseen data and use the output only as information for label 0. I need to repeat the procedure for any other labels I want to predict like this.</p>
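<p>A rough sketch of that per-label procedure (simplified: random toy data, hypothetical hyperparameters, and without the cross-validation splits used in practice):</p>
<pre><code>import numpy as np
import xgboost as xgb

n_samples, n_models, n_labels = 9000, 2, 228
cnn_preds = np.random.rand(n_samples, n_models * n_labels)    # the N x 456 array
y = (np.random.rand(n_samples, n_labels) > 0.9).astype(int)   # multi-hot targets

label_id = 0
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(cnn_preds, y[:, label_id])

# use the output only as the refined probability for label 0; repeat for the other labels
refined_label_0 = model.predict_proba(cnn_preds)[:, 1]
</code></pre>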
<p>Hello Fellow Kagglers!</p>
<p>First of all, I would like to thank competition sponsors and Kaggle for organizing a competition that proved both to be a lot of fun and a very good learning experience!</p>
<p>My solution begins with training multiple architectures. I start off with pretrained models and train them on the training data. I used the <a href="https://github.com/fastai/fastai">fastai library</a> and <a href="https://pytorch.org/">pytorch</a> - this is important as both allow me to do things out of the box that as far as I am aware are very hard to do or impossible using other frameworks. </p>
<p>For training I used Adam initially but I switched to the 1cycle policy with SGD very early on. You can read more about this training regime in a <a href="https://arxiv.org/abs/1803.09820">paper</a> by Leslie Smith and you can find details on how to use it by <a href="https://twitter.com/GuggerSylvain">Sylvain Gugger</a>, the author of the implementation in the fastai library <a href="https://sgugger.github.io/the-1cycle-policy.html#the-1cycle-policy">here</a>.</p>
<p>The difference in results on the train set vs the provided val set raised my suspicion that the data might be coming from different distributions. Upon checking competition rules (which allow training on the val set), I subsequently trained with cross validation on the val set. As training on the test set was not explicitly allowed, I didn't do any form of training on it (for instance, have not tried pseudo labeling).</p>
<p>I grew the pipeline organically and even on the last day of the competition I added a new model (nasnet mobile).</p>
<p>I first attempted to combine model predictions via implementing heuristics for optimizing the f1 micro averaged score. I did a bit of research on this - the paper A Study on Threshold Selection for Multi-label Classification by Rong-En Fan and Chih-Jen Lin is a very nice starting point. It even has descriptions of algorithms one can start implementing! There is not much information available on this on the Internet as far as I could tell so this is quite a nice paper to have in your arsenal.</p>
<p>The optimizing heuristics take 7 minutes to run on a 1080ti. They focus on combining entire submissions / predictions for specific columns and for improving the per column threshold.</p>
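<p>One of the simpler algorithms from that family is a per-column coordinate search: sweep candidate thresholds for one label while holding the rest fixed, keep the value that maximizes the micro-averaged F1, and repeat. A minimal sketch (not the exact heuristics used here, which also combine whole submissions):</p>
<pre><code>import numpy as np
from sklearn.metrics import f1_score

def optimise_thresholds(y_true, y_prob, candidates=np.arange(0.05, 0.95, 0.05), n_passes=2):
    thresholds = np.full(y_prob.shape[1], 0.2)
    for _ in range(n_passes):
        for k in range(y_prob.shape[1]):
            best_t = thresholds[k]
            best_f1 = f1_score(y_true, y_prob > thresholds, average='micro')
            for t in candidates:
                thresholds[k] = t
                f1 = f1_score(y_true, y_prob > thresholds, average='micro')
                if f1 > best_f1:
                    best_t, best_f1 = t, f1
            thresholds[k] = best_t
    return thresholds
</code></pre>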
<p>I suspected that I could still improve predictions for label k based on combining predictions for all labels. I ran all my model predictions through xgboost with cross validation (5 cv splits, training a single model in each split for the ~170 labels of interest). This entire procedure on an AWS c5.18xlarge instance takes ~ 1hr 15 m. Going forward I am planning on creating docker environments for most of the work that I do so this should make for even easier environment switches and those compute instances come at a really nice price point. Here I think a single round of training cost me ~1.5$ (I use spot instances)</p>
<p>Ultimately, I take raw model predictions, xg predictions based on model outputs, and send it all through my optimization pipeline. </p>
<p>Below I am providing model results on the validation set. The threshold has been optimized on a per label basis to keep the comparison as fair as possible. I realize that the models have been trained to various degree due to the tools / methods I used, but I think this is still a valuable reference point to have. In particular, resnet 34 and nasnet mobile (just over 5 mln parameters) do stand out as nice goto architectures. I feel they might be great candidates as initial architectures when working on majority of problems and starting with anything bigger could be detrimental to final results. That is of course depending on the resources you have available. I trained on a single 1080ti.</p>
<p><img src="https://i.imgur.com/LqckkpW.png" alt="architecture performance"></p>
<p>And here are prediction correlations</p>
<p><img src="https://i.imgur.com/ubKtC5Y.png" alt="prediction correlations"></p>
<h2>What did not work</h2>
<ul>
<li>darknet50 with attention - I took darknet50 from the <a href="https://github.com/fastai/imagenet-fast">fastai imagenet-fast repository</a> and added attention. I didn't get far in training the model possibly due to lack of resources (including running out of time that I was willing to spend on this) or an issue with the architecture. I am hoping to revisit this at some point</li>
<li>per label training - This ended up being quite a time sink of an activity. Very hard won minor improvements, training was very hard to optimize. I am suspecting that the reason why this is so hard might be that the architectures that I used have a lot of capacity and that training on 228 labels takes advantage of multi task learning. </li>
</ul>
<p>I am not sure if I am qualified to make such recommendations, but based on the experience of this competition I would like to suggest that if you are considering participating in a Kaggle competition, it might be worthwhile to consider joining one from start. The quality of the experience and what you will be able to learn will be very different depending if you join a competition late or from the very beginning.</p>
<p>See you on these boards and in competitions to come!</p>
<p>Radek</p>
<p>PS. I am very open to discussing anything that might be of interest to you regarding the solution. Also, I generally try to share things I feel might be useful to others. I am hoping to be more active on Kaggle forums but if you'd like you can also connect with me on <a href="https://twitter.com/radekosmulski">Twitter</a></p>
<p>Here are the settings I used for se_resnext50:
<code>
learn.fit(1e-1, 1, wds=1e-4, cycle_len=10, use_clr_beta=(100, 10, 0.95, 0.85))
</code>
It was one of the trickier architectures to train. I only used a single 10 epoch cycle while training on the val set - anything on top of that tended to decrease performance. </p>
<p>Optimizing the thresholds on a per label basis like you are doing is something similar to the optimization I did before calculating the scores here.</p>
<p>The cost function shouldn't matter much here unless you are doing something very non-standard :)</p>
<p>Let me know how it goes and if you have any further questions!</p>
<p>There were models where I only trained the classifier on the train set. Most of the models I fine tuned to greater or lesser extent on the train set and followed up with 5-fold training on the val set.</p>
<p>At this point I am not sure I am allowed to share the code. Also, I am not sure that this would be a good idea - the code is generally not of the quality of code I try to share.</p>
<p>I am happy to walk you through the key parts of increasing your model's performance if that is something that you might be interested in?</p>
<p>Are you training on the val set with cross validation? I suspect that even breaking the val set into two and taking your model trained on the train set and training on each half and predicting on the other and then combining the results would greatly help with its performance. Once this is done, for predicting probably using a threshold in the range of 0.2 - 0.3 would be best. You could run a search on the predictions say in 0.1 increments and see which works best.</p>
<p>If you are feeling particularly adventurous you might also attempt building a fully convolutional model based on resnet34 (lecture #8 should be helpful here I believe) or combining the predictions of resnet34 and nasnet mobile.I suspect even taking an average of the predictions should help with the score.</p>
<p>Let me know how it goes - happy to discuss this further if you would be interested.</p>
<p>I used a custom sampler that sampled from all the training examples. I don't think that seeing most of the data or even a large portion of it mattered much as long as you continued to train on the val set.</p>
<p>I don't think I have a good answer to this. I use techniques such as discriminative learning rates and 1cycle policy that were invented within the last year or so. And then there are a lot of other, smaller things, that the fastai library offers that other frameworks do not. </p>
<p>I could say that it all adds up but I really have no way of saying if that is the case for sure.</p>
<p>I trained with a bs of 64, image size of 224, learning rate of 1e-1 using the 1cycle policy and discriminative learning rates. </p>
<p>Yes.</p>
<p>I have not.</p>
<p>I picked the labels based on positive example count in the val set. I assumed xgboost needed some count of positive examples per fold to be effective.</p>
<p>I iterated over labels and trained a separate model for each of the 5 folds for each label. </p>
<p>fastai library gives you a very nice functionality where you can freeze / unfreeze layers / parts of architectures. For the archs that come supported out of the box I mostly first trained the model from the top (first the classifier part) and then continued onto the convolutional blocks. In most cases I used discriminative learning rates. Discriminative learning rates allow for training of different parts of the network with different learning rates and in general you might want to train the lowest conv layers with as little as 1/100th of the learning rate you use for the classifier part.</p>
<p>For archs that do not come with fastai and where it probably would take me quite some time to break them up into sections (some of the archs do quite tricky things) I generally first trained the classifier part (or the part of the classifier that I added - PyTorch makes amending already existing models very straightforward) and then trained the entire network without discriminative learning rates (subjectively, this seemed to perform not as well, but there are also other reasons then not using discriminative learning rates that might have contributed to subjectively results that were not as good as I expected).</p>
<p>I experimented with more extreme augmentation and crops but ended up using the default fastai 'transforms_side_on' for most of the models.</p> |
iMaterialist Challenge (Fashion) at FGVC5 | [2nd place solution] stacking with CNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist Challenge (Fashion) at FGVC5
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We trained the 9 models separately and stacked their raw predictions.</p>
<p>We used different size images for each model (224x224 for densenet121, 260x260 for xception, 192x192 for InceptionV3, 299x299 for InceptionResNetV2, etc.) and the same batch_size: 64 images</p>
<p>It got about 0.59xx on public LB (as I remember)</p>
<p>I'd like to thank <a href="https://www.kaggle.com/nguyenbadung">DungNB</a>, my teammate and also my co-worker, who has contributed 80% of our work in this competition (all single models were trained by him).
Our solution stacks many CNN models (densenet201, inception_v3, inception_resnet_v2, resnet50, nasnet, xception) using another CNN model and an xgboost model as meta-models. Details below.</p>
<p><strong>Single models</strong></p>
<p>We built many CNN models with the same structure: base model -> GlobalMaxPooling2D -> Dense(2048, activation='relu') -> Dropout(0.5) -> Dense(2048, activation='relu') -> Dropout(0.5) -> Dense(NUMBER_OF_CLASSES, activation='sigmoid').</p>
<p>In training time, we applied some augmentations to avoid overfitting: Flip, rotate, AddToHueAndSaturation, GaussianBlur, ContrastNormalization, Sharpen, Emboss, Crop (thanks to <a href="http://github.com/aleju/imgaug">imgaug</a>)</p>
<p>In testing time, we applied only one augmentation: flip.</p>
<p>Best single model scores <em>0.64xx</em> on public LB</p>
<p>p/s: we didn't optimize threshold for f1 score, just using a simple threshold 0.3 for all classes.</p>
<p><strong>Stacking</strong></p>
<p>Stacking is the key to this competition. Thanks to <a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space">Planet: Understanding the Amazon from Space competition</a> and the winner <a href="https://www.kaggle.com/bestfitting">bestfitting</a>. We have read his <a href="http://blog.kaggle.com/2017/10/17/planet-understanding-the-amazon-from-space-1st-place-winners-interview/">solution</a> carefully and tried to create our new method.
We found that the correlation between labels is really important. In Planet competition, bestfitting built 17 models to learn the correlation between 17 classes. Should we do that for 228 classes in this competition?
No, we didn't. Instead, we used a single CNN.</p>
<p>We had 9 single models, each model predicted on 9k samples in the validation set. Then we concatenate and reshape each output sample to [number of models, number of classes].
We built a stacking CNN with the structure: Input(shape=(number of models,number of classes,1)) -> Conv2D(filters=8, kernel_size=(3, 1)) -> Conv2D(16, (3, 1)) -> Conv2D(32, (3, 1)) -> Conv2D(64, (3, 1)) -> Flatten() -> Dense(1024) -> Dense(NUMBER_OF_CLASSES, activation='sigmoid').</p>
<p>With a window size (3,1) we hope the CNN can learn the correlation between the prediction of single models. And the last Dense layer can learn the correlation between 228 labels.
Training this model with Kfold (k=5) on the validation set, we can get <em>0.714x</em> on public LB.</p>
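<p>A hedged Keras sketch of that stacking model (the convolution and dense sizes follow the description and the model summary posted further down in this thread; the dropout rate, activations, and optimizer are assumptions):</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D, Dropout, Flatten, Dense

NUM_MODELS, NUM_CLASSES = 9, 228

model = Sequential([
    Conv2D(8, (3, 1), activation='relu', input_shape=(NUM_MODELS, NUM_CLASSES, 1)),
    Conv2D(16, (3, 1), activation='relu'),
    Dropout(0.25),
    Conv2D(32, (3, 1), activation='relu'),
    Dropout(0.25),
    Conv2D(64, (3, 1), activation='relu'),
    Dropout(0.25),
    Flatten(),
    Dense(1024, activation='relu'),
    Dropout(0.25),
    Dense(NUM_CLASSES, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# input per sample: an array of shape (n_models, n_classes, 1) built from the 9 CNNs' predictions
</code></pre>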
<p>We also tried to use xgboost and MultiOutputRegressor (supported by sklearn).
Training this model with Kfold (k=5) on the validation set, we can get <em>0.703x</em> on public LB.</p>
<p>Simple weighted CNN model and xgboost model give us final score <em>0.719x</em> on public LB.</p>
<p><strong>What didn't work</strong></p>
<p>Too much augmentation on testing time reduces our score.</p>
<p><a href="https://www.analyticsvidhya.com/blog/2017/08/introduction-to-multi-label-classification/">Problem Transformation, Adapted Algorithm</a> supported by skmultilearn give poor result in this competition.</p>
<p>Here is the code:</p>
<pre><code>from keras.preprocessing import image
import imgaug as ia
from imgaug import augmenters as iaa
sometimes = lambda aug: iaa.Sometimes(0.9, aug)
seq = iaa.Sequential([
iaa.Fliplr(0.5),
sometimes(
iaa.OneOf([
iaa.Affine(
rotate=(-10, 10), # rotate by -45 to +45 degrees
cval=0 # if mode is constant, use a cval between 0 and 255
),
iaa.AddToHueAndSaturation((-20, 20)),
iaa.Add((-20, 20), per_channel=0.5),
iaa.Multiply((0.8, 1.2), per_channel=0.5),
iaa.GaussianBlur((0, 0.5)), # blur images with a sigma between 0 and 3.0
iaa.ContrastNormalization((0.5, 1.5), per_channel=0.5), # improve or worsen the contrast
iaa.Sharpen(alpha=(0, 0.3), lightness=(0.7, 1.3)), # sharpen images
iaa.Emboss(alpha=(0, 0.5), strength=(0, 0.5)) # emboss images
])
),
iaa.Crop(percent=(0, 0.1))
],random_order=True)
def train_generator(df, batch_size):
while True:
df = df.sample(frac=1).reset_index(drop=True)
for start in range(0, df.shape[0], batch_size):
end = min(start + batch_size, df.shape[0])
sub_df = df.iloc[start:end,:]
x_batch = []
y_batch = []
for index, row in sub_df.iterrows():
img_path = '../../data/images/train/%d.jpg'%row['imageId']
img = cv2.imread(img_path)
img = cv2.resize(img,(SIZE, SIZE), interpolation = cv2.INTER_CUBIC)
img = seq.augment_image(img)
img = image.img_to_array(img)
img = preprocess_input(img)
x_batch.append(img)
y_batch.append(row[0:NUMBER_OF_CLASSES])
yield np.array(x_batch), np.array(y_batch)
</code></pre>
<p>Yes, I did. But I used my custom function for train_generator and valid_generator</p>
<p>Thank you, your write up is amazing too.</p>
<p>About tuning the base models, we use Adam with default learning rate: <em>1e-4</em> reduces with a factor 0.2 if val_loss does not decrease after 1 epoch, early stopping if val_loss does not decrease after 5 epochs.</p>
<p>yes, we used all images of the train set.
We have 2 GPU 1080 Ti, and it takes about 5h for one epoch. (19h per epoch for nasnet_large)
yes, all 9 models are finetuned.</p>
<p>Total images in the training dataset are 1,014,544 and I think ~41,000 images is too few. BTW, keras image augmentation sometimes gives a poor result, maybe you should try <a href="https://github.com/aleju/imgaug">imgaug</a></p>
<p>No, it has not. The input of the stacking model is [9, 228, 1] (like an image with height=9, width=228, number of channels=1). And the first dense layer has a dimension of [1*228*64, 1024]. I attached the summary of my model below:</p>
<pre><code>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 7, 228, 8) 32
_________________________________________________________________
conv2d_2 (Conv2D) (None, 5, 228, 16) 400
_________________________________________________________________
dropout_1 (Dropout) (None, 5, 228, 16) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 3, 228, 32) 1568
_________________________________________________________________
dropout_2 (Dropout) (None, 3, 228, 32) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 1, 228, 64) 6208
_________________________________________________________________
dropout_3 (Dropout) (None, 1, 228, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 14592) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 14943232
_________________________________________________________________
dropout_4 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_2 (Dense) (None, 228) 233700
=================================================================
Total params: 15,185,140
Trainable params: 15,185,140
Non-trainable params: 0
</code></pre> |
iMaterialist Challenge (Furniture) at FGVC5 | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist Challenge (Furniture) at FGVC5
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is a brief overview of our solution. </p>
<ol>
<li>We used the data obtained from <a href="https://www.kaggle.com/c/imaterialist-challenge-furniture-2018/discussion/54222">https://www.kaggle.com/c/imaterialist-challenge-furniture-2018/discussion/54222</a>: 192171 images for training and 6309 images for validation.</li>
<li>At first, I used Caffe to train the models (InceptionResnetV2, SeResnXt50, Resnet101). It was easy to overfit. Then, I tried to use pytorch to train the models with the strategy learned from Dowakin. Thanks for his sharing. I tried almost all the models that I can use. You can see the attachment ( furniture - Youngkl_pytorch.pdf) for detailed training parameters and results. </li>
<li>We tried to ensemble the models we had. We tried average ensembling and weighted ensembling. We got the best score of 0.12760 on the public LB. For testing, we used TenCrop. Then, we decided to use pseudo-labeled images from the testset. (However, I forgot to apply a threshold to refine the testset images.) This is pseudo label1. Afterwards, we got results scoring 0.12682 and 0.12656 on the public LB. We got pseudo label2 and pseudo label3 from these results, and trained some models.</li>
<li>At last, I average ensembled all the models got from pseudo label1, pseudo label2 and pseudo label3, except dpn107_ck5 and xception_ck2. </li>
</ol>
<p>You can go to my github for more details. <a href="https://github.com/Youngkl0726/kaggle_iMaterialist-Challenge-Furniture-at-FGVC5">https://github.com/Youngkl0726/kaggle_iMaterialist-Challenge-Furniture-at-FGVC5</a></p> |
iMaterialist Challenge (Furniture) at FGVC5 | [1st place solution]pretrained models and ensemble | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist Challenge (Furniture) at FGVC5
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, I deleted the former FC layer in the pretrained networks and added a new FC layer with 128 outputs. I also retrained the base network with a low LR of 0.001 and then 0.0001, while the LR of the new FC layer was 0.01 and then 0.001 respectively.</p>
<p>First of all, we should thank competition sponsors and Kaggle for organizing a competition that is quite interesting. Besides, all the competitors are quite excellent, maybe we are just a little lucky. Next, we want to briefly share our methods here.</p>
<p>We use pytorch to do all the training and testing, and our pretrained models are almost from the <a href="https://github.com/Cadene/pretrained-models.pytorch">https://github.com/Cadene/pretrained-models.pytorch</a>. Thanks for Cadene's sharing.</p>
<p>Most of our codes can be found here <a href="https://github.com/skrypka/imaterialist-furniture-2018">https://github.com/skrypka/imaterialist-furniture-2018</a>, just as Dowakin shared before, and we have updated it.</p>
<p>We have 12688 pictures in total for testing; for the missing ones, we just use the result in the sample.csv.</p>
<p>The models we chose are inceptionv4, densenet201, densenet161, dpn92, xception, inceptionresnetv2, resnet152, senet154, resnext101 and nasnet. The accuracy on the val set is 85% to 87%, except for nasnet, which is quite hard to train and only reaches 84%. </p>
<p>For training I used SGD in most cases. The learning rate of the fully connected layer is ten times that of the other layers. The initial LR is 0.001 and is switched to 0.0001 later. But SENet is hard to train with SGD, so I used Adam first. </p>
<p>For testing, we use 12 TTA = (normal + horizontal flip) * 6 crops (full image, center, 4 corners) for each model and save the result. Finally we use the geometric mean (gmean) over the 12*10 arrays of shape 12688*128 to get the final result, and the score is greatly improved in this way. Apart from that, we noticed the imbalance of the result: the test set should have 100 pics for each class, but the training set is not balanced, so the softmax will be biased by the training class frequencies. We wrote a function to calibrate the probabilities. You can see <a href="https://www.kaggle.com/dowakin/probability-calibration-0-005-to-lb/notebook">https://www.kaggle.com/dowakin/probability-calibration-0-005-to-lb/notebook</a>.</p>
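<p>A minimal sketch of this geometric-mean ensembling and prior-ratio calibration (the names <code>preds</code> and <code>train_class_counts</code> are placeholders, and the calibration follows the idea of the linked kernel rather than its exact code):</p>
<pre><code>import numpy as np
from scipy.stats import gmean

# preds: list of (12688, 128) probability arrays, one per model x TTA combination
blended = gmean(np.stack(preds, axis=0), axis=0)

# the test set is roughly balanced (about 100 images per class), so divide out
# the training-class prior before taking the argmax
train_prior = train_class_counts / train_class_counts.sum()   # length-128 array of train counts
calibrated = blended / train_prior
calibrated /= calibrated.sum(axis=1, keepdims=True)
predicted_labels = calibrated.argmax(axis=1)
</code></pre>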
<p>We also added some weights to the 10 results when ensembling, which was useful to get a high score on the public leaderboard. However, for the final submission we removed the weights just to avoid overfitting to the 30% public data. It seems to have been effective for the final score.</p>
<p>Yes, you are right. That is because I want to train the fully connected layer better. I have tried this trick in many classification tasks and found it useful. As for "the loss declined slowly", I think maybe you should decrease the learning rate.</p>
<p>We did not use fine-grained classification tricks since most of the classes are quite obvious to classify. If you check some training pictures, you will find most classes are not that similar. Thus we just used traditional ways to solve the problem.</p>
iMet Collection 2019 - FGVC6 | 0.669 Public LB Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizers for an interesting competition, and also to congratulate all the participants who finished in high positions.</p>
<p>Oddly enough, but I would like to thank <a href="/bestfitting">@bestfitting</a> for his detailed description of his solution for <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109#latest-539913">Human Protein Atlas Image Classification</a>, which inspired me at the beginning of this competition.</p>
<p><strong>SOLUTION</strong></p>
<p><strong>First model</strong>
The first thing that caught my eye in this competition was that the target can be divided into two groups: tags and culture. It seems that the target is heterogeneous, since culture is responsible for the color spectrum and style, and the tag for the details and content. It also means that some augmentations can be good for one group of the target but bad for the other group. The first idea that comes to mind is to try to train two models, one for culture and one for tags...</p>
<p>When I trained my first model (with all classes), I simultaneously measured the following metrics:
F2, F2 for tags only, F2 for culture only. Then I decided to compare these metrics with a model trained on only one group of the target. But, as you know, multitask learning is a great thing
(and the relationship between culture and tag definitely exists), so I did the following:</p>
<p><strong>1)</strong> Instead of the last fully connected layer, I made two fully connected layers, one for the culture and one for the tag.
<strong>2)</strong> To emphasize the importance of one group of targets, I balanced the loss as follows:
<code>final_loss = alpha*culture_loss + (1 - alpha)*tag_loss</code>
where <code>alpha = 0.2</code> for the tag model and <code>alpha = 0.8</code> for the culture model (a minimal sketch of this two-head setup follows below).
<strong>3)</strong> Then I just concatenated the two parts of the predictions from the two models (I also tried blending them, with almost the same result).
<strong>4)</strong> As I said before, I used different types of augmentations for these models.</p>
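<p>A minimal sketch of such a two-head setup (not the exact code from the write-up: the backbone, feature dimension and the plain BCE losses below are assumptions, since Focal + Lovasz losses were actually used, but the <code>alpha</code> weighting follows the formula above; 398 cultures and 705 tags are the iMet class counts):</p>
<pre><code>import torch.nn as nn

class TwoHeadModel(nn.Module):
    # hypothetical two-head model: shared backbone, separate culture / tag classifiers
    def __init__(self, backbone, feat_dim, n_culture=398, n_tag=705):
        super().__init__()
        self.backbone = backbone                     # any feature extractor returning (B, feat_dim)
        self.culture_head = nn.Linear(feat_dim, n_culture)
        self.tag_head = nn.Linear(feat_dim, n_tag)

    def forward(self, x):
        feat = self.backbone(x)
        return self.culture_head(feat), self.tag_head(feat)

def weighted_loss(culture_logits, tag_logits, culture_y, tag_y, alpha):
    # alpha = 0.8 for the culture model, alpha = 0.2 for the tag model
    bce = nn.BCEWithLogitsLoss()
    return alpha * bce(culture_logits, culture_y) + (1 - alpha) * bce(tag_logits, tag_y)
</code></pre>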
<p>I trained SeResNext50 for culture and Senet154 for tags; both models were trained on 288x288 crops. This approach really gave a much better result than the standard model for all classes. But there is one drawback: it is time consuming to train 2 models for one prediction. Therefore, for the following models, I used the entire target.</p>
<p><strong>Second model</strong>
The second model was quite simple. I just trained one more Senet154 on 288x288 crops</p>
<p><strong>Third model</strong>
This model was based on <a href="https://forums.fast.ai/t/mixup-data-augmentation/22764">MixUp</a> augmentation. At first I used the same crop size (288x288). But it seems strange to mix one defective picture (a crop does not describe the entire target) with another defective picture, so here I switched to resizing (352x352). I trained SeResNext101 here, and it was really good both as a solo model and in the blend.</p>
<p><strong>OTHER SETUP:</strong></p>
<p><strong>Hardware</strong>
4 x 1080Ti</p>
<p><strong>Validation</strong>
I used 5 folds CV with Multilabel Iterative Stratification</p>
<p><strong>Loss</strong>
And I mentioned the post from <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109#latest-539913">Human Protein Atlas Image Classification</a> for a reason. For me, Focal loss + Lovasz loss was the best choice. I used it for all models.</p>
<p><strong>Training procedure</strong>
I used Adam optimizer.
In my case, I got a big improvement in my validation metric when I dropped my learning rate, but it only worked once during training.
So I started from a large enough learning rate, then divided it by five and continued training.</p>
<p><strong>Data augmentation</strong>
The augmentations looked like this (sometimes changed a little):</p>
<pre><code>Compose([
    OneOf([
        RandomBrightness(limit=0.2, p=1),
        RandomContrast(limit=0.2, p=1),
    ], p=1),
    RandomCrop(...),  # or Resize(...), depending on the model
    GaussNoise(var_limit=(0, 20), p=0.4),
    HorizontalFlip(),
])
</code></pre>
<p><strong>Prediction</strong></p>
<p>I used 3tta for the first and the second models:
It was a simple random crop, but I made it so that the crops were equally distributed over the picture: I pre-determined the places for the crops and only added a random offset. Also I randomly flip the crops horizontally to increase diversity.</p>
<p>For the third model i used 2tta: simple horizontal flip</p>
<p><strong>Blend</strong>
For blending my models I used a simple two-layer fully connected network. It takes the concatenated predictions from my four models and gives an aggregated prediction. Then I also blend this aggregated prediction with the mean prediction to get the final prediction.</p>
<p>To be honest, I spent little time on this network, so I think it's possible to make it better.</p>
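<p>A minimal sketch of such a stacking blender, assuming the concatenated probabilities of the four networks over the 1103 classes as input (the hidden size and dropout are guesses, not values from the write-up):</p>
<pre><code>import torch.nn as nn

blender = nn.Sequential(
    nn.Linear(4 * 1103, 512),   # concatenated predictions of the four networks
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(512, 1103),       # blended per-class scores
    nn.Sigmoid(),
)
</code></pre>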
<p><strong>OTHER COMMENTS</strong>
I used batch accumulation (about 300-500 total batch size), but did not notice any global improvement.</p>
<p>I decided not to use pseudo labeling, because we already had 100k samples in our train set and an additional 7k is not critical. Yes, it would increase my score on the public LB, but I'm not sure about private (maybe I'm wrong).</p>
<p>I hope this description was useful for you, thanks!</p>
<p>(CV / LB)
First model: 0.623 / 0.652
Second model: 0.618 / 0.646
Third model: 0.618 / 0.648</p>
<p>Yes, you are right</p> |
iMet Collection 2019 - FGVC6 | 10th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! </p>
<p>Thanks! Congrats on your solo gold as well! </p>
<p>We are in the process of cleaning the code. May take a while. </p>
<p>First of all, I would like to thank FGVC6, CVPR2019 and Kaggle for hosting such an interesting competition. I would like to thank my teammates Jiwei Liu and Yuanhao Wu for their hard work and brilliant ideas. Special congrats to Yuanhao for earning the Grandmaster title! </p>
<p>We feel really lucky to get the last gold medal. We only scored ~0.615 at the team merger deadline; a lot of effort went in during the last week. </p>
<p><strong>Dataset</strong>: 5 fold CV with Multilabel Iterative Stratification
<strong>Models</strong>: seresnext50, seresnext101, pnasnet5large, inceptionv4 and senet154.
<strong>Augmentation</strong>:
<strong>Train</strong>: Resize, RandomSizedCrop, HorizontalFlip, Blur, GaussNoise, HueSaturationValue and CLAHE
<strong>Test</strong>: Only Resize
<strong>Loss</strong>: BCE loss for the 1st stage, F2 loss for the 2nd stage
<strong>Optimizer</strong>: Adam, LR=2e-4.
<strong>Learning rate schedule</strong>: ReduceLROnPlateau(patience=0, ratio=0.1)
<strong>Input size</strong>: 331x331 for pnasnet5large, 384x384 for other models.
<strong>Batch size</strong>: 128. </p>
<p>We started the journey with seresnext50 using BCE loss. Small modifications were made to the network: our logit takes input from the last two layers rather than only the last one. When we fine-tuned the entire network, we found that a significant boost in F2 score is achieved right after the LR is reduced, so we decided to set patience to 0. Later, we found that if we only train the last two layers, we can achieve a similar score to training the entire network. Hence, we decided to only train the last two layers and the classifier; this saved us lots of time and allowed us to use a big batch size without batch accumulation. Meanwhile, in another experiment, we found that fine-tuning the best BCE-loss model with an F2 loss for 3-5 epochs could give another big boost (~0.01). This finalized our solution: train the model with BCE loss until convergence (~0.603 on LB for a single fold of seresnext50) -> switch the loss to F2 for another 3-5 epochs (~0.612 on LB for a single fold of seresnext50). </p>
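<p>For reference, a minimal sketch of a differentiable (soft) F2 loss of the kind described for the second stage; this is a generic formulation from sigmoid probabilities, not necessarily the exact implementation used here:</p>
<pre><code>import torch

def soft_f2_loss(logits, targets, eps=1e-6):
    # soft F-beta with beta = 2, averaged over the batch
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum(dim=1)
    fp = (probs * (1 - targets)).sum(dim=1)
    fn = ((1 - probs) * targets).sum(dim=1)
    beta2 = 4.0                                   # beta ** 2
    f2 = (1 + beta2) * tp / ((1 + beta2) * tp + beta2 * fn + fp + eps)
    return 1 - f2.mean()
</code></pre>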
<p>Then came the final week. We didn't have time for more experiments, so we just applied the above strategy to the other models and kept generating 5-fold results. A summary of our model performance (most models only have CV scores, since we didn't have enough submissions for all the models in the last 2 days):</p>
<p><strong>Seresnext50</strong>: 0.6078
<strong>Seresnext101</strong>: 0.6207 (0.646 on LB)
<strong>Pnasnet5large</strong>: 0.6066
<strong>Inceptionv4</strong>: 0.6030
<strong>Senet154</strong>: 0.6212 (0.642 on LB)</p>
<p>Our ensemble method is a simple weighted average over the above models, it scores 0.658 on public LB. </p>
<p>What we tried but that did not work:</p>
<ol>
<li><p>All crop-related augmentation methods failed. </p></li>
<li><p>TTA did not work for us. </p></li>
<li><p>EfficientNet only reached a CV of 0.5956; maybe we did not use its full potential. </p></li>
<li><p>We tried batch accumulation, but our tests showed around a 0.001 drop in local CV score. We didn't accumulate too many batches, which is perhaps the reason. Personally, I think there is a trade-off between a big batch size and the BN synchronization problem. </p></li>
</ol>
<p>Our solution is a little bit brute force compared to other teams. Congrats to all winners. Thanks for reading! </p> |
iMet Collection 2019 - FGVC6 | 2nd place solution [LB 0.667] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the competition and congratulations of all participants! Here is a summary of my solution.</p>
<h1>Models</h1>
<p>SE-ResNeXt-50 and SE-DenseNet-161 with PartialConv
I didn't have much computational power: only 2 x 1080Ti, so I didn't use very large models. I also tried other resnets and densenets, but these two worked best for me.
CV: 6 folds with iterative stratification.</p>
<h1>Augmentations</h1>
<p>Crop to 640x640 and then scale to 320x320. Random crop on train and center crop on validation and test. I also used random horizontal flip, rotate, gamma, brightness and contrast.
During test time: original + flipped augmentations.</p>
<h1>Training process</h1>
<p>All models were trained in 3 stages: first with a frozen encoder, second the whole network, third tags only. Batch accumulation was used to achieve a batch size of 512. I used the AdamW optimizer with weight decay 0.01 and lr=1e-4, and then SGD with lr=1e-3 (stage 3), with a cosine scheduler with warmup in every stage. Label smoothing and mixup also worked in this competition.</p>
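<p>A minimal sketch of batch accumulation of this kind (a generic training-loop fragment, not the code from this solution; <code>accum_steps</code> would be chosen so that <code>loader.batch_size * accum_steps</code> is about 512):</p>
<pre><code>def train_epoch(model, loader, criterion, optimizer, accum_steps):
    # optimizer.step() runs every accum_steps mini-batches, so the effective
    # batch size is loader.batch_size * accum_steps
    model.train()
    optimizer.zero_grad()
    for step, (images, targets) in enumerate(loader):
        loss = criterion(model(images.cuda()), targets.cuda())
        (loss / accum_steps).backward()   # scale so gradients average over the virtual batch
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
</code></pre>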
<h1>Threshold</h1>
<p>Separate thresholds for cultures and tags chosen by validation. For tags threshold appeared to be smaller.</p>
<h1>Cleaning and pseudo labelling</h1>
<p>I had two stages of cleaning and pseudo-labelling the dataset. Removing high-error samples and pseudo labeling greatly improved the public LB score, but it seemed to be an overfit. I looked at high-error samples and decided whether or not to throw them away. But anyway, this painstaking dataset cleaning was important.</p>
<h1>Second layer model</h1>
<p>As a second layer model I tried to use LGBM. The technique was quite similar to my quickdraw solution. I put the top 30 tags and top 20 cultures into the dataset for LGBM, i.e. the most confident classes. Then I predicted <em>is it true that this class is in the answer</em> for every class from these 50. Unfortunately, I didn't have enough time to implement it in the public kernel. But anyway, simple fold and checkpoint averaging worked fine on private here.</p>
iMet Collection 2019 - FGVC6 | 3rd place solution description | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for an interesting competition!</p>
<h2>Validation</h2>
<p>I used 5 fold CV split by <a href="https://github.com/trent-b/iterative-stratification">https://github.com/trent-b/iterative-stratification</a></p>
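<p>For reference, a minimal sketch of building such folds with the linked package (the random label matrix is only a stand-in for the real binary iMet labels):</p>
<pre><code>import numpy as np
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

Y = np.random.randint(0, 2, size=(1000, 1103))        # stand-in for the binary label matrix
mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_id = np.zeros(len(Y), dtype=int)
for fold, (_, valid_idx) in enumerate(mskf.split(np.zeros(len(Y)), Y)):
    fold_id[valid_idx] = fold                         # fold assignment per sample
</code></pre>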
<h2>Models Used</h2>
<p>All the experiments were carried out with the se_resnext50 model; then se_resnext101 and senet154 were trained with the optimal parameters. Nasnet did not work well enough for me in this competition. I also tried EfficientNet and densenet161, but they were significantly worse than se_resnext and senet. I used models pretrained on ImageNet from the Cadene repo <a href="https://github.com/Cadene/pretrained-models.pytorch">https://github.com/Cadene/pretrained-models.pytorch</a>. I also added dropout to all the models as it improved the validation score. I also tried adding an additional output for predicting the resulting F2 score (to use as a measure of overall confidence), but that did not work, obviously, I guess.</p>
<h2>Preprocessing</h2>
<p>I used a custom version of RandomSizedCrop from albumentations <a href="https://github.com/albu/albumentations">https://github.com/albu/albumentations</a> . Crop sizes were chosen randomly in interval [200, 800] with height/width aspect ratio in [0.4, 1.6] and then resized into 320x320 input size. I also used HorizontalFlip, RandomBrightnessContrast and RandomGamma augmentations. Also, in the middle of my experiments, I discovered that Cutout (random erase) and light
rotations of +- 5 degrees slightly improved validation score, so I added them to the final models.
I used several label smoothing techniques:
1. 0 -> eps; 1 -> 1 - eps for eps = 0.05 for every label
2. Smoothing with empirical distribution as described in the Inception paper <a href="https://arxiv.org/abs/1512.00567">https://arxiv.org/abs/1512.00567</a>
3. Smoothing with using OOF predictions of my ensemble as an empirical distribution for the previous approach.</p>
<p>All of the approaches significantly boosted the validation score approximately the same. So I decided to keep all of them and to blend them together. The tricky part here is that you have to optimize the threshold for every smoothing technique separately.
I experimented a lot with Mixup in this competition, but it did not work for me at all. </p>
<h2>Loss Function</h2>
<p>I modified the FocalLoss by replacing the <code>(1 - p)**2</code> coefficient with <code>(1 + (1 - p)**2)</code>, which worked way better than the vanilla one. Also, from my first experiment, I noticed that the tags seemed to be way noisier than the cultures and my model had a 2x higher error rate on tags, so I started weighting tags 2x more in my loss function.
Because the competition metric was the F2 score, which weights recall higher, I tried to lower the number of false negative predictions while caring less about false positive ones. So I added another 2x multiplier to the coefficient where (labels == 1).</p>
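<p>A minimal sketch of a focal-style loss with the modifications described above (my own reading of the description, not the original code; <code>class_weights</code> would hold 2.0 for tag columns and 1.0 for culture columns):</p>
<pre><code>import torch
import torch.nn.functional as F

def modified_focal_loss(logits, targets, class_weights, gamma=2.0):
    probs = torch.sigmoid(logits)
    p_t = probs * targets + (1 - probs) * (1 - targets)
    focal = 1 + (1 - p_t) ** gamma   # (1 + (1 - p)**2) instead of the vanilla (1 - p)**2
    pos = 1 + targets                # extra 2x weight where labels == 1 (recall matters for F2)
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    return (focal * pos * class_weights * bce).mean()
</code></pre>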
<p>At last, when I explored the dataset I found that:
1. Annotations are quite noisy
2. There are a lot of related tags/cultures such as France/Paris or Textile/Textile Fragments</p>
<p>So I applied an idea for the loss function proposed in <a href="https://arxiv.org/pdf/1809.00778.pdf">https://arxiv.org/pdf/1809.00778.pdf</a> by last year's Open Images runners-up: I did not punish the model for co-occurrent classes. This hack improved my score significantly.</p>
<h2>Training</h2>
<p>I trained the models in 3 stages:
1. FC layer only, with Adam. I stopped this stage when the model reached ~0.4 F2 score on validation; otherwise, too much training with a frozen encoder led to very overfitted and degenerate models.
2. Full model with Adam and ReduceLROnPlateau, until the LR drops from 1e-4 to 1e-7.
3. Cosine annealing with SGD. This stage provided me with several different local minimums like described in <a href="https://arxiv.org/abs/1704.00109">https://arxiv.org/abs/1704.00109</a> . I then ensembled the checkpoints in order to obtain a heavy ensemble for data cleaning and pseudo labeling</p>
<h2>General workflow</h2>
<ol>
<li>Using all the stuff I described above, I trained a heavy ensemble of se_resnext50, se_resnext101, senet154, nasnet, densenet161 with different smoothings. I also used snapshot ensembling with 6 best checkpoints and TTA10 (horizontal flip * 5 crops). </li>
<li>I obtained OOF predictions for train and public subset. I whitened the train subset by dropping samples with high errors. In total, I dropped around 5% of the samples. I used pseudo labels for ~5k images from the public dataset. While thresholding for pseudo labeling I did not use the optimal validation threshold, but I thresholded to obtain similar distributions of labels in train / public. This approach worked better for me. </li>
<li>I retrained se_resnext101 and senet154 on the cleared data with pseudo labels using all the described techniques. Here I did not use snapshot ensembling. Also, I used only TTA2 (horizontal flip).</li>
</ol>
<p>I also tried to train a second-layer LightGBM model, but I did not manage to do it because by the end of the competition I had to pass my exams at university, so I did not have enough free time to implement it.</p>
iMet Collection 2019 - FGVC6 | 6th place simple solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the teams that got a medal.
Compared to other teams, my solution is quite simple (though I tried other tricks, none of them worked).</p>
<h2>Solution</h2>
<p>My code is available at <a href="https://github.com/YU1ut/imet-6th-soltuion">https://github.com/YU1ut/imet-6th-soltuion</a>.
My solution is based on this kernel <a href="https://www.kaggle.com/lopuhin/imet-2019-submission">https://www.kaggle.com/lopuhin/imet-2019-submission</a>, and I modified it as follows.</p>
<ol>
<li>Change the input size to 320 by <code>RandomCrop(320, pad_if_needed=True)</code>.</li>
<li>Add Mixup and RandomErasing during training.</li>
<li>Add pre-trained models from <a href="https://github.com/Cadene/pretrained-models.pytorch">https://github.com/Cadene/pretrained-models.pytorch</a>.</li>
<li>Make a 40-fold CV and use some of them to train models.</li>
<li>Train se_resnext101 by 10 fold (1~10), inceptionresnetv2 by 5 folds (6~10) and pnasnet5large by 5 folds (1~5). As a result, 20 models are trained.</li>
<li>Use the average output of all models (#TTA 2). And adjust the threshold for each image according to the max probability of that image. Supposing we have a probability matrix whose shape is (N_SAMPLES x N_CLASSES), the binary results can be calculated by
<code>
prob_mask = []
for prob in probabilities:
prob_mask.append(prob &gt; prob.max()/7)
</code>
This threshold calculation can boost the LB score about 0.005.</li>
</ol>
<h2>What didn't work for me:</h2>
<ol>
<li>Graph Convolutional Networks <a href="https://arxiv.org/abs/1904.03582">https://arxiv.org/abs/1904.03582</a></li>
<li>Soft Label and Pseudo Labeling</li>
<li>EfficientNet</li>
</ol>
<p>Thanks.</p>
<p>Mixup is implemented here:
<a href="https://github.com/YU1ut/imet-6th-soltuion/blob/master/main.py#L235-L244">https://github.com/YU1ut/imet-6th-soltuion/blob/master/main.py#L235-L244</a></p>
<p>I tried different values on my local OOF validation data and 7 is the best.</p> |
iMet Collection 2019 - FGVC6 | 9th place solution (Knowledge Distillation) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I tried different numbers and 0.7 was the best. 0.9 scored same lb but cv was too good and cv/lb gap was bigger. I got worse lb with 0.5.</p>
<p>Sure thing. </p>
<p><code>
0.00 0.00 1.00 0.00 (original hard target)
0.10 0.20 0.70 0.70 (output from parent model)
0.07 0.14 1.00 0.49 (new targets)
</code></p>
<p>This is what I wanted to tell. </p>
<p>It's a small modification of torchvision's RandomResizedCrop. It does a random crop instead of a center crop in the fallback case. There are many fallback cases because of the high-aspect-ratio images in iMet, and an edge of the image is more important than the center in some cases. I am actually not very sure how much this alone affects CV/LB compared to the vanilla version, but it should be better with this.</p>
<pre><code>class RandomResizedCropV2(T.RandomResizedCrop):

    @staticmethod
    def get_params(img, scale, ratio):
        # ...
        # fallback: random crop instead of torchvision's center crop
        w = min(img.size[0], img.size[1])
        i = random.randint(0, img.size[1] - w)
        j = random.randint(0, img.size[0] - w)
        return i, j, w, w
</code></pre>
<p>Thank you! I'm happy to share.</p>
<p>Hi everyone. This is my solution overview which scored 0.664 on public LB.</p>
<h3>Competition's challenge</h3>
<ol>
<li>9 hours kernels limit for inference. </li>
<li>Lack of proper labels. For example, if an image has <strong>culture::american</strong>, it should also have <strong>culture::american or british</strong>, but in many cases these labels are missing. </li>
</ol>
<h3>Approach</h3>
<ol>
<li>I used <strong>knowledge distillation</strong> to compress the knowledge in an ensemble into student models so that I could use 9 hours more efficiently.</li>
<li>Knowledge distillation also addresses the lack of proper labels by providing soft targets for the training images. Because the competition metric favors recall over precision, this should improve the LB.</li>
</ol>
<h3>Hardware</h3>
<p>I used kaggle kernels to train almost all the models. </p>
<h3>Model training</h3>
<p>The model training process can be split into 2 parts.
- Train <strong>teacher models</strong>
- Train <strong>student models</strong> using outputs of the teacher</p>
<h3>Teacher models</h3>
<ul>
<li>2x se_resnext101</li>
<li>2x se_resnext50</li>
</ul>
<p>I trained 4 models and averaged their outputs. I shared the performance of them during the competition and you can check that discussion here. <a href="https://www.kaggle.com/c/imet-2019-fgvc6/discussion/92159">https://www.kaggle.com/c/imet-2019-fgvc6/discussion/92159</a></p>
<p>Averaged output scored <strong>cv 0.6467</strong> and <strong>lb 0.656</strong>. This averaged output(cv 0.6467) on train images were used to train the student models next. </p>
<p>I did not treat tags and cultures separately.</p>
<p><strong>training method</strong>
I trained only a dense layer for first 2 epochs because their weights are fresh. I got this idea from the kernel Lopuhin kindly shared and result is better with this.
- 1st epoch (1/10 of baselr, dense layer only)
- 2nd epoch (5/10 of baselr, dense layer only)
- 3rd epoch (1/10 of baselr, all layers)
- 4th epoch (5/10 of baselr, all layers)</p>
<p><strong>se_resnext101</strong>
- image size: 320
- optimizer: Adam
- base lr: 1.2e-4
- batch size: 36
- scheduler: ReduceLrOnPlateau or StepLR
- best epoch: 12 to 15
- loss: BCEWithLogitsLoss + FBetaLoss
- augmentations: RandomResizedCrop, Horizontal Flip, Random Erasing
- TTA: RandomResizedCrop, Horizontal Flip (10 times)
- Training time: around 13 hours (Kaggle kernel)</p>
<p><strong>performance of 5-fold se_resnext101</strong>
- cv each: 0.627 | 0.629 | 0.627 | 0.628 | 0.630
- lb each: 0.630 | ---
- cv(concat): 0.628
- lb(mean): <strong>0.652</strong></p>
<p><strong>se_resnext50</strong>
- base lr: 1.4e-4
- batch size: 52
- Training time: around 7 hours</p>
<p><strong>performance of 5-fold se_resnext50</strong>
- cv each: 0.621 | 0.623 | 0.620 | 0.618 | 0.622
- lb each: 0.621 | ---
- cv(concat): 0.621
- lb(mean): <strong>0.641</strong></p>
<p><strong>augmentations</strong></p>
<pre><code>from torchvision import transforms as T

def train_transform(size):
    return T.Compose([
        RandomResizedCropV2(size, scale=(0.7, 1.0), ratio=(4/5, 5/4)),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        RandomErasing(probability=0.3, sh=0.3),
    ])

def test_transform(size):
    return T.Compose([
        RandomResizedCropV2(size, scale=(0.7, 1.0), ratio=(4/5, 5/4)),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
</code></pre>
<h3>Student models</h3>
<ul>
<li>1x se_resnext101</li>
<li>1x inceptionresnetv2</li>
</ul>
<p>I trained a new 6-fold se_resnext101 using the outputs of the teacher model on the training images. I multiplied them by 0.7 to prevent over-trusting the teacher, then took the np.maximum of that and the original hard targets to construct the new targets. Test images remained untouched. </p>
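<p>A small sketch of this target construction (placeholder array names; the 0.7 damping factor and the <code>np.maximum</code> with the hard labels follow the description above):</p>
<pre><code>import numpy as np

def make_soft_targets(teacher_probs, hard_targets, teacher_weight=0.7):
    # teacher_probs: averaged OOF probabilities of the teacher ensemble, shape (n_train, n_classes)
    # hard_targets: original 0/1 labels, same shape; every original positive stays at 1.0
    return np.maximum(teacher_weight * teacher_probs, hard_targets)
</code></pre>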
<p>Training parameters remained almost the same from teacher models. This 6-fold model scored <strong>cv 0.668</strong> and <strong>lb 0.662</strong> by itself. </p>
<p><strong>performance of 6-fold se_resnext101 (student)</strong>
- cv each: 0.669 | 0.669 | 0.668 | 0.664 | 0.665 | 0.668
- lb each: 0.649 | 0.648 | ---
- cv(concat): 0.668
- lb(mean): <strong>0.662</strong></p>
<p>I also trained inceptionresnetv2 and averaged with se_resnext101 and scored <strong>lb 0.664</strong>. </p>
<h3>Inference</h3>
<p>The inference part is very simple. I just averaged outputs of the student models. # of TTA is 7. </p>
<h3>Possible Improvements</h3>
<p>I have to admit there are lots of space for improvements.
- Train more teacher models / student models such as senet152, pnasnet for ensembling. My solution definitely lacks variants for better ensembling.
- Treat cultures and tags separately for training, thresholding as they have different characteristics. </p>
<p>I'd like to thank Lopuhin for sharing his great kernel where I borrowed some ideas/implementations such as making folds, binarizing outputs. <a href="https://www.kaggle.com/lopuhin/imet-2019-submission">https://www.kaggle.com/lopuhin/imet-2019-submission</a></p>
<p>I also would like to thank Bac Nguyen and his implementation of FbetaLoss. <a href="https://www.kaggle.com/backaggle/imet-fastai-starter-focal-and-fbeta-loss">https://www.kaggle.com/backaggle/imet-fastai-starter-focal-and-fbeta-loss</a></p>
<p>Thank you kaggle team and Zhang for hosting such a great competition. </p>
<p>Thanks for reading.</p>
<p>Thank you!
There is no preprocess except augmentations I mentioned. Aspect ratio of randomresizedcrop varies between 4/5 to 5/4 and this is the reason why I used 320px not 300px.</p>
<pre><code>class FBetaBCE(torch.nn.Module):
    def __init__(self, fbeta_weight=1.0):
        super().__init__()
        self.fbeta_loss = FbetaLoss(beta=2)
        self.bce_loss = nn.BCEWithLogitsLoss(reduction='none')
        self.fbeta_weight = fbeta_weight

    def forward(self, logits, labels):
        loss_beta = self.fbeta_loss(logits, labels)
        loss_bce = self.bce_loss(logits, labels)
        loss_bce = loss_bce.sum() / loss_bce.shape[0]
        return loss_beta*self.fbeta_weight + loss_bce
</code></pre>
<p>Almost the same I guess.
Implementation of fbeta_loss is from <a href="https://www.kaggle.com/backaggle/imet-fastai-starter-focal-and-fbeta-loss">https://www.kaggle.com/backaggle/imet-fastai-starter-focal-and-fbeta-loss</a></p>
<p>Do you mean whether I used the new targets to evaluate the performance on validation data? If so, the answer is no. The new targets are only used for calculating the loss.</p>
iMet Collection 2019 - FGVC6 | My solution: 2nd Public | 5th Private | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>All your comments look very emotional and not very constructive. I do not want to waste time on this kind of discussion.
All I can say about public results: seems that I overfitted. I have done this many times and not always successfully. TGS Salt: 11th (in Gold) Public -> 40th Private; Airbus: 2nd (!) Public -> 36th Private.
This time I was more fortunate. And in the iMaterialist Fashion competition I unintentionally acted without local validation at all and the public LB correlated very well with the private LB.</p>
<p>I do not have a visa. And I hardly get it quickly. Can I delegate my speech to another person? We did not work together on this particular task, but he can perform very well. This is Vladimir Iglovikov <a href="/iglovikov">@iglovikov</a> .</p>
<p>It is very messy. I will try to find time to clean and share.</p>
<p>In this competition, I tried:
1. Nvidia apex.
I have been training different ImageNet models with apex for a long time and noticed only a minor drop in accuracy. So, I trained with the O2 opt level even on 1080Ti. The speed did not improve, but the batch size was higher, and therefore the BN statistics are more pleasant.
The apex README promises that you only need to add 3 lines of code. In reality, of course, it is not that simple. If apex alone turns the code into spaghetti, then adding distributed compatibility means throwing in an additional pack of noodles.
At the same time, I haven't figured out how to scale the LR when training distributed on 4-8 GPUs. Fine tuning of the scheduling goes away and SyncBN does not help.</p>
<ol>
<li><p>FocalLoss
weighted by culture and tag. Tags were assigned more weight than cultures. </p></li>
<li><p>Mixup.
I used implementation where the batch is mixed with its own shuffle version. As a result, at first it seemed that the nets were less overfitted, but this was due to the fact that the metric on the train was wrong. When I added additional validation train part without augmentations and compared it, it turned out that the nets just learn longer.</p></li>
<li><p>Batch accumulation.
With accum = 20.</p></li>
<li><p>Fine tuning of scheduling.
Nets overfitted a lot in this competition, so I tried to minimize the number of passes through the dataset. During a grid search for the right LR at different stages, it turned out to be roughly equivalent to raising the initial LR for one step and then using ReduceLROnPlateau with patience 0.</p></li>
<li><p>All experiments were performed using se-resnext50. I still consider it the best architecture in the speed-accuracy trade-off. The most accurate were se-resnext101 and pnasnet-large. I could not train SENet154 well for a long time because of exploding gradients, and as a result it didn't perform very well. effnet-b3, se-densenet161 and densenet161 didn't work.</p></li>
<li><p>Label smoothing.
Tried with a fixed epsilon and with epsilon selected individually per class depending on its frequency of occurrence in the dataset. As a result, the optimum threshold was slightly different. But these models helped in the ensemble.</p></li>
<li><p>Cleaning the data helps a lot. I did the following: removed classes with fewer than 20 labels and low per-class AUC, added labels based on the confidence of OOF predictions, and cleaned up bad markup by confidence. </p></li>
<li><p>Augmentations, I also tried a lot of options.
Mostly from the article about tricks on ImageNet. CutOut did not boost the score but did not interfere either. Large rotations and scale worsened the score. As a result, I pad up to 320 along the narrow side with BorderReplicate, because it seemed to me that there were exhibits with a frame and I wanted the net to learn this pattern. Long sausage-style images are cropped and resized to the net input. I ended up with:</p>
<pre><code>alb.PadIfNeeded(scale_size, scale_size, border_mode=cv2.BORDER_REPLICATE),
alb.ShiftScaleRotate(
shift_limit=0.1,
scale_limit=(-0.0, 0.0),
rotate_limit=5,
p=0.5,
interpolation=cv2.INTER_LINEAR,
border_mode=cv2.BORDER_REPLICATE,
),
alb.Cutout(p=0.2),
PowerFistRandomSizedCrop(
min_max_height=min_max_height,
height=scale_size,
width=scale_size,
w2h_ratio=w2h_ratio,
),
alb.HorizontalFlip(p=0.5),
alb.RandomBrightnessContrast(p=0.3),
alb.RandomGamma(gamma_limit=(95, 105), p=0.3),
post_transform(),
</code></pre></li>
</ol>
iMet Collection 2019 - FGVC6 | Solution | Private 4th | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners.
Thanks to Kaggle and hosting team for an interesting competition.</p>
<p>Here is my solution summary.</p>
<h3>Pure Classification Model</h3>
<p>Because the LB scores of our classification models are similar to or worse than those of many other competitors, I only briefly describe the configurations.</p>
<ul>
<li><strong>Dataset</strong>: 10 folds CV with Multilabel Iterative Stratification</li>
<li><strong>Models</strong>: se-resnext101, senet154. After last convolution layer, one more fc layer is inserted.</li>
<li><strong>Augmentation</strong>: random resized crop, horizontal flip.</li>
<li><strong>Loss</strong>: class balanced focal loss. But it didn’t improve score compared to the pure focal loss.</li>
<li><strong>Optimization</strong>: AdamW,
<ul><li>learning rate 0.00025, weight decay 0.01 for se-resnext101.</li>
<li>learning rate 0.0001, weight decay 0.05 for senet154</li></ul></li>
<li><strong>Learning rate schedule</strong>: CosineAnnealingLR. T_max 15, eta_min 0.00001</li>
<li>Input size 320x320. Batch size 24</li>
</ul>
<p>With above configurations, the LB score of single fold se-resnext101 with hflip TTA was 0.618.</p>
<h3>Tag Relevance Prediction</h3>
<p>We tried several methods, but they all failed. So we decided to improve the prediction method.
We’ve found the method based on the <a href="http://class.inrialpes.fr/pub/guillaumin-iccv09b.pdf">paper</a> improves LB score significantly. </p>
<p>The details are as follows:</p>
<p>For the training set, the presence probability of label <em>L</em> is defined as <em>1 – epsilon</em> if the example has label L, and <em>epsilon</em> otherwise. The label presence probabilities of a test example are the weighted sum over the nearest K train examples.</p>
<p>The weights are determined based on the distances between test and train examples. As a distance metric, 1 – cosine similarity between train and test example is used. The output of the last convolution layer is used as embedding features.</p>
<p>The weights of training example j for a test example i are defined as:</p>
<p>\(\pi_{i,j}=\exp(-d \cdot distance(i,j)) / \sum_{j'} \exp(-d \cdot distance(i,j'))\)</p>
<p>The probability of class <em>w</em> for test image <em>i</em> is:</p>
<p>\( p(y_{i,w}=1)=\sum_{j} \pi_{i,j} \cdot p(y_{j,w}=1)\)</p>
<p>The parameters d and K have been chosen based on the validation score.</p>
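<p>A minimal sketch of this weighted nearest-neighbour propagation (the values of <code>k</code> and <code>d</code> below are placeholders, since they were tuned on validation):</p>
<pre><code>import numpy as np

def propagate_labels(test_emb, train_emb, train_probs, k=50, d=10.0):
    # test_emb, train_emb: L2-normalised embeddings from the last convolution layer
    # train_probs: per-class presence probabilities of the train set (1 - eps or eps)
    dists = 1.0 - test_emb @ train_emb.T                   # 1 - cosine similarity
    idx = np.argsort(dists, axis=1)[:, :k]                 # K nearest train examples per test image
    w = np.exp(-d * np.take_along_axis(dists, idx, axis=1))
    w /= w.sum(axis=1, keepdims=True)                      # pi_{i,j}
    return (w[:, :, None] * train_probs[idx]).sum(axis=1)  # p(y_{i,w} = 1)
</code></pre>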
<p>With this method, the LB score of single fold se-resnext101 with hflip TTA was 0.634.
The average of two predictions is LB 0.650.</p>
<p><strong>Minor improvement</strong></p>
<p>We found that images which have no close train images have a low recall. So, we choose thresholds according to the distance from the nearest train image: the higher the distance, the lower the assigned threshold. Using this method, the LB improved by +0.001~0.002.</p>
<p>Our final submission is an ensemble of 4 folds se-resnext101 and 6 folds senet154. </p>
<p>Thanks.</p>
<p>In my case, the score was different depending on the threshold.
The best threshold is 0.32 for classification model, 0.15 for tag relevant prediction and 0.22 for the ensemble of them. </p>
<p>In my case, additional FC layer with dropout 0.5 was helpful. I replaced last linear with linear -> batchnorm -> relu -> dropout -> linear.</p> |
iMet Collection 2019 - FGVC6 | Solution | Private Top 1 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, but I increased LR with increasing batch size (1e-4 -> 5e-4), with the same LR performance was worse</p>
<p>Yes, for both of them.
I generated it just as a features for LightGBM.</p>
<p>Probably after stage 2</p>
<p>Yep, I found that moving from 64 to 64*4 improved my performance a lot.
So I decided to choose a batch size of ~64 * 20 (it improved a little bit more and I stopped tuning it)</p>
<p>Batch size accumulation
(you can perform optimizer step each N batches)</p>
<p>But anyway, the most important things for this competition were a correct LR scheduler (which I set manually for each epoch) + the preprocessing (CropIfNeeded + Resize).
The preprocessing sped up convergence, which helped me to perform a lot of experiments</p>
<p>It is very hard to say how much each step influenced on the LB because I did not make a lot of submissions (validation was very good).
According to validation:
- filtering (~0.005 - 0.01, do not remember exactly)
- pseudo labeling (it increased ~0.01)
- second level (it increased 0.005 - 0.007)</p>
<p>No, I used just pure pytorch as a DL framework
No, I did not use GAPNet, Just resnet, DenseNet, NasNet like architectures </p>
<p>No, nothing from this</p>
<p>I can't say exactly because I found these improvements at an early stage of my training. At the beginning I got a boost of roughly 0.59 -> 0.605 on CV (yes, it is great, but I did not compare the latest models)</p>
<p>I have no visa ;(</p>
<p>Congrats to all who finished in the gold zone. Now, while we are waiting for the second stage results, let me share my part of the solution. I do not expect a shake-up, so I think the leaderboard will stay the same.</p>
<p>First of all, what is the main challenge in this competition? Of course it is noisy data and we should find a way to work with it.
So, i can divide my solution into several stages:</p>
<p>Hardware: 6x 1080 ti with 36 cores and 120 gb RAM in total.</p>
<p><strong>Stage 0. The same part for all stages:</strong>
Models: SENet154, PNasNet-5, SE-ResNext101 (all pretrained from cadene repo)
CV: 5 folds with Multilabel Iterative Stratification
<em>Data Augmentations:</em>
<code>
HorizontalFlip(p=0.5),
OneOf([
RandomBrightness(0.1, p=1),
RandomContrast(0.1, p=1),
], p=0.3),
ShiftScaleRotate(shift_limit=0.1, scale_limit=0.0, rotate_limit=15, p=0.3),
IAAAdditiveGaussianNoise(p=0.3),
</code>
<em>Data Preprocessing:</em>
This part was a great finding which sped up convergence and increased the score a lot. Analysis of different models with Crops(224), Crops(288), Crops(300) and so on showed that cropping influences the tags a lot. Imagine that you have a 500x300 picture and this image is labeled with the tag «person» if there are persons somewhere in the image. There is a not so low probability that you crop them out => your data becomes more and more noisy.
So, I decided to use CropIfNeeded + Resize. The main idea is to perform the crop if possible (for example, a 600x600 crop from a 300x500 image gives 300x500; from a 300x900 image it gives 300x600). Code below:</p>
<pre><code>class RandomCropIfNeeded(RandomCrop):
    def __init__(self, height, width, always_apply=False, p=1.0):
        super(RandomCrop, self).__init__(always_apply, p)
        self.height = height
        self.width = width

    def apply(self, img, h_start=0, w_start=0, **params):
        h, w, _ = img.shape
        return F.random_crop(img, min(self.height, h), min(self.width, w), h_start, w_start)
</code></pre>
<p>So, I used the following couple:</p>
<pre><code>RandomCropIfNeeded(SIZE * 2, SIZE * 2),
Resize(SIZE, SIZE)
</code></pre>
<p>With SIZE = 320 for SEResNext101 / SENet154 and SIZE = 331 for PNasNet-5</p>
<p><em>TTA</em>: Using the previous approach for data preprocessing we can just use TTA2 (original + hflip image)</p>
<p><em>Scheduler</em>: manual, the correct scheduler increased accuracy a lot. I found it by analyzing plots with metrics / losses.
I started with LR: 0.005 then train 15 epochs, drop it by 5 times and train some epochs more.</p>
<p><strong>Stage 1. Training the zoo:</strong></p>
<p><em>Loss</em>: Focal
<em>Sampling</em> with logarithmic weights
<em>Batch size</em>: 1000-1500 (accumulation 10-20 times for different models)</p>
<p><strong>Stage 2. Filtering predictions:</strong>
Drop images from train with very high error between OOF predictions and labels (I consider them as very noisy and incorrect)</p>
<p><em>Loss</em>: Focal
+ Hard negative mining (Sample 5% of hardest samples each epoch)</p>
<p>Re-train models from scratch.</p>
<p><strong>Stage 3. Pseudo labeling:</strong></p>
<p><em>Loss</em>: Focal</p>
<p>The simplest version of pseudo labeling, add the most confident predictions (highest np.mean(np.abs(probabilities - 0.5)) ) to the training dataset.</p>
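<p>A small sketch of this selection rule (the cut-off <code>top_n</code> is a placeholder, as the write-up only says the most confident predictions were added):</p>
<pre><code>import numpy as np

def most_confident(test_probs, top_n=5000):
    # confidence = mean distance of the per-class probabilities from 0.5
    confidence = np.mean(np.abs(test_probs - 0.5), axis=1)
    return np.argsort(-confidence)[:top_n]   # indices of the most confident test images
</code></pre>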
<p>Re-train models from scratch.</p>
<p><strong>Stage 4. Culture and tags separately:</strong></p>
<p>I found 2 main things about cultures and tags:
1. tags are less noisy than cultures (and starting from some epoch the culture accuracy did not increase very much)
2. some tags classes are very similar to ImageNet classes</p>
<p>So, using already trained weights as pretrain I continue training model for 705 classes (tags only) </p>
<p><em>Loss: Focal</em> -> BCE (yes, here I switched the loss; it helped too)</p>
<p><strong>Stage 5. Second-level model</strong></p>
<p>I constructed a binary classification dataset: I took 1103 (the number of classes) rows per image and tried to predict whether each class relates to that image (0 or 1). This means the length of my train data becomes <code>len(data) * 1103</code>.</p>
<p>I extracted the following features:
- probabilities of each models, sum / division / multiplication of each pair / triple / .. of models
- mean / median / std / max / min of each channel
- brightness / colorness of each image (you can say me that NN can easily detect it — yes, but here i can do it without cropping and resizing — it is less noisy)
- Max side size and binary flag — height more than width or no (it is a little bit better for tree boosting than just height + width in case of lower side == 300)
- Aaaaand the secret sauce: ImageNet predictions ;) As I already mentioned — some tags classes similar to ImageNet classes, but ImageNet much bigger, pretrained models much more generalized. So, I add all 1000 (number of ImageNet classes) predictions to this dataset</p>
<p>So, then I trained LightGBM on all this data.</p>
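<p>A minimal sketch of how such a per-(image, class) dataset could be assembled and fed to LightGBM (placeholder arrays and a much smaller feature set than described above):</p>
<pre><code>import numpy as np
import lightgbm as lgb

# probs: (n_images, 1103) first-level probabilities, img_feats: (n_images, n_meta) per-image features,
# labels: (n_images, 1103) binary targets -- all placeholders
n_images, n_classes = probs.shape
blocks = []
for c in range(n_classes):
    blocks.append(np.column_stack([
        np.full(n_images, c),    # class id
        probs[:, c],             # first-level probability for this class
        img_feats,               # brightness, max side, ImageNet predictions, ...
    ]))
X = np.vstack(blocks)            # shape (n_images * 1103, 2 + n_meta)
y = labels.T.reshape(-1)         # matching 0/1 target per (image, class) row
clf = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
clf.fit(X, y)
</code></pre>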
<p><strong>Hints / Postprocessing:</strong>
- Different threshold for cultures and tags models.
- EDA shows that tags are fully labeled and cultures may be not. So, I binarize predictions using the following code:</p>
<p><code>
culture_predictions = binarize_predictions(predictions[:, :398], threshold=0.28, min_samples=0, max_samples=3)
tags_predictions = binarize_predictions(predictions[:, 398:], threshold=0.1, min_samples=1, max_samples=8)
predictions = np.hstack([culture_predictions, tags_predictions])
</code></p>
<p>(thank you to <a href="/lopuhin">@lopuhin</a> for the binarize_predictions code in his kernel)</p>
<p>Total training time: ~10-15 days (that's why I submitted not very often ;) )</p>
<p>P.S. I did not submit all of this ensemble to the private stage… because I ran into the kernel time limit :facepalm: (for example, I used LGBM for pseudo-labeling but did not use it in the next cycle of test predictions)</p>
<p>Yep, I can</p>
<p>Sorry for late answer
1. I used a WeightedSampler where the weights were set to the log of the class probabilities in the dataset
2. No, I add the hardest data in each epoch (from the previous one). I sampled the 5% of samples with the biggest loss on the previous epoch into the current one. (The number of images in an epoch != the max number of images)</p>
iMet Collection 2019 - FGVC6 | [Solution] 22 Place, LB 0.650 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMet Collection 2019 - FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here are my findings and tricks, which helped me to get LB 0.650 by combining 3 models: se_resnext101_32x4d, pnasnet5large, senet154.</p>
<p><strong>Validation</strong>: I used CV 5. Folds were made by the <a href="https://github.com/trent-b/iterative-stratification">iterative stratification package</a>. The package is extremely useful for unbalanced dataset and multiclass classification.</p>
<p><strong>Scheduler</strong>: is one of the most crucial parts for fast convergence. I used
CosineAnnealingWarmRestarts increasing the frequency by factor of two after each restart (see the sketch). Using this strategy 15 Epochs(3 restarts) were enough to converge. And I was able to test different ideas much faster.
<img src="https://i.ibb.co/ggyVkRJ/sgdr.jpg" alt="Learning rate"></p>
<p><strong>Augmentation:</strong> everything is very standard: resize + random crop, small color jitter, hflip and small affine:
<code>
transforms.Resize(img_size),
transforms.RandomCrop((img_size, img_size), padding=0, pad_if_needed=True),
transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomAffine(degrees = 10, translate= (0.0, 0.2), scale=(0.8, 1.2), shear=10)
</code>
<strong>Mixup</strong>: is a trick from the paper: <a href="https://arxiv.org/abs/1812.01187">Bag of Tricks for Image Classification with Convolutional Neural Networks</a>. The parameter of beta distribution was reduced by the same scheduler as the learning rate. It was reduced from 1 (very aggressive mixup) to 0 (no mixup at all). Mixup produced better validation results, but made the effect of TTA smaller. At the end it added around 0.002 per model. Adding this trick also made the convergence slower.</p>
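<p>A small sketch of mixup over a batch and its shuffled copy (a generic implementation assuming multi-hot targets; the annealing of <code>alpha</code> from 1 to 0 would be driven by the same scheduler as described above):</p>
<pre><code>import numpy as np
import torch

def mixup_batch(images, targets, alpha):
    # alpha is annealed from 1 (aggressive mixing) towards 0 (no mixing) during training
    lam = 1.0 if alpha == 0 else np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1 - lam) * images[perm]
    mixed_targets = lam * targets + (1 - lam) * targets[perm]
    return mixed_images, mixed_targets
</code></pre>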
<p><strong>Dropout</strong>: The finding for the se_resnext101_32x4d: the standard dropout is 0.3, after increasing the dropout to 0.5 the CV and LB increased by ~0.005.</p>
<p><strong>Loss</strong>: BCEWithLogitsLoss. The experiments with using Focal Loss or combining Focal Loss with BCE were not successful.</p>
<p><strong>Batch Size</strong>: gradient accumulation didn’t help me much, so I just fit the batch size to occupy the whole GPU memory.</p>
<p><strong>Threshold fitting</strong>: I guess it helped a lot to get nice results and made it easier to compare different experiments based on LB.
First, I used validation to get the optimal threshold value, then I used this threshold to calculate average amount of labels per instance for the validation. It was equal to 5.2. During the inference I dynamically calculated the threshold such that I will approximately have 5.2 labels per image. Here is the code:</p>
<p><code>
desired_mean = 5.2
closest = float('inf')  # initialize before searching for the best threshold
for th in np.arange(1000)/10000. + 0.01:
pred = (np.array(res) &gt; th).astype(np.float)
if np.abs(pred.sum()/len(pred) - desired_mean) &lt; closest:
closest = np.abs(pred.sum()/len(pred) - desired_mean)
fix_th = th
</code></p>
<p>I also have another strategy, where I try to have a certain distribution of the predictions. It performed just a bit better (<0.001) than the previous strategy, but it is quite hard to explain (I can share the code if someone will be interested).</p>
<p><strong>TTA</strong>: All augmentations from training, parameters of RandomAffine and ColorJitter are smaller.</p>
<p><strong>Results</strong>:</p>
<p>| model | description | CV | LB |
| --- | --- | --- | --- |
| se_resnext101_32x4d | size: 300x300; + TTA4 | 0.607 | 0.642 |
| pnasnet5large | size: 331x331; + mixup + TTA4 | 0.609 | 0.641 |
| senet154 | size: 224x224; + mixup + TTA4 | 0.605 | 0.639 |</p>
<p>Ensemble of these 3 networks with TTA2: 0.650.</p>
iNaturalist Challenge at FGVC 2017 | 2018version | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iNaturalist Challenge at FGVC 2017
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>FYI:<a href="https://github.com/visipedia/inat_comp/blob/master/README.md">https://github.com/visipedia/inat_comp/blob/master/README.md</a></p> |
iWildCam 2020 - FGVC7 | LB #1 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/stevenyin">@stevenyin</a> Hi steven, Here's our solution, cheers.</p>
<p>Sorry for the delay in posting our solution.
Thanks to other teams for posting their solutions and the host for hosting this competition.</p>
<p>We would like to share our solution with the competition participants and also the fine-grained recognition community.</p>
<p>In general, we utilize EfficientNet-b6 and SeResNeXt101 as our backbone networks; several training methods and tricks are used to help the model learn generalizable feature representations across camera locations. Note that our method achieves a 0.953 public and 0.906 private score, while the highest submission, which we did not pick, was 0.953 and 0.917 on the public and private leaderboards, respectively. In detail:</p>
<ol>
<li>A mixup strategy for the long-tailed data distribution which utilizes two data samplers: one is a uniform sampler which samples each image with uniform probability, the other is a reversed sampler which samples each image with a probability proportional to the reciprocal of the corresponding class sample size. The images from these two data samplers are then mixed up for better performance.</li>
<li>An auxiliary classifier over camera locations is added to our network; before this location classifier, the features pass through a gradient reversal layer to ensure the model learns features that generalize across locations (a minimal sketch of such a layer follows this list).</li>
<li>Adversarial training (AdvProp) was utilized to learn a noise robust representation.</li>
<li>We ran MegaDetector V4 on all images.</li>
<li>The bbox with the max confidence in each image is cropped to train a bbox model.</li>
<li>We train two versions of each model, one with original full image and one with the cropped max confidence bbox.</li>
<li>During test, for the image which has at least one bbox, the prediction is a weighted average of bbox model and full image model(0.3 full image + 0.7 bbox), for the images which have no bbox, we will use the full image model.</li>
<li>We first obtain the predictions of Efficientnet-b6 and SeResNeXt101 using the aforementioned procedure, then the final prediction is a simple average of the two network architectures.</li>
<li>We average the predictions which are clustered by location and datetime, as the sequence annotation appears to be noisy.</li>
</ol> |
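<p>For reference, a minimal sketch of a gradient reversal layer of the kind mentioned in step 2 (a standard construction, not necessarily the team's exact code):</p>
<pre><code>import torch

class GradReverse(torch.autograd.Function):
    # identity in the forward pass, flips (and scales) the gradient in the backward pass
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    # features passed through this before the location classifier are pushed
    # to carry as little camera-location information as possible
    return GradReverse.apply(x, lambd)
</code></pre>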
iWildCam 2020 - FGVC7 | LB #2 Documentation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/stevenyin">@stevenyin</a> here you go :)</p>
<p>Apologies for the late submission documentation, and thanks to the organisers of the competition - this was great fun.</p>
<p>After attempting many different approaches my highest scoring solution was generally as follows:</p>
<ol>
<li>Removed noisy and/or 'human' class categories - 'start', 'end', 'unknown', and 'unidentifiable'. Retaining the empty class seemed important for reducing activation on background to generalise better.</li>
<li>Incorporated iNat 2017/2018 data for shared classes.</li>
<li>Ran MegaDetector v4 on all images.</li>
<li>Took only animal detection crops with confidence >0.3 and discarded detections which were too small. Also discarded crops within the same image if they were much less confident than the highest.</li>
<li>Made sure to expand height of crop to be 224 pixels, even if the bounding box was smaller, then used reflection padding to square the crop and resized to 224x224 resolution.</li>
<li>Applied various data augmentations including CLAHE, hue shift, gaussian noise, cutout, scale/rotate/shift, brightness/contrast adjustments and grayscale.</li>
<li>Created a custom classifier head which incorporated metadata from the image. This used cos/sin representations of the time of year and time of day, and information regarding the crop (width, height, pitch, yaw); a sketch of this encoding follows the list.</li>
<li>Trained for 7 epochs with a single EfficientNetB4 model pretrained with imagenet noisystudent weights, using label-smoothed cross-entropy focal loss. This was 2 hours of training.</li>
<li>Used TTA with every training augmentation except for noise to get predictions.</li>
<li>Combined the predictions for each crop within an image to get a total average for the image.</li>
<li>Averaged these predictions across images which were clustered by time and location for the final submission. This was only done as the sequence ID labels appeared to be unreliable.</li>
<li>Achieved 0.930 public, 0.903 private LB score. The highest submission with this approach which I didn't pick was 0.908 private.</li>
</ol>
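<p>A minimal sketch of the metadata features from step 7 (and the pitch/yaw normalisation described in the reply further down). This is not the author's code; the function name, feature set and ordering are my own assumptions based on the write-up.</p>
<pre><code>
import math

def encode_metadata(dt, crop_box, image_w, image_h):
    """dt: capture datetime; crop_box: (x, y, w, h) of the detection crop in pixels."""
    # Cyclic encodings so 23:59 sits next to 00:00 and Dec 31 next to Jan 1.
    day_frac = (dt.hour * 3600 + dt.minute * 60 + dt.second) / 86400.0
    year_frac = dt.timetuple().tm_yday / 365.25
    x, y, w, h = crop_box
    cx, cy = x + w / 2.0, y + h / 2.0
    pitch = 1.0 - cy / image_h                       # 0 at the bottom of the frame, 1 at the top
    yaw = abs(cx - image_w / 2.0) / (image_w / 2.0)  # 0 at the center, 1 at the edge
    return [
        math.sin(2 * math.pi * day_frac), math.cos(2 * math.pi * day_frac),
        math.sin(2 * math.pi * year_frac), math.cos(2 * math.pi * year_frac),
        w / image_w, h / image_h, pitch, yaw,
    ]
</code></pre>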
<p>This was my first proper experience with computer vision so I unfortunately ran out of time to try a lot of the things I wanted due to my slowness :). I wanted to try:
- Ensembling different EfficientNets at different input resolutions.
- Using the location multisat data to create a geo prior with which to augment the prediction. I had initially used this data directly in a single classifier model but it too easily overfit.
- Creating a more realistic infrared data augmentation rather than relying on grayscale.
- Using a siamese network to distinguish between the most confused species.
- Reintroducing the 'vehicle'/'human' megadetector crops with a prior or separate classifier. </p>
<p>Hi Justin. Since I'm using a crop from a larger image rather than the original the intuition was that knowing whether a crop is high above the camera's horizon or on the ground, or whether it was cropped at the edge of the image may help in making accurate predictions. The pitch is 0. -> 1. from the bottom of the frame to the top, and the yaw is normalized to be 0. at the center and 1. at the edge. It seemed to give a small bump in validation accuracy so I kept it.</p>
<p>I didn't record/don't remember how big the bump in accuracy from the iNat classes was, sorry. I don't believe it was too significant.</p>
<p>Done! :) </p> |
iWildCam 2020 - FGVC7 | LB #3 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Kagglers,
Sorry for the late post about our solution. My teammate and I summarize our solution in the following perspectives:
<code>
- Data
- Preprocessing/Augmentation methods
- Models
- Prediction technique
- Class imbalance
- Fine-grained features
</code></p>
<ul>
<li><p>Data</p>
<ol><li>We used crops of animals for training, provided by the official detector results. Both 64x64 and 224x224 crops were tried, and the 224x224 crops gave a better score;</li>
<li>Some of the original bounding boxes were highly skewed, we tried to extend the skewed bounding boxes to make it quasi-square, and resized them to 224x224;</li></ol></li>
<li><p>Preprocessing methods</p>
<ol><li>We tried multiple augmentation methods, ranging from geometric transforms to photometric techniques. The final augmentations we employed were horizontal flip and CLAHE (contrast-limited adaptive histogram equalization).</li>
<li>Augmentations were applied during training on the fly;</li></ol></li>
<li><p>Models
<code>
| Model                                              | Public LB | Private LB |
|----------------------------------------------------|-----------|------------|
| EfficientNet                                       | 0.835     | 0.823      |
| ResNet152                                          | 0.794     | 0.815      |
| NTS with ResNet50                                  | 0.837     | 0.841      |
| NTS with ResNeXt50                                 | 0.840     | 0.850      |
| Final Ensemble Model = 6 NTS models + EfficientNet | 0.843     | 0.848      |
</code>
Another trick we'd like to point out is using <code>GridSearch</code> to search for the optimal weights of each weak learner, using the local validation score as the benchmark (a sketch of this weight search follows the list).</p></li>
<li><p>Prediction technique
We find the test images are sorted in the following order:
<code>
Sequence ID --&gt; Locations --&gt; Time Stamp --&gt; image dimension
</code>
Therefore, test crops were sorted into clips following the above logic. During prediction, a per-clip prediction was adopted naturally: for each clip, we averaged the predicted probabilities of all images to obtain the final label, which was then shared by all images in that clip.</p></li>
<li><p>Class imbalance</p>
<ol><li>We tried multiple ways to counter the class imbalance issue. For example, we used class weights calculated via the effective number proposed in the paper "Class-Balanced Loss Based on Effective Number of Samples", which didn't work. We also tried the BBN network but failed to make it run. In the end, we used routine class weights calculated as <code>1 / # of images</code> of each class.</li></ol></li>
<li><p>Fine-grained features</p>
<ol><li>We used a SOTA network, NTS, which consists of a Navigator, a Teacher and a Scrutinizer. </li></ol></li>
<li><p>Wrap up
So, that's basically our team's solution to Rank 3 in this competition. We'd like to learn from the rest of the teams, especially the #1 and #2 teams. If any team member from these two teams happens to see my post, please leave me a message so that I can drop you an email/WeChat message to connect. </p></li>
</ul>
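<p>A minimal sketch of the <code>GridSearch</code> over ensemble weights mentioned in the Models item above. This is not the team's code: the grid step, the simplex constraint and the accuracy metric are illustrative assumptions, and an exhaustive search like this is only practical for a handful of weak learners.</p>
<pre><code>
import itertools
import numpy as np

def search_ensemble_weights(val_probs, val_labels, step=0.1):
    """val_probs: list of (n_samples, n_classes) arrays, one per weak learner."""
    best_w, best_score = None, -1.0
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    for w in itertools.product(grid, repeat=len(val_probs)):
        if abs(sum(w) - 1.0) > 1e-6:        # keep the weights on the simplex
            continue
        blend = sum(wi * p for wi, p in zip(w, val_probs))
        score = (blend.argmax(axis=1) == val_labels).mean()   # local validation accuracy
        if score > best_score:
            best_score, best_w = score, w
    return best_w, best_score
</code></pre>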
<p>Thanks,
Steven Yin & Ryan Zheng</p> |
iWildCam 2020 - FGVC7 | LB #4 Documentation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our final solution is based on a simple InceptionV3 model pre-trained on Imagenet. This model was trained on the camera trap images whose animals were cropped out using the provided Megadetector bounding boxes as reference. Then, we choose the model that reached the best performance on a validation set, which was split from the training set taking into account the locations. The final prediction assigned for each image is obtained by using the majority voting among images within the same sequence. Due to the fact that the provided sequence data is noisy, we have used location and timestamp information to determine which images should belong to the same sequence.</p>
<h3>Dataset splitting</h3>
<p>Our validation set has approximately 11% of the original images of the training set. We have used a heuristic to try to keep both species and background diversity in the training and validation sets, described as follows. First, all the locations where the number of instances represents more than 1% of the total are added to the training set. This represents around 50% of the original training set. Then, for each class we randomly select locations to compose the validation set, one at a time, until at least 5% of instances of the current class are added to the validation set. After that, the remaining locations available for that class are used to compose the new training set. Classes appearing in fewer than 6 locations or with fewer than 30 instances were ignored during the splitting process. The classes 'empty', 'start', 'end', 'unknown', and 'unidentifiable' were ignored as well. This heuristic does not produce splits following the original class distribution. However, we do not expect the test set to follow exactly the same distribution either. It also leads to many classes not appearing in the validation set, especially those classes with fewer instances. Nonetheless, we have only removed classes that appear in the validation set but not in the training set. In the end, the training set was composed of 211 classes from 275 locations and the validation set was composed of 114 classes from 49 locations.</p>
<h3>Megadetector bounding boxes</h3>
<p>We have used the Megadetector results to crop images out before training. As the image resolutions vary across the dataset, we decided to use a square crop around the bounding box generated by Megadetector. The square size is the same as the largest dimension of the bounding box. Additionally, we have limited the square crop to a resolution of at least 450x450 and at most the height of each image. The square crop always keeps the bounding box area centered unless the crop would fall outside the image. In those cases, we move the crop to get valid pixels and the animal is no longer centered. For images without a bounding box, we just crop a centered square with the image height. This procedure was applied to all images before training, including images from the test set.</p>
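<p>A minimal sketch of the square-crop rule described above, under the assumption of landscape images (width >= height); the variable names and return convention are my own.</p>
<pre><code>
def square_crop(bbox, img_w, img_h, min_size=450):
    """bbox = (x, y, w, h) in pixels; returns (left, top, side) of a square crop."""
    x, y, w, h = bbox
    side = max(w, h, min_size)
    side = min(side, img_h)                  # never taller than the image
    cx, cy = x + w / 2.0, y + h / 2.0
    left, top = cx - side / 2.0, cy - side / 2.0
    # Shift the crop back inside the image; the animal may no longer be centred.
    left = min(max(left, 0), img_w - side)
    top = min(max(top, 0), img_h - side)
    return int(left), int(top), int(side)
</code></pre>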
<h3>Model training</h3>
<p>We have used an InceptionV3 model pre-trained on Imagenet with input size 299x299, which was trained on our training subset for 48 epochs. The best accuracy model among all epochs was selected as the final model. Data augmentation was used during the training, using the following transformations applied to the input images: zoom, rotation, horizontal flip, shearing, horizontal and vertical shifts. The model was trained using the SGD optimizer with momentum of 0.9 and a simple learning rate schedule based on epochs, similar to the procedure employed in the paper “Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning”. The learning rate schedule was 1-5: 0.005, 6-10: 0.001, 11-20: 0.005, 21-27: 0.001, 28-35: 0.0005, 36-onwards: 0.0001.</p>
<h3>Majority voting among images from the same sequence</h3>
<p>To determine the final prediction for each image, we first check for empty images. If our model predicts “empty” and Megadetector does not provide bounding box, the class “empty” is assigned to the input image. Otherwise, majority vote is calculated considering all images from the same sequence, excluding the ones already marked as empty. Images are considered as belonging to the same sequence when they are from the same location and the difference between their timestamps is at most 30 minutes.</p>
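<p>A rough pandas-based sketch of the 30-minute sequence grouping and majority vote described above (the shortcut for images predicted 'empty' with no Megadetector box is omitted for brevity); the column names are assumptions, not the authors' implementation.</p>
<pre><code>
import pandas as pd

def add_sequence_ids(df, gap_minutes=30):
    """df columns: location, timestamp (datetime64), pred (predicted class name)."""
    df = df.sort_values(["location", "timestamp"]).copy()
    new_seq = df.groupby("location")["timestamp"].diff() > pd.Timedelta(minutes=gap_minutes)
    df["seq_id"] = new_seq.astype(int).groupby(df["location"]).cumsum()
    return df

def vote_per_sequence(df):
    """Majority vote of non-empty predictions within each (location, seq_id) group."""
    non_empty = df[df["pred"] != "empty"]
    vote = (non_empty.groupby(["location", "seq_id"])["pred"]
            .agg(lambda s: s.value_counts().idxmax()).rename("seq_pred"))
    return df.merge(vote.reset_index(), on=["location", "seq_id"], how="left")
</code></pre>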
<h3>What we would like to have tested</h3>
<p><strong>Big Transfer (BiT)</strong>. This new model seems promising for transfer learning, requiring fewer instances per class to reach high performance. Unfortunately, we were unaware of it until few days before the deadline of the competition. Consequently, we weren’t able to test it in time.</p>
<p><strong>A model employing all images from the same sequence to identify animal species</strong>. Depending on the position of the animal on the scene, it may be very difficult even for experts to identify the species. However, if the same individual can be identified with high confidence on another image of the same capture event, this classification can be used for the whole event. An approach that uses this temporal information is described in the paper “Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection”, but we didn’t try it.</p>
<p><strong>MegadetectorV4 bounding boxes</strong>. We would like to have tested the new version of the Megadetector. We think this new version could have generated bounding boxes more accurately. It also includes a new vehicle class, which could have improved our results for this class.</p> |
iWildCam 2020 - FGVC7 | LB #8 Documentation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone! Thanks to the organizers for putting together such a fun competition and really interesting set of data. Here is our submission (rank #8 or #10 depending on whether you count the [Deleted] entries) and some other things we tried / wanted to try.</p>
<p>Our submission was a pretty simple ensemble of a resnet 152 and a resnet 101.</p>
<p>We used all of the provided Megadetector annotations to create a dataset of cropped images with species labels. We then split the data into a training and validation set (just 2 fold), making sure the validation set came from distinct locations and keeping the class distributions approximately the same. We used a ‘mini’ version of each of these (~5-10% of the images) as a mini-training and mini-validation set, which we did our hyperparameter tuning on. </p>
<p>The resnets were pretrained on ImageNet and finetuned on the camera trap crops with pretty heavy data augmentation (random flip, random rotate, random zoom, random contrast and lighting modification, random warp, and mixup). Both models were further fine-tuned for a couple epochs with a class-balanced subset of crops which contained no more than 100 examples per class.</p>
<p>Within each image, each Megadetector detection is cropped, sent through the classifiers, and then weighted by Megadetector’s detection confidence. We tuned how much to weight this using a simple ‘diminishing factor’ <em>d</em>: (detector_confidence + d) / (d + 1) * crop_prediction.</p>
<p><strong>(Here’s the interesting part that didn’t really go anywhere)</strong>
We then multiply each image’s prediction vector by a set of ‘geographical priors’ we constructed in a similar fashion to host Elijah’s paper <a href="https://arxiv.org/abs/1906.05272">Presence-Only Geographical Priors for Fine-Grained Image Classification</a>. Since we didn’t have latitude and longitude information, we used the datetime (encoded to sin/cos as in the paper) and the mean normalized difference vegetation index (basically infrared minus red) of each satellite image to construct the geographical priors model. We used a ‘diminishing factor’ for these priors too, but only ever saw about a ~1% increase in our validation score, and a 0.2% increase in our test score. But we thought this was interesting and are excited to see what others tried with the satellite imagery.</p>
<p>We tried a couple other methods in addition to mean NDVI that didn't work as well. First, we tried just using resnet-extracted image features for each satellite image as features for the model (replacing latitude and longitude in the paper’s model) - but, since the data is ‘presence-only’ the model relies upon generated negative examples, which we couldn’t really generate reasonably for the image features. We tried just selecting random satellite images from the dataset to use as ‘negatives’, but this is such a limited number of data points it didn’t seem to work. And just generating a random length-2048 feature vector as a negative seemed silly, since it seems unlikely that a satellite image would ever generate those features.</p>
<p>Then we tried the mean NDVI thing. Cool. As a sidenote, we eliminated all the cloud, shadow, and 0-value pixels from the satellite images using the QA data, so the NDVI we calculate is only for visible land.</p>
<p>Then we tried making that more complicated. Instead of using the mean NDVI (one value per image), we modeled a normal distribution of pixel-wise NDVI values over each satellite image, created a cumulative distribution function for those values for each image, and sampled 100 points from the CDF for features. That allowed us to generate more complex negative examples by generating random normal distributions with a mean between -1 and 1 (the values that the NDVI can take). This seemed like it was helping, as the validation loss of the geo priors model went down…but when we actually used it for priors for classification our score went down.</p>
<p>Anyway, this didn’t really help much, but I thought it was cool. Would love to chat more if anyone has any thoughts on this (or maybe the hosts can tell me why my ideas are bad :P )</p>
<p><strong>(Back to the boring part)</strong>
We then perform a moving average of the image-wise prediction vectors over each sequence. So, each image’s prediction becomes the average of the images nearest to it. We tuned this window size.</p>
<p>We then tried a majority vote ‘with dissenters’ within sequences (this may have a real name). It’s just a majority vote for the final class of each sequence, but if an image has a differing classification with a confidence higher than our ‘dissenter threshold’, we let it keep its prediction. This was meant to help sequences that were mostly empty save for a few frames. We tuned this threshold as well as a ‘handicap’ for non-empty images, such that a majority vote can only be for ‘empty’ if it has at least ‘handicap’ more empty images than the 2nd place class in the vote.</p>
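<p>A minimal sketch of the 'majority vote with dissenters' for a single sequence; the dissenter threshold and empty handicap are placeholders standing in for the tuned values, and the exact tie-breaking is my own choice.</p>
<pre><code>
import numpy as np

def vote_with_dissenters(seq_probs, empty_id, dissent_thr=0.9, empty_handicap=2):
    """seq_probs: (n_images, n_classes) prediction vectors for one sequence."""
    preds = seq_probs.argmax(axis=1)
    conf = seq_probs.max(axis=1)
    counts = np.bincount(preds, minlength=seq_probs.shape[1])
    # 'empty' may only win the vote if it beats the best non-empty class by the handicap.
    best_non_empty = np.delete(counts, empty_id).max()
    if counts[empty_id] - best_non_empty < empty_handicap:
        counts[empty_id] = 0
    majority = counts.argmax()
    final = np.full(len(preds), majority)
    dissent = (preds != majority) & (conf > dissent_thr)   # confident dissenters keep their own label
    final[dissent] = preds[dissent]
    return final
</code></pre>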
<p><strong>What we wanted to try</strong>
Sara’s paper! <a href="https://arxiv.org/abs/1912.03538">Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection</a>
We actually implemented a modification of this which just uses the memory banks and attention networks during classification (still using Megadetector for detections). Unfortunately we entered this thing pretty late and didn’t have time to train it….but we’ll give it a go as a ‘Late Submission’ :)</p>
<p>Well, that was a spiel. If you read through hopefully it’s either interesting, or you’re reading this a year from now as a starting point for iWildcam 2021. Heh</p>
<p>So long, and thanks for all the ocellated turkey….</p> |
iWildCam 2022 - FGVC9 | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2022 - FGVC9
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our method is based on filtering the detection results of MegaDetector. No model training or tracking algorithm is involved. We simply return the maximum number of objects among all images in a sequence as our final prediction.<br>
By observation, if we set 0.95 as the confidence threshold, we undercount animals in images with high object density and overcount animals in images with low object density. So we divided all images into low-object-density and high-object-density images and then used different filtering methods for each. The threshold between the two is 8 predictions in an image.</p>
<p><strong>high-object-density images</strong><br>
To count more animals, we set the confidence threshold to 0.0. To remove duplications we applied NMS with IoU=0.2, with small modifications to preferentially suppress smaller boxes. I cannot upload images, but on visual inspection the filtering result is quite good. With the help of our NMS method, the public score reached 0.253.</p>
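<p>A hedged sketch of one possible reading of the modified NMS above: boxes are processed largest-first so that, above IoU 0.2, the smaller of two overlapping boxes is the one suppressed. This is my interpretation, not the team's exact code.</p>
<pre><code>
import numpy as np

def iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms_keep_larger(boxes, iou_thr=0.2):
    """Return indices of boxes kept after suppressing smaller overlapping ones."""
    areas = [(b[2] - b[0]) * (b[3] - b[1]) for b in boxes]
    keep = []
    for i in np.argsort(areas)[::-1]:        # largest boxes first
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep
</code></pre>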
<p><strong>low-object-density images</strong><br>
With the confidence threshold set to 0, we assume that a region where many bounding boxes overlap has a higher true-positive probability than a region without overlapping boxes. After a few rounds of tuning, we found that a confidence threshold of 0.98 for boxes without overlaps and 0.8 for boxes with overlaps gave the best public score, which reached 0.249. We also applied a second round of filtering for images with 1 or 2 objects left after the first round, using a confidence threshold of 0.98, which gave our best result of 0.247.</p>
<p><strong>Summary</strong><br>
We did not try any training methods because ground truth is not provided; if we used the detector's results as training data, error propagation would occur, and a new model could not outperform MegaDetector unless we manually filtered out its false-positive predictions. We entered this competition one week before the deadline, so we did not have enough time to focus on a tracking algorithm. If time permits, we will continue the study and focus on tracking objects across the image sequence. </p> |
iWildCam 2022 - FGVC9 | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2022 - FGVC9
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the winners and thank you to the competition hosts, current and past, for continuing to sponsor this important, enjoyable competition. I confess that it still challenges me even in its reduced animal-only, counting form. Goats! How can anyone count goats, wandering here and there, staring into the camera? I would love to hear how the top teams - anybody for that matter - approached this counting problem.</p>
<p><strong>Outline of my approach</strong></p>
<ul>
<li>Use <em>MegaDetector</em> v4 detections to train a 2nd animal detector based on <a href="https://github.com/ultralytics/yolov5" target="_blank"><em>Yolov5</em></a>.</li>
<li>Merge MegaDetector and Yolov5 detections using <em>weighted boxes fusion</em> <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">WBF</a> (a sketch of this merge follows the list).</li>
<li>Apply a custom frame-to-frame object tracking algorithm focused on individuals within herds/packs. </li>
</ul>
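<p>A minimal sketch of the WBF merge in the second bullet above, using the ZFTurbo ensemble-boxes package; the detector weights and thresholds shown are illustrative, not the tuned values.</p>
<pre><code>
from ensemble_boxes import weighted_boxes_fusion

def merge_detections(md_boxes, md_scores, yolo_boxes, yolo_scores):
    """Boxes are [x1, y1, x2, y2] normalised to [0, 1]; a single 'animal' class is assumed."""
    boxes_list = [md_boxes, yolo_boxes]
    scores_list = [md_scores, yolo_scores]
    labels_list = [[0] * len(md_boxes), [0] * len(yolo_boxes)]
    boxes, scores, labels = weighted_boxes_fusion(
        boxes_list, scores_list, labels_list,
        weights=[1.0, 1.0],      # equal trust in MegaDetector and YOLOv5 (assumption)
        iou_thr=0.5,             # illustrative
        skip_box_thr=0.05)       # illustrative
    return boxes, scores
</code></pre>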
<p><strong>The herd/pack species that I focused on:</strong></p>
<table>
<thead>
<tr>
<th>Category Id</th>
<th>Common Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>white-lipped peccary</td>
</tr>
<tr>
<td>8</td>
<td>collared peccary</td>
</tr>
<tr>
<td>70</td>
<td>wild goat</td>
</tr>
<tr>
<td>71</td>
<td>domesticated cattle</td>
</tr>
<tr>
<td>72</td>
<td>domestic sheep</td>
</tr>
<tr>
<td>90</td>
<td>african bush elephant</td>
</tr>
<tr>
<td>96</td>
<td>impala</td>
</tr>
<tr>
<td>256</td>
<td>dromedary camel</td>
</tr>
</tbody>
</table>
<p><strong>Observations</strong></p>
<ul>
<li><p>I first learned about WBF in the <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef" target="_blank">Kaggle COTS starfish competition</a> where it was used by many teams to post process detections. I used it successfully to merge the sponsor-provided MedaDetector detections with my Yolov5 detections and it gave me a public/private LB score of 0.275/0.265 (above Benchmark: iWildCam 2021 winner). I spent some time tuning the algorithm's hyperparameters but I'm not sure it provided any significant benefit beyond my initial success.</p></li>
<li><p>I found plenty of anecdotal evidence that an inter-frame counting approach could improve on the standard <strong>max</strong> approach of the benchmarks, but I was never able to get my solution to provide any benefit once I scaled up to the whole dataset. Since the approach works well in video, I hoped it would work with some of our sequences. The first step in my process was to try to estimate the overall direction a herd was moving so that I could eliminate many candidates from the inter-frame matching process. Again, this was easy to do in many cases, but unreliable overall, leading to weaker matching in most cases. I'm still looking into the details of exactly what happened.</p></li>
<li><p>Additional detail for the inter-frame matching approach: Herd sequences were identified by a 9-class Yolov5 detector. For any sequence with a preponderance of herd detections, detection-level matching was performed using the following method. An autoencoder was trained on small chips from the center of mass of the DeepMac mask for each detection. The autoencoder produced a 512-element latent feature vector for each detection. These latent features (along with X, Y, area, delta-T) were used in detection-to-detection matching within the sequence. Using the DeepMac segmentation masks to select which chips are fed to the autoencoder showed improvement over using the entire detection (scaled) or the center of the detection. Presumably this is due to the elimination of non-individual pixels due to occlusion and/or background between the individual's legs.</p></li>
<li><p>My largest counting errors in the validation set usually involved domestic cattle. Consequently, I spent a lot of time looking at sequences of domestic cattle and getting a little discouraged about not being able to spend more time with the more <em>exotic animals</em>. Then I heard a story on public radio about how damaging cows can be in many different situations. This got me thinking about camera traps for habitat destruction monitoring and suddenly counting cattle seemed much more significant. </p></li>
</ul> |
iWildcam 2021 - FGVC8 | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildcam 2021 - FGVC8
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our final solution is divided into two components: a sequence classifier based on an ensemble of EfficientNet-B2 [1] models and a counting heuristic following the one proposed for the competition benchmark.</p>
<h3>Classifier</h3>
<p>Following the previous year’s solutions, we trained two EfficientNet-B2 models: one using full images and a second one using square crops from bounding boxes generated by MegaDetectorV4. We used Balanced Group Softmax [2] to handle the class imbalance problem in the dataset. The prediction assigned to each image is the weighted average of the predictions assigned to the bounding box with the highest score and to the full image (0.15 full image + 0.15 full image (flip) + 0.35 bbox + 0.35 bbox (flip)). To classify a sequence, we average the predictions of all nonempty images for each sequence.</p>
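<p>A minimal sketch of the per-image blending and per-sequence averaging described above; the 0.15/0.35 weights come from the write-up, while the function names and empty handling are my own simplification.</p>
<pre><code>
import numpy as np

def image_prediction(p_full, p_full_flip, p_bbox, p_bbox_flip):
    """Blend full-image and top-bbox predictions, each with its horizontal flip."""
    return 0.15 * p_full + 0.15 * p_full_flip + 0.35 * p_bbox + 0.35 * p_bbox_flip

def sequence_prediction(image_preds, empty_id):
    """Average the prediction vectors of the non-empty images in a sequence."""
    non_empty = [p for p in image_preds if p.argmax() != empty_id]
    if not non_empty:
        return None                          # sequence judged empty
    return np.mean(non_empty, axis=0)
</code></pre>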
<p><strong>Training procedure</strong>: Before using Balanced Group Softmax, we trained the classifiers using the standard softmax. We initialized EfficientNet-B2 with ImageNet weights and trained it for 20 epochs on the iWildCam2021 training set using the default input resolution (260x260). Then we fine-tuned the last layers for 2 epochs using a higher input resolution (380x380) to fix the train/test resolution [3]. Specifically for the bbox model, we pretrained the classifier layer for 4 epochs before unfreezing all layers to train. During the training, we used label smoothing and SGD with momentum. We also applied warmup and cosine decay to the learning rate. Still for the bbox model, we considered all bounding boxes with confidence > 0.6 as an independent training sample and applied a square crop around it with the size of the largest side. We used the image label as the bounding box label.</p>
<p><strong>Image preprocessing</strong>: In terms of image preprocessing, we applied random crop, random flip, and RandAugment (N=6, M=2) [4]. During the fix train/test resolution stage, we used the test preprocessing, which consists only of resizing the image/crop to the network input (Keras EfficientNet includes normalization layers).</p>
<p><strong>Handling the class imbalance</strong>: For balanced group softmax (bags), we grouped classes into 4 softmax according to the number N of training instances: N < 10, 10 < = N < 100, 100 < = N < 1000, N > = 1000. We also included a special softmax for the foreground/background. Following the original bags paper, we also used the class “others” on each softmax to represent instances from all classes not included on that softmax. For the final prediction, we remap all predictions to the original softmax, ignoring the “others” class. As in the original paper, the predictions are not true probabilities, since they do not sum up to one, but we consider the highest value as the bags prediction. We used bags on top of the pretrained models with the standard softmax, which was removed, and we kept all weights frozen. First, we trained the bags classifiers layers for 12 epochs using data augmentation with test input resolution (380x380). Then, we tuned for 2 epochs using test time preprocessing. During the training, we subsampled the “others” category instances to avoid dominating the softmax by sampling a maximum of 8 times the number of instances belonging to categories from that softmax at each batch. The inference used 380x380 as model input resolution.</p>
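<p>A small sketch of how classes could be assigned to the four Balanced Group Softmax bags by training-set frequency; the extra foreground/background softmax and the per-bag 'others' class from the BAGS paper are only noted in comments.</p>
<pre><code>
import numpy as np

def assign_bags(train_labels, bins=(10, 100, 1000)):
    """Bag index per class: 0 for N<10, 1 for 10<=N<100, 2 for 100<=N<1000, 3 for N>=1000."""
    counts = np.bincount(train_labels)
    return np.digitize(counts, bins)

# Example: class counts [3, 250, 12, 40000] -> bags [0, 2, 1, 3].
# On top of these four group softmaxes, BAGS adds a foreground/background softmax
# and an 'others' class inside each group for instances belonging to the other groups.
</code></pre>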
<p><strong>Validation</strong>: To validate the models, we split the training set into train/validation according to the locations. We also used the validation set to adjust hyperparameters. For the final submission, we used all training instances to train the model using the tuned hyperparameters.</p>
<p>Initially, we were using voting among images predictions to classify a sequence, but after adding bags we found out that averaging worked better.</p>
<h3>Counting heuristic:</h3>
<p>We followed the competition benchmark counting heuristic: the maximum number of bounding boxes across any image in the sequence. We only counted bounding boxes from MegaDetectorV4 with confidence > 0.8.</p>
<p>This counting strategy limits us to predict only one species per sequence. We see our solution as a strong baseline for counting animals on camera trap sequences, but we do believe that the best solution should be based on tracking animals across images (Multi-object tracking), classifying each track, and counting them. We tried DeepSORT to track animals, but it was not as good as this heuristics for counting (we describe this procedure below).</p>
<h3>Solution:</h3>
<p>Our code, training configs and models are publicly available at <a href="https://github.com/alcunha/iwildcam2021ufam" target="_blank">https://github.com/alcunha/iwildcam2021ufam</a>.</p>
<h3>Other things that we tried</h3>
<p><strong>Geo Prior Model</strong>: We tried to use the GPS coordinates and time of year (image timestamps) to train the Geo Prior model [5], but the model overfitted the iWildCam 2021 training set. We tried to add some noise to data by varying the GPS coordinates within 5km radius and timestamp within 10 days. We also replaced the original Geo Prior model loss with the focal loss. However, using only the classifier predictions was better than using them combined with geo priors. We believe GPS coordinates can be useful for the problem (it worked very well for our iNat 2021 solution, for instance), but it's necessary to develop a model to deal with camera trap specificities, such as the fixed position. Our Geo Prior implementation is publicly available at <a href="https://github.com/alcunha/geo_prior_tf/" target="_blank">https://github.com/alcunha/geo_prior_tf/</a>.</p>
<p><strong>Multi-object tracking with DeepSORT</strong>: We tried to use DeepSORT [6] to generate tracks over sequences and then classify each track by averaging the predictions of its bounding boxes list using the bbox model. We used the feature embedding from EfficientNet-B2 trained on bounding boxes. We also tried to use features from ReID models trained by the winner of the ECCV TAO 2020 challenge, but they performed slightly worse than using EfficientNet-B2 features. Finally, we tried to use DeepSORT removing its dependency on geometry cues as the ECCV winner, but it generated 37000 tracks which is much more than the original DeeepSORT which generated ~4000 or the counting heuristic, ~6000.</p>
<p>After some analysis on the validation set, we observed that our classifier could be improved, so we got back to it for the final sprint and had no time left to work on DeepSORT. We would like to have tuned the DeepSORT hyperparameters better. For the track classifier, instead of just averaging predictions, we could have trained a classifier that takes a set of images as input. To avoid GPLv3 licensing conflicts we kept our DeepSORT implementation in a separate repository: <a href="https://github.com/alcunha/deep_sort_iwildcam2021ufam" target="_blank">https://github.com/alcunha/deep_sort_iwildcam2021ufam</a>.</p>
<h4>References</h4>
<p>[1] Tan, Mingxing, and Quoc Le. "Efficientnet: Rethinking model scaling for convolutional neural networks." International Conference on Machine Learning. PMLR, 2019.</p>
<p>[2] Li, Yu, et al. "Overcoming classifier imbalance for long-tail object detection with balanced group softmax." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.</p>
<p>[3] Touvron, Hugo, et al. "Fixing the train-test resolution discrepancy." arXiv preprint arXiv:1906.06423 (2019).</p>
<p>[4] Cubuk, Ekin D., et al. "Randaugment: Practical automated data augmentation with a reduced search space." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020.</p>
<p>[5] Mac Aodha, Oisin, Elijah Cole, and Pietro Perona. "Presence-only geographical priors for fine-grained image classification." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.</p>
<p>[6] Wojke, Nicolai, Alex Bewley, and Dietrich Paulus. "Simple online and realtime tracking with a deep association metric." 2017 IEEE international conference on image processing (ICIP). IEEE, 2017.</p> |
iWildcam 2021 - FGVC8 | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildcam 2021 - FGVC8
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution is based on a few EfficientNetB2 classifiers using a combination of under sampling and over sampling during training.</p>
<p><strong>Species Classifier</strong></p>
<p>For the animal classifier we used the provided MD3 detections as input for training. The megadetector cutoff confidence for training was 0.8. We optimized the images prior to the bounding box cropping using the CLAHE algorithm. We trained a full EfficientNetB2 classifier with input size 256x256. Data augmentation is random crops, color jitter. We fiddled with CutMix and MixUp but didn't use that in the end. We used exponential moving average (EMA) on the weights during training.<br>
Key to the train/valid split was the use of locations (the integer id). The validation set had to be split to contain more or less the same distribution of species, and the locations had to be completely independent between train and valid.<br>
For the imbalance problem we combined under- and over-sampling. The problem with oversampling is a higher chance of overfitting, and the problem with undersampling is information loss.</p>
<p><strong>Geo Prior</strong></p>
<p>With the release of the geo information we trained a separate (expert) EfficientNetB2 network for each of the countries in the dataset, with the unknown locations bundled into a separate country.</p>
<p><strong>Inference</strong></p>
<p>For the inference we used the EfficientNetB2 classifier on the provided MD3 bounding boxes (confidence > 0.4) and discarded classifier results with a confidence > 0.7. When the prediction was an animal that wasn't in the trainset for that location we moved to the expert country networks and relied on that output. We didn't have all countries trained when the deadline passed, so there might be more to win here.<br>
Since the bounding boxes are classified independently we also trained a classifier for the entire image as done in the previous year. The objective of this classifier was to act as a prior for especially herds. In the case of the multiple boxes we would like to prefer to classify the same species. We combined the full image classifier with the species classifier using weighted 0.15 for the image classifier and 0.85 for the species classifier.</p>
<p><strong>Counting</strong></p>
<p>For the counting we just used the max operator on the number of bounding boxes. The classifier output was validated at a 0.75 threshold. This made our counting algorithm really conservative: we only counted the animals we were absolutely certain about. For the final submission we added the mean absolute difference to the max, which improved the results a little.<br>
The reason for this is the use of MCRMSE, where misclassification is a severe penalty. The other problem with MCRMSE is that large herds weigh very heavily on the leaderboard scores. Since we undercount, we took some risk there.</p>
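<p>A small sketch of the conservative count described above: the maximum number of confidently classified boxes in any image of the sequence, plus the mean absolute difference between consecutive per-image counts. This is my reading of "added mean absolute difference to the max", not the team's code.</p>
<pre><code>
import numpy as np

def conservative_count(per_image_counts):
    """per_image_counts: boxes kept per image (classifier confidence above the cut-off)."""
    counts = np.asarray(per_image_counts, dtype=float)
    mad = np.abs(np.diff(counts)).mean() if len(counts) > 1 else 0.0
    return counts.max() + mad
</code></pre>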
<p><strong>Things that didn't make the cut (yet)</strong></p>
<p>We didn't use the inaturalist data. At all.</p>
<p>We have set up an auxiliary network where we use the provided metadata to act as a regularizer. We used circular encoding for GPS and for the time of day and month. We used this information together with the image data to train the species classifier. Unfortunately we didn't have sufficient resources to fully pursue and evaluate this direction.</p>
<p>We did however use the same trick to create an evaluation network. The principle was the same and we tried to predict the species based only on the metadata provided. Surprisingly we got a validation score greater than 80%, so we still believe this is a route to investigate in the future.</p>
<p>The final thing we didn't do was assume that all animals in one image are the same species; we remained conservative in our predictions. A good option to push the scores would have been to assume that all high-confidence bounding boxes contain the same animal species. This pushed our scores (private and public) significantly, but just like adding the mean absolute difference, there was no justification for it. We did not have or use the true counts present in the sequences to optimize either of them. This is also the reason for not pursuing tracking- or re-identification-based counting algorithms.</p>
<p><strong>Lessons learned</strong></p>
<ul>
<li>Classifier is still the most important contributor in this challenge</li>
<li>Therefore the train/validation/test split is really important</li>
<li>Metadata is valuable (but we don't know yet how much)</li>
<li>Cattle is the favorite species classifier output (after empty)</li>
</ul> |
iWildcam 2021 - FGVC8 | 3rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildcam 2021 - FGVC8
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Charlie, thanks!<br>
To apply Megadetector V4 inference on iNat images (and for WCS ones too) I used a TF 1.15 GPU environment, separate from my main pipeline running on TF 2, because as far as I know they haven't released the trained model compatible with TF 2 models. I hope the next version of the model will be.</p>
<p>Regards</p>
<p>For this solution I was based on some of the ideas of the previous year's competitors, so I hope that the description of my solution will also be useful for someone else.</p>
<h1>Dataset</h1>
<p>I used the images from the WCS collection and those from the iNat 2017/2018/2021 collections for the shared classes, and I applied <a href="https://www.kaggle.com/seriousran/image-pre-processing-for-iwild-2020" target="_blank">CLAHE & Simple WB</a> to this whole set to enhance the nighttime images.</p>
<p>I applied Megadetector V4 inference on this set of images (including those of WCS labeled as <em>empty</em>) and I considered only detections with score ≥ 0.3 for WCS and those with detection label <em>Animal</em> and score ≥ 0.95 for iNat collections. Then, I used a square crop around the detection bbox, with the size equal to the largest side of the bbox, trying to keep the animal centered in the square, unless it was outside the image boundaries. I assigned to each of these crops the image-level annotation label (the species or <em>empty</em>) from the original full image, in order to create a <em>train bbox dataset</em>, that I split in a stratified fashion (train 80% - validation 20%), grouping the WCS images by location.</p>
<h1>Training</h1>
<p>I fine-tuned three models with pre-trained ImageNet weights, and the train was done in different image resolutions: ResNet152 (224 px), EfficientNetB3 (300 px) and EfficientNetB7 (600 px).<br>
During the training very simple data augmentation was performed: random rotation, random translation, random horizontal flip and random contrast.</p>
<h2>Geo-prior model</h2>
<p>I used the sin/cos representations of location and time-of-year information from the WCS/iNats sets to train a <a href="https://arxiv.org/abs/1906.05272" target="_blank">Geo-prior model</a>, which I used as complementary a priori information when performing model inference on the WCS test data. For this, I used the <a href="https://github.com/alcunha/geo_prior_tf" target="_blank">TF implementation of the Geo-prior model training</a>.</p>
<h1>Predictions on Test data</h1>
<p>For the WCS test data I followed a similar methodology as above (CLAHE & Simple WB, Megadetector V4, detections with score ≥ 0.3 and square image crops) to create a <em>test bbox dataset</em>.</p>
<h2>Movement detection</h2>
<p>One of the main drawbacks of classification pipelines based on Megadetector predictions is the large number of false detections it produces, which are sometimes difficult to eliminate. One way to remove them when temporal and location information is available is to use a motion detection scheme.<br>
In my experiments, I tested the technique called accumulated averaging to find those regions where an animal was detected but that showed no apparent change over time (movement). Thus, if a detection had a score < 0.9 and no movement was detected in its bbox region across the frames of the sequence, that detection was marked as invalid.<br>
There are other methods to detect motion that seem to be more effective (e.g. Gaussian Mixture-based B/F Segmentation) but I did not test them in these experiments.</p>
<p>Below are two examples of sequences with detections that were correctly discarded (reds) and others that were correctly accepted (greens) by this method:</p>
<p><img src="https://i.imgur.com/WtUwdrd.gif" alt=""><br>
<img src="https://i.imgur.com/n5Gw0eC.gif" alt=""></p>
<h2>Model inference and averaging</h2>
<p>I applied the inference of the three classification models separately on the <em>test bbox dataset</em>, multiplying the output vector of the classifier by the output vector of the Geo-prior model for each image, and then performing a weighted average of the predictions of the three models. The weights of each model were based on the results of the evaluation on the validation set.</p>
<h2>Creation of the submission file</h2>
<p>To determine the species and the number of individuals present in each sequence I did the following:</p>
<ul>
<li>I assumed that there was only one species present in each sequence and this was determined from the final output of the weighted classifiers/Geo-prior on the valid detections (not discarded by the motion detection method and not classified as <em>empty</em>) that had a <em>classification probability</em> ≥ 0.5, taking as the <em>species of the sequence</em> the one that is repeated the most (mode) in the whole sequence.</li>
<li>The number of individuals is calculated simply by taking the maximum number of valid detections present in any of the images in the sequence (a sketch of both rules follows this list).</li>
</ul>
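<p>A compact sketch of the two rules above: species = mode of the confident classifications over valid detections, count = maximum number of valid detections in any image of the sequence. The structure of the input is my own assumption.</p>
<pre><code>
from collections import Counter

def summarise_sequence(detections, min_prob=0.5):
    """detections: one list per image of (class_id, prob) pairs for the valid detections."""
    votes = [c for image in detections for c, p in image if p >= min_prob]
    species = Counter(votes).most_common(1)[0][0] if votes else None
    count = max((len(image) for image in detections), default=0)
    return species, count
</code></pre>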
<h2>Additional experiment</h2>
<p>Inspired by the <a href="https://www.kaggle.com/c/iwildcam-2020-fgvc7/discussion/158370" target="_blank">ideas of the winning team of last year's competition</a> I did the following:</p>
<ul>
<li>I used the original <em>full images</em> of the WCS train set to fine-tuned two models with pre-trained ImageNet weights: EfficientNetB3 (res 300 px) and EfficientNetB7 (res 600 px), and I did a weighted average of both model predictions of the full images of the WCS test set (also using the Geo-prior model).</li>
<li>I did a weighted average of the predictions of the EfficientNetB3 and EfficientNetB7 <em>bbox models</em> described above (I discarded ResNet152 predictions for this experiment) on the <em>test bbox dataset</em>.</li>
<li>Then, for the images that had at least one valid detection I did a weighted average of these two averaged predictions (<code>0.3 full image + 0.7 bbox</code>), and for the images that did not have valid detections I used only the full image model.</li>
</ul>
<p>This final predictions outperformed the public and private scores of all my previous submissions, but as it was a late submission it was not reflected in the LBs.</p> |
iWildcam 2021 - FGVC8 | 4th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildcam 2021 - FGVC8
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you and congratulations to all the participants, Kaggle, and especially the organizers of this enjoyable and important competition! Thank you for making a difference!</p>
<h2>Summary</h2>
<p>My solution was based on a two-stage model, one image-based, the second, detection-based, where the output of first model was fed as metadata into the second model along with detection cropped images. The primary motivation for the design was the mixed species herds in eastern Africa. The two-stage pipeline was trained separately for each of the six regions. A single model architecture, an ImageNet, pre-trained EfficientNet B3, was used in both stages with images cropped/resized to 400x400. Minor image augmentation was used in both stages. The system was implemented in PyTorch Lightning.</p>
<h2>Data Preparation</h2>
<p>To identify the unknown locations and assign them to regions, I used a two-fold approach. In the first stage I clustered locations using associated species provided in the image annotations. In the second stage I used a supervised classification model (ResNet-18) where the labels were the known regions, run on the raw images, downsampled to 224x224. </p>
<p>In both full image and detection-level processing, images were converted into numpy 3x400x400 uint8 arrays - a compromise between speed and storage. All other metadata was stored in a relational database largely mirroring the iNaturalist annotation structure derived from the json files. </p>
<p>Google Colab was used to run MegaDetector V4 on all images and the V4 detections were used exclusively.</p>
<p>During initial EDA in the eastern African region I noticed that there were multiple species per sequence and that images could be labeled according to a minority member (e.g. a herd of impala (#96) with one zebra (#111) could be labeled zebra). There might have been a predictable labeling strategy employed at the sequence level but I did not spend the time to find it. Instead I emphasized those images where there were single detections per image, thus converting weak labels into strong. This worked well for many species but I think it had limitations for species that were rarely seen alone (e.g. domestic sheep (#72)). </p>
<p>Three strategies were used to deal with imbalanced classes. The first was to randomly under-sample the most common classes. A weighted random sampler was used to oversample the rarer classes. The third method used in some cases was to incorporate species data from other regions. For example, in the southern-most South American region, there were minimal samples of mountain lions (#6), jaguars (#24) and collared pecarry (#8) in the training data, but these species were abundant in the regions further north. This was only done at the detection level where the amount of background imagery presented to the model was minimal. </p>
<h2>Model</h2>
<p>My solution was based on a two-stage cascading model, one image-based, the second detection-based (see Figure). Six independent models were trained, one for each region; training and test data were partitioned into regions using locations. I used a single model architecture based on an ImageNet, pre-trained EfficientNet B3 model for both stages. The first stage model worked on resized camera trap images and associated image metadata (num_frames_per_sequence, frame_number, number_detections, time-of-day, day-of-year). The images were run through the feature extraction layers of EfficientNet, then combined with the metadata and fed into a small fully-connected classifier. The entire category prediction vector from stage 1, along with the original image metadata (plus zoom), was fed as metadata into the stage 2 classifier. Additionally, the imagery for each detection was cut from the full image, resized and fed into the feature-extraction portion of EfficientNet. The final output of the 2-stage model was a category prediction vector for each detection. </p>
<p><img src="https://cjturner.s3-us-west-1.amazonaws.com/iWildcam21.png" alt="Figure 1: Model"></p>
<p>I used a fine-tuning approach in model training. EfficientNet weights were kept frozen initially while the classifiers were trained, then the pre-trained network was fine-tuned. The final solution used a MultiLR scheduler to lower the learning rate for the fine-tuning phase of training. The loss function was CrossEntropyLoss in both models. Early stopping was used based on plateauing of validation loss.</p>
<p>Data was stratified by location within each region and run with five-fold cross validation. The only ensembling performed were the 5 folds. Minor augmentation was used in both stages (RandomHorizontalFlip, RandomErasing, and RandomGrayscale). A final step combined predictions from each region, applying a per-species threshold to the reported detections.</p>
<h2>Things that worked - but got lost along the way…</h2>
<p>I performed a number of experiments early on that had very promising results but for one reason or another did not make it into the final solution. </p>
<p>Training data was pseudo-labeled at the detection level. Single-detection images were used to train an initial model, then predict on the multi-detection images. High-confidence detections were added to the training data and the process was repeated until a fixed point was reached. The results of this were quite promising and predictable in many cases (e.g. many of the herd animals in Africa). In other cases (e.g. white-lipped peccary (#2), domestic sheep (#72)), it did not work as well - possibly because there were too few singleton examples, or when they did congregate the detections were too densely packed. </p>
<p>Another area that showed promise but was never taken beyond the experimental phase was to preprocess the detection data, eliminating static detections within the same location. There were numerous cases of fixed tree limbs being classified as animal horns or tails but given their appearance in many sequences, they could have been safely removed, lowering false alarms.</p>
<p>I had partial success grouping the initially unknown locations into regions using a pertained ResNet-18 autoencoder followed by K-Means clustering. I used Landsat 8 bands 4 and 5 to compute NDVI images that were then duplicated to provide the 3 channels required by the ResNet model. The main difficulty seemed to be the similarity of tropical locations and the presence of clouds. It is also possible that ImageNet-trained models are less suitable for satellite imagery. I stopped work on this approach once the GPS locations were released.</p>
<p>I performed some initial experiments with obtaining additional training data by moving up the taxonomic tree and this also looked promising, both as a way to get additional training data and as part of a broader organizational approach.</p> |
iWildcam 2021 - FGVC8 | 7th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildcam 2021 - FGVC8
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I created a mini dataset (10% of the original dataset) with the same class proportion as in the original. And carried out many experiments to move quickly through the competition.</p>
<p>1) Preprocessing -<br>
In my initial experiments, I found that instead of directly cropping and resizing the detections, cropping and padding the detections using reflection padding improved the score. Hence my final solution used reflection padding while training and testing. My preprocessing pipeline was as follows</p>
<p>crop detection -> reflection padding -> augmentations -> standardize</p>
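<p>A minimal sketch of the crop -> reflection-pad-to-square step using torchvision functional transforms; the function name and output size are my own, and very elongated crops would need extra handling because reflect padding cannot exceed the crop's own size.</p>
<pre><code>
import torchvision.transforms.functional as TF

def crop_and_pad(img, bbox, out_size=456):
    """img: PIL image; bbox: (x, y, w, h) of a MegaDetector detection in pixels."""
    x, y, w, h = bbox
    crop = TF.crop(img, top=y, left=x, height=h, width=w)
    # Pad the short side symmetrically with reflected pixels to make the crop square.
    if w >= h:
        pad_lr, pad_tb = 0, (w - h) // 2
    else:
        pad_lr, pad_tb = (h - w) // 2, 0
    square = TF.pad(crop, [pad_lr, pad_tb, pad_lr, pad_tb], padding_mode="reflect")
    return TF.resize(square, [out_size, out_size])
</code></pre>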
<p>2) Augmentations -<br>
In my experiments adding any type of augmentation reduced my validation score. Hence I kept simple augmentations with very little intensity. The augmentations that I used were as follows </p>
<p>Random Horizontal Flip (p=0.5)<br>
Color Jitter : Brightness and Contrast (0.9 to 1.2)<br>
Grayscale (p=0.5)<br>
Gaussian Blur (sigma = 0.0 to 0.8)</p>
<p>3) Handle Imbalance - <br>
Here is how I have handled imbalance: <a href="https://www.kaggle.com/c/iwildcam2021-fgvc8/discussion/242455" target="_blank">https://www.kaggle.com/c/iwildcam2021-fgvc8/discussion/242455</a></p>
<p>4) Model Architecture -<br>
For my mini dataset, I experimented with EfficientNet-B2 (noisy student). For the final submission, I used EfficientNet-B5 (noisy student). I did not use pre-trained ImageNet weights.</p>
<p>5) Submission pipeline -<br>
In a public kernel, I came across the max count logic. In this, the maximum number of detections from an image in a sequence (max count) was considered as the prediction value for that sequence. I modified this logic and also considered maximum frequency. For eg - in a sequence of 9 images having only one specie of animal, the number of detections for each image were - [1,1,1,1,2,2,2,4,2]. So the max count logic will predict 4 while max count + max frequency logic will predict 2. There might be a strong possibility of false detections, and thus image number 8 having 4 detections (out of which 2 could be false detections) would be eliminated. The frequency of both 1 and 2 is the same, so the max count will be considered, that is 2. I saw a validation accuracy boost and public score boost using max count + max frequency logic, and hence used in my final submission.</p>
<p>Also, a higher megadetector detection confidence threshold worked better, so I used 0.7 as detection confidence for the final submission pipeline.</p>
<p>6) Training -<br>
I experimented with mixed-precision training, but training without mixed-precision yielded better results for me. I did not use any LR scheduler for mini dataset experiments. But for the final training of efficient b5 noisy student, I used reduce on plateau. I underestimated the training time required for the efficient b5 noisy student. I spent all remaining 26 hrs of GPU in the last week in efficient b5 noisy student's training and still, it was a little underfit with ~ 78% val accuracy and 82% train accuracy. After exhausting my Kaggle GPU quota I switched to colab and resumed training. This time I changed some hyperparameters like batch-size and LR, but I could only reach 82% val accuracy and 85% train accuracy due to lack of time (deadline was near). But I couldn't make a submission using colab because I later realized that the test set was 30GB while on colab I had 28GB disk space. I couldn't even download the zip there. So my final model is still underfit (i.e. it needs more training). </p>
<p><strong>Here is my final submission pipeline <a href="https://www.kaggle.com/devashishprasad/iwildcam2021-submission" target="_blank">notebook</a></strong></p> |