Zigeng committed
Commit 47bf233
1 Parent(s): 16dc823

Update README.md

Files changed (1):
  1. README.md +1 -89
README.md CHANGED
@@ -13,9 +13,6 @@ license: apache-2.0
  > [Learning and Vision Lab](http://lv-nus.org/), National University of Singapore
  > Paper: [[Arxiv]](https://arxiv.org/abs/2312.05284)

- ### Updates
- * 🚀 **December 11, 2023**: Released the training code, inference code, and pre-trained models for **SlimSAM**.
-

  ## Introduction

  <div align="center">
@@ -57,63 +54,10 @@ We conducted a comprehensive comparison encompassing performance, efficiency, an
  <img src="images/paper/compare_tab2.PNG" width="50%">
  </div>

- ## Installation
-
- The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
-
- Install with
-
- ```
- pip install -e .
- ```
-
- The following optional dependencies are necessary for mask post-processing and for saving masks in COCO format:
-
- ```
- pip install opencv-python pycocotools matplotlib
- ```
-
- ## Dataset
- We use the original SA-1B dataset in our code. See [here](https://ai.facebook.com/datasets/segment-anything/) for an overview of the dataset. The dataset can be downloaded [here](https://ai.facebook.com/datasets/segment-anything-downloads/).
-
- The downloaded dataset should be saved as:
-
- ```
- <train_data_root>/
-     sa_xxxxxxx.jpg
-     sa_xxxxxxx.json
-     ......
- <val_data_root>/
-     sa_xxxxxxx.jpg
-     sa_xxxxxxx.json
-     ......
- ```
-
- To decode a mask in COCO RLE format into binary:
-
- ``` python
- from pycocotools import mask as mask_utils
- mask = mask_utils.decode(annotation["segmentation"])
- ```
-
- See [here](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/mask.py) for more instructions on manipulating masks stored in RLE format.
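
For instance, a minimal sketch that decodes every mask in one annotation file (the path is a placeholder following the layout above, and the per-image JSON is assumed to carry an `annotations` list as in SA-1B):

``` python
import json

from pycocotools import mask as mask_utils

# Placeholder path; real files follow the sa_xxxxxxx.json naming above.
with open("<train_data_root>/sa_xxxxxxx.json") as f:
    record = json.load(f)

# Each annotation stores a COCO-RLE "segmentation"; decode() returns a
# binary uint8 array of shape (H, W).
masks = [mask_utils.decode(ann["segmentation"]) for ann in record["annotations"]]
```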
-
- ## <a name="Models"></a>Model Checkpoints
-
- The base model of our method is available. To support our dependency detection algorithm, we have split the original image encoder's qkv layer into three distinct linear layers: q, k, and v.
- <div align="center">
- <img src="images/paper/split.PNG" width="70%">
- </div>
-
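
A rough sketch of what that split looks like (a hypothetical helper, assuming a fused ViT projection with `out_features == 3 * in_features`; not the repo's actual code):

``` python
import torch.nn as nn

def split_qkv(qkv: nn.Linear):
    # Hypothetical helper: slice a fused qkv projection into three
    # independent q, k, v linear layers with copied weights.
    dim = qkv.in_features
    has_bias = qkv.bias is not None
    q, k, v = (nn.Linear(dim, dim, bias=has_bias) for _ in range(3))
    for i, layer in enumerate((q, k, v)):
        layer.weight.data.copy_(qkv.weight.data[i * dim:(i + 1) * dim])
        if has_bias:
            layer.bias.data.copy_(qkv.bias.data[i * dim:(i + 1) * dim])
    return q, k, v
```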

- Click the link below to download the checkpoint of the original SAM-B.

- - `SAM-B`: [SAM-B model](https://drive.google.com/file/d/1CtcyOm4h9bXgBF8DEVWn3N7g9-3r4Xzz/view?usp=sharing)

+ ## <a name="Models"></a>Model Usage

  Fast state_dict loading for the local uniform pruning SlimSAM-50 model:

@@ -130,38 +74,6 @@ masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), i
  scores = outputs.iou_scores
  ```
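
The snippet above is the tail of a standard `transformers` SAM pipeline; for context, a minimal end-to-end sketch (the repo id, image path, and point prompt here are illustrative assumptions):

``` python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

# Illustrative repo id and inputs; adjust to the checkpoint you downloaded.
model = SamModel.from_pretrained("Zigeng/SlimSAM-uniform-50")
processor = SamProcessor.from_pretrained("Zigeng/SlimSAM-uniform-50")

image = Image.open("example.jpg").convert("RGB")
input_points = [[[400, 300]]]  # a single (x, y) point prompt

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize predicted masks back to the original image resolution.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
scores = outputs.iou_scores
```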
 
-
- ## <a name="Inference"></a>Inference
-
- First download the [SlimSAM-50 model](https://drive.google.com/file/d/1iCN9IW0Su0Ud_fOFoQUnTdkC3bFveMND/view?usp=sharing) or the [SlimSAM-77 model](https://drive.google.com/file/d/1L7LB6gHDzR-3D63pH9acD9E0Ul9_wMF-/view) for inference.
-
- We provide detailed instructions in `inference.py` on how to use a range of prompts, including `point`, `box`, and `everything`, for inference:
-
- ```
- CUDA_VISIBLE_DEVICES=0 python inference.py
- ```
-
- ## <a name="Train"></a>Train
-
- First download a [SAM-B model](https://drive.google.com/file/d/1CtcyOm4h9bXgBF8DEVWn3N7g9-3r4Xzz/view?usp=sharing) into `checkpoints/` as the base model.
-
- ### Step 1: Embedding Pruning + Bottleneck Aligning ###
- The model after step 1 is saved as `checkpoints/vit_b_slim_step1_.pth`.
-
- ```
- CUDA_VISIBLE_DEVICES=0 python prune_distill_step1.py --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs>
- ```
-
- ### Step 2: Bottleneck Pruning + Embedding Aligning ###
- The model after step 2 is saved as `checkpoints/vit_b_slim_step2_.pth`.
-
- ```
- CUDA_VISIBLE_DEVICES=0 python prune_distill_step2.py --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs> --model_path 'checkpoints/vit_b_slim_step1_.pth'
- ```
-
- You can adjust the training settings to meet your specific requirements. While our method demonstrates impressive performance with just 10,000 training samples, incorporating additional training data will further enhance the model's effectiveness.
-
  ## BibTeX of our SlimSAM
  If you use SlimSAM in your research, please use the following BibTeX entry. Thank you!
79