Commit aa46ae9 (parent: b7f7f2c), committed by Realcat
Message: add: xfeat(dense)

Files changed:
- README.md (+27, -7)
- common/config.yaml (+4, -1)
- hloc/extract_features.py (+1, -1)
- hloc/extractors/xfeat.py (+1, -1)
- hloc/match_dense.py (+16, -1)
- hloc/matchers/xfeat_dense.py (+57, -0, new file)
README.md (CHANGED)
````diff
@@ -16,7 +16,7 @@ license: mit
 [![Issues][issues-shield]][issues-url]
 
 <p align="center">
-<h1 align="center"><br><ins>Image Matching WebUI</ins><br>
+<h1 align="center"><br><ins>Image Matching WebUI</ins><br>Identify matching points between two images</h1>
 </p>
 
 ## Description
@@ -24,13 +24,22 @@ license: mit
 This simple tool efficiently matches image pairs using multiple famous image matching algorithms. The tool features a Graphical User Interface (GUI) designed using [gradio](https://gradio.app/). You can effortlessly select two images and a matching algorithm and obtain a precise matching result.
 **Note**: the images source can be either local images or webcam images.
 
+Try it on <a href='https://huggingface.co/spaces/Realcat/image-matching-webui'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
+<a target="_blank" href="https://lightning.ai/realcat/studios/image-matching-webui">
+  <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/studio-badge.svg" alt="Open In Studio"/>
+</a>
+
 Here is a demo of the tool:
 
 ![demo](assets/demo.gif)
 
 The tool currently supports various popular image matching algorithms, namely:
+- [x] [XFeat](https://github.com/verlab/accelerated_features), CVPR 2024
+- [x] [RoMa](https://github.com/Vincentqyw/RoMa), CVPR 2024
+- [x] [DeDoDe](https://github.com/Parskatt/DeDoDe), 3DV 2024
+- [ ] [Mickey](https://github.com/nianticlabs/mickey), CVPR 2024
+- [ ] [GIM](https://github.com/xuelunshen/gim), ICLR 2024
 - [x] [LightGlue](https://github.com/cvg/LightGlue), ICCV 2023
-- [x] [DeDoDe](https://github.com/Parskatt/DeDoDe), ArXiv 2023
 - [x] [DarkFeat](https://github.com/THU-LYJ-Lab/DarkFeat), AAAI 2023
 - [ ] [ASTR](https://github.com/ASTR2023/ASTR), CVPR 2023
 - [ ] [SEM](https://github.com/SEM2023/SEM), CVPR 2023
@@ -40,7 +49,6 @@ The tool currently supports various popular image matching algorithms, namely:
 - [x] [SOLD2](https://github.com/cvg/SOLD2), CVPR 2021
 - [ ] [LineTR](https://github.com/yosungho/LineTR), RA-L 2021
 - [x] [DKM](https://github.com/Parskatt/DKM), CVPR 2023
-- [x] [RoMa](https://github.com/Vincentqyw/RoMa), Arxiv 2023
 - [ ] [NCMNet](https://github.com/xinliu29/NCMNet), CVPR 2023
 - [x] [TopicFM](https://github.com/Vincentqyw/TopicFM), AAAI 2023
 - [x] [AspanFormer](https://github.com/Vincentqyw/ml-aspanformer), ECCV 2022
@@ -48,6 +56,7 @@ The tool currently supports various popular image matching algorithms, namely:
 - [ ] [LISRD](https://github.com/rpautrat/LISRD), ECCV 2022
 - [ ] [REKD](https://github.com/bluedream1121/REKD), CVPR 2022
 - [x] [ALIKE](https://github.com/Shiaoming/ALIKE), ArXiv 2022
+- [x] [RoRD](https://github.com/UditSinghParihar/RoRD), IROS 2021
 - [x] [SGMNet](https://github.com/vdvchen/SGMNet), ICCV 2021
 - [x] [SuperPoint](https://github.com/magicleap/SuperPointPretrainedNetwork), CVPRW 2018
 - [x] [SuperGlue](https://github.com/magicleap/SuperGluePretrainedNetwork), CVPR 2020
@@ -59,11 +68,15 @@ The tool currently supports various popular image matching algorithms, namely:
 - [ ] [SOSNet](https://github.com/scape-research/SOSNet), CVPR 2019
 - [x] [SIFT](https://docs.opencv.org/4.x/da/df5/tutorial_py_sift_intro.html), IJCV 2004
 
+
 ## How to use
 
-### HuggingFace
+### HuggingFace / Lightning AI
 
-Just try it on
+Just try it on <a href='https://huggingface.co/spaces/Realcat/image-matching-webui'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
+<a target="_blank" href="https://lightning.ai/realcat/studios/image-matching-webui">
+  <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/studio-badge.svg" alt="Open In Studio"/>
+</a>
 
 or deploy it locally following the instructions below.
@@ -74,8 +87,15 @@ cd image-matching-webui
 conda env create -f environment.yaml
 conda activate imw
 ```
+
+or using [docker](https://hub.docker.com/r/vincentqin/image-matching-webui):
+
+``` bash
+docker pull vincentqin/image-matching-webui:latest
+docker run -it -p 7860:7860 vincentqin/image-matching-webui:latest python app.py --server_name "0.0.0.0" --server_port=7860
+```
 
-###
+### Run demo
 ``` bash
 python3 ./app.py
 ```
@@ -85,7 +105,7 @@ then open http://localhost:7860 in your browser.
 
 ### Add your own feature / matcher
 
-I provide an example to add local feature in [hloc/extractors/example.py](hloc/extractors/example.py). Then add feature settings in `confs` in file [hloc/extract_features.py](hloc/extract_features.py). Last step is adding some settings to `model_zoo` in file [
+I provide an example to add local feature in [hloc/extractors/example.py](hloc/extractors/example.py). Then add feature settings in `confs` in file [hloc/extract_features.py](hloc/extract_features.py). Last step is adding some settings to `model_zoo` in file [common/config.yaml](common/config.yaml).
 
 ## Contributions welcome!
````
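The "Add your own feature / matcher" paragraph in the README is terse, so here is what such an extractor skeleton looks like in practice. This is a hedged sketch, not the repo's actual `hloc/extractors/example.py`: the class name `MyFeature`, the torch.hub repo string, and the returned keys are placeholders; only the `default_conf` / `required_inputs` / `_init` / `_forward` structure is taken from `hloc/extractors/xfeat.py` as it appears in this commit.

```python
# Illustrative extractor skeleton. "MyFeature", the hub repo string, and the
# returned keys are placeholders; the structure mirrors hloc/extractors/xfeat.py.
import torch

from hloc import logger

from ..utils.base_model import BaseModel


class MyFeature(BaseModel):
    default_conf = {
        "max_keypoints": 4096,
    }
    required_inputs = ["image"]

    def _init(self, conf):
        # Load your network however you like (torch.hub, local checkpoint, ...).
        self.net = torch.hub.load(
            "your-org/your-repo",  # placeholder, not a real hub entry
            "YourModel",
            pretrained=True,
        )
        logger.info("Load MyFeature model done.")

    def _forward(self, data):
        # data["image"] arrives as a BxCxHxW tensor from the extraction pipeline.
        pred = self.net(data["image"])
        # Return the keypoint/descriptor layout the rest of hloc expects.
        return {
            "keypoints": pred["keypoints"],
            "scores": pred["scores"],
            "descriptors": pred["descriptors"],
        }
```

With the class in place, the remaining steps are exactly the ones the paragraph lists: a `confs` entry in `hloc/extract_features.py` and a zoo entry in `common/config.yaml` (see the next diff).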
common/config.yaml (CHANGED)
````diff
@@ -28,10 +28,13 @@ matcher_zoo:
   aspanformer:
     matcher: aspanformer
     dense: true
-  xfeat:
+  xfeat(sparse):
     matcher: NN-mutual
     feature: xfeat
     dense: false
+  xfeat(dense):
+    matcher: xfeat_dense
+    dense: true
   dedode:
     matcher: Dual-Softmax
     feature: dedode
````
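For orientation, the `dense: true` flag is what routes `xfeat(dense)` past separate feature extraction and straight to a dense matcher conf. The snippet below is a hedged sketch of that routing, not the webui's actual dispatch code; it assumes `hloc.match_features` holds the sparse-side confs the same way `hloc.match_dense` demonstrably does in this commit.

```python
# Hedged sketch of how a matcher_zoo entry could be routed; the real
# dispatch lives in the webui code and may differ.
import yaml

from hloc import match_dense, match_features

with open("common/config.yaml") as f:
    config = yaml.safe_load(f)

entry = config["matcher_zoo"]["xfeat(dense)"]
if entry["dense"]:
    # Dense matchers take raw image pairs, so no separate extractor is needed.
    match_conf = match_dense.confs[entry["matcher"]]  # -> "xfeat_dense"
else:
    # Sparse path: run the extractor named by `feature`, then the matcher.
    match_conf = match_features.confs[entry["matcher"]]
```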
hloc/extract_features.py (CHANGED)
````diff
@@ -211,7 +211,7 @@ confs = {
         "grayscale": False,
         "resize_max": 1600,
     },
-    },
+    },
     "alike": {
         "output": "feats-alike-n5000-r1600",
         "model": {
````
hloc/extractors/xfeat.py (CHANGED)
````diff
@@ -18,7 +18,7 @@ class XFeat(BaseModel):
             pretrained=True,
             top_k=self.conf["max_keypoints"],
         )
-        logger.info(f"Load XFeat model done.")
+        logger.info(f"Load XFeat(sparse) model done.")
 
     def _forward(self, data):
         pred = self.net.detectAndCompute(
````
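The message change just disambiguates the two wrappers this commit leaves in the tree: the sparse extractor here and the new dense matcher below wrap the same torch.hub model and differ only in which entry point `_forward` calls. A side-by-side sketch (argument names follow the two files in this commit; XFeat's exact upstream signatures may vary):

```python
# Same hub model, two entry points (mirroring the two wrappers in this commit).
import torch

net = torch.hub.load(
    "verlab/accelerated_features", "XFeat", pretrained=True, top_k=8000
)
image = torch.rand(1, 3, 480, 640)

sparse = net.detectAndCompute(image, top_k=8000)      # keypoints + descriptors
dense = net.detectAndComputeDense(image, top_k=8000)  # semi-dense coarse features
```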
hloc/match_dense.py (CHANGED)
````diff
@@ -72,7 +72,7 @@ confs = {
             "height": 480,
         },
     },
-    # Use
+    # Use aspanformer for matching feats
     "aspanformer": {
         "output": "matches-aspanformer",
         "model": {
@@ -90,6 +90,21 @@ confs = {
             "dfactor": 8,
         },
     },
+    "xfeat_dense": {
+        "output": "matches-xfeat_dense",
+        "model": {
+            "name": "xfeat_dense",
+            "max_keypoints": 8000,
+        },
+        "preprocessing": {
+            "grayscale": False,
+            "force_resize": False,
+            "resize_max": 1024,
+            "width": 640,
+            "height": 480,
+            "dfactor": 8,
+        },
+    },
     "dkm": {
         "output": "matches-dkm",
         "model": {
````
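The new `preprocessing` block copies the shape of the neighboring dense confs. As a rough guide to what the fields mean (a hedged reading; the authoritative logic is the resize code in `hloc/match_dense.py`, which this diff does not show):

```python
# Hedged illustration of the preprocessing fields; hloc's actual resize
# code may differ in detail.
import torch.nn.functional as F


def preprocess(image, conf):
    """image: 1xCxHxW float tensor."""
    h, w = image.shape[-2:]
    if conf["force_resize"]:
        # Force a fixed width x height regardless of aspect ratio.
        size = (conf["height"], conf["width"])
    else:
        # Cap the longer side at resize_max, preserving aspect ratio.
        scale = min(1.0, conf["resize_max"] / max(h, w))
        size = (round(h * scale), round(w * scale))
    # Snap both sides to a multiple of dfactor so backbone strides divide evenly.
    df = conf["dfactor"]
    size = (max(df, size[0] // df * df), max(df, size[1] // df * df))
    return F.interpolate(image, size=size, mode="bilinear", align_corners=False)
```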
hloc/matchers/xfeat_dense.py (ADDED)
````diff
@@ -0,0 +1,57 @@
+import torch
+from pathlib import Path
+from hloc import logger
+from ..utils.base_model import BaseModel
+
+
+class XFeatDense(BaseModel):
+    default_conf = {
+        "keypoint_threshold": 0.005,
+        "max_keypoints": 8000,
+    }
+    required_inputs = [
+        "image0",
+        "image1",
+    ]
+
+    def _init(self, conf):
+        self.net = torch.hub.load(
+            "verlab/accelerated_features",
+            "XFeat",
+            pretrained=True,
+            top_k=self.conf["max_keypoints"],
+        )
+        logger.info(f"Load XFeat(dense) model done.")
+
+    def _forward(self, data):
+        # Compute coarse feats
+        out0 = self.net.detectAndComputeDense(
+            data["image0"], top_k=self.conf["max_keypoints"]
+        )
+        out1 = self.net.detectAndComputeDense(
+            data["image1"], top_k=self.conf["max_keypoints"]
+        )
+
+        # Match batches of pairs
+        idxs_list = self.net.batch_match(
+            out0["descriptors"], out1["descriptors"]
+        )
+        B = len(data["image0"])
+
+        # Refine coarse matches
+        # this part is harder to batch, currently iterate
+        matches = []
+        for b in range(B):
+            matches.append(
+                self.net.refine_matches(
+                    out0, out1, matches=idxs_list, batch_idx=b
+                )
+            )
+        # we use results from one batch
+        matches = matches[0]
+        pred = {
+            "keypoints0": matches[:, :2],
+            "keypoints1": matches[:, 2:],
+            "mconf": torch.ones_like(matches[:, 0]),
+        }
+        return pred
````
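A quick smoke test for the new matcher. This assumes hloc's `BaseModel` follows its usual pattern (constructed with a conf dict, called like an `nn.Module` with the `required_inputs`); the random tensors are stand-ins for a real image pair. Note that `_forward` refines all `B` batch elements but, as its own comment says, returns matches for the first one only.

```python
# Hypothetical smoke test; assumes BaseModel(conf) + call-with-dict semantics.
import torch

from hloc.matchers.xfeat_dense import XFeatDense

model = XFeatDense({"max_keypoints": 8000})  # weights fetched via torch.hub
data = {
    "image0": torch.rand(1, 3, 480, 640),  # BxCxHxW, values in [0, 1]
    "image1": torch.rand(1, 3, 480, 640),
}
with torch.no_grad():
    pred = model(data)

# Refined matches come back as pixel coordinates in each image.
print(pred["keypoints0"].shape)  # (N, 2)
print(pred["keypoints1"].shape)  # (N, 2)
print(pred["mconf"].shape)       # (N,); all ones, since no per-match scores here
```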