Gallery instead of Examples & Applications
Files changed:
- app.py (+1 -1)
- examples.py (+2 -2)
- introduction.md (+2 -2)
app.py CHANGED
@@ -11,7 +11,7 @@ PAGES = {
     "Text to Image": text2image,
     "Image to Text": image2text,
     "Localization": localization,
-    "
+    "Gallery": examples,
 }

 st.sidebar.title("Explore our CLIP-Italian demo")
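For context on the change above: the `PAGES` dict in app.py maps a page name to the function that renders it, and the sidebar selection picks which function to call. A minimal, Streamlit-free sketch of that dispatch pattern, with placeholder page bodies (not the demo's actual page code):

```python
# Sketch of the name -> page-function dispatch that app.py's PAGES dict
# implements. Function bodies are placeholders for illustration only.

def text2image():
    return "Text to Image page"

def examples():
    return "Gallery page"

PAGES = {
    "Text to Image": text2image,
    "Gallery": examples,  # entry renamed in this commit
}

def render(selection: str) -> str:
    # In the real app the selection comes from a sidebar widget
    # (e.g. a st.sidebar radio); here it is a plain argument.
    return PAGES[selection]()

print(render("Gallery"))
```

Renaming a page in this pattern only touches the dict key and the page's own title, which is exactly what this commit does.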
examples.py CHANGED
@@ -3,7 +3,7 @@ import streamlit as st


 def app():
-    st.title("
+    st.title("Gallery")
     st.write(
         """

@@ -81,7 +81,7 @@ def app():
     col2.markdown("*A rustic chair*")
     col2.image("static/img/examples/sedia_rustica.jpeg", use_column_width=True)

-    st.markdown(
+    st.markdown("## Localization")

     st.subheader("Un gatto")
     st.markdown("*A cat*")
introduction.md CHANGED
@@ -37,7 +37,7 @@ to find where "something" (like a "cat") is an image. The location of the object

 <img src="https://huggingface.co/spaces/clip-italian/clip-italian-demo/raw/main/static/img/gatto_cane.png" alt="drawing" width="95%"/>

-+ **
++ **Gallery**: This page showcases some interesting results we got from the model, we believe that there are
 different applications that can start from here.

 # Novel Contributions

@@ -256,7 +256,7 @@ labels most probably had an impact on the final scores.

 We hereby show some interesting properties of the model. One is its ability to detect colors,
 then there is its (partial) counting ability and finally the ability of understanding more complex queries. You can find
-more examples in the "*
+more examples in the "*Gallery*" section of this demo.

 To our own surprise, many of the answers the model gives make a lot of sense! Note that the model, in this case,
 is searching the right image from a set of 25K images from an Unsplash dataset.