philschmid (HF staff) committed

Commit f7a8d8f
1 Parent(s): 33a1591

added custom handler
Files changed (4)
  1. README.md +96 -0
  2. create_handler.ipynb +0 -0
  3. handler.py +1 -1
  4. result.png +0 -0
README.md CHANGED
@@ -1,3 +1,99 @@
  ---
  license: openrail++
+ tags:
+ - stable-diffusion
+ - stable-diffusion-diffusers
+ - text-guided-to-image-inpainting
+ - endpoints-template
  ---
+
+ # Fork of [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)
+
+ > Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
+ > For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
+
+ For more information about the model, its license, and its limitations, check the original model card at [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting).
+
+ ---
+
+ This repository implements a custom `handler` task for `text-guided-to-image-inpainting` for 🤗 Inference Endpoints. The code for the customized handler is in [handler.py](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint/blob/main/handler.py).
+
+ There is also a [notebook](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint/blob/main/create_handler.ipynb) included that shows how to create the `handler.py`.
+
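The handler follows the Inference Endpoints custom-handler convention: a class named `EndpointHandler` exposing `__init__(path)` and `__call__(data)`. Below is a minimal sketch of that interface only — the `{"echo": ...}` body is a placeholder, not this repository's actual inpainting logic:

```python
from typing import Any, Dict


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points at the repository checkout with the weights;
        # the real handler loads the inpainting pipeline here instead.
        self.path = path

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # `inputs` carries the prompt; `image` and `mask_image` arrive
        # as base64-encoded strings (see the payload section below).
        prompt = data.get("inputs", "")
        return {"echo": prompt}  # placeholder, not the repo's real output


handler = EndpointHandler(path=".")
out = handler({"inputs": "a test prompt"})
```

Inference Endpoints instantiates the class once at startup and then calls it per request, which is why the model load belongs in `__init__` and only the per-request work in `__call__`.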
+ **How it works:**
+
+ `image` | `mask_image`
+ :-------------------------:|:-------------------------:
+ <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="300"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="300"/>
+
+ `prompt` | `Output`
+ :-------------------------:|:-------------------------:
+ <span style="position: relative;bottom: 150px;">Face of a bengal cat, high resolution, sitting on a park bench</span> | <img src="./result.png" alt="drawing" width="300"/>
+
+ ### Expected request payload
+
+ ```json
+ {
+   "inputs": "A prompt used for image generation",
+   "image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC",
+   "mask_image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC"
+ }
+ ```
+
+ Below is an example of how to run a request using Python and `requests`.
+
+ ## Run Request
+
+ ```python
+ import base64
+ from io import BytesIO
+
+ import requests as r
+ from PIL import Image
+
+ ENDPOINT_URL = ""
+ HF_TOKEN = ""
+
+ # helper image util: read an image file and encode it as a base64 string
+ def encode_image(image_path):
+     with open(image_path, "rb") as i:
+         b64 = base64.b64encode(i.read())
+     return b64.decode("utf-8")
+
+ def predict(prompt, image, mask_image):
+     image = encode_image(image)
+     mask_image = encode_image(mask_image)
+
+     # prepare sample payload
+     payload = {"inputs": prompt, "image": image, "mask_image": mask_image}
+     # headers
+     headers = {
+         "Authorization": f"Bearer {HF_TOKEN}",
+         "Content-Type": "application/json",
+         "Accept": "image/png"  # important to get an image back
+     }
+
+     response = r.post(ENDPOINT_URL, headers=headers, json=payload)
+     img = Image.open(BytesIO(response.content))
+     return img
+
+ prediction = predict(
+     prompt="Face of a bengal cat, high resolution, sitting on a park bench",
+     image="dog.png",
+     mask_image="mask_dog.png"
+ )
+ ```
+
+ Expected output:
+
+ ![sample](result.png)
create_handler.ipynb CHANGED
The diff for this file is too large to render. See raw diff
 
handler.py CHANGED
@@ -66,4 +66,4 @@ class EndpointHandler():
      base64_image = base64.b64decode(image_string)
      buffer = BytesIO(base64_image)
      image = Image.open(buffer)
-     return image.convert("RGB").thumbnail((768, 768))
+     return image
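For context on this one-line fix: PIL's `Image.thumbnail` resizes in place and returns `None`, so the old `return image.convert("RGB").thumbnail((768, 768))` handed `None` back to the caller instead of an image. A minimal sketch illustrating the behavior (the test image and sizes here are illustrative, not from the repo):

```python
import base64
from io import BytesIO

from PIL import Image

# Round-trip a small image through base64, mirroring the handler's decode path.
buf = BytesIO()
Image.new("RGB", (1024, 512), "red").save(buf, format="PNG")
image_string = base64.b64encode(buf.getvalue())

image = Image.open(BytesIO(base64.b64decode(image_string)))

# Image.thumbnail modifies its image in place and returns None, so chaining
# it onto the return statement is a bug: the caller receives None.
result = image.convert("RGB").thumbnail((768, 768))
```

Here `result` is `None` while `image` is still a usable `(1024, 512)` image, which is why the fix simply returns the decoded image.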
result.png ADDED