Rendering your sketch with ControlNet
April 28, 2023 - Written by Ho Chien Chang - 20 min read
#StableDiffusion #googleColab
This article covers:
  1. Which Stable Diffusion model to choose
  2. Which ControlNet preprocessor and model to pick
  3. Prompt design
  4. Stable Diffusion settings
1- Choosing the right model for your purpose
The example below focuses on product design; therefore, we will be using the [Product Design (minimalism-eddiemauro)] model from Eddiemauro. (Link below)
https://civitai.com/models/23893/product-design-minimalism-eddiemauro (Civitai is currently one of the most popular go-to sites for downloading models.)
Generalized models such as the options below are also fine choices. (It is also a good idea to use a generalized model with LoRA models on top.)
Deliberate
All in One / Any Case Version
2- Applying the model
First, move the model file into the :\stable-diffusion-webui\models\Stable-diffusion folder of your Stable Diffusion installation.
Click the refresh button next to the checkpoint dropdown, and you'll be able to access the model you've just downloaded. Loading the model can take a minute or two.
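If you prefer to script this step (for example in a Colab cell), the move can be done in a few lines of Python. This is a sketch: the checkpoint filename and install location in the example call are placeholders for your own download and setup.

```python
import shutil
from pathlib import Path

def install_checkpoint(checkpoint: Path, webui_root: Path) -> Path:
    """Move a downloaded checkpoint into the webui's Stable-diffusion models folder."""
    models_dir = webui_root / "models" / "Stable-diffusion"
    models_dir.mkdir(parents=True, exist_ok=True)
    # shutil.move returns the destination path as a string
    return Path(shutil.move(str(checkpoint), str(models_dir / checkpoint.name)))

# Example (hypothetical paths -- substitute your own download and install location):
# install_checkpoint(Path.home() / "Downloads" / "productDesign_minimalism.safetensors",
#                    Path.home() / "stable-diffusion-webui")
```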
3- Setting up ControlNet
In the example below, we are using the AUTOMATIC1111 webui version of Stable Diffusion and ControlNet. Keep in mind that this is an ongoing project, so the interface might have changed by the time you read this article. However, the process of using ControlNet remains the same.
Drag and drop the image that you want to influence the output into the ControlNet image panel.
Check the "Enable" and "Allow Preview" boxes; an explosion icon will appear once the preview box is checked. Click this icon if you wish to see a preview of the processed image. If your GPU has less than 8 GB of VRAM, you can also check the "Low VRAM" box.
In this article we'll use the "lineart_realistic" preprocessor on the image of a shoe sketch, then pair it with the canny model. Remember to click the explosion icon to preview the preprocessed image. (Note: the icons vary between ControlNet versions.)
We can adjust the amount of detail that the preprocessor produces with the preprocessor resolution slider: the higher the number, the more detail we get. However, the purpose of the preprocessor is to capture the key lines of your sketch, so more detail is not always better.
Click here to learn more about the effects that different preprocessors produce.
Read more > (Article coming soon)
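For readers who drive the webui through its API instead of the interface, the same choices can be expressed as a ControlNet unit dictionary. This is a sketch only: the field names follow the sd-webui-controlnet extension as of this writing and may differ in your version, and the model filename is a placeholder.

```python
# Hypothetical ControlNet unit settings, mirroring the UI choices above.
# Field names follow the sd-webui-controlnet extension and may vary by version.
controlnet_unit = {
    "enabled": True,
    "input_image": "<base64-encoded sketch>",  # e.g. base64.b64encode(png_bytes).decode()
    "module": "lineart_realistic",   # the preprocessor
    "model": "control_canny-fp16",   # placeholder name for the canny model
    "processor_res": 512,            # higher values extract more detail from the sketch
    "lowvram": False,                # set True for GPUs with under 8 GB of VRAM
}
```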
4- Prompt design
Prompt design is currently a very important part of producing the desired image from Stable Diffusion. A good prompt includes both a main prompt and a negative prompt. To keep this article short, I will explain prompt design in detail in another article; here I'll provide the prompt I used so you can modify it.

Main prompt: realistic amazing shoe, great color design, g3 surface design, matte finish, great branding logo <lora:graphicDesign20_v10:0.6>, good amount of detail, soft lighting, depth of view, 70mm lens, good design proportion, cinematic rendering.

Negative prompt: floating, leather, worst quality, weird proportion
5- Setting up Stable Diffusion
1- Set the sampling method to "DPM++ SDE Karras", a convergent sampler that steers the generating process toward the image you described. Divergent (ancestral) samplers, i.e. any sampler with an "a" in its name such as "Euler a", are better suited to the ideation process.
2- Set the sampling steps to 50. Anything above 30 with this sampling method will get you a decent image.
3- Remember to set the right image size: the size of the image you are generating should match the size of the sketch you input into ControlNet. This is the key to reproducing the exact detail from your input sketch.
4- Set CFG to 14. (More on CFG in another article.)
5- Set batch size to 4 if you want 4 images at a time.
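The five settings above can be collected into a single request body. The sketch below assumes the AUTOMATIC1111 webui API (available when the webui is started with the --api flag) and uses hypothetical sketch dimensions; the key point is that width and height come from the sketch you fed to ControlNet.

```python
def build_txt2img_payload(sketch_width: int, sketch_height: int) -> dict:
    """Build a txt2img request whose output size matches the ControlNet sketch."""
    return {
        "prompt": ("realistic amazing shoe, great color design, g3 surface design, "
                   "matte finish, great branding logo <lora:graphicDesign20_v10:0.6>, "
                   "good amount of detail, soft lighting, depth of view, 70mm lens, "
                   "good design proportion, cinematic rendering"),
        "negative_prompt": "floating, leather, worst quality, weird proportion",
        "sampler_name": "DPM++ SDE Karras",  # convergent sampler
        "steps": 50,                         # 30+ is enough with this sampler
        "cfg_scale": 14,
        "batch_size": 4,
        # Must match the sketch fed to ControlNet so details line up:
        "width": sketch_width,
        "height": sketch_height,
    }

payload = build_txt2img_payload(768, 512)  # hypothetical sketch dimensions
# POST this to /sdapi/v1/txt2img on a running webui started with --api.
```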
Click here to learn more about sampling methods.
Read more > (Article coming soon)
Click on the generate button.
6- Enjoy generating images!
For interior design and other product design processes, simply modify the prompt and image size to match your desired description.