Most of the content in this article is based on this video.
## Intro
Resources used:

- Stable Diffusion integration package: https://www.bilibili.com/read/cv22159609/
- Stable Diffusion model: https://tusi.cn/models/605039709506335625
- LoRA model: https://tusi.cn/models/620507699483853069
## Specific Steps
- First, select a suitable photo: the subject and content should be clear, and fine detail is not needed. The photo I selected is this one:
- Fill in these prompts in the image generation tool, following the video:
  `(white background:1.1),(simple background:1.1),(chibi),thick outline,happy,mugshot`
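The `(text:1.1)` notation is the WebUI's prompt emphasis syntax: a weight multiplies the attention given to that phrase, and bare parentheses apply roughly a 1.1x boost. A small illustrative parser (the helper name and defaults are my own, not part of any real API) shows how such a prompt breaks down:

```python
import re

# Illustrative parser for WebUI-style "(text:weight)" prompt emphasis.
# The function name and behavior are a sketch, not the WebUI's actual parser.

def parse_emphasis(prompt: str) -> list[tuple[str, float]]:
    """Return (text, weight) pairs; unweighted chunks default to 1.0."""
    parts = []
    for chunk in prompt.split(","):
        chunk = chunk.strip()
        m = re.fullmatch(r"\((.+):([\d.]+)\)", chunk)
        if m:
            parts.append((m.group(1), float(m.group(2))))
        elif chunk.startswith("(") and chunk.endswith(")"):
            parts.append((chunk[1:-1], 1.1))  # bare parens = roughly 1.1x emphasis
        else:
            parts.append((chunk, 1.0))
    return parts

print(parse_emphasis("(white background:1.1),(chibi),thick outline"))
```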
- Import the photo into the "WD 1.4 Tagger" and obtain these prompts:
  `outdoors, ocean, solo, day, black hair, 1boy, from behind, male focus, horizon, water, shorts, sky, photo background, rock, scenery, holding, short hair, standing, facing away, blue sky, pants, white shorts, long sleeves, shirt, shoes, bag, backpack, wide shot`
- Fill these prompts into the image generation tool, deleting any you don't want.
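Pruning the tagger output by hand works fine, but the same step can be sketched in a few lines of Python. The tag list comes from the tagger output above; the set of tags to remove is just an example of choices you might make:

```python
# Sketch: prune unwanted tags from the WD 1.4 tagger output before
# pasting the result into the prompt box. The UNWANTED set is an
# example choice, not a recommendation from any tool.

TAGGER_OUTPUT = (
    "outdoors, ocean, solo, day, black hair, 1boy, from behind, male focus, "
    "horizon, water, shorts, sky, photo background, rock, scenery, holding, "
    "short hair, standing, facing away, blue sky, pants, white shorts, "
    "long sleeves, shirt, shoes, bag, backpack, wide shot"
)

UNWANTED = {"photo background", "wide shot", "from behind", "facing away"}

def prune_tags(raw: str, unwanted: set[str]) -> str:
    """Split a comma-separated tag string, drop unwanted tags, and rejoin."""
    tags = [t.strip() for t in raw.split(",")]
    return ", ".join(t for t in tags if t not in unwanted)

print(prune_tags(TAGGER_OUTPUT, UNWANTED))
```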
- Import the image into the generation window of the image generation tool and change the following parameters:

  | Parameter | Value |
  | --- | --- |
  | Sampling Method (Sampler) | DPM++ 3M SDE |
  | Iteration Steps | 30 |
  | Redraw Magnitude | 0.7 |
  | Total Batches | 4 |

- Click "Generate" to produce 4 images. Adjust the "Redraw Magnitude" until you are satisfied with the result; a value between 0.6 and 0.7 usually works well.
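If you prefer scripting to clicking, this img2img step can also be driven over HTTP, assuming an AUTOMATIC1111-style WebUI started with `--api`. The field names below follow that project's `/sdapi/v1/img2img` endpoint; the values mirror the table above, and everything else is illustrative:

```python
import base64

# Sketch of an img2img request body mirroring the parameters above.
# Field names follow the AUTOMATIC1111 WebUI API; values are from this article.

def build_img2img_payload(image_bytes: bytes, prompt: str) -> dict:
    return {
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "prompt": prompt,
        "sampler_name": "DPM++ 3M SDE",  # Sampling Method
        "steps": 30,                     # Iteration Steps
        "denoising_strength": 0.7,       # Redraw Magnitude (try 0.6-0.7)
        "n_iter": 4,                     # Total Batches
    }

demo = build_img2img_payload(b"\x89PNG", "(white background:1.1),(chibi)")
print(demo["sampler_name"], demo["steps"])

# To actually send it (not executed here):
# requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=demo)
```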
- Send the image you are satisfied with, along with its generation parameters, to the post-processing tab. Choose "R-ESRGAN 4x+ Anime6B" for both "Upscaling Algorithm 1" and "Upscaling Algorithm 2", set the "Scale Factor" to 4, and click "Generate".
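The upscaling step has a matching API route in the same WebUI, `/sdapi/v1/extra-single-image`. As before, the field names are assumed from the AUTOMATIC1111 API and the values come from the settings in this article:

```python
import base64

# Sketch of the post-processing (upscale) request. Field names follow the
# AUTOMATIC1111 "extras" API; values mirror the article's settings.

def build_upscale_payload(image_bytes: bytes) -> dict:
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "upscaling_resize": 4,                 # Scale Factor
        "upscaler_1": "R-ESRGAN 4x+ Anime6B",  # Upscaling Algorithm 1
        "upscaler_2": "R-ESRGAN 4x+ Anime6B",  # Upscaling Algorithm 2
        "extras_upscaler_2_visibility": 1.0,   # fully blend in upscaler 2
    }

demo = build_upscale_payload(b"\x89PNG")
print(demo["upscaling_resize"], demo["upscaler_1"])

# To actually send it (not executed here):
# requests.post("http://127.0.0.1:7860/sdapi/v1/extra-single-image", json=demo)
```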
- Finally, I obtained this image:
Let's compare:
:::grid {cols=2,rows=1,gap=12,type=images}
:::
This article was synchronized to xLog by Mix Space.
The original link is https://xxu.do/posts/geek/Paint-avatars-with-Stable-Diffusion