The biggest tell that an otherwise beautiful AI image was made by a machine is almost always the hands. Six fingers. Three knuckles per finger. A thumb pointing the wrong way. A wrist that connects to the arm at an impossible angle. Even in 2026, with Flux 2 and Midjourney v8 having made big strides on hand accuracy, you will still hit bad hands on a meaningful percentage of generations.
The good news: inpainting fixes this almost every time. You can take an image with a broken hand and selectively regenerate just the hand, keeping everything else in the image exactly the same. Once you learn the workflow, it takes about 60 seconds per image, and the results are indistinguishable from a hand that generated correctly the first time.
This guide walks through inpainting for Stable Diffusion (Automatic1111 and Forge), Flux, and ComfyUI. Same underlying technique, slightly different interfaces.
Inpainting is image-to-image generation restricted to a masked area. You tell the model: "regenerate only the pixels inside this shape, keep everything outside untouched, and use this prompt to guide the regeneration." The model fills in the masked area using your prompt plus context from the unmasked surrounding pixels. Done right, the result blends seamlessly with the rest of the image.
For hand fixes specifically, you mask the broken hand (with a bit of wrist for context), prompt "a human hand, five fingers, anatomically correct, detailed fingers," and let the model try again on just that region. Because the mask is small, generation is fast and you can iterate through many attempts quickly until you get a hand you like.
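In code terms, the whole operation is one call: image in, mask in, prompt in, patched image out. Here is a minimal sketch using the diffusers library and a public SDXL inpainting checkpoint; the filenames and parameter values are assumptions to adapt, not part of any specific tool's workflow:

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("portrait.png")    # full image containing the broken hand
mask = load_image("hand_mask.png")    # white = regenerate, black = keep untouched

fixed = pipe(
    prompt="a human hand, five fingers, anatomically correct, detailed fingers",
    negative_prompt="extra fingers, fused fingers, mutated hands",
    image=image,
    mask_image=mask,
    strength=0.99,                    # how much of the masked region gets rebuilt
    num_inference_steps=25,
).images[0]
fixed.save("portrait_fixed.png")
```

The strength value controls how aggressively the masked area is rebuilt; near 1.0 the model regenerates the hand essentially from scratch while still matching lighting and skin tone from the surrounding unmasked pixels.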
Automatic1111's img2img Inpaint tab has been the dominant inpainting tool since 2023 and is still the most common workflow. The steps:

1. Send the image to the img2img tab and switch to Inpaint.
2. Paint a mask over the broken hand, including a bit of wrist for context.
3. Set Masked content to "original", Inpaint area to "Only masked", and denoising strength around 0.4-0.6.
4. Set the prompt to "a human hand, five fingers, anatomically correct, detailed fingers, realistic skin texture". Keep the negative prompt set to your usual hand-specific excludes.
5. Generate a batch, pick the best hand, and rerun if nothing lands.
ComfyUI inpainting takes a few more clicks to set up but gives you more control. The standard inpainting graph uses a VAE Encode (for Inpainting) node that accepts your image and mask, connects to a KSampler with your model and prompt, and decodes the result back into the fixed image. Several well-maintained ComfyUI inpainting templates exist on GitHub, and YouTube tutorials walk through each node. The advantage of ComfyUI is that you can chain the inpaint result directly into an upscaler, a detail enhancer, or a second-pass inpaint for further refinement. If you are inpainting production work, this pipeline is worth the extra setup time. For one-off fixes, Automatic1111 is faster.
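If you want to drive that graph without the UI, ComfyUI also accepts the same node graph as JSON over its local /prompt endpoint. A minimal sketch in API format, assuming a default local install; the checkpoint and image filenames are placeholders for whatever sits in your models and input folders:

```python
import json
import urllib.request

# Node graph in ComfyUI's API format: node id -> {class_type, inputs}.
# ["node_id", output_index] wires one node's output into another's input.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # placeholder checkpoint
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "broken_hand.png"}},  # mask painted in ComfyUI's mask editor
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1],
                     "text": "a human hand, five fingers, anatomically correct, detailed fingers"}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1],
                     "text": "extra fingers, fused fingers, mutated hands"}},
    "5": {"class_type": "VAEEncodeForInpaint",  # encodes image + mask into a masked latent
          "inputs": {"pixels": ["2", 0], "mask": ["2", 1], "vae": ["1", 2],
                     "grow_mask_by": 12}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "7": {"class_type": "VAEDecode", "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage", "inputs": {"images": ["7", 0],
                                                "filename_prefix": "hand_fix"}},
}

req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": graph}).encode(),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)  # queues the job; the result lands in ComfyUI's output folder
```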
Flux.1 Dev supports inpainting as of late 2025, and the quality is noticeably better than SDXL-based inpainting for photorealistic work. The workflow in ComfyUI is nearly identical to the SD inpainting graph but swaps in Flux's dedicated inpainting variant (FLUX.1 Fill dev). For people running Flux locally, a hand-focused inpaint takes 4-8 seconds on an RTX 4070.
Flux produces hands that blend into the surrounding image more naturally than SD does. If your whole image was generated in Flux originally, do your inpainting in Flux too to avoid style mismatch between the regenerated hand and the rest of the image.
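For diffusers users, the same fix runs outside ComfyUI too. A minimal sketch, assuming the FluxFillPipeline class and the black-forest-labs/FLUX.1-Fill-dev weights; the parameter values follow that model card's defaults, and the filenames are placeholders:

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("portrait.png")
mask = load_image("hand_mask.png")    # white = regenerate

fixed = pipe(
    prompt="a human hand, five fingers, anatomically correct, detailed fingers",
    image=image,
    mask_image=mask,
    height=1024,
    width=1024,
    guidance_scale=30,                # Fill dev runs at far higher guidance than SD models
    num_inference_steps=50,
    max_sequence_length=512,
).images[0]
fixed.save("portrait_fixed.png")
```

Note that Flux's guidance-distilled pipelines take no negative prompt; everything has to be stated positively.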
ADetailer is a Stable Diffusion extension that automates the inpainting step for common failure regions. You install it, enable it with a "hand YOLOv8" model, and every generation automatically inpaints any detected hands with your chosen prompt. This moves hand fixing from a manual post-processing step to an automatic always-on pipeline step.
Trade-offs: ADetailer adds 5-15 seconds to every generation, so you are paying that cost even on images where the hand came out fine. For users who generate in bulk and want consistent hand quality without manually inpainting every image, this is the right tool. For one-off art where you inspect each generation anyway, it is overhead you do not need.
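If you already script generations through the web API, ADetailer can also be enabled per-request via alwayson_scripts instead of the UI. A sketch of the payload fragment, assuming ADetailer's bundled hand_yolov8n.pt detector; the exact args schema varies slightly between extension versions (some expect leading boolean enable flags), so check yours:

```python
payload = {
    "prompt": "portrait of a woman waving, photorealistic",
    "steps": 30,
    # ADetailer hooks in as an always-on script: detect hands, then auto-inpaint each one.
    "alwayson_scripts": {
        "ADetailer": {
            "args": [
                {
                    "ad_model": "hand_yolov8n.pt",  # YOLO hand detector bundled with ADetailer
                    "ad_prompt": "a human hand, five fingers, anatomically correct, "
                                 "detailed fingers",
                    "ad_denoising_strength": 0.4,
                    "ad_confidence": 0.3,           # minimum detection confidence
                }
            ]
        }
    },
}
# POST this to /sdapi/v1/txt2img exactly as in the img2img sketch above.
```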
Some broken hands are so mangled that inpainting cannot save them cleanly, especially if the hand is holding a complex object or making a specific gesture. In those cases, the fastest fix is a cross-application workflow: export the image to Photoshop or Krita, find a reference photo of the pose you want, and paste the reference hand over the broken area as a visual guide. Then bring the image back into your AI tool and do a ControlNet-guided inpaint using the reference.
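Back in your AI tool, that last step can be a ControlNet-conditioned inpaint. A sketch using diffusers' StableDiffusionControlNetInpaintPipeline with the SD 1.5 inpaint ControlNet; the checkpoint ids are the commonly used public ones, and reference_collage.png stands in for your Photoshop/Krita export with the reference hand pasted in:

```python
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # placeholder SD 1.5 checkpoint id
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("reference_collage.png")  # original image with the reference hand pasted in
mask = load_image("hand_mask.png")

def make_inpaint_condition(image, mask):
    # Control image format for the inpaint ControlNet: masked pixels are set to -1.
    img = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    m = np.array(mask.convert("L")).astype(np.float32) / 255.0
    img[m > 0.5] = -1.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

fixed = pipe(
    prompt="a human hand, five fingers, anatomically correct, detailed fingers",
    image=image,
    mask_image=mask,
    control_image=make_inpaint_condition(image, mask),
    strength=0.75,   # below 1.0 so the pasted reference still seeds the pose
    num_inference_steps=30,
).images[0]
fixed.save("rescued_hand.png")
```

Keeping strength below 1.0 is the point here: the pasted reference hand guides the pose through the initial latents while the model regenerates its surface to match the image's style.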
For most failures though, a straight prompt-driven inpaint in your tool of choice gets the job done in under a minute. Hands are not the unbreakable wall they were in 2023. They are just a small post-processing step.
If you are still throwing away AI generations because of bad hands, stop. Learn one of these inpainting workflows, save the prompt template above, and add 60 seconds of post-processing to every image that needs it. Your keep-rate on good generations will jump dramatically, and your finished work will stop carrying the single biggest AI tell. Bad hands are a solvable problem in 2026. You just have to know the trick.