Stable Diffusion Cheat Sheet

Reviewed by Marouen Arfaoui · Last tested April 2026 · 157 tools tested

Quick Facts

Pricing

Open-source and free to run locally. Paid API access via Stability AI starts at $0.002 per image for SD3.

Free Plan

Yes. The free version includes the core model, the ability to run on your own hardware, and access to thousands of free community models.

Rating

4.5/5

Best For

Artists, tinkerers, and developers who want ultimate creative control, privacy, and no generation limits, and don't mind a technical setup.

Tips & Tricks

TIP

Start prompts with the subject, then style, then quality terms (e.g., 'a knight, digital painting, intricate armor, masterpiece').
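
The subject-then-style-then-quality ordering can be made mechanical with a tiny helper (an illustrative sketch; the function and parameter names are my own, not part of any Stable Diffusion tool):

```python
def build_prompt(subject, style=(), quality=()):
    """Assemble a prompt in subject -> style -> quality order,
    joining the parts with commas as Stable Diffusion expects."""
    parts = [subject, *style, *quality]
    return ", ".join(p.strip() for p in parts if p and p.strip())

prompt = build_prompt(
    "a knight",
    style=("digital painting", "intricate armor"),
    quality=("masterpiece",),
)
# prompt == "a knight, digital painting, intricate armor, masterpiece"
```

Keeping the three groups separate makes it easy to swap styles while reusing the same subject.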

TIP

Use specific artists' names in your prompt (like 'by Greg Rutkowski') to instantly steer the style in powerful ways.

TIP

For photorealism, add technical camera terms: 'shot on a Canon EOS R5, 85mm, f/1.2, shallow depth of field.'

TIP

Use a negative prompt such as 'deformed, blurry, bad anatomy, bad hands, three hands, three legs, bad arms' to immediately improve quality.
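
Under the hood, negative prompts work through classifier-free guidance: the model predicts noise for both the negative and the positive prompt, then steers away from the former and toward the latter. A toy NumPy sketch of that mixing step (simplified; real pipelines apply this per denoising step to UNet outputs):

```python
import numpy as np

def cfg(noise_neg, noise_pos, guidance_scale=7.5):
    """Classifier-free guidance: push the denoising direction away
    from the negative-prompt prediction, toward the positive one."""
    return noise_neg + guidance_scale * (noise_pos - noise_neg)

# Toy 1-D "noise predictions" standing in for real UNet outputs.
neg = np.array([0.2, 0.2])   # conditioned on the negative prompt
pos = np.array([0.5, 0.1])   # conditioned on the positive prompt
guided = cfg(neg, pos)       # [2.45, -0.55]
```

This is why a stronger guidance scale amplifies whatever contrast the negative prompt sets up.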

TIP

Use a low 'denoising strength' (0.3-0.5) in img2img to tweak an image without completely changing it.
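
The reason low denoising strength preserves the source is that img2img seeds generation by blending your image with noise; the strength controls the blend. A simplified NumPy illustration (a linear mix for clarity; real schedulers use sqrt(alpha-bar) weights per timestep):

```python
import numpy as np

rng = np.random.default_rng(0)

def noised_start(image, strength):
    """Roughly how img2img seeds generation: mix the source image
    with noise, adding more noise at higher denoising strength."""
    noise = rng.standard_normal(image.shape)
    return np.sqrt(1 - strength) * image + np.sqrt(strength) * noise

img = rng.standard_normal(1000)
low = noised_start(img, 0.3)   # keeps most of the original structure
high = noised_start(img, 0.9)  # close to pure noise

corr_low = np.corrcoef(img, low)[0, 1]
corr_high = np.corrcoef(img, high)[0, 1]
```

`corr_low` stays far higher than `corr_high`, which is why 0.3-0.5 tweaks an image while 0.9 effectively replaces it.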

TIP

Download the 'EasyNegative' embedding and add it to your negative prompt; it's a community-trained catch-all for common flaws.

TIP

For consistent characters, generate a good face, then use it as an img2img source with a low denoising strength for new poses.

TIP

If you lack VRAM, use the '--medvram' or '--lowvram' command line arguments when launching Automatic1111.

TIP

Experiment with different samplers; Euler a is fast, DPM++ 2M Karras is my go-to for quality, and DDIM is good for img2img.

Common Commands

python launch.py --autolaunch --medvram

Launches the Automatic1111 WebUI in your browser, optimized for medium-VRAM GPUs (around 8GB).

Prompt: (keyword:1.3)

Uses prompt weighting. This increases the importance of 'keyword' by 30% in the final image.
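
The (keyword:1.3) syntax can be parsed with a small regex. This is a simplified sketch of Automatic1111's attention syntax, written for illustration; the real parser also handles nested parentheses and [...] de-emphasis:

```python
import re

def parse_weights(prompt):
    """Split a prompt into (token, weight) pairs. '(word:1.3)' gets
    weight 1.3; plain text gets the default weight 1.0."""
    pattern = re.compile(r"\(([^():]+):([0-9.]+)\)")
    parts, pos = [], 0
    for m in pattern.finditer(prompt):
        before = prompt[pos:m.start()].strip(" ,")
        if before:
            parts.append((before, 1.0))
        parts.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        parts.append((tail, 1.0))
    return parts

parse_weights("a castle, (dramatic lighting:1.3), sunset")
# -> [('a castle', 1.0), ('dramatic lighting', 1.3), ('sunset', 1.0)]
```

The downstream effect is that the text encoder's embedding for the weighted span is scaled by that factor before cross-attention.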

Alternatives

Midjourney
DALL-E 3 (via ChatGPT)
Adobe Firefly

Stable Diffusion Tutorial: full step-by-step guide

Frequently Asked Questions

What are the minimum PC requirements to run Stable Diffusion locally?
You need a dedicated NVIDIA GPU with at least 4GB of VRAM (6GB+ is practical, 8GB+ is comfortable), 16GB of system RAM, a decent CPU, and 10GB+ of free storage for models. AMD GPUs can work but require more technical setup.
Where do I get started with Stable Diffusion?
Download the 'Stable Diffusion WebUI by Automatic1111' from GitHub. It's the all-in-one package. Follow a YouTube tutorial for installation. Your first step after that is to download a good base model (checkpoint) from Civitai.
Is it legal to use images generated with Stable Diffusion commercially?
Generally, yes, but you must check the license of the specific model you used. Most popular community models use the permissive CreativeML Open RAIL-M license. Always verify, especially for sensitive commercial work.
What's the single biggest mistake beginners make?
Using vague prompts. The AI is a literal genie. 'A beautiful landscape' gives random results. 'A misty alpine landscape at sunrise, photorealistic, Canon EOS R5, dramatic lighting' gives you control and vastly better output.
How do I fix deformed hands and faces?
First, use a negative prompt for 'bad anatomy, deformed hands'. Second, use a model specifically fine-tuned for photorealism. Third, use inpainting: generate the image, then use the inpainting tool to mask just the hands and regenerate them with a new prompt.