/bbwai/

(161 KB, 1600x900, qualities-of-a-good-tour-guide-cover-illustration.png)
Guides for various generation processes will be primarily documented here.
This thread is not for asking questions; please use >>2561 (Cross-thread)

Getting started from scratch on NVIDIA: >>3
(AMD: https://rentry.org/sd-nativeisekaitoo | CPU (poorman's): https://rentry.org/cputard | https://rentry.org/webui-cpu)
Checkpoint merging and theory: >>4
Typical work flow (gen-upscale-inpaint): >>7
Using textual inversion/LORAs: >>8
Using wildcards: >>1683
Posing subjects with Openpose: >>1685
Advanced prompting tips and tricks: >>2538
Collaborating with others: >>2983
Upscaling beyond 512x512: >>3364
Barclay's Updated Guide for Dummies

Assuming you have a recent Nvidia GPU (1070+) and Windows, just 8 steps to get started:

+ Download and install Python: https://www.python.org/ftp/python/3.10.10/python-3.10.10-amd64.exe (ensure you check 'Add Python to PATH')
+ Download and install Git: https://github.com/git-for-windows/git/releases/download/v2.39.2.windows.1/Git-2.39.2-64-bit.exe
+ Download and extract AutoMatic1111's WebUI to its own folder: https://github.com/AUTOMATIC1111/stable-diffusion-webui/archive/refs/heads/master.zip
+ Download at least one checkpoint to the webui's models/stable-diffusion folder (start with Bigger Girls V2): https://civitai.com/api/download/models/6327?type=Pruned%20Model&format=PickleTensor
+ Download at least one VAE to the webui's models/stable-diffusion folder. You need a VAE for color correction if you are merging checkpoints or using checkpoints without a baked-in VAE (any will work really, you just need one, or you will get purple splotch disease or extremely faded/sepia tones). Some options:

Anything v4 VAE (standard anime color scheme): https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.0.vae.pt (RECOMMENDED)
Xpero End1ess VAE (vibrant colors): https://civitai.com/api/download/models/7307?type=VAE&format=Other
Stable Diffusion VAE - photorealism colors (this VAE must be downloaded to the separate stable-diffusion-webui/models/VAE folder instead): https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt

Save the VAE in the models/stable-diffusion folder AND select it for use with all models after you run the webui. (Settings Tab - Stable Diffusion - SD VAE)

If you are running a Pascal, Turing, or Ampere (1000, 2000, 3000 series) card, add --xformers to COMMANDLINE_ARGS in webui-user.bat for slightly better performance/speed.
You can also add --listen if you would like the WebUI to be accessible from other computers on your network (your phone or tablet, for example).
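For reference, the edited webui-user.bat would look roughly like this (a sketch of the default file with the two optional flags added; leave the other lines alone unless you know you need them):

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --listen

call webui.bat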

Run webui-user.bat.
Wait patiently while it installs dependencies and does a first time run.
It may seem "stuck" but it isn't. It may take up to 10-15 minutes.
Once it finishes loading, head to 127.0.0.1:7860 in your browser to access the web ui.
Don't forget to setup your VAE as instructed earlier. (Settings Tab - Stable Diffusion - SD VAE)

You could also check 'Ignore selected VAE for stable diffusion checkpoints that have their own .vae.pt next to them'. This makes the selected VAE apply only when the checkpoint you are generating from has no VAE baked in or downloaded right next to it. I wouldn't recommend this option: if you use many models and generate between them, the color scheme may not stay consistent. Picking one VAE from the dropdown and using it for all generations, regardless of whether the checkpoint has a baked-in VAE or a separate VAE with it, is usually best in my opinion. Make sure to hit "Apply and Restart WebUI" for the change to take effect.

HOW TO PROMPT:

Start with simple positive prompts and build from there. A typical positive prompt might look something like this:

masterpiece, best quality, highres, 1girl, (chubby cute teenage anime cowgirl redhead standing in front of a desk), (beautiful green eyes), (cow ears), (cow horns), (medium breasts), (blank expression), jeans, (white t-shirt), (freckled face), deep skin, office lighting

PROMPT STRUCTURING:

masterpiece, best quality, highres, 1girl - Putting these at the front of the prompt (i.e., weighting them) primarily tells the model to make the generation resemble art tagged as masterpiece, originally uploaded in high resolution, and specifically tagged as 1girl, meaning it was tagged on a Danbooru-style booru as having only one female subject in frame. (Add the Danbooru autocomplete extension for help with learning those tags.)

(chubby cute teenage anime cowgirl redhead standing in front of a desk) - putting this in brackets tells the model to focus on this specific grouping of tokens more than those that are not in brackets. Emphasis.
This is also where you typically put the main subject of the generation in the form of ADJECTIVE DESCRIPTOR FLAVOR SUBJECT LOCATION ACTIVITY

(beautiful green eyes), (cow ears), (cow horns), (medium breasts), (blank expression) - these are also in brackets, but behind our main subject. This helps the model apply and emphasize these features AFTER the main subject is 'visualized' in frame by the AI in the first 10 steps or so. Applying these before the main subject could result in TOO much emphasis, i.e. cow ears everywhere, eyes on things that shouldn't have eyes, eyes and ears not aligned to the characters because they were 'drawn' first, etc.

You can further weight individual keywords within these emphasis groups.
Just add a colon followed by a number to the word you want to emphasize, e.g. (keyword:1.3). The number is a multiplier with 1.0 as the default: values above 1 increase emphasis, values below 1 reduce it, and they do not need to add up to anything. (Each plain set of brackets is itself roughly a 1.1x boost, so ((keyword)) is about the same as (keyword:1.21).)

I.e. (massive belly:1.4) (huge breasts:1.1) would emphasize both terms, but apply noticeably more weight to the belly than to the breasts; something like (huge breasts:0.7) would instead de-emphasize that term, since 0.7 is below the default.

jeans, (white t-shirt), (freckled face), deep skin, office lighting - we prefer jeans, but we do not mind if they are otherwise, same with office lighting. If the model decides hey maybe shorts and candlelight, hey, let the boy try. These terms are near the end of the prompt so they may or may not be respected, depending on CFG scale.

NEGATIVE PROMPTING:

(lazy eye), (heterochromia), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad feet, extra limbs, (multiple navels), (two navels), (creases), (folds), (double belly), thin, slim, athletic, muscular, fit, fat face, blemished stomach, rash, skin irritation

These are all things we DON'T want to see, and we can use emphasis here as well. You don't have to use a negative prompt, but it's often quite helpful for achieving what you're going for. In this example, I wanted to make sure that the subject would not come out muscular or athletic.

Hit generate and watch the magic happen.

MISC BEGINNER TIPS:

+ Experiment and find your favorite sampler. I tend to favor the three DPM++ options. Samplers vary in speed, quality, number of steps required for good results, variety, etc. It will take some experimentation to find your favorites, and you may need different ones depending on context (generating from scratch vs. img2img, for example). Note that the original base model shipped with DDIM as its default sampler, so you may want to play with that one at least a little to get an idea of how the model generated images by default, before we had an array of other samplers to choose from.

+ CFG scale refers to how closely the model should try to follow the text prompt. A lower scale of 6-8 will produce more variety but may not follow the text prompt as closely as a higher scale of 9-11.
Higher than 11 (13+) can 'overcook' an image, and lower than 6 (1-3) can produce messy, blurry, unfocused generations.

+ When running a batch size of more than one, try ticking the 'Extra' checkbox and dragging the variation strength to .5-.7 for interesting inspirations. This is especially effective with simple prompts that only describe a subject, not what they are doing or where they are.
(83 KB, 1276x617, merge.png)
Merge two checkpoints to keep most of the parameters / general style of the first while adding variety and realism from the second:
Bigger Girls V2 (referred to as BGV2) is practically essential for our workflow, since getting ssbbw and usbbw sizes is easy. However, it suffers from same-face syndrome, includes a fair amount of low-quality, low-effort art in its training data, and lacks a proper assortment of backgrounds and locations.
Being able to merge checkpoints essentially means you can create custom recipes and fine tune what the model generates.
When you see people saying they are using Bigger Girls with 30% this or that, they mean they have merged checkpoints using the checkpoint merger tab. After selecting two checkpoints, the slider indicates how much of the second checkpoint (B) you want to be represented in the merge compared to the first one (A). So if you leave the slider at the default .3, you are creating a 70% A / 30% B mix.
Again, if you select Bigger Girls V2 as checkpoint A and Abyss Orange as checkpoint B, leave everything at default and merge, you would have a 70% BGV2 30% AO Mix. The slider is from 0-1, with every increment representing a percentage of B. .15 would be 15%, .5 would be 50%, etc.
You can then merge that merged checkpoint with another at a lower percentage to add even further variety, but if you continue mixing checkpoints that are dissimilar enough you do start to get an 'overcooked' recipe where generations are unfocused, blurry, don't make sense, etc.
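To make the slider math concrete, here is a minimal Python sketch of what a plain weighted-sum merge amounts to (illustrative only; the Checkpoint Merger tab does all of this for you, and the function name here is made up):

def weighted_sum_merge(state_a, state_b, multiplier=0.3):
    # Blend every weight in checkpoint A with its counterpart in B:
    # result = (1 - multiplier) * A + multiplier * B
    return {key: (1.0 - multiplier) * a + multiplier * state_b[key]
            for key, a in state_a.items()}

# Toy demo with plain numbers standing in for weight tensors:
# multiplier 0.3 -> a 70% A / 30% B mix (e.g. 70% BGV2 / 30% Abyss Orange)
print(weighted_sum_merge({"w": 1.0}, {"w": 0.0}, multiplier=0.3))   # {'w': 0.7}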

There are three schools of thought here.

"start fat, stay fat"

That is, always use a model that more or less defaults to the body size you want without having to strangle it with a prompt or abuse gimp's warp tool and img2img. Think mostly BGV2 but with a little sprinkling of other checkpoints.
What if we take BGV2 and dilute it with, say, 25-30% of a model like Abyss Orange NSFW? (https://civitai.com/api/download/models/5036?type=Model&format=SafeTensor)
Well, the merged checkpoint would still be primarily fat-focused, but you've given it far more depth to pull from when generating. Better faces, better backgrounds, better colors.
It's like the difference between giving a cook an entire spice rack or just salt and pepper.
Well, then you could go even further, right? You take the 70/30 mix and dilute it 15% further with a model like, say, Chubby Girls Golden (https://civitai.com/api/download/models/4163).
Then any overall/average/median size reduction you took from mixing in Abyss Orange (which was not as fat-focused) is mitigated and fat is reinforced. Not to mention you're adding even more depth to pull from for generations.
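If you want the exact numbers: merges compose linearly, so keeping 85% of a 70/30 BGV2/AO mix and adding 15% CGG works out to roughly 0.70 x 0.85 = 59.5% BGV2, 0.30 x 0.85 = 25.5% AO, and 15% CGG.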
You can see how there is a checkpoint and checkpoint recipe rabbit hole. If civitai.com is anything to go by, eventually, there will be a specific checkpoint for EVERYTHING, and the recipes will be endless.

"start chubby, get fat"

Say you are on a checkpoint recipe binge like I was, and eventually you dilute BGV2 so far that it's really only 20% or so of the merged checkpoint.
You start finding that, while you really LOVE the aesthetic / colors / faces / etc, even when you prompt (massive huge belly) or (morbidly obese), your merged model spits out a barely overweight teenage diva.
Now, in this instance, you could take your super lovely but never-quite-fat-enough diva, run her through GIMP, use the warp tool (Tools - Transform Tools - Warp Tool) to expand her tiny potbelly just a tad and push out her tits and ass just a touch, then run her through the exact same model again in img2img.
With a combination of low (.1-.3) denoising strength (to keep the generation CLOSER to the original) and high (.55-.7) variation (to give the model a chance to make her even bigger or wider in some way), plus additional prompts like super wide, thick thighs, etc., doing this process over and over CAN result in superfats that look exceedingly good.
The problem is that this method is cumbersome and slow, and generally starts getting cooked after about 3-4 loopbacks.

"start simple, get creative"

Another strategy is to run a bunch (1000s) of generations on a base model like BGV2, which only requires simple text prompts, at low steps (20) and a low CFG scale (7-8), cherry-pick the good generations, then switch to a merged checkpoint, up the CFG scale to 9 or 10, and run those cherry picks with a much more detailed and varied text prompt, a higher variation setting, different samplers, textual inversions, LoRA models, etc.
Personally, this seems to be the best of both worlds. BGV2 will often get the general gist of what I want but with bad faces/simple backgrounds, which are then more or less corrected in a merged checkpoint via img2img.
This has the least manual GIMP work (still some), but there's a lot of sorting through trash.

Some interesting checkpoints you might consider for recipes:

https://civitai.com/models/6231/xpero-end1ess-model
https://huggingface.co/WarriorMama777/OrangeMixs
https://civitai.com/models/3748/chubby-girls-golden
https://civitai.com/models/3449/anylactation
https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.5.ckpt
https://civitai.com/models/3627/protogen-v22-anime-official-release
https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v

More models: https://rentry.org/sdmodels
(509 KB, 2824x2048, 01.jpg) (367 KB, 3660x1194, 02.jpg) (200 KB, 1252x1988, 03.jpg) (151 KB, 1896x896, 04.jpg) (285 KB, 2886x1038, 05.jpg) (8.3 MB, 2688x2688, 06.png)
Try following along with this guide to get used to generating, sending a 'golden seed' to img2img, then inpainting imperfections out.

First prompt what you want.
The positive prompt will be: "masterpiece, best quality, highres, office lady, dress shirt, (button gap:1.1), undersized clothes, (huge belly on table, fat:1.2), pants, black hair, brown eyes, in an office"
And the negative prompt will be "(lowres, blurry:1.1), bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, artist name, loli, child, chibi, monochrome, sketch"

Finding a good seed
Run a big batch of 512x images. It's hard to get something good on the first try so running big batches on a low resolution and low step count is a very good idea. I'll do dpm++ 2sa karras on 28 steps for 16 images.

Upscaling that good seed
I like the first image a lot, so that's the one I'll use. Turn on highres fix, and adjust the upscale to whatever you want. I will do 1.75x for 896x, but you could go higher or lower depending on your PC and patience. I keep the steps set to 28, set the hires upscaler to "Latent (nearest-exact)", set denoising to anywhere between .45 and .55, set CFG scale to 8, and bring the batch count/batch size back down to 1. Finally, I take the desired image's seed (located beneath the image when selected), punch it into the seed box, and re-generate it.

In img 2 of this post you can see an example of what the denoising slider (made available when using hi-res fix) does. Lower values make the upscaler respect the source image more and add better details, but may introduce artifacting or keep errors from the source. Higher values are good at preventing errors, but lack fine detail and may differ too much from the source. I bring this up because there is no single best value. It varies from seed to seed, tag to tag, and your preference, and you'll probably want to try more than one value when you upscale (but .5 pretty much always works). Then I'll select 'send to inpaint' on the .45 denoising image.

In-painting that good seed
Inpaint will fix things you don't like by regenerating specific parts, basically img2img on an area you define. Don't do the hands or face yet. I used 0.6 denoising and "Just resize (latent upscale)", and inpaint tags should be the same as what you used to generate, but changed to reflect what you want/don't want in your image. For example, I specified "black office shirt" instead of just "office shirt" to get rid of the white collar and white undershirt, and I also added "necklace" to the negatives to get rid of that weird blue thing on her tits. It generates, and the changes are good, but her shirt is still a little off.

I send it back to inpaint. This time I only want to work on her shirt, and don't want to potentially undo the other changes I made, so I undo all my selections and only select the collar and the bottom of the shirt and leave everything else untouched. I remove button gap from the prompt, which appeared to be fucking up her collar. Img 3 shows the inpaints.

Next, I want to make her shirt cover her belly a little, because she's supposed to be wearing a shirt, not a crop top. I open my image editor, and scribble a little where I want there to be a shirt. Upload, inpaint, and now the shirt is draped over her gut. (this is also how you can easily do feeding tubes and slime and whatever else) Img #4

Now that I'm satisfied with the rest of the image, I can do the hands and then the face. I wanted to do them separately because the ai is more likely to give you a good generation if you do them separately, and I also don't want to end up with good hands, just to lose them in trying to redo the shirt or face at the same time. Generate, and then send it back to inpaint when I'm happy.

Touching up the face/hands in that seed
Finally, the face. Set inpaint area to "only masked". This makes the ai regenerate the area at the specified resolution (so here it generates at 896x instead of 100x or whatever the original size is), and then downscales it, making it easy to get pretty and detailed faces. Also, set "only masked padding, pixels" to the max. This is how much "context" the generation will have, which is important so that you actually generate a nice face instead of shit like a navel where a mouth is supposed to be, like in img #5. I'll add "looking away, burping", generate, and call it there. Final image is #6.

All in all it takes maybe 6-10 minutes to generate and clean up an image, but it could definitely take more if you get unlucky generations or have a hard/undertrained subject (or a dumpy computer). The image browser extension is also super nice to have because it lets you see your history and save images & prompts, so get it if you don't have it. For extra credit, you could go to the 'Extras' tab and run your final image through an upscaler before posting.
(86 KB, 1185x643, lora.png)
Both features are accessed via the "Show Additional Networks" button in the WebUI.

Textual inversion
These are prompt injections that push generations towards a specific POV, scene, or pose without having to manually craft a specific prompt to do so.

People are sharing new textual inversions on civitai all the time.
These are downloaded to webui's embeddings folder, and then 'activated' by using the inversion's keyword in your text prompt.
For example, if you downloaded Corneo's Cowgirl Position embedding, you would add (corneo_cowgirl) somewhere near the front of your text prompt in order to use it.

https://civitai.com/models/4218/corneos-cowgirl-position-embedding-for-anime
https://civitai.com/models/4725/corneos-pov-bound-wrists-missionary-embedding
https://civitai.com/models/5811/corneos-spitroast-threesome-ti-embedding
https://civitai.com/models/6005/corneos-ball-gag-ti-embedding
https://civitai.com/models/4463/corneos-pov-oral-embedding
https://civitai.com/models/4475/corneos-pov-paizuri-embedding
https://civitai.com/models/5371/corneos-side-view-deepthroat-ti-embedding
https://civitai.com/models/4551/corneos-arm-grab-doggystyle-embedding
https://civitai.com/models/5202/corneos-covering-breasts-ti-embed-two-hands-version
https://civitai.com/models/5203/corneos-covering-breasts-ti-embed-one-arm-version
https://civitai.com/models/5241/corneos-covering-breasts-ti-embed-arms-crossed-version

LORAs:

LoRAs are small add-on models that supplement a checkpoint without requiring a merge and guide generations like textual inversions do, with adjustable strength.
Basically textual inversion++. Fairly new feature. These are also primarily shared on civitai.
These are downloaded to the webui's models/Lora folder, and then 'activated', usually via a combination of a <lora:...> tag and trigger keywords.

For example, this cowgirl squat Lora (https://civitai.com/models/8877/pov-squatting-cowgirl-lora-1-mb) is activated with <lora:PSCowgirl:0.9>, 1boy, penis, squatting cowgirl position, vaginal, pov
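You can also stack more than one LoRA in the same prompt; each gets its own <lora:filename:weight> tag plus its trigger words. A rough sketch (the second filename here is just a placeholder, use whatever your file in models/Lora is actually called):

masterpiece, best quality, 1girl, <lora:PSCowgirl:0.8>, squatting cowgirl position, vaginal, pov, <lora:someStyleLora:0.5>

Lowering the number after the second colon weakens that LoRA's influence.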

Style Loras:
https://civitai.com/models/6526/studio-ghibli-style-lora
https://civitai.com/models/7094/arcane-style-lora
https://civitai.com/models/17704/goth-girl-lora

Popular Character Loras:
https://civitai.com/models/8484/yae-miko-or-realistic-genshin-lora
https://civitai.com/models/5373/makima-chainsaw-man-lora
https://civitai.com/models/4789/ahri-league-of-legends-lora
https://civitai.com/models/6610/loona-helluva-boss-lora
https://civitai.com/models/4829/raiden-shogun-lora-collection-of-trauters
https://civitai.com/models/4784/hinata-hyuuga-lora
https://civitai.com/models/4959/princess-zelda-lora
https://civitai.com/models/8679/bea-pokemon-lora-8-mb
https://civitai.com/models/16186/bowsette-or-character-lora-1860
https://civitai.com/models/18008/nico-robin-one-piece-pre-and-post-timeskip-lora

Sex position Loras:
https://civitai.com/models/8723/pov-doggystyle-lora-1-mb
https://civitai.com/models/18751/murkys-suspended-congress-carrying-sex-lora
https://civitai.com/models/12726/pov-paizuri-lora-1-mb
https://civitai.com/models/18962/murkys-cheek-bulge-fellatio-lora
https://civitai.com/models/18419/murkys-suspended-on-penis-lora
https://civitai.com/models/15880/nursing-handjob-or-test-sex-act-lora-869
https://civitai.com/models/18417/under-table-fellatio-paizuri-or-sex-act-lora-121

General Loras:
https://civitai.com/models/7706/shirt-tug-pose-lora
https://civitai.com/models/10085/extended-downblouse-or-clothing-lora-281
https://civitai.com/models/21618/two-person-lora-lora-update-for-character-lora
https://civitai.com/models/18377/across-table-or-concept-lora-208
https://civitai.com/models/8072/covering-eyes-pose-lora
https://civitai.com/models/19295/ass-on-glass-lora
https://civitai.com/models/18003/legs-together-side-or-test-pose-lora-587
(0.5 KB, body-heavy.txt) (0.8 KB, positive.txt) (0.7 KB, size.txt) (9 KB, scenes.txt) (0.8 KB, clothing.txt) (447 KB, 1289x797, wildcards.png)
Wildcards let you add creativity to your prompts in an easy way. From the extensions tab, add 'sd-dynamic-prompts' then apply and restart the WebUI.
You can then download and test some of the attached wildcards to extensions\sd-dynamic-prompts\wildcards.
Use them in a text prompt by writing the name of the wildcard file surrounded by double underscores (without the .txt extension, of course).
When using a copious amount of wildcards with a high batch count and low batch size, you can create a wide array of styles/scenes/clothing/body shapes/sizes/expressions with no prompt smithing required at all.
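As a rough illustration (these example lines are made up; the attached files have their own contents), a wildcard file is just one option per line. So extensions\sd-dynamic-prompts\wildcards\size.txt might contain:

chubby
obese
morbidly obese

and a prompt like

masterpiece, best quality, 1girl, __size__, __clothing__, __scenes__

pulls one random line from each file on every generation.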
(34 KB, 768x768, sitting_14.png) (25 KB, 768x768, standing_10.png) (367 KB, 1305x1137, openpose.png) (28 KB, 768x768, flexing_01.png) (26 KB, 768x768, flexing_03.png)
More information available here: https://old.reddit.com/r/StableDiffusion/comments/119o71b/a1111_controlnet_extension_explained_like_youre_5/

You can guide the pose/stance/framing of characters in a generation by leveraging Controlnet with Openpose. Add sd-webui-controlnet from the extensions tab, then apply and restart the WebUI. Download the openpose-fp16 safetensors model from https://huggingface.co/webui/ControlNet-modules-safetensors/tree/main to extensions\sd-webui-controlnet\models.

Expand the new ControlNet panel at the bottom of the generation window. Upload one of the attached poses. Tick the Enable checkbox, leave the preprocessor at none, and change the model to control_openpose-fp16.safetensors. Leave the rest of the settings at default and generate. Your generations should follow the same pose as the skeleton you uploaded to the ControlNet.

You might consider adding one of the available openpose editor extensions to make your own skeletons from scratch, or modify existing ones, or make skeletons by layering a photo underneath the editor and layering the bones on top. You can even generate a skeleton automatically by submitting an image, but this function is finicky at best. Many pre-configured poses are available here: https://openposes.com/
(0.9 KB, artist.txt) (785 KB, 896x896, 00001-3010875849.png) (852 KB, 896x896, 00003-3010875849.png) (1.1 MB, 896x896, 00004-3010875849.png) (723 KB, 896x896, 00005-3010875849.png) (708 KB, 896x896, 00007-3010875849.png)
https://rentry.org/anime_and_titties
https://rentry.org/faces-faces-faces

Tips for generating larger ladies:

Known bbw-favorable western styles ((X art style)) in front of prompt: (see attached artist.txt for wildcards file)
Pierre-Auguste Renoir, Gaston Bussière, Édouard Manet, Daniel Ridgway Knight, Clara Peeters, Andrei Markin, Michael Garmash, Duane Bryers, Henri-Pierre Danloux, John William Godward, John Collier, John William Waterhouse, Eugene de Blaas, William-Adolphe Bouguereau, Adolph Menzel, Alexandr Averin, Alan Lee, Albert Lynch, Albrecht Anker, Alyssa Monks, Anders Zorn, Andrea Kowch, Andrew Atroshenko, Anne-Louis Girodet de Roussy-Trioson, Anton Mauve, Arthur Hacker, Helene Knoop, Briton Rivière, Dean Cornwell, Rembrandt, Goya

Known bbw-favorable eastern styles (many, many more in the anime guide above):
kanon, ruu, rossdraws, Nitroplus

Default Negative Prompt Example #1:
(lazy eye), ((heterochromia)), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad feet, extra limbs, multiple navels, two navels

BBW terms that aren't booru tags but are known to have some effect:
love handles, belly overhang, flabby belly, soft belly, weight gain, plus size, plus size model, fat, obese, fat face, double chin, fat arms, fat hips, fat thighs...

BBW-friendly booru terms:
plump, thick thighs, fat ass, navel focus, navel, deep skin, skindentation, undersized clothes, button gap, torn clothes, midriff

slob / eating friendly booru terms:
food on breasts, food on body, food in mouth, food on face, eating, food, cake, cream, cream on body, hand on own stomach, heavy breathing, blush

SSBBW tips:
Generate a bbw using txt2img, then take that result and warp it in photoshop/GIMP to increase the size of the subject. It can be a very hasty warp as long as the general shape is good. Take that warped image and feed it into img2img with the same prompt but with a different seed. Play around with the prompt as desired.
Alternatively, use a picture or illustration of SSBBW as a base, or make a crude tracing of it and fill in the desired colors, and feed that into img2img.

Q: My generations often have bad eyes.
A: To try fixing eyes, these keywords can be added, from least drastic to most drastic: "deep pupils", "wide pupils", "bright pupils", "beautiful pupils"
Well here is my attempt at making an image that isn't shit.
what is the chat gbd of fat woman pics
Note to anyone trying to install for the first time: the WebUI batch file in this thread will fail to install pytorch because of dead links. Run this in your command prompt before running the webui file.

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
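Once that finishes, you can sanity-check that the CUDA build actually installed by running this from the same command prompt (if it prints True, torch can see your GPU):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"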
>>2972

never mind I was wrong somebody please delete this
(370 KB, 1384x541, 2.png) (424 KB, 1384x541, 1.png) (565 KB, 1288x816, 3.png) (280 KB, 1287x633, compare.png)
> What is this 'catbox' thing I keep seeing? It just looks like a normal image host.
It's a way of sharing prompts along with images at the same time.

> huh?
By default, every image generated with SD also saves a copy of its generation info into the image as metadata (exif-style data). This includes the prompt, negative prompt, LoRAs used, model used, sampler, upscaler, and more. So you can put any AI image into the 'PNG Info' tab of your webui and it'll show you that generation info.

However, most image hosts and forums/image boards (including bbwchan!) automatically strip this metadata, since it's more often than not a privacy concern (for example, some phones record where and when a photo was taken in their metadata, and users doxxing themselves is bad). So if you were to download an image off bbwchan and put it in PNG Info, it would show you nothing.

The way around this is to use alternative image sharing sites that DO preserve metadata. The most popular here being catbox, since it is free, fast, and does not require an account.
You upload an image, all of its metadata goes up with it, and you get a handy little link so you can share your image parameters with anybody who cares.
To read them back, you download the catboxed image and put it into PNG Info like normal.
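If you'd rather check from a script than the PNG Info tab, a minimal sketch (assuming Pillow is installed; the filename is just an example):

from PIL import Image

img = Image.open("00001-3010875849.png")   # any webui output whose metadata hasn't been stripped
# the webui stores its generation settings in a PNG text chunk called "parameters"
print(img.info.get("parameters", "no metadata found (the host probably stripped it)"))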

> Why would I do that over just copy-and-pasting my prompt into my post?
As a courtesy. It's more legible, and it keeps the threads cleaner when you share a link instead of massive text dumps.

> Why don't I just share my prompt via image as part of my post, or as a pastebin link?
As a courtesy, since anons can't copy text from images (and that also takes up one of the six images you're able to post at a time).
And a pastebin is better than a text dump, but it's still more convenient to have a catboxed image so you could have one file with the image and metadata instead of two separate files.

> I have a catboxed picture, but the prompt/gen settings/model don't match what the image looks like/the poster said they used!
Happens when the catboxer uses inpaint or img2img. The parameters would only show what was used most recently, and not what was used for the original generation.

> I have a picture, but the parameters are empty!
The catboxer somehow shared an image that has no metadata, or you downloaded it from a site that strips metadata. Or you are using a site that doesn't read metadata properly; use PNG Info if you can.

> Don't links get you banned?
Not from catbox.moe.
(6.4 MB, 1920x1080, upscaling.mp4)
See attached for a comparison of common upscalers. Try pausing and scrubbing through the video.
Which upscaler / method do you think did the best job?

It's really hard to say and does come down to personal preference, but a few things can be noted.
Upscaling via Hi-res fix during the initial generation presents an overall more defined subject than img2img sd upscale, up UNTIL r-esrgan and anything developed after that.

At that point, it really is a toss-up between Hi-res fix during the initial generation or img2img SD upscale afterwards with r-esrgan+anime6b or otherwise. Many more upscalers are being developed, all with their own pros and cons, just like the built-in ones.

There are so many minor changes between the eyes, clothes, hands, sweat, and belly that it could take multiple runs with multiple tweaks on multiple upscalers to find the one that best represents the idea of the original lowres generation.

More information on going beyond 512x512:

https://rentry.org/sdupscale | https://rentry.org/hiresfixjan23
This is going to sound idiotic, but can’t somebody just create an AI code for this process?
>>7
>>3364

how is upscaling different from img2img? or is upscaling based on img2img but is like a very narrow specific usage of img2img?

TLDR: If I'm never using txt2img and I'm starting with an img I want the ai to be inspired by, how does that effect me wanting to use 512x512 and upscaling after I end up with an image I like?
>>3984
'Upscaling' in this context just means taking what would have been a standard 512x512 generation and instead scaling it to 1.75x (896x896) or even 2x (1024x1024).

>TLDR: If I'm never using txt2img and I'm starting with an img I want the ai to be inspired by, how does that effect me wanting to use 512x512 and upscaling after I end up with an image I like?

Hi-res fix during the initial generation, or using the SD upscale script in img2img later, are both upscaling. In your case, there would be no "effect": you take a 512x512 image, run it through img2img until you get a 512x512 image you like, take that image and seed, then re-run with the SD upscale script in order to achieve up-scaling.

The only difference may be that, while both methods will differ from the original smaller 512x512 gen, the img2img SD upscale may be more different/varied from the original gen than a Hi-res fix during initial upscale.
Fat anime girl
Fat anime girl
>>4218
Shut up, braindead nigger
I have a question: is there any other way to use Stable Diffusion and the Bigger Girls models without needing a GPU? I tried downloading it, but it never works; it always just freezes and doesn't do anything when I try downloading it.
>>1685
How much do controlnets really do for genning?
I know they're supposed to really help with anatomy (and especially fingers), but considering the absurd anatomy of a lot of BBW models, do these things do more harm than good? Does anybody here use them?
An updated guide would be nice, though I suppose the onus is on us anons to create one and put it here. Lots has changed since LtBarclay made the original barely a month and a half ago.
>>3958
>>4518
>>4685
You'd probably have better luck putting questions in the questions thread. The people who could give you answers probably don't need to visit the guide thread and won't see these.
Has anyone found a consistent way (lora or prompt) to generate unbuttoned jeans?
(220 KB, 262x856, buffpup ref.png)
could i get a chunky Buffpup, PoV behind, with her looking back annoyed because she's getting her ass grabbed?
Is there any tool for easily changing how fat someone is using generative AI? I don’t mind paying a couple of dollars or whatever, but I won’t pay for photoshop. Ideally free looooool
>>8522
Having loads of fun playing with the widget embedded in StufferDB but would love to understand the limitations of that tool/what the full suite of things can achieve. Obv the StufferDB widget is great when you get inspired with prompts but can see that generating superfats is luck more than skill
Requesting Erika Mishima plump and round. Don't know if there's a lora tho.
fat womans bigger
Naked extremely big boobs and ass
chubby huge belly fat vampire anime girl humongous butt big tits
fat, cow, feedee
I want a really big ssbbw girl
(189 KB, 325x588, 5fc1cf2e-3d10-48e0-9c65-8513e10227d5.png) (187 KB, 325x588, image.png)
a woman is very overweight with orange hair in a ripping pink shirt, not wearing pants, 1girl, breasts, solo, navel, torn clothes, large breasts, belly, blue eyes
How does using multiple different lora in the same prompt work? syntax wise?

could someone give me an example?
what's the point of models based on "sd 1.5 inpainting"

versus sd 1.5 itself?

I see on civitai most popular checkpoints also have an inpainting version, but it seems like a pain to download both and swap between the two, so what's the difference? I've done inpainting before just fine without an inpainting checkpoint?

can anyone explain?
is there a logic to how ((keyword)) compares to (keyword:1.5) like how many ((())) equals what number?
(3.3 MB, 1024x1536, image.png)
okay, so SDXL1.0 I cannot get it to work, this is how all my images end up.

BUT they look normal at 50%, then instead of becoming crisper they distort their colors, which makes me think my vae is to blame since I guess the vae is meant for sd1.5? but I've never heard of this issue before, nor would I know how to resolve it, anyone ever get sdxl1.0 checkpoints to work?
(18 KB, 1077x163, Screenshot 2023-10-21 045944.png)
I figured it out, this didn't matter, because I was setting my vae in the commandline prompt .bat file
Must-have plugin:

https://github.com/Bing-su/adetailer

Installs as an extension in the AUTOMATIC1111 webui.

Enable it and, by default, it will auto-detect faces and then auto-inpaint them; no longer will I have to look at ghastly faces before clicking a bunch to inpaint them.
can anyone please recommend a prompt for inpainting 6 toes down to 5 toes reliably?
(2.1 MB, 1032x1262, WideWorkshop1.png) (2.6 MB, 1152x1481, HeavyDance.png) (3.4 MB, 1560x1164, HeavyTile1.png)
So, I wrote a guide on my personal process for Stable Diffusion. I'm not claiming my way is the right way, or that you'll instantly get good-looking results. But I think some folks may find it handy, and the more people learn not to overprompt their images (or learn that there is no perfect prompt), the better.

https://www.deviantart.com/theguywhodidathing/art/How-did-you-DO-that-989204336
>>16922
Can you do more with outfits like the red one there? Very sexy
anyone know how to run my own ai chatbot on 12gb of vram?

supposedly the top tier low cost option is running Mixtral 8x7B which only requires like 24-48gb of vram or something to perform similarly to gpt-3

but uh, maybe the brief google search i did is less capable of helping me than this thread!

Anyone know how to run a decent chatbot ai locally on my 4070ti? automatic1111webgui made stablediffusion so easy i'm feeling spoiled and I want it all

also sick of all the best online chatbot websites censoring nsfw stuff OR even just censoring unhealthy fat themed content as if that deserved censorship :(
>>19667

so I found 100000s of choices here https://huggingface.co/TheBloke and I can get the low end ones to work, but the question still remains what is the best usage of my paltry 12gb vram? (i need a model that is likely under 10gb to fully load in vram alongside windows and my browser for webui)
>>19705

kunoichi 7b and quants of solar10.7b seem like winners to me so far :)

I ran quant'd mixtral in my ram on my cpu and it was painfully slow so i nixed that idea
(94 KB, 827x1104, 1st comparison preview.jpg) (409 KB, 599x798, 2nd comparison preview.png)
Made two spreadsheets comparing all samplers, schedulers and upscalers available in current stable diffusion webui.

Links for spreadsheets are in descriptions:
Samplers and schedulers (2 pages): https://www.deviantart.com/n0tavirus/art/Samplers-and-Schedulers-Comparison-Spreadsheet-1049839193
Upscalers: https://www.deviantart.com/n0tavirus/art/Upscaler-Comparison-Spreadsheet-1042455296

Back to top