>>141898
EXTRA CREDIT: Merge a checkpoint to keep most of the parameters / general style of the first but add variety and realism from the second.
Bigger Girls v2 (referred to as BGV2) is practically essential for our workflow, since getting ssbbw and usbbw sizes is easy. However, it suffers from same-face syndrome, includes a fair amount of low-quality, low-effort art in its training data, and lacks a proper assortment of backgrounds and locations.
Being able to merge checkpoints essentially means you can create custom recipes and fine-tune what the model generates.
When you see people saying they are using Bigger Girls with 30% this or that, they mean they have merged checkpoints using the checkpoint merger tab. After selecting two checkpoints, the slider indicates how much of the second checkpoint (B) you want to be represented in the merge compared to the first one (A). So if you leave the slider at the default .3, you are creating a 70% A / 30% B mix.
Again, if you select Bigger Girls V2 as checkpoint A and Abyss Orange as checkpoint B, leave everything at default and merge, you would have a 70% BGV2 / 30% AO mix. The slider runs from 0 to 1, with every increment representing a percentage of B: .15 would be 15%, .5 would be 50%, etc.
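The checkpoint merger's default weighted-sum mode boils down to a per-parameter linear blend between the two models. A toy Python sketch of that idea (plain floats stand in for the real tensors; names are illustrative, not the webui's actual code):

```python
def weighted_sum_merge(state_a, state_b, multiplier):
    # Per-parameter blend: result = A * (1 - M) + B * M, where M is
    # the slider value (the share of checkpoint B). Plain floats
    # stand in for the real tensors here.
    return {
        key: state_a[key] * (1 - multiplier) + state_b[key] * multiplier
        for key in state_a
    }

# Toy "checkpoints" with one shared parameter each
bgv2 = {"layer.weight": 1.0}
abyss_orange = {"layer.weight": 0.0}

# Default slider position .3 -> 70% A / 30% B
merged = weighted_sum_merge(bgv2, abyss_orange, 0.3)
print(merged["layer.weight"])  # 0.7
```

Every parameter in the merged checkpoint is 70% of A's value plus 30% of B's, which is exactly what "70/30 mix" means.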
You can then merge that merged checkpoint with another at a lower percentage to add even further variety, but if you continue mixing checkpoints that are dissimilar enough you do start to get an 'overcooked' recipe where generations are unfocused, blurry, don't make sense, etc.
There are three schools of thought here.
"start fat, stay fat"
That is, always use a model that more or less defaults to the body size you want without having to strangle it with a prompt or abuse GIMP's warp tool and img2img. Think mostly BGV2 but with a little sprinkling of other checkpoints.
What if we take BGV2 and dilute it just 30% with, say, a model like Abyss Orange NSFW?
(https://civitai.com/api/download/models/5036?type=Model&format=SafeTensor)
Well, the merged checkpoint would still be primarily fat-focused, but you've given it way more depth to pull from when generating. Better faces, better backgrounds, better colors.
It's like the difference between giving a cook an entire spice rack or just salt and pepper.
Well, then you could go even further, right? You take the 70/30 mix and then dilute it 15% further with a model like, say, Chubby Girls Golden
(https://civitai.com/models/4163).
Then any overall/average/median size-reduction hit you took from mixing in Abyss Orange (which was not as fat-focused) is mitigated and fat is reinforced. Not to mention, you're adding even more depth to pull from for generations.
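If each merge is a simple weighted sum, the effective share of each checkpoint in the final recipe is just the product of the multipliers along the way. A quick sanity check in Python for the two merges described above:

```python
# Share of each checkpoint after the two merges described above,
# assuming plain weighted sums: a second merge at multiplier .15
# scales everything already in the mix by (1 - .15).
mix = {"BGV2": 0.70, "AbyssOrange": 0.30}  # the 70/30 first merge
second_multiplier = 0.15                   # Chubby Girls Golden's share

final = {name: share * (1 - second_multiplier) for name, share in mix.items()}
final["ChubbyGirlsGolden"] = second_multiplier

for name, share in final.items():
    print(f"{name}: {share:.1%}")
# BGV2: 59.5%, AbyssOrange: 25.5%, ChubbyGirlsGolden: 15.0%
```

So even after two merges, BGV2 is still nearly 60% of the recipe, which is why the result stays fat-focused.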
You can see how there is a checkpoint and checkpoint recipe rabbit hole. If civitai.com is anything to go by, eventually, there will be a specific checkpoint for EVERYTHING, and the recipes will be endless.
"start chubby, get fat"
Say you are on a checkpoint recipe binge like I was, and eventually you dilute BGV2 so far that really, it's only 20% or so of the merged checkpoint.
You start finding that, while you really LOVE the aesthetic / colors / faces / etc, even when you prompt (massive huge belly) or (morbidly obese), your merged model spits out a barely overweight teenage diva.
Now in this instance, you could take your super lovely but just-never-fat-enough diva, run her through GIMP, use the warp tool (Tools-Transform Tools-Warp Tool) to expand her tiny potbelly just a tad, push out her tits and ass just a touch, then run her through the exact same model again in img2img.
With a combination of low (.1-.3) denoising strength (to make sure the generation stays closer to the original) and high (.55-.7) variation (to give the model a chance to make her bigger or wider in some way, plus additional prompts like super wide, thick thighs, etc.), doing this process over and over again CAN result in superfats that look exceedingly good.
The problem is that this method is cumbersome and slow, and generations generally start getting cooked after about 3-4 loopbacks.
"start simple, get creative"
Another strategy is to run a bunch (1000's) of generations on a base model like BGV2, which only requires simple text prompts, at low steps (20) and a low CFG scale (7-8), cherry-pick good generations, then switch to a merged checkpoint, up the CFG scale to 9 or 10, and run those cherry picks with a much more detailed and varied text prompt, higher variation setting, different samplers, textual inversions, LoRA models, etc.
Personally, this seems to be the best of both worlds. BGV2 will often get the general gist of what I want but with bad faces/simple backgrounds, which are then more or less corrected in a merged checkpoint via img2img.
This has the least manual GIMP work (still some), but there's a lot of sorting through trash.
Some interesting checkpoints you might consider for recipes:
https://civitai.com/models/6231/xpero-end1ess-model
https://huggingface.co/WarriorMama777/OrangeMixs
https://civitai.com/models/3748/chubby-girls-golden
https://civitai.com/models/3449/anylactation
https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.5.ckpt
https://civitai.com/models/3627/protogen-v22-anime-official-release
https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v

INPAINTING
Inpainting refers to masking an area of an image, then generating within the mask using a prompt that also takes into consideration a small area around the mask. Think a perfect generation with a horrendous face: mask the face, then craft a new text prompt about a beautiful anime face and boom, you've got a good face without changing the already-good body/background. You could also mask out objects and remove or replace them this way. Fix hands, fingers, blemishes. The possibilities with inpainting are huge.
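Conceptually, the final inpainted image is a composite: masked pixels come from the new generation, everything else from the original. A toy numpy sketch of just that compositing step (the real pipeline regenerates the masked region with the diffusion model; this only illustrates the idea):

```python
import numpy as np

# Only pixels inside the mask get replaced; everything else carries
# over from the original untouched. (The real pipeline regenerates
# the masked region with the diffusion model; this only shows the
# compositing idea.)
original = np.full((4, 4), 100.0)   # the already-good body/background
generated = np.full((4, 4), 200.0)  # the newly generated "face"

mask = np.zeros((4, 4))
mask[0:2, 0:2] = 1.0  # masked region, e.g. the bad face

result = mask * generated + (1.0 - mask) * original
print(result[0, 0], result[3, 3])  # 200.0 100.0
```

This is why inpainting can fix a face without disturbing a body you already like: outside the mask, nothing changes.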
img2img
In addition to a text prompt, you can use the img2img feature to supply an image prompt. This will tend to make the generations at least vaguely similar to the image prompt in some way, depending on CFG scale and variation settings. This can be especially helpful if you just can't seem to guide a model to a specific pose / scene / subject with a text prompt alone.
Denoising in the context of img2img is most helpfully explained by saying that the lower the denoising strength, the more like the image prompt the generations will be. The higher the denoising strength goes, the more generations will deviate from the image prompt in fun and unexpected ways while still maintaining some aspect of the original, especially in combination with the variation setting accessed from the 'extra' checkbox.
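One way to make the strength slider concrete: in common img2img implementations (the diffusers img2img pipeline works this way, and the webui behaves similarly), strength picks how far into the noise schedule to start, so it effectively sets how many denoising steps actually run on your image prompt. A small sketch under that assumption:

```python
def effective_steps(num_inference_steps, strength):
    # Start the sampler partway into the noise schedule; only this
    # many denoising steps actually run on the image prompt.
    return min(int(num_inference_steps * strength), num_inference_steps)

# At 20 steps: low strength barely reworks the image prompt, high
# strength re-does most of the denoising from heavy noise.
for s in (0.1, 0.3, 0.7, 1.0):
    print(s, "->", effective_steps(20, s), "steps")
# 0.1 -> 2, 0.3 -> 6, 0.7 -> 14, 1.0 -> 20
```

At strength .1 the model only gets a couple of steps to change anything, which is why the output stays so close to the input.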
Download at least one textual inversion package to webui's embeddings folder.
Textual inversion is somewhat new, but essentially, these are prompt injections that push generations towards a specific POV, scene, or pose without having to manually craft a specific prompt to do so.
People are sharing new textual inversions on civitai all the time.
https://civitai.com/models/4218/corneos-cowgirl-position-embedding-for-anime
https://civitai.com/models/4725/corneos-pov-bound-wrists-missionary-embedding
https://civitai.com/models/5811/corneos-spitroast-threesome-ti-embedding
https://civitai.com/models/6005/corneos-ball-gag-ti-embedding
https://civitai.com/models/4463/corneos-pov-oral-embedding
https://civitai.com/models/4475/corneos-pov-paizuri-embedding
https://civitai.com/models/5371/corneos-side-view-deepthroat-ti-embedding
https://civitai.com/models/4551/corneos-arm-grab-doggystyle-embedding
https://civitai.com/models/5202/corneos-covering-breasts-ti-embed-two-hands-version
https://civitai.com/models/5203/corneos-covering-breasts-ti-embed-one-arm-version
https://civitai.com/models/5241/corneos-covering-breasts-ti-embed-arms-crossed-version

Download at least one LoRA model to the webui's models/lora folder:
https://civitai.com/api/download/models/10069
LoRAs are small sets of training data that supplement checkpoints without requiring a merge, guide generations like textual inversions, and are scalable.
Basically textual inversion+. Fairly new feature. This example makes adding milking machines/cups/pumps/hoses to a scene much more reliable.
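Under the hood, applying a LoRA amounts to adding a scaled low-rank update on top of a frozen checkpoint weight, which is why its strength is adjustable per prompt. A minimal numpy sketch with toy shapes (names and sizes are illustrative, not the webui's actual code):

```python
import numpy as np

# The checkpoint weight stays frozen; the LoRA contributes a scaled
# low-rank update B @ A on top of it. Shapes here are toy-sized.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))     # checkpoint layer weight
rank = 2
A = rng.standard_normal((rank, 8))  # LoRA "down" matrix
B = rng.standard_normal((8, rank))  # LoRA "up" matrix

def apply_lora(W, A, B, scale):
    return W + scale * (B @ A)

# scale 0 leaves the checkpoint untouched; higher scale pushes
# generations harder toward what the LoRA was trained on
assert np.allclose(apply_lora(W, A, B, 0.0), W)
W_patched = apply_lora(W, A, B, 0.8)
```

Because the update is just added at load time, you can stack several LoRAs at different scales without ever running a merge.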
Download and install Resynthesizer (content-aware fill: Filters-Enhance-Heal Selection) for GIMP. Find your plugin folder (Edit-Preferences-Folders-Plug-ins) and extract everything from the folder in the zip there (but not the folder itself):
https://github.com/pixlsus/registry.gimp.org_static/raw/master/registry.gimp.org/files/Resynthesizer_v1.0-i686.zip
Have almost the perfect generation for img2img, but a certain defect keeps getting brought over? Lasso select and content-aware fill, boom: now instead of having to edit out the defect in all subsequent gens, it's gone from the get-go!
Or use it to touch up near-perfect generations that just need an extra arm, hand, person, or object removed.
Add the wildcards extension (Extensions tab - Available - wildcards) and start using __nameofwildcardfile__ (the file name with no extension, surrounded by double underscores) in your prompt to get effortless creativity.
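What the wildcards extension does is simple enough to sketch: each __name__ token gets swapped for a random line from the matching wildcard file. A toy version, with a dict standing in for the .txt files in the extension's wildcards folder (file names and entries here are made up):

```python
import random
import re

# A dict stands in for the .txt files in the extension's wildcards
# folder; keys are the file names (no extension).
wildcard_files = {
    "haircolor": ["blonde hair", "black hair", "pink hair"],
    "location": ["beach", "cafe", "bedroom"],
}

def expand_wildcards(prompt, files, rng):
    # Replace each __name__ token with a random line from that file
    return re.sub(r"__(\w+)__", lambda m: rng.choice(files[m.group(1)]), prompt)

rng = random.Random(42)
print(expand_wildcards("1girl, __haircolor__, __location__", wildcard_files, rng))
```

Every generation re-rolls the wildcards, so one prompt template fans out into endless variations.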
Check other people's generations for good prompts/keywords/settings with the PNG Info tab. You can also send it directly to img2img, with the prompt (if it was generated in Automatic1111)!
If an image wasn't generated in Automatic1111, you can still use img2img 'interrogate with clip' to get a general idea of the text prompt parameters.
Steal the manga master font if you want to make anime panels:
https://www.dropbox.com/s/71rdeje512z9wwh/MangaMaster%20BB%20-%20by%20Blambot.rar
That's about all I can think of ATM.