/bbwdraw/

(290 KB, 512x512, index11.png)
Alright my fellow degenerates, the NovelAI model has been leaked. Let's get you set up using it.

GETTING STABLE DIFFUSION RUNNING

You need a good GPU. At least 4GB of VRAM. Probably a 1080 or newer.

Just follow this, but read the rest of this post first:
https://rentry.org/voldy#-voldy-retard-guide-

You don't need to install git if you download the project directly from the GitHub page, but then you won't be able to easily update your local copy when the codebase gets updated (and it's getting updated often).
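If you do install git, grabbing the webui and keeping it current is just this (assuming voldy's guide still points at AUTOMATIC1111's repo):

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui

Then run "git pull" inside that folder whenever you want the latest version.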

You won't need to download the 1.4 AI model either, unless you want to play around with it.
---

NOVEL AI MODEL

The guide is here, but read the rest of this post first.
https://rentry.org/sdg_FAQ

The torrent:
magnet:?xt=urn:btih:5bde442da86265b670a3e5ea3163afad2c6f8ecc&dn=novelaileak&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce

You'll only need the following files:
novelaileak\stableckpt\animefull-final-pruned\model.ckpt
novelaileak\stableckpt\animevae.pt

Optionally pull any of the files in this folder that look interesting to you:
novelaileak\stableckpt\modules\modules\

You don't need xformers.
Now go ahead and follow the guide.

--Default Novel AI settings--
CFG: 11
negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry

--TRAINING NEW CONCEPTS--
You'll notice it's hard to get output that strays too far from "normal". If you can't get it to do something, there probably weren't enough images in the training set (if any) that pertain to the thing you're trying to do. You can train your own embeddings with textual inversion. This takes many hours even on a new video card, so I suggest doing it overnight.
The guide at https://rentry.org/textard is out of date (as of this writing).

1. Figure out a single concept you want to teach the model. Keep it simple, stupid. Examples: "force feeding", [insert character name here], etc...
2. Run the stable diffusion web ui. In the settings, switch to the model you want the embedding to work with. Results are far better if you use the embedding with the same model it was trained on.
3. Select a set of 5 images that match your concept. Try to find square images (or crop them square). Place them by themselves in a folder. Name each image as if you were tagging it with what's in it. Ex: "woman brown red hair laying on bed eating pizza.png"
4. Go to the "Textual inversion" tab in your stable diffusion web ui.
5. Enter the source directory path in the prompt. Check "Create flipped copies". Enter your desired output directory in the prompt, then click "Preprocess".
6. Enter a name in the "Name" prompt. This is the tag you want the embedding to be registered under. This is what you'll need to put into the text2img prompt to get the output you want. Repeat the tag in the initialization text. Set number of vectors per token to 8. Click Create.
7. Choose the embedding you just created from the drop down next to the "Embedding" prompt.
8. Set your "Dataset directory" to the destination directory for your preprocessed images.
9. Find the prompt template directory shown in "Prompt template file". Make a copy of "subject_filewords.txt" and give it a sensible name. Edit the prompts to fit your concept, remove any that don't, and add a few for "drawing" (see the example template after this list).
10. Put the path to the file you just edited in the "Prompt template file".
11. Change Max steps to 20000, especially for your first couple while you figure out what you're doing.
12. Click the train button and let it run (this is the part that takes hours).
13. Check the final result in the textual_inversion/[dateyoustartedthetraining] folder.
14. Try your embedding in a txt2img prompt.
15. Consider sharing your embedding if it worked well.
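Example of what your edited template might look like. This is just a sketch, not the stock file contents; during training, [name] gets swapped for your embedding's name and [filewords] for the tags in each image's filename:

a photo of a [name], [filewords]
a drawing of a [name], [filewords]
an illustration of a [name], [filewords]
a rendering of a [name], [filewords]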
>>117463 (OP)
Question is: how does NovelAI compare to waifu diffusion?
>>117470

Significantly better for generating characters. It doesn't do too much outside of that, though.
>>117463 (OP)
Why was this posted as its own thread instead of being in the AI thread that already exists? Doesn't make any sense.
Question: how do you get two characters in one image eating food?
>>117463 (OP)
Impressive. I've seen another ai diffusion thread on here. I've been seeing a pattern of a certain body type (an exaggerated hourglass shape, gigantic tits and ass, and a small to medium sized pot belly). Is there a way to change the proportions of the character in some way? Also, I find that the ai can't deal with eating food well. When the image has a girl eating, the mouth and chin area comes out a little bad.
>>117477
Because this thread is a guide to help people set it up and open it up to the community more easily, and the other thread was for sharing art that was generated? You really want a guide to get buried by tons of art in a thread? Nobody's gonna see a text guide amidst tons of art. lol

>>117463 (OP)

I have no idea what I'm doing so I appreciate the handy link and guide, thanks anon.
>>117596
>You really want a guide to get buried by tons of art in a thread? Nobody's gonna see a text guide amidst tons of art. lol
That's your problem. The whole point of threads is to have art posted on them or to discuss artists within the community. This thread doesn't seek to do either of those, so it's pointless.
(302 KB, 512x512, index.png)
>>117537

I think the Novel AI model was built to do mostly single-character portraits; however, I got the attached image with the following--

Positive Prompt:
Photorealistic, cinematic lighting, two women sharing food, T-shirt, cleavage, blonde hair, hair over one eye, twintails, red eyes, open mouth, happy

Negative Prompt:
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry

Steps: 28, Sampler: Euler a, CFG scale: 11, Seed: 4291519943, Size: 512x512
(318 KB, 512x512, index.png)
>>117578

It's a limitation of the model. These models need to be trained with a dataset of images. If the dataset doesn't contain images of a certain thing (or if the images weren't tagged with it), then it won't be able to understand what you want.

I suspect the dataset used for Novel AI's training had plenty of anime girls with big breasts, wide hips, and small potbellies at the largest.

Unfortunately, we're going to need to train a model with a dataset of fat anime characters to get it to output fat anime characters.

That's why I included the section about training new concepts. You can supply your own images and try to teach it to output fatties.
------
Attached image parameters (no embeddings used):

Photorealistic, soft lighting, two hungry fat women eating burgers, sweaters, red-brown hair, hair over one eye, twintails, red eyes, open mouth, happy
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: Euler a, CFG scale: 11, Seed: 3220926519, Size: 512x512
Welp, while trying to teach the AI about a certain character, it keeps popping out the same error message:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\STABLEDIFFUSION\\webui\\textual_inversion_templates/AN-94'.

Does anybody have an already-built template for anime characters, or how can I fix this?
>>117647
Whoever made this AI sure had thick girls in mind, eh? Too bad they're too small for many people lmao
>>117463 (OP)
i apparently only have 2gb of vram. old computer, i'd replace it if i could. any solutions or alternatives, or am i boned?
(260 KB, 512x512, 00151-724881946-photorealistic, soft lighting, fat, hips, belly, ssbbw, fat woman bbw sitting seductively on bed naked.png) (289 KB, 512x512, 00154-724881949-photorealistic, soft lighting, fat, hips, belly, ssbbw, fat woman bbw sitting seductively on bed naked.png) (275 KB, 512x512, 00083-3789087896-photorealistic, soft lighting, happy, bbw huge woman eating fatter next to thin friend.png) (279 KB, 512x512, 00146-724881941-photorealistic, soft lighting, fat, hips, belly, ssbbw, fat woman bbw sitting seductively on bed naked.png)
Clearly some limitations with generating really big girls, but the overall results can be pretty good with reasonable prompts. I used many of the same negative triggers here and it cuts down a lot on the eldritch horrors this kind of thing can produce (especially when using photo images).
>>117463 (OP)
I've followed the guide, but it's giving me this when I run webui-user.bat:
>Couldn't launch python

>exit code: 9009

>stderr:
>Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.

What do I do?
Is Novel diffusion good for making drawings without anime-style faces? The art in here is really good but I prefer the more realistic or even celebrity-influenced appearances in the other SD thread.
>>117463 (OP)
Why is this its own thread? Why do we have 3 threads all pertaining to diffusion AIs?
>>117752
Pretty much all I've seen come out of novel is "cute anime girl #9420", so it kind of seems like it. I was able to pull some stylistic variation out of waifu diffusion, while novelAI seems extremely heavily geared towards anime.

Course I haven't put "realistic" or "painterly" prompts in, since I prefer the look of heavily stylized animation; but still.
Question: has anybody else here gotten the CUDA out of memory error when teaching the AI through textual inversion? Everything else seems to work fine, but I can't teach the AI due to that error.
>>117753
>>117601
>>117477
>>117751

Post a couple more times, samefriend, see if that helps. This is a thread for technical questions and help with the generator.
Complain here if it makes you feel better: >>117591 (Cross-thread)

>>117761
>>117752

Novel AI's model seems to be trained exclusively on anime stuff. The stable diffusion 1.4 model is more general; try using that. You can always create a mix of two models using the built-in tool.

>>117764
That's unfortunate. I've gotten this error myself. I think you'll run into it if you don't have a Ti card (12GB VRAM). It helps if using textual inversion is the first thing you do after launching. I'm looking into a solution.
>>117779
welp, I don't have a high-end graphics card, nor the money to buy one, so basically: :(

I guess we'll have to wait for a version that uses fewer resources. There's already a version of stable diffusion that does that, but it's not compatible with the WebUI, I believe. Correct me if I'm wrong.
>>117463 (OP)
>>117745
I'm sorry for clogging up the board with my installation problems. But I promise this'll be the last one.

>OSError: Can't load the model for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.

How do I fix this?
Not trying it, but can they legally do anything about this besides going after the person who leaked it?
I'm having a problem with textual inversion.
Whenever I try to run it, I always get this error:
"RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 4.00 GiB total capacity; 2.99 GiB already allocated; 0 bytes free; 3.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
I've tried multiple things but they haven't worked. Any ideas on how to fix it?
>>117790
They technically can. But companies won't go through the legal fees just to sue everyone who got the leak for what would be little to no compensation. All they would probably do is sue the person who leaked it and no one else.
>>117818

I don't think you have the VRAM for training, unfortunately. You can tweak things to generate at lower VRAM, but training just takes more memory and power.
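For plain generation on low VRAM there are launch flags worth trying in webui-user.bat. A sketch (the --medvram/--lowvram flags trade speed for memory, and the alloc line is just what that error message itself suggests; the 128 value is a guess):

set COMMANDLINE_ARGS=--medvram
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

Don't expect either to make training fit in 4GB, though.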
>>117779
I'm humbled that you'd attribute such efforts to me alone. Unfortunately, I'm not the only person you're bothering.
So why the fuck is this not in the already existing stable diffusion thread that even includes tutorials and discussions?
>>117463 (OP)
>>117764

Just realized I made a typo in the OP.

Set the max training steps to 2000, not 20000.

Hopefully that helps anybody who's had the training get stuck. If you still aren't getting what you want out of the embedding, increase the number of steps after that.
So is stable or novel the one folks seem to be using the most?
(344 KB, 512x512, Example 1 Modified(7)_0007.png) (488 KB, 576x768, Random time(56)_0002.png) (291 KB, 512x512, Без названия.png)
Also, a thing for anons without a powerful enough machine or the money to pay for NovelAI's subscription.
There's this thing called google colab, which basically gives you a free GPU for part of the day, with a cooldown based on how much you've used it.
And there are 2 free-to-use "notebooks" with code to make AI-generated images.
One is "Stable Diffusion notebook by @pharmapsychotic". For it you'll have to download a model yourself from HuggingFace and put it on your google drive. It also saves generated pictures and configs to your drive. There's a guide for it on youtube; I believe you can find it yourself.
The other is "StableDiffusionUI (adapted to NovelAILeaks)". Basically NovelAI for free, just not very stable. It doesn't save your things anywhere by default and is updated regularly. If you run into trouble with that one, just go to settings and press the big orange reset button, or turn a cell off and on again. If you run out of time for the day, just use or make a different google account.
Here's also a couple of pics I got from these.
I still don't know how to make this work. Where's the .exe?
what is the actual correct format for entering tags for characters with a series tag? like, i'm trying to do, say, Saria (Arknights).

Do I enter it with the round brackets? I thought round brackets are supposed to emphasize a specific prompt. I can't seem to get it to recognize the character itself; it just seems to grab the series.
Thanks for your guide! Much appreciated!

Do you know if there's a list of available embeddings for download?
Do it again
>>119442
This is actually pretty cool; it's like a mini story.
>>119442
wow this is awesome, i'd love to see more stuff just like this
this might not be that useful, but of the tags you can use to make characters fat, "obese" seems to yield the best results.
fat and chubby are very inconsistent, and plump is good for thighs, but will only create pot bellies at best
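for illustration, a prompt built around that tag might look like this (made-up example, tags borrowed from elsewhere in the thread):

masterpiece, best quality, 1girl, obese, huge belly, huge thighs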
>>119442
I need to know how you did this.
(220 KB, 571x568, 102013760_p50.jpg) (195 KB, 512x576, waitress_by_fataicreations_dffqdap.png) (474 KB, 640x640, they_just_keep_giving_her_food___by_crosby345_dffsme9.png) (969 KB, 768x1152, the_biggest_chef_in_town_by_crosby345_dffrdu4.png)
>>117732
>Clearly some limitations with generating really big girls
The only "limitations" are your brain's capacity for creativeness and knowledge of the correct prompts

There are people out there experimenting with some absolutely fun and amazing stuff
Man, I'm apparently in the minority that uses windows with AMD hardware for both cpu and gpu, so none of the local setups really work.
I did get that one AMD-for-windows guide going, but working via command prompt without the tools would be a bit rough.

The colab setup works, but I'll need to figure out training, as I don't think I can simply have it reference the waifu ckpts.
>>119530
Is there a community, then, for sharing tips and tricks?
Cause yes, I'm currently aware that I'm bad at figuring it out, though I'm getting some ok stuff.

>>119551
You can run on AMD; I was using the OC high-RAM model of a 580 with a webui. It was... rough and unstable, but it was usable.
I did realize I'd have an easier time with Nvidia, and there are plenty of used ones for cheap now because of crashing crypto.
>>119530

There are plenty of things simply not in the model. As soon as you get into niche territory, it clearly has no idea what to do. You can refine the prompt all you want, but it doesn't know what some of these things look like.

You can train it, yeah, but the base model isn't going to output what you want all the time.
>>119595
I was able to get a GUI going but it never output anything; but, again, that was probably compatibility issues.
It may be worth looking for a used Nvidia card with decent VRAM on ebay so I can keep my normal card for gaming, but I'll need to think about it.
(1.6 MB, 2048x512, 767919946-scale8.00-ddim-animefull-final.png)
Tried to do a sequence running the same seed and using (()) {{}} weights. It's very rough around the edges and takes more tries the more you stray from the original seed's prompt.
(2.5 MB, 704x704, FARMER_IMG2IMG.webm)
>>119442
>>119477
Can confirm- problem detected.

>>119526
Stable Diffusion, model: NovelAI. It's just as it looks. Draw a vector, then run some IMG2IMG steps with low variance/noise to develop the basic look. Add/remove tags after every satisfactory output until you get bored. Import the outputs as frames into the video editor of your choice and enjoy your AI adventure.
I have a 1650 and I tried to set up the web ui, but all I get are black squares.
>>119530
do you not consider having extra tits and arms "limitations"? lmao
>>119530
Anon these are genuinely awful.
>>119734 (Cross-thread)
Saw these results; hoping for some help here.

How do you make these progressions or modifications?
I don't quite understand: after you get your image from txt2img, do you send it to img2img, or what?

Do you change the prompt words, or what? I get different results and it's not always the same base drawing like in that example.

I appreciate the help
(9 KB, 420x299, gtx1600 workaround.png)
>>119638
Known issue with 1600-series cards. Use the stable diffusion WebUI, then edit webui-user.bat as in pic related.
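(In case the pic doesn't load: the commonly posted workaround is forcing full precision in webui-user.bat, i.e. a line roughly like

set COMMANDLINE_ARGS=--precision full --no-half

at the cost of some speed and VRAM.)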

>>119654
If you can't crop the arm out of that waitress pic then maybe you should go back to buying Axel commissions.

>>119669
Not as awful as your mom but that didn't stop me from banging her.
What kind of size descriptors do people use? I find that huge and massive work for things like breasts and belly, but do very little for thighs and asses. Has anyone found anything bigger, size-wise, and what works best for the lower body?
>>119631

The voldy guide mentions
Use ((( ))) around keywords to increase their strength and [[[ ]]] to decrease their strength

What's with everyone using {{{}}} - is the curly brace some kind of super emphasis?
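To show what I do understand: with voldy's syntax, something like this should weight the hips up and the background down (arbitrary example):

masterpiece, best quality, (((wide hips))), huge thighs, [[[simple background]]]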
(365 KB, 512x512, 00001-2844746339-wide hips, huge thighs.png) (310 KB, 512x512, 00002-726665020-(((wide hips))), huge thighs, looking down.png) (531 KB, 512x768, 00006-2768361733-masterpiece, best quality, (((wide hips))), huge thighs, full body, blush, bodysuit, skindentation.png) (325 KB, 512x512, 00004-749051608-masterpiece, best quality, (((wide hips))), huge thighs, full body, blush.png)
i had to settle for using CPU processing over GPU processing bc i don't have an nvidia gpu and i'm not smart enough to figure out the amd instructions. i followed the cputard guide in this thread: https://boards.4channel.org/g/thread/89268610/sdg-stable-diffusion-general

this is what i was able to make without any training. the bodies are good but the faces and arms are kinda fucked up
>>120177
I tried to do that as well but I also had an AMD CPU and it just flat out wouldn't generate images.
>>120183
I'll give that one a go then tomorrow.
Should hopefully have better luck.
(264 KB, 512x512, download (81).png)
If you're having issues with an AMD GPU, have you tried this one? It's a pretty helpful, straightforward guide.

Also, I hate how simple it was to get huge tits out of this thing, but I can't figure out the secrets to get a good fat still.
>>120211
Oh, try downloading the model from the NovelAI torrent and then replacing the model in your virtual environment with it.

This part of the guide
>Once that's done, we can run the utility script.

>python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
>--model_path is the path on Hugging Face to go and find the model. --output_path is the path on your local filesystem to place the now-Onnx'ed model into.

Just replace the model path (the CompVis thing) with the full path to the novel ai model, in quotes if you have spaces.
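So the edited command would end up looking something like this (the path here is made up; point it at wherever you extracted the torrent):

python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="C:\novelaileak\stableckpt\animefull-final-pruned" --output_path="./novelai_onnx"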
(4.7 MB, 1920x2496, BSRGAN.png) (3.1 MB, 1600x2080, bssrgan 4x.png) (538 KB, 576x768, 00021-4192427592-masterpiece, best quality, ((((wide hips)))), (((huge thighs))), full body, leotard, skindentation, steaming body, brown hair, c.png) (588 KB, 576x768, 00015-451525960-masterpiece, b.png) (326 KB, 512x512, 00014-1209088537-masterpiece, best quality, ((((wide hips)))), (((huge thighs))), full body, leotard, skindentation, steaming body, brown hair, c.png)
>>120221
i ended up just staying on the cpu version because then i'd be able to use the webui that comes with really nice extra tools. thanks very much for the help though

time to work on the faces
Also, I'm following up here: for anyone who has dealt with training, what learning rate do you use?
(236 KB, 512x512, 24453.png) (260 KB, 512x512, 2134124.png) (192 KB, 512x512, 234123.png)
>>120929

tried some training embeds, i use 0.007
0.005 doesn't use my gpu enough and 0.01 makes my training fail, but i guess it varies from card to card

have you been able to get any decent embeds done? i tried making one for big asses, and it gets nice shapes, but the images always come out super blurry
>>121394
Not yet, I'm trying to get semi-reliable faces.
(418 KB, 512x576, 00001-3191731799-masterpiece, best quality, (face focus), steam, facing viewer, steaming body, brown hair, __, cat ears, red eyes, in a dark r.png) (419 KB, 512x576, 00002-2805980166-masterpiece, .png)
>>121407

one thing you could try is going to img2img and using inpainting to fix the face. you select the face, make sure 'restore faces' is checked, and only use tags for the face you want ("masterpiece, best quality, :o, red eyes, sweat, looking at viewer, tan skin"), and it does a pretty good job.

although it would be a lot better to not have to do that. consider sharing the embed if you get it done, it would be nice to have
>>121432
I think I'm getting some decent ones now, I'm just fine-tuning a bit.
I was running off a few modules that weren't great, I'm giving the Waifu version a go now, and I was also relying a bit too heavily on a character's name, I feel.
>>117779
Do you not understand how post IDs work?
So I've seen some people here say to start with a furry model or something; does anyone have a bit more detail on that? I'm not sure if they meant trying it with a hypernetwork around furries or a completely different Stable Diffusion model.
>>122108
Okay, it's my turn to be the retard, it seems. It was the SD models and not the hypernetworks. Let's see if this helps out in any way.
Has anyone tested how big of a belly the ai can generate? Genuinely curious