Alright my fellow degenerates, the NovelAI model has been leaked. Let's get you set up using it.
GETTING STABLE DIFFUSION RUNNING
You need a good GPU with at least 4GB of VRAM; probably a GTX 1080 or newer.
Just follow this, but read the rest of this post first:
https://rentry.org/voldy#-voldy-retard-guide-
You don't need to install git if you download the project directly from its git page, but then you won't be able to easily update your local copy when the codebase changes (and it changes often).
You don't need to download the 1.4 AI model either, unless you want to play around with it.
---
NOVEL AI MODEL
The guide is here, but read the rest of this post first:
https://rentry.org/sdg_FAQ
The torrent:
magnet:?xt=urn:btih:5bde442da86265b670a3e5ea3163afad2c6f8ecc&dn=novelaileak&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce
You'll only need the following files:
novelaileak\stableckpt\animefull-final-pruned\model.ckpt
novelaileak\stableckpt\animevae.pt
Optionally pull any of the files in this folder that look interesting to you:
novelaileak\stableckpt\modules\modules\
You don't need xformers.
Now go ahead and follow the guide.
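If you're not sure where those two files go, here's a rough sketch, assuming the AUTOMATIC1111 webui folder layout (models/Stable-diffusion) and the usual trick of renaming the VAE to match the checkpoint so the webui loads it automatically. The destination filenames here are made up; use whatever name you like, just keep the .ckpt and .vae.pt names matching:

```python
# Sketch: copy the two leaked files into the webui's model folder.
# The folder layout and renamed filenames are assumptions; check your own setup.
import os
import shutil

def install_nai(leak_dir: str, webui_dir: str) -> str:
    """Copy model.ckpt and animevae.pt into models/Stable-diffusion.

    The VAE is renamed to <checkpoint name>.vae.pt so the webui picks it
    up automatically alongside the checkpoint (a common convention;
    verify against your webui version).
    """
    dst = os.path.join(webui_dir, "models", "Stable-diffusion")
    os.makedirs(dst, exist_ok=True)
    ckpt_dst = os.path.join(dst, "nai-animefull-final-pruned.ckpt")
    shutil.copy2(
        os.path.join(leak_dir, "stableckpt", "animefull-final-pruned", "model.ckpt"),
        ckpt_dst,
    )
    shutil.copy2(
        os.path.join(leak_dir, "stableckpt", "animevae.pt"),
        os.path.join(dst, "nai-animefull-final-pruned.vae.pt"),
    )
    return ckpt_dst
```

Call it as `install_nai("D:/novelaileak", "D:/stable-diffusion-webui")` (or wherever yours live), then select the new checkpoint in the webui settings.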
--Default Novel AI settings--
CFG: 11
negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
--TRAINING NEW CONCEPTS--
You'll notice it's hard to get output that's too far away from "normal". If you can't get it to do something, there probably weren't enough images in the training set (if any) that pertain to the thing you're trying to do. You can train your own embeddings with textual inversion. This takes many hours on a new video card, so I suggest doing it overnight.
The guide at https://rentry.org/textard is out of date (as of this writing).
1. Figure out a single concept you want to teach the model. Keep it simple, stupid. Examples: "force feeding", [insert character name here], etc...
2. Run the Stable Diffusion web UI. In the settings, switch to the model you want the embedding to work with. Results are far better when you use an embedding with the same model it was trained on.
3. Select a set of 5 images that match your concept. Try to find square images (or crop to make square). Place them by themselves in a folder. Name each image like you were tagging the image with what is in it. Ex: "woman brown red hair laying on bed eating pizza.png"
4. Go to the "Textual inversion" tab in your stable diffusion web ui.
5. Enter the source directory path in the prompt. Click "Create flipped copies". Enter your desired output directory in the prompt. Click "Preprocess".
6. Enter a name in the "Name" prompt. This is the tag the embedding will be registered under, and what you'll put into the txt2img prompt to get the output you want. Repeat the tag in the initialization text. Set the number of vectors per token to 8. Click Create.
7. Choose the embedding you just created from the drop down next to the "Embedding" prompt.
8. Set your "Dataset directory" to the destination directory for your preprocessed images.
9. Find the prompt template directory shown in "Prompt template file". Make a copy of "subject_filewords.txt". Name it something that makes sense. Edit the prompts to make sense. Remove any that don't make sense. Add a few for "drawing".
10. Put the path to the file you just edited in the "Prompt template file".
11. Lower Max steps to 20000, especially for your first couple of runs while you figure out what you're doing.
12. Click "Train Embedding" and let it run (this is the part that takes hours).
13. Check the final result in the textual_inversion/[dateyoustartedthetraining] folder.
14. Try your embedding in a txt2img prompt.
15. Consider sharing your embedding if it worked well.
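For step 9, a minimal sketch of deriving your own template from the stock one. The [name] and [filewords] placeholders are what the stock templates actually use; the words to drop and the lines added at the end are just examples, tweak them for your concept:

```python
# Sketch of step 9: copy subject_filewords.txt, drop prompts that don't fit,
# and append a couple of "drawing" prompts. Drop words are illustrative.
def make_template(src_path, dst_path, drop_words=("photo", "cropped")):
    """Read the stock prompt template, keep only lines that fit drawings,
    append drawing-flavoured prompts, and write the result to dst_path."""
    with open(src_path, encoding="utf-8") as f:
        lines = [ln.rstrip("\n") for ln in f if ln.strip()]
    kept = [ln for ln in lines if not any(w in ln for w in drop_words)]
    kept += [
        "a drawing of [name], [filewords]",
        "a detailed drawing of [name], [filewords]",
    ]
    with open(dst_path, "w", encoding="utf-8") as f:
        f.write("\n".join(kept) + "\n")
    return kept
```

Point "Prompt template file" at the file this writes and you're done with steps 9 and 10 in one go.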