>>68570
The underwear layer is just a texture right now. It's underwear, so I figured it's fine if it isn't simulated since I'm not really planning on it going anywhere while the model is expanding, and I also planned on having cloth simulation layered over it, so Code Blue wasn't exactly built around it. I know it's not the best way to do it, but it works, and the texture is high enough resolution that it doesn't look terrible stretched, aside from the aliasing on the edges from the alpha, but I can probably find a way to fix that in After Effects.
As for the AO displacement idea, I remembered I actually used that idea (sorta) in my car animation. I didn't have enough RAM to fit a sufficiently high subdivision level even with adaptive subdivision, and the method I used wasn't exactly good, so it's VERY subtle, because any deeper and it would look terrible. But you can see the light catching on a part of the model, and where it would be most visible is kinda hidden in the glare.
I'm pretty sure I used the alpha of the underwear texture as my displacement map, but I'll have to check. AO and a proper node setup with a nice falloff would definitely improve it though, since the displacement in that animation was very sharp.
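The falloff I have in mind is basically a smoothstep remap of the AO value. Here's a quick plain-Python sketch of the math; in Blender it'd be a Map Range or a couple of Math nodes in the shader editor, and the depth/threshold numbers here are just placeholders:

```python
def smoothstep(edge0, edge1, x):
    # clamp to [0, 1], then the classic 3t^2 - 2t^3 ease
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def ao_to_displacement(ao, max_depth=0.01, start=0.2, end=0.8):
    # low AO = heavily occluded, so press the surface in by up to max_depth;
    # the smoothstep gives a soft falloff instead of the sharp step I had before
    occlusion = 1.0 - smoothstep(start, end, ao)
    return -max_depth * occlusion
```

So a fully occluded point gets the full depth, open areas get nothing, and everything in between eases smoothly instead of snapping.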
It was a fast solution though, and it worked, so I went through with it and managed to make that animation in about 2-3 days. If I'd spent more time on it, I could probably get it to look better, because honestly it's kinda ugly in some places; even just making a custom alpha map for it would help. But I managed to catch the wave of those images like I planned, and figured it was better to get it out sooner rather than dwell on it too much, so I could get back to my main projects. Plus it's animated, so it's unique in that regard too, since most if not all of the others were just stills, even if I'm not satisfied with the quality overall.
I should have used that method for Code Blue, especially since I've upgraded my hardware since then, but with how long it's been taking on top of how much it's changed, I completely spaced it, and adding it now would definitely require a re-render depending on where and when it's used.
I did set up the rendering for Code Blue with situations like this in mind though. I might mess around with it and gauge whether or not it's worth delaying it one more time, especially since you've reminded me that the AO node exists, and I really like the idea of using a stretched noise texture for the wrinkles; that's clever.
I'm curious how the AO node would interact with other parts of the scene though. It might actually end up improving quality overall, since you'd get the model to squish against other parts of the scene like the ground or objects, and your baked lightmap idea would be a very good method of masking out the unwanted parts of the AO shader. The feet of my model don't really interact with the floor aside from leaving smudges, so I might take a look at that.
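The stretched-noise wrinkles plus AO mask combine into something like this, sketched in plain Python rather than nodes. The noise function here is a cheap hash stand-in for Blender's Noise Texture, and the scale/amplitude values are made up for illustration:

```python
import math

def fake_noise(x, y):
    # deterministic hash noise standing in for Blender's Noise Texture
    h = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return h - math.floor(h)  # value in [0, 1)

def wrinkle_height(u, v, ao, stretch=10.0, amplitude=0.003):
    # scaling only v compresses the noise along that axis, which reads
    # as wrinkles stretched out along u
    n = fake_noise(u, v * stretch)
    contact = 1.0 - ao  # AO is low where geometry presses together
    return n * contact * amplitude
```

Multiplying by the contact mask keeps the wrinkles confined to where things actually press in, so open areas of the mesh stay clean.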
Another thing I've thought about trying is using dynamic paint to generate an image sequence for displacement instead, but you need to manually apply brushes to each object the model is going to interact with and re-bake every time you need a new image sequence, so there are pros and cons there. Vertex-based dynamic paint would probably work better, but I haven't messed with that yet.
I used dynamic paint and After Effects to achieve the blue spread in Code Blue, and I'm pretty confident the same workflow could be used for the squish, but procedural generation would be a much better solution and faster to iterate on, even if you lose fine control over some parts.
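Conceptually, vertex dynamic paint boils down to something like this toy sketch: each frame, vertices near a brush object gain "paint", and existing paint dissolves over time. The real thing is Blender's Dynamic Paint modifier with a proximity brush; the function and parameter names here are made up for illustration:

```python
def step_paint(weights, verts, brush_pos, radius=0.5, dissolve=0.9):
    # one simulation step: verts within the brush radius get fully wet,
    # everything else keeps its old paint decayed by the dissolve factor
    out = []
    for w, v in zip(weights, verts):
        dist = sum((a - b) ** 2 for a, b in zip(v, brush_pos)) ** 0.5
        wet = 1.0 if dist < radius else 0.0
        # keep the stronger of decayed old paint vs. fresh contact
        out.append(max(w * dissolve, wet))
    return out
```

Run once per frame and you get exactly the kind of fading contact weight you'd feed into a displacement or mix factor.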
Finishing a project and moving onto the next instead of dwelling on achieving maximum quality honestly works way better for me. I used to constantly move the goalposts on "best quality" throughout an animation, lengthening the production and making the scene more complex as tons of ideas got thrown into it, which increased the odds I'd miss something very subtle that tanks the entire animation and lengthens the production even more. I've had multiple animations I've just given up on or canned because of how difficult they became to work on, so I like the focus of pushing out an animation, moving onto the next, and learning from the last.
But discussing this actually does make me want to delay Code Blue just one more time to include these ideas; re-rendering the character layer would only take around 1-2 days at most, especially if I move from OptiX to OpenImageDenoise (which is what the compositor Denoise node uses; it's Intel's denoiser). There's also a small issue with how the normals of the character are read in the first 1300 frames, so there are some subtle artifacts from that, and squashing those would be nice too. I did manage to catch and fix it as I rendered the last 700 frames though. It's subtle enough that I could release it without most people noticing, re-render, and swap it out without anyone noticing, but since I'm already thinking about delaying yet again, I'd rather just fix it and release the better version.
Talking to someone who actually uses Blender is great though. I haven't run into a single artist trying to achieve the same goal who uses Blender enough to come up with solutions to problems like these, especially since you're more experienced with procedural shaders in Blender. The farthest I've gotten with procedural textures isn't even in Blender; I generally use Substance for that stuff. But I do know my way around the shader editor enough that it works well for me.
This probably isn't the best place to talk about these things though, so you can DM me on Twitter or vice-versa and we can discuss this elsewhere if you'd like!
>>68653
And Blender backups have literally saved my skin numerous times, but I save and create copies quite frequently now since I have the capacity for it. I hit Ctrl+S every minute or so because Blender can be pretty unstable sometimes, and I have more than enough RAM now that I can keep a very long history to undo and redo through.
Plus I've upgraded from a 2080 to a 3090, and the viewport render mode is so much faster even being only a generation apart, it's kinda ridiculous. I can work in render mode much more than I could before and even play back animation somewhat, especially with the extra VRAM; on my 2080 it would fail to initialize if my scenes got too complex and I didn't optimize. Makes me want a 4090, especially after seeing it perform about as well as two 3090s in NVLink. Those are benchmarks though, and NVLink would also net you memory pooling, but I've only managed to run out of VRAM once, and only because I forgot I had another project open in the render viewport.
But since my animation is 4K, 2000 frames long, renders the three layers one at a time, and takes about 3-4 minutes to finish each frame, it's really easy to miss the frames with issues. I can't watch every layer of every frame being rendered like a hawk, since it took 5 days to render the entirety of it and that would be absolute hell sitting there for a hundred hours straight, and I used multilayer EXR to store the frames, so viewing them isn't as easy as popping open an image viewer. The way I produced this also isn't the best way to work on long animations, and I'm changing my workflow so I can work on and finalize individual shots rather than one straight 2000-frame animation, but that's how this animation was produced, so there's nothing I can do about it now.
The separate layer rendering saved my ass though: not only did it make the overall quality better, but when the cloth layer got fucked because I missed an issue with the armature modifier, re-rendering some of the frames wasn't a big deal compared to the major issue it was when I rendered the "final" in October.
>>68592
These are ideas I've had too though, so they're obviously not trying to ruin my projects. It's actually nice knowing I'm not the only one doing these things. I'd rather share my secrets and ideas than keep them to myself and try it all alone, which I've defaulted to doing; it's a breath of fresh air for me.
I'd love to have more CG Berry artists, especially if they're better than me, because that moves everyone forward. I'm not doing this forever, you know; people will come and go, and I for sure will be one of them. I don't know when, but eventually.
If it backfires, so be it, but I'd rather take the risk.
And the cloth render mistake was mostly my fault: for some reason the cloth was assigned to an unused rig that had my mocap data, and it desynced when I swapped over to the secondary rig for the last 700 frames. It looked good in the viewport, but failed on some small subtle changes I made, and I missed that since I thought they were on the same armature. I caught this as it was finishing up, so I fixed the issue and re-rendered the problem frames.
I fixed that though, and was still on track to release that version on time, but then I watched the final render in its entirety, and there were a lot of frames where the holdout straight up failed. I'm not sure how I missed this while it was rendering; the cloth layer was by far the fastest layer to render though, so that might be why.
Apologies for the long essay lmao, I'm writing this while working on the edit for Code Blue.