Uploader's comment:
Original video: bl88竖屏【雷电将军】SAY MY NAME腿 玩 年 (@user221116tuiwannian)

For this video I used ControlNet and the Segment Anything Model (SAM) on the Stable Diffusion WebUI. There are many video guides for these on YouTube, so that is the best place to look if you want more detail.

Use FFMPEG to separate frames from the video at around 16-24 fps (this video was made from 18 fps); you can later use Flowframes to increase the frame rate. Use batch rendering on the Stable Diffusion WebUI to render each photo, then combine the frames using Premiere Pro or FFMPEG (rough command sketches for these steps are at the end of this description). For good face results, use the ADetailer add-on alongside ControlNet (thank you very much 以尘 @innerfire024). Later you can use Topaz Video AI to upscale, or Instant 4K in After Effects. I did some color correction and added the background in After Effects, but that's not needed.

I recommend using character LoRAs when rendering the photos; you can find them on CivitAI. I used the Raiden Shogun character LoRA for this video: https://civitai.com/models/42776/raiden-shogun-genshin-impact-or-character-lora-1200

For good face results, check out these videos:
https://www.bilibili.com/video/BV1kW4y1978P
https://www.bilibili.com/video/BV1Mz4y1B71d/

There was a Reddit post about making AI MMD (https://www.reddit.com/r/StableDiffusion/comments/12xhd2t/experimental_ai_anime_w_cnet_11_groundingdino_sam/?utm_source=share&utm_medium=web2x&context=3), but there is currently a Reddit strike going on, so it might be a bit difficult to access.

If you want the 2K version of this project, it is right here: https://mega.nz/file/kDEiCJhQ#dZ1wizzP5XZcPI5MIJG0NduvdcH2I2GT3bdBewoaTpo

I'm sorry, 腿 玩 年, for not asking permission to use your video. If you want me to take this down, just message me or comment and I will remove it the moment I see it.
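
Rough sketch of the frame-extraction step, wrapped in Python for convenience (file names and folders here are just placeholders; 18 fps matches what was used for this video):

# Sketch: split the source video into PNG frames at 18 fps with ffmpeg.
# "input.mp4" and the "frames" folder are placeholder names.
import pathlib
import subprocess

pathlib.Path("frames").mkdir(exist_ok=True)
subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mp4",      # source dance video
        "-vf", "fps=18",        # sample at 18 fps (anything in 16-24 works)
        "frames/%05d.png",      # zero-padded names keep the frame order
    ],
    check=True,
)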
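
The batch rendering itself was done through the WebUI's img2img batch tab. If you would rather script it, something along these lines should work against the AUTOMATIC1111 API (the WebUI has to be started with --api; the prompt, denoising strength, and resolution below are placeholders, and ControlNet/ADetailer settings go through the UI or the "alwayson_scripts" part of the payload, whose exact fields depend on your extension versions, so treat this as an assumption-laden sketch rather than the exact setup used for this video):

# Sketch: send every extracted frame through img2img via the WebUI API.
# Assumes AUTOMATIC1111's WebUI is running locally with the --api flag.
import base64
import pathlib
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"   # default local address
OUT = pathlib.Path("rendered")
OUT.mkdir(exist_ok=True)

for frame in sorted(pathlib.Path("frames").glob("*.png")):
    payload = {
        "init_images": [base64.b64encode(frame.read_bytes()).decode()],
        "prompt": "raiden shogun, <lora:raidenshogun:0.8>",  # placeholder prompt/LoRA name
        "denoising_strength": 0.4,   # placeholder; tune for frame-to-frame consistency
        "width": 512,
        "height": 912,               # placeholder portrait resolution
        # ControlNet / ADetailer would be configured via "alwayson_scripts" here.
    }
    r = requests.post(URL, json=payload, timeout=600)
    r.raise_for_status()
    img_b64 = r.json()["images"][0]
    (OUT / frame.name).write_bytes(base64.b64decode(img_b64))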
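
And a matching sketch for stitching the rendered frames back into a video with ffmpeg (Premiere Pro works just as well; again, the file names are placeholders):

# Sketch: reassemble the rendered frames into an H.264 video at 18 fps.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-framerate", "18",          # must match the extraction rate
        "-start_number", "1",        # extracted frames start at 00001
        "-i", "rendered/%05d.png",
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",       # broad player compatibility
        "output.mp4",                # placeholder output name
    ],
    check=True,
)

From here you can interpolate extra frames with Flowframes and upscale with Topaz Video AI or Instant 4K, as described above.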