Uploader's comment
If anyone knows how to get better face results, please help <3

Original Video: 8.Wiggle Wiggle-Haku
Forget Skyrim. (@forgetskyrim)

For this video I used ControlNet and the Segment Anything Model (SAM) on the Stable Diffusion Web UI. There are many video guides for these on YouTube, so if you want to research further, that is the best place to look for more detail.

Use FFmpeg to separate frames from the video at around 16 fps - 24 fps (this video was made at 18 fps); you can later use Flowframes to increase the frame rate of the video. Use batch rendering in Stable Diffusion WebUI to render each photo, then combine the frames using Premiere Pro or FFmpeg. Later you could use Topaz Video AI to upscale, or Instant 4K in After Effects. I did some color correction and added in the background through After Effects, but that's not needed.

I recommend using character LoRAs when rendering the photos; you can find them on CivitAI. I used the Yowane Haku (弱音) character LoRA for this video: https://civitai.com/models/25787/yowane-haku

There was a Reddit post about making AI MMD (https://www.reddit.com/r/StableDiffusion/comments/12xhd2t/experimental_ai_anime_w_cnet_11_groundingdino_sam/?utm_source=share&utm_medium=web2x&context=3), but there is currently a Reddit strike going on, so that might be a bit difficult to access.

If you want the 2K version of this project, it is right here: https://mega.nz/file/0X8CnToa#4s7ZtG3FhfflimJVNaHFQZJF8PkRmFCo7Ao_wvlltKI

I'm sorry, Forget Skyrim., for not asking for permission to use your video. If you want me to take this down or remove it, just message me or comment and I will remove it the moment I see it.

P.S. I can't wait for your next upload 0_0 <3
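
As a rough illustration of the FFmpeg steps described above, here is a minimal Python sketch that shells out to FFmpeg to split the source video into frames and later reassemble the rendered frames. It assumes FFmpeg is installed and on your PATH; the file and folder names (input.mp4, frames/, rendered/, output.mp4) and the helper function names are placeholders, not the ones used for this video. The Stable Diffusion WebUI batch pass happens manually between the two calls.

# Minimal sketch of the FFmpeg extract/recombine steps (assumes ffmpeg is on PATH).
# File and directory names are placeholders; adjust them to your own project.
import subprocess
from pathlib import Path

FPS = 18                         # the description suggests 16-24 fps; this video used 18
SRC = "input.mp4"                # placeholder source video
FRAME_DIR = Path("frames")       # extracted frames go here
RENDERED_DIR = Path("rendered")  # WebUI batch img2img output goes here
OUT = "output.mp4"               # recombined video

def extract_frames():
    # Step 1: split the source video into numbered PNG frames at FPS.
    FRAME_DIR.mkdir(exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", SRC,
        "-vf", f"fps={FPS}",
        str(FRAME_DIR / "%05d.png"),
    ], check=True)

def combine_frames():
    # Step 3: reassemble the rendered frames into a video at the same frame rate.
    subprocess.run([
        "ffmpeg", "-framerate", str(FPS),
        "-i", str(RENDERED_DIR / "%05d.png"),
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        OUT,
    ], check=True)

if __name__ == "__main__":
    extract_frames()
    # Step 2 (manual): run the extracted frames through Stable Diffusion WebUI batch
    # img2img with ControlNet + SAM and the character LoRA, writing results to RENDERED_DIR.
    combine_frames()

After recombining, Flowframes can interpolate the result to a higher frame rate and Topaz Video AI (or Instant 4K in After Effects) can upscale it, as the description mentions.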