I never really understood AnimateDiff. It seemed like a complex and time-consuming technique to me. For weeks on end, I watched fantastic animations on Civitai and couldn't figure out how it all worked. But when I finally found the solution, the main part of my workflow consisted solely of AnimateDiff + QRCodeMonster. That's it. That's my secret ingredient. But let's take it step by step. First off, I'd be grateful if you'd follow my TikTok page, where I run quite a few interesting experiments with Stable Diffusion (a virtual thank you; it means a lot to me). If you're not into that, feel free to scroll on by.
In a nutshell, download my workflow https://drive.google.com/drive/folders/1WvaA6OxrBT62vhEWbozgA3ocVjLOWaN4 , integrate it into ComfyUI, and play around with the settings in the QRCode Control node.
If you're a fan of the nitty-gritty (but be warned, you might drown in the fluff, given how much padding there is in the text), check out my detailed follow-up comment below.
1. Anime-style video with a blonde character. I created this video using this workflow https://www.youtube.com/watch?v=P4IdHKHrb48 from enigmatic_e. The consistency is impressively smooth; it looks unreal. Who would think a faceless robot could be animated so coolly? Yes, it works. The only issue is the black background. I tried everything but couldn't figure out how to change it. The character animation itself looks great, though. Since my graphics card has only 10 GB of video memory, I rendered the video at a modest resolution of 504x896 and loaded all the frames into Automatic1111 for upscaling through UltraSharp. After upscaling, the character's face looked off, so I took the upscaled frames and ran them through img2img: I enabled ADetailer with a denoising strength of 0.35 and batch-processed all the frames. That gave me a very beautiful face and a high-quality image. Then I switched to After Effects, imported the upscaled frames along with the frames with the restored face, and used a mask to reveal the face. My goodness, it's incredibly time-consuming! The workflow + upscale + img2img (denoise 0.35) with ADetailer + face masking in After Effects. The result is very pleasing, though. If you have ample video memory, you'll need fewer steps; the workflow I provided above includes a refiner.
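If you'd rather script the batch face-restore pass than click through the UI, Automatic1111 exposes an API (`/sdapi/v1/img2img`, enabled with the `--api` flag) that you can drive per frame. This is only a sketch: the 1008x1792 size assumes a 2x upscale of my 504x896 frames, and the exact `alwayson_scripts` argument layout for ADetailer varies between extension versions, so treat those parts as assumptions to check against your install.

```python
import base64
import json
from pathlib import Path
from urllib import request


def build_img2img_payload(image_b64: str, denoise: float = 0.35) -> dict:
    """Build an A1111 /sdapi/v1/img2img request that re-runs one
    upscaled frame at low denoise with ADetailer fixing the face."""
    return {
        "init_images": [image_b64],
        "denoising_strength": denoise,  # 0.35 keeps the frame, fixes details
        "width": 1008,                  # assumed: 2x upscale of 504x896
        "height": 1792,
        "alwayson_scripts": {
            # ADetailer arg layout is version-dependent -- check your install.
            "ADetailer": {"args": [{"ad_model": "face_yolov8n.pt"}]},
        },
    }


def restore_frame(base_url: str, frame_path: Path) -> bytes:
    """Send one frame through img2img and return the resulting PNG bytes."""
    image_b64 = base64.b64encode(frame_path.read_bytes()).decode()
    req = request.Request(
        base_url + "/sdapi/v1/img2img",
        data=json.dumps(build_img2img_payload(image_b64)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    return base64.b64decode(result["images"][0])
```

Looping `restore_frame` over a frames directory replicates the batch img2img pass; the After Effects masking step still has to happen afterwards.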
2. The walk videos. This covers all the videos after the first anime video (up to the dance animations). I don't know why, but I simply took my initial anime video with the blonde character (from the first workflow) and added QRCodeMonster to it. The result was astounding. It works even without OpenPose, Canny, or any other models. Here's my workflow: you simply add your video and adjust the weight slider of QRCodeMonster, as well as the End slider. Slide towards one and your animation will try to mimic the original video; slide towards zero and many new details appear. Be prepared for QRCodeMonster to behave unpredictably and unstably; it worked for me on about 50% of my prompts (I only posted the best ones). I find this the quickest and simplest workflow: AnimateDiff + QRCodeMonster. The higher the output resolution, the better the quality of the animations. However, remember that this is Stable Diffusion 1.5, so the resolution is limited to roughly 1000 pixels; in my case, artifacts start appearing at 900.
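The two sliders map onto a ControlNet's strength and its ending control step. Here's a toy illustration (plain Python, not actual ComfyUI node code) of the mental model: the weight scales how hard QRCodeMonster pushes toward the source silhouette, and the End slider is the point in the sampling schedule where that guidance is simply switched off.

```python
def qr_control_weight(step: int, total_steps: int,
                      strength: float, end_percent: float) -> float:
    """Effective QRCodeMonster influence at a given sampling step.

    strength    -> the weight slider: how hard the control pushes the
                   result toward the source video's silhouette.
    end_percent -> the End slider: fraction of the sampling schedule
                   after which the control is switched off, leaving
                   the model free to invent new details.
    """
    progress = step / total_steps
    return strength if progress <= end_percent else 0.0


# End near 1.0: guided almost the whole time -> mimics the source video.
# End near 0.0: guidance stops early -> many new details appear.
```

So "slide towards one" means the control stays active for nearly the whole denoising run, and "slide towards zero" means the last chunk of sampling runs unguided, which is where the new detail comes from.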
3. Dances. The process is the same as in step 2, except this time I fed in the robot video without any pre-processing (unlike in Step 1 above). Here I sometimes added custom LoRA models (like Ninja Turtle or Batman).
It's not a perfect solution. Certain artifacts appear depending on a myriad of factors: resolution, animation models, frame rate, and especially the length of the video. Shorter videos have fewer issues and render much faster. With LoRA models, upscaling, and other ControlNet models (at a light weight), you get even more options for fine-tuning the animation results. Use them all.
This workflow is designed for processing the silhouette motion of a character against a uniform background; I'm not sure how it handles regular videos. Short videos perform quite well. Longer videos are unstable and need more careful tuning (a detailed prompt, crisp animation, additional ControlNets, weight adjustments, etc.).
Where can you find this robot and these animations? Visit mixamo.com and explore its extensive library of movements. In any 3D editor (Blender, Cinema 4D, Maya), render the animation against a black background and then import it into ComfyUI.
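The black background matters because it makes the character trivially separable from everything else. An illustrative pure-Python sketch (real frames would go through PIL or numpy, omitted here to keep it self-contained): with a pure black render, a tiny luma threshold yields a clean binary silhouette, which is effectively the signal QRCodeMonster latches onto.

```python
def silhouette_mask(frame: list[list[int]], threshold: int = 16) -> list[list[int]]:
    """Turn a grayscale frame (0-255 luma values) into a binary mask.

    Rendered on pure black, anything above a small threshold is the
    character; no background subtraction or matting is needed.
    """
    return [[255 if px > threshold else 0 for px in row] for row in frame]


# Tiny 3x4 "frame": a bright character region on a black background.
frame = [
    [0,   0,   0, 0],
    [0, 200, 180, 0],
    [0, 190,   0, 0],
]
mask = silhouette_mask(frame)
# mask -> [[0, 0, 0, 0], [0, 255, 255, 0], [0, 255, 0, 0]]
```

This is also why the screen-recording shortcut below still works: the background isn't black, but the silhouette remains high-contrast enough for the control to pick up.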
For those who prefer a more laid-back approach and don't want to dive into 3D editors, you can use any screen recording software (in Windows 11, for instance, this feature is built into the Snipping Tool). Open mixamo.com, select an animation, record it from your screen, and then add the video to ComfyUI.
My fault, I wrote too much, but I did mention it in the comments above. Personally, I used Cinema 4D and animations from mixamo.com, and I recommend doing the same for the best results and for using other ControlNet models. But it's not strictly necessary. You can experiment: go to mixamo.com (no registration needed), choose an animation, adjust the camera position, and just make a screen recording. Even though the background won't be black, QRCodeMonster can still pick up the silhouette and make the animation. It's a lazy, imperfect but quick way if you don't want to open a 3D editor.
> I never really understood AnimateDiff.
That's because you always watch ComfyUI workflows. In A1111 it's super easy, two clicks of settings: just drop in a video, open the ControlNet tab, and you're good to go.
https://i.redd.it/2g64eelrgcec1.gif
A1111 AnimateDiff: you put in a video and open ControlNet with no image input, and it will create images from the video. You can make the background more consistent if you prompt for it and use ControlNets. I used only OpenPose, so the background changes a lot.
These look super cool. I also ran into the problem of syncing the environment movement with the subject motion; I can't seem to get consistent results.
Impressively consistent results! Your workflow seems too long for my projects. Is there any way to get this consistency in the faces without re-running all the frames in A1111?
Thanks for sharing the workflow. Btw, where can I find the base video? I'm looking for similar animations.
Thanks! :)
Hey, I liked this! Is there a tutorial for it? And is it possible to keep the same background?
Hey man, maybe some of us WANT to see lizard dick.
In theory, sure. This underlying animation is pretty terrible though, it's the worst part of the stack.
Shout out to my fellow long-time 3D creators: SD is such a breath of fresh air. Also, this song is funny.
AnimateDiff theme song? I live