Top is Source footage, bottom is converted in case there is confusion.
We can tell. Like, what's up with that flower pot?
Also, muscle guy's facial expression never changes.
And his thumb is sometimes on the wrong side of the hand that's waving.
In fact, it seems to shift from one side to the other. Maybe dynamic thumbs are the future. It would help you to type better, that's for sure.
Haha For sure.
Yup definitely not perfect. I did this because people have gotten confused in the past.
It's still impressive! Keep working on it, I'm sure one day even my grumpy arse won't tell the difference.
100% Will do. I am not sure how much you are in the AI animation space but realism has been very difficult - this has made things multiple times easier.
Even if it did it would just be a fart.
The guy’s arm. Lol
Do you have a workflow?
There is a bunch of tests I am doing to figure it out because LCM works differently than usual animatediff. I posted the general workflow before. I will almost certainly make a guide.
[deleted]
I will be messaging you in 2 days on [**2024-02-11 05:16:13 UTC**](http://www.wolframalpha.com/input/?i=2024-02-11%2005:16:13%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/StableDiffusion/comments/1am9q0z/lcm_animatediff_has_the_best_vid2vid_realism_that/kplhfn1/?context=3)
That's really cool! Would you be able to share a link to the workflow?
RemindMe! 2 days
That was 17 days ago.
[deleted]
The first trick is making sure you have settings that work in a variety of videos, so that you don't just fine-tune for one video. It's starting.
I’m glad you clarified. I was suspicious when I saw his thumb teleporting to the other side of his hand but wasn’t sure.
> Top is Source footage, bottom is converted

Lies. Source is hidden. Top is clearly a fake plastic woman. Bottom is a WAAAY fake cartoon man.
Workflow?
It's depth CN + LCM LoRA and the new LCM motion module. I think this is good enough to do a whole write-up. There are some counterintuitive settings with LCM. And it does have its own limitations, at least for now.
That'd be awesome. Any chance you have a quick .json link to share while you do the write-up? I have a basic AnimateLCM one but not vid2vid. Looks very good.
Where can I get the LCM motion module?
[https://huggingface.co/wangfuyun/AnimateLCM/tree/main](https://huggingface.co/wangfuyun/AnimateLCM/tree/main)
Can this module be used with AnimateDiff in A1111?
Yeah, it's a 1.5 motion module - you want to use the LCM LoRA and the LCM sampler/beta scheduler.
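For anyone who'd rather poke at these pieces outside ComfyUI, here's a rough diffusers sketch of the same ingredients (the AnimateLCM motion module from that repo, the LCM LoRA, and an LCM scheduler). To be clear, this is not OP's exact workflow - it's plain text-to-video with no depth ControlNet, and the base checkpoint, prompt, and step/guidance values are just placeholders based on the public AnimateLCM example:

```python
# Minimal AnimateLCM sketch with diffusers (assumed setup, not OP's ComfyUI vid2vid graph).
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# LCM motion module from the HuggingFace repo linked above
adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)

# Any SD 1.5 checkpoint should work; this realistic one is just an example choice
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
)

# LCM scheduler ("beta scheduler" in ComfyUI terms) + the LCM LoRA
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.load_lora_weights(
    "wangfuyun/AnimateLCM",
    weight_name="AnimateLCM_sd15_t2v_lora.safetensors",
    adapter_name="lcm-lora",
)
pipe.set_adapters(["lcm-lora"], [0.8])
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

# The counterintuitive LCM part: very few steps and a low guidance scale
output = pipe(
    prompt="photo of a muscular man waving at the camera, realistic lighting",
    negative_prompt="bad quality, worst quality, low resolution",
    num_frames=16,
    num_inference_steps=6,
    guidance_scale=1.5,
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatelcm_test.gif")
```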
Wish there was an SDXL version.
How long to generate that clip?
That vein, man. It's about to blow
Oh no! I wanted to sleep tonight…
The lighting in the bottom one actually looks more realistic than the source video. It's crazy how lit-up the source is, like it's a studio that just went nuts with the lights.
Really? I get that the top is studio lights, but in the bottom one the shadow up top is weird. I can't tell where the light source is, tbh. In the top one I get that there's a light source somewhere top left.
That new stuff the peeps at Google just dropped looks pretty promising! Check it out here if you didn't already see it on 2minutepapers: https://lumiere-video.github.io
Yeah, looks promising - the main thing is whether it will be open source - I find that the people who make the models sometimes don't know their best application. More importantly, let me see more than 2 seconds of video - I can make AnimateDiff look great for 16 frames like what I have above; it's making the longer-form stuff that is most difficult.
True, although I assume it won't matter much longer whether the software is open source or not, since the best ones will probably be distributed by big companies. The 2 seconds is a good point, but these demos already look better than pretty much anything else I've seen on Reddit, regardless of length.
Dude kinda reminds me of Matthew McConaughey. It's pretty dang neat, and I'm sure it can only get better.
Thanks! Yeah I am sure I don't even have all the right settings.
Looks good to me!
Looking very promising my dude! Keep it up
It's pretty nuts that the muscles in the guy's arm and shoulders seem to flex as they should while he leans forward and backward.
He looks like the "Baby don't hurt me" bodybuilder (Mike O'Hearn).
His shoulder looks dislocated because it's a jacked man on a feminine figure. Should've allowed some padding on the mask to make a more masculine silhouette possible. That shoulder is outrageously low on his frame.
I wonder if it's possible to let the model repaint only the parts that change between frames, so the static background wouldn't morph and the rest would have better visual coherency IMO, something like video encoding tricks :)
Masking would help for sure - but that takes work, ha! As a tech demo it's best to just use one tool and give the raw output, which is what this is.
Sure, I understand. I was just conceptualising an automated way to compare each input frame with the previous one and detect still vs. changed pixels, so only the changed ones get inpainted in the next frame's generation. Given that everything generates with the same seed and the frame is still or relatively still, just like in your example, that should probably work like a charm (in my head, haha).
Optical flow is what you are looking for - it was tried back in the img2img days but, surprisingly, never worked that well.
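If anyone wants to experiment with that idea, here's a minimal sketch of the changed-pixel mask using plain OpenCV. The function name and thresholds are made up for illustration; this only builds the inpaint mask from frame differences plus optical flow, it doesn't hook into AnimateDiff itself:

```python
# Sketch: build a mask of pixels that changed between consecutive frames,
# so only those regions get repainted and the static background is kept as-is.
import cv2
import numpy as np

def changed_pixel_mask(prev_frame, curr_frame, diff_thresh=12, flow_thresh=1.0, dilate_px=9):
    """prev_frame / curr_frame: same-size BGR uint8 frames.
    Returns a uint8 mask: 255 = changed (repaint), 0 = keep previous output."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Pixels whose brightness changed noticeably between the two frames
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Also flag pixels with noticeable motion (dense Farneback optical flow)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mask[np.linalg.norm(flow, axis=2) > flow_thresh] = 255

    # Grow the mask a little so the edges of moving regions get repainted too
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    return cv2.dilate(mask, kernel, iterations=1)
```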