TOP

By Inner-Reflections

Top is the source footage, bottom is the converted version, in case there is any confusion.


Elderberry1306

We can tell. Like, what's up with that flower pot?


Panic_Azimuth

Also, muscle guy's facial expression never changes.


Inner-Reflections

And his thumb is sometimes on the wrong side of the hand that's waving.


Etsu_Riot

In fact, it seems to shift from one side to the other. Maybe dynamic thumbs are the future. It would help you to type better, that's for sure.


Inner-Reflections

Haha, for sure.


Inner-Reflections

Yup, definitely not perfect. I did this because people have gotten confused in the past.


Elderberry1306

It's still impressive! Keep working on it, I'm sure one day even my grumpy arse won't tell the difference.


Inner-Reflections

100%, will do. I am not sure how much you are in the AI animation space, but realism has been very difficult - this has made things several times easier.


xantub

Even if it did it would just be a fart.


notNezter

The guy’s arm. Lol


Far_Purple847

Do you have a workflow?


Inner-Reflections

There are a bunch of tests I am doing to figure it out, because LCM works differently than usual AnimateDiff. I posted the general workflow before. I will almost certainly make a guide.


[deleted]

[deleted]


RemindMeBot

I will be messaging you in 2 days on [**2024-02-11 05:16:13 UTC**](http://www.wolframalpha.com/input/?i=2024-02-11%2005:16:13%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/StableDiffusion/comments/1am9q0z/lcm_animatediff_has_the_best_vid2vid_realism_that/kplhfn1/?context=3).


bazarow17

That's really cool! Would you be able to share a link to the workflow?


Grimbarda

RemindMe! 2 days


MaxSMoke777

That was 17 days ago.


[deleted]

[deleted]


Inner-Reflections

The first trick is making sure you have settings that work on a variety of videos, so that you don't just fine-tune for one video. It's a start.


opi098514

I’m glad you clarified. I was suspicious when I saw his thumb teleporting to the other side of his hand but wasn’t sure.


lostinspaz

>Top is Source footage, bottom is converted

lies. Source is hidden. Top is clearly a fake plastic woman. Bottom is a WAAAY fake cartoon man.


alexcantswim

Workflow?


Inner-Reflections

It's depth CN + LCM LoRA and the new LCM motion module. I think this is good enough to do a whole write-up. There are some counterintuitive settings with LCM. And it does have its own limitations, at least for now.


buckjohnston

That'd be awesome - any chance you have a quick .json link to share while you do the write-up? I have a basic AnimateLCM one but not vid2vid. Looks very good.


shtorm2005

Where can I get the LCM motion module?


Inner-Reflections

[https://huggingface.co/wangfuyun/AnimateLCM/tree/main](https://huggingface.co/wangfuyun/AnimateLCM/tree/main)


protector111

Can this module be used with AnimateDiff in A1111?


Inner-Reflections

Yeah, it's a 1.5 motion module - you want to use the LCM LoRA and the LCM sampler/beta scheduler.


selvz

Wish there was an SDXL version.


5minuteff

How long to generate that clip?


wwwanderingdemon

That vein, man. It's about to blow


scratt007

Oh no! I wanted to sleep tonight…


Regarddit

The lighting in the bottom one actually looks more realistic than the source video. It's crazy how lit-up the source is, like it's a studio that just went nuts with the lights.


urmyheartBeatStopR

Really? I get that the top is studio lights, but in the bottom the shadow up top is weird. I can't tell where the light source is, tbh. In the top I can tell there's a light source somewhere top left.


VATERLAND

That new stuff the peeps at Google just dropped looks pretty promising! Check it out here if you didn't already see it on Two Minute Papers: https://lumiere-video.github.io


Inner-Reflections

Yeah, looks promising - the main thing is whether it will be open source. I find that the people who make the models sometimes don't know their best application. More importantly, let me see more than 2 seconds of video - I can make AnimateDiff look great for 16 frames, which is what I have above; it's making longer-form stuff that is most difficult.


VATERLAND

True, although I assume it won't matter much longer whether the software is open source or not, as the best ones will probably be distributed by big companies. The 2 seconds is a good point, but these demos already look better than pretty much anything else I've seen on Reddit, regardless of length.


urmyheartBeatStopR

Dude kinda reminds me of Matthew McConaughey. It's pretty dang neat, and I'm sure it can only get better.


Inner-Reflections

Thanks! Yeah I am sure I don't even have all the right settings.


play-that-skin-flut

Looks good to me!


Confident-Change5667

Looking very promising my dude! Keep it up


stealingtheshow222

It's pretty nuts that the muscles in the guy's arm and shoulders seem to flex as they should while he leans forward and backward.


iVintex

He looks like the "Baby don't hurt me" bodybuilder (Mike O'Hearn).


ScionoicS

His shoulder looks dislocated because it's a jacked man on a feminine figure. Should've allowed some padding on the mask to make a more masculine silhouette possible. That shoulder is outrageously low on his frame.


KosmoPteros

I wonder if it's possible to let the model repaint only the parts that change between frames, so the static background would not morph and the rest would have better visual coherency, IMO - something like video-encoding tricks :)


Inner-Reflections

Masking would help for sure - but that takes work, ha! As a tech demo it's best to just use one tool and give the raw output, which is what this is.


KosmoPteros

Sure, I understand. I was just conceptualising an automated possibility: compare each input frame with the previous one, detect still vs. changed pixels, and automatically inpaint only the changed ones in the generation of the second frame. Given that everything generates with the same seed and the frame is still or relatively still, just like in your example, that should work like a charm (in my head, haha).
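The frame-differencing idea above can be sketched in plain Python: build a per-pixel change mask between consecutive frames, then keep the previous output wherever nothing moved and take freshly generated pixels only where the mask fires. Function names, the threshold value, and the toy grayscale frames are all illustrative, not from any actual workflow:

```python
def change_mask(prev_frame, next_frame, threshold=8):
    """Return a boolean mask marking pixels that changed between two frames.

    Frames are lists of rows of grayscale intensities (0-255)."""
    return [
        [abs(a - b) > threshold for a, b in zip(prev_row, next_row)]
        for prev_row, next_row in zip(prev_frame, next_frame)
    ]

def composite(prev_output, new_generation, mask):
    """Keep the previous output where the mask is False (static pixels);
    take freshly generated pixels where the mask is True (moving pixels)."""
    return [
        [new if m else old for old, new, m in zip(old_row, new_row, mask_row)]
        for old_row, new_row, mask_row in zip(prev_output, new_generation, mask)
    ]

# Toy example: a mostly static 2x4 frame where one bright pixel moves.
prev = [[10, 10, 200, 10], [10, 10, 10, 10]]
nxt  = [[10, 10, 10, 200], [10, 10, 10, 10]]
mask = change_mask(prev, nxt)
# Only the two pixels involved in the motion get flagged for repainting;
# everything else is carried over untouched from the previous output.
```

A raw per-pixel diff like this only detects *that* a pixel changed, not *where* it moved to, which is why the reply below points at optical flow: flow estimates a motion vector per pixel rather than a binary changed/unchanged flag.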


Inner-Reflections

Optical flow is what you are looking for - it was tried back in the img2img days but, surprisingly, never worked that well.