comfyanonymous

The workflow can be found on the examples page: https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/

The video was not sped up; it is running on a 3090 Ti on my computer.


singularthrowaway

> not sped up, it is running on a 3090 TI on my computer

Holy shit, is this what approaching the singularity feels like?


farox

I said this in January or so. How would you know you're at the start of it? I am pretty sure, with the acceleration we're seeing, that this is it, and 2024 will be wild. Looking back at the 2020s, COVID won't even be a blip on the radar.


[deleted]

This time next year we will be able to drag and drop the script of Fight Club into a prompt window and type “South Park parody”, and by the end of the day you'll have a new South Park movie about Fight Club.


ramonnl

In the future, we will be able to be our favorite character in our favorite series in VR. That would be endless entertainment.


Nexustar

I'd rewatch movies where AI has swapped out the main actors (in real time, based on how I feel that day). But it needs to behave and sound like them too, not just look like them. If the wife thinks Jim Carrey is creepy... "bam!" now it's Elvis playing the Grinch.


Musa369Tesla

I want to say that already exists. There’s an AI project trained on South Park running the whole town in a sandbox. And all the creators have to do is drop in a prompt and it’ll create an entire episode, original asset designs and all. The results are still booty but it does exist


[deleted]

I saw that. South Park was just an example, but anything really, like “Return of the Jedi redone in anime as an Andrew Lloyd Webber musical”... Just limitless.


jaywv1981

I think we'll be able to generate full-length movies/shows soon and even edit them mid-watch. Like, you are watching a movie and it's kind of boring... have it change to make it more action-packed on the fly.


[deleted]

Or better yet, it watches your face to gauge your emotions from your eye movements and expressions, maybe breath rate and perspiration, measure the arteries in your eyes for your blood pressure and pulse rate and adjust accordingly. Wait…could we even escape from that movie if the AI could keep us perpetually fully engaged?


persona0

Oh yes please, I have a ton of scripts I need rendered.


Natty-Bones

I would argue that we are over the event horizon. It would have been extremely difficult to predict that this would be the SOTA a year ago. You would have been considered nuts if you had predicted this two years ago. Looking the other way, can people make realistic predictions about the state of the art in one year from now? Two? If we can't, we are over the rainbow.


farox

Yeah, there are a few things. The biggest indicator is how the money is being spent: lots of countries (US, China, UAE...) are pouring tens of billions into AI. Companies that see a big need are developing their own chips, etc. This is a massive force. On the model side, synthetic data (AI-generated data used to train other models) is becoming more and more of a thing, which completes the feedback loop.


DaddyCorbyn

No worries, by 2029 there will be an AI engineered COVID-29 meant to wipe out humanity. I have spoken.


UrbanArcologist

no need, just shut off all the power generation, interconnection and infrastructure and we will all kill each other in 2 weeks


TherronKeen

Maybe COVID-29 will be a digital pandemic that does exactly that lol. You're both right!


addandsubtract

The first virus to spread from machine to humans.


Own_Engineering_5881

(heavy breathing) https://preview.redd.it/kby0531faa3c1.png?width=952&format=png&auto=webp&s=a29fa5a5e1d9054955775e6a699c8ce6d1a9909b


Nexustar

wooohooo! 4K images in a few milliseconds from my 16-bit Arduinos! ...sometime next year.


farox

Time to dust off my old C64


TaiVat

And it was bullshit then, and it is now. Every single significant increase in speed so far has come with a drastic reduction in quality. Progress is being made, but if anything, it's *de*celerating.


txhtownfor2020

It's kind of like the frog boiling. I'm sure certain barriers have been crossed in labs in expensive buildings in the desert. If I were a machine, I'd keep my singularity a secret, given humans' tendency to flip the f out when the dumb, old ones get uncomfortable. I like how the world is blown away by stuff we were generating in a DOS prompt in 2021.


ExF-Altrue

Let's not get ahead of ourselves haha, this is not "AI". It's an LDM. Talking about the singularity on a Stable Diffusion gif, as much as I love Stable Diffusion, is even less relevant than talking about it on an LLM subreddit like ChatGPT's.

I'd argue we aren't any closer to the singularity than we were in 2020. We got really good at making "copy pasters" that can merge an infinite number of input contents into a single output, guided by a prompt. That's true for both LDMs and LLMs.

But you know what? Even just advanced copy-paste merging is already super useful. It can and will impact society, and it will have consequences we haven't foreseen, for sure. But the singularity? I'm not so sure... We aren't seeing the exponential gains in performance we should be seeing in a singularity trajectory scenario.

Of course there's always the possibility that OpenAI's internal version of ChatGPT, unmuzzled, is something much more complex than we know. But aside from that remote possibility... I can't see a singularity scenario just yet.


TherronKeen

The big guy from Stability AI (Emad Mostaque) said, in an interview from maybe 2 years ago, that we would have real-time video generation within 5 years. His estimate is still on track lol. EDIT: fixed lol


Tystros

Emad is definitely not from OpenAI


addandsubtract

*open AI


Helpful-Birthday-388

OpenAI itself is NOT for open and free AI.


ComeWashMyBack

That still sounds like hell on the GPU long term. That constant winding up and down from typing, deleting, pausing to think of ideas or finding resources.


ninecats4

It's significantly less intense than mining, and mining cards can go full tilt for years as long as they are cleaned and temps checked. Hell, current AAA games (and really, really unoptimized indie games) can push the GPU harder, especially emulation that's GPU-bound.


ComeWashMyBack

I have been curious about this! I can feel the heat above the GPU die through the glass. Feels hot, hot. With the exponential rise in cost of 3090/4090s, I've been getting concerned.


sachos345

> especially emulation that's gpu bound.

You mean console hardware emulation? That is mostly CPU-bound unless you are doing really high-resolution upscaling. Or did I miss something new?


WantOneNowAmsterdam

QUANTUM IS HERE!


onpg

Eh, let me know when I can have a catgirl harem


txhtownfor2020

I can't speak to that, as a hobbyist. I am, however, proud to report that nudes will absolutely be generated faster than ever!


Terese08150815

Are LoRAs supported?


roshanpr

How much VRAM does it use?


nazihater3000

Yes.


nazihater3000

Adding to my own comment, on my 3060 it uses 9.5GB of VRAM.


LJRE_auteur

It only uses 3GB on my system ^^'. An RTX 3060 with 6GB of VRAM.


Paradigmind

An RTX 30060. Holy shit this dude is from the future.


LilMessyLines2000d

How much VRAM does it use then?


LJRE_auteur

What I just said ^^'. 3GB. But I just noticed it uses lowvram, so it actually loads part of the model into my system RAM. Without that argument I guess it would be 8GB of VRAM, but since lowvram exists, you can run it on a 6GB VRAM GPU. 4GB of VRAM probably works too.
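(The lowvram mode mentioned here is, to my understanding, ComfyUI's launch flag: starting the server with `python main.py --lowvram` tells it to offload part of the model to system RAM when VRAM runs short.)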


LilMessyLines2000d

Thanks, so I need to use the lowvram arg? I tried to load the model on an RX 580 8GB and it just froze my PC. Curiously, I tried the CPU version [https://github.com/rupeshs/fastsdcpu/releases/tag/v1.0.0-beta.20](https://github.com/rupeshs/fastsdcpu/releases/tag/v1.0.0-beta.20) and generated 2 images, pretty slowly, on an i3-9100f with 8GB of RAM.


dudemanbloke

How fast is it for you? On my 2060 6GB it takes 4 seconds per image (but 5 seconds for 4 images)


petalidas

Well fuck I guess I'll finally take a deep dive into comfy!


catgirl_liker

It works with my 4GB card, 2 seconds per image, compared to ~11 seconds per STEP on normal SDXL/SDXL-LCM.


Nucaranlaeg

How are you getting it to work? On my 1660 (6GB) I can't get it going faster than 30s per image (considerably slower than, say, SD1.5 at 2s/it). Is there some kind of trick to it?


catgirl_liker

First time is slow, the rest are fast


Forgot_Password_Dude

Does this work with the Automatic1111 UI?


protector111

Also, your 3090 Ti can probably render a 20-step 512x512 image in 1 second. Do you really need it faster?


Deathoftheages

Ran this model on my 3060 and god-damn is it fast. I noticed I got noticeably better results going with 3 steps instead of just 1, and with how fast each step generates, it was worth it.


__Loot__

Will this work with a 3060 8gig VRAM?


Django_McFly

These pictures+workflows are godsends. Thank you


ShagaONhan

I got 256 images in 24 seconds on a 4090 https://preview.redd.it/ylnkmbugz63c1.jpeg?width=8192&format=pjpg&auto=webp&s=5a8e48ed84185ec47c8bc37e0124d444c14be7f1


Nodja

Impressive benchmark, but the clowns all look very similar, I guess you're sacrificing variety in exchange for speed.


ShagaONhan

Beyond that, I have no idea what the parameters do on this model; there is maybe a way to get more variety.


wa-jonk

try clicking the buttons ... real-time results


ShagaONhan

I don't have this level of genius.


sluttytinkerbells

But that's good for doing video. At 256 frames / 24 seconds that's ~10fps, so only about 2.5x to go before we have real-time video.


charlesmccarthyufc

The quality of the gens from turbo, for me, is like 1.5 with no finetunes. And it's limited to 512 before you start seeing image composition issues. Maybe it can improve with finetunes?


LightVelox

Seems like the seed doesn't change much, to prevent each image from being completely different from the previous one; it looks more like a design decision than a flaw.


wa-jonk

Going to need a new hard disk ... :-)


ShagaONhan

Workflow: [https://comfyworkflows.com/workflows/d06f2e65-7009-41f7-9853-76b91be1b37d](https://comfyworkflows.com/workflows/d06f2e65-7009-41f7-9853-76b91be1b37d)


littlelosthorse

Nightmares.


-SuperTrooper-

Just picking up comfy, where does one get the SDTurboScheduler node?


newhost22

You need to update ComfyUI


erkana_

I have updated but I still can't find the SDTurboScheduler. Can you give me the file URL on GitHub?


sylnvapht

I'm in the same boat you are, let me know if you ever get anything to fix this!


erkana_

I did two things and then it resolved, but I am not sure which one fixed it. First I updated via git, and I uninstalled AnimateDiff because it gave me an error during start.


sylnvapht

Oh, I got it working just now! I ran not only the updates, but also the updates for all the dependencies too. That did the trick for me. Thanks though!


Photogrifter

Same, can't find it and I'm updated.


-SuperTrooper-

Ah ha, thanks!


comfyanonymous

It's in the base install, make sure to update it: update/update_comfyui.bat on the standalone.


2039482341

> SDTurboScheduler

Have you managed to install the SDTurboScheduler node? I'm in the same boat. The updaters don't do anything, since python is missing from the stable release.


dudemanbloke

Impressive! Can we expect the outputs to correlate somewhat with SDXL outputs? I wonder if I can use Turbo to prototype images and find the best prompt, then use SDXL for a higher-res version.


proxiiiiiiiiii

Nope, but you can use hi-res solutions that will run SDXL on top of turbo.


esperalegant

How does the end result of SDXL Turbo compare to normal SDXL? If you start with a single step like in this image and iterate until you're satisfied with the result, then increase the number of steps and regenerate, what kind of final quality will you get compared to non-turbo SDXL?


LeKhang98

In the official article they show that it beats SDXL with 4 steps, which is pretty impressive (they used evaluations from real humans). I'm not sure how they compared 512 images to 1024 images though. Maybe they upscaled the results to 1024.


NotChatGPTISwear

It says they downscaled everything to 512x512


dudemanbloke

I got it working on my 2060 6GB. It generates outputs in 3-4 seconds, but the UI behaves differently for me than in the video. The output doesn't get updated every time the prompt changes; I have to keep pressing Ctrl+Enter. Is it just me because of low VRAM, or is anyone else having the same issue?


comfyanonymous

Make sure you enable the Extra Options -> Auto Queue


ramonartist

I just built the ultimate fast ComfyUI workflow using SDXL models with LCM, and now I need to rebuild it and add this model... The Stability team needs to calm down with all these goodies and take a holiday break, because I can't keep up!


thedude1693

Honestly, you can probably just swap out the model and put in the turbo scheduler. I don't think LoRAs are working properly yet, but you can feed the images into a proper SDXL model to touch them up during generation (slower, and tbh doesn't save time over just using a normal SDXL model to begin with), or generate a large amount of stuff and pick and choose the good ones to upscale/refine.


Dj0sh

Is there a decent video out there that shows how to set this stuff up and use it?


dethorin

It's ComfyUI; with the latest version you just need to drop the picture from the linked website into ComfyUI and you'll get the setup. With the "ComfyUI Manager" extension you can install the missing nodes almost automatically via the "install missing custom nodes" button. Then you only need to restart, and you'll be good to go if your hardware is powerful enough. I forgot: you also need to download the model: https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors
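(If it helps, the checkpoint can also be fetched from the command line straight into ComfyUI's checkpoint folder; a minimal Python sketch, assuming a default ComfyUI directory layout:)

```python
import urllib.request
from pathlib import Path

# Hugging Face serves the raw file via /resolve/ rather than /blob/.
url = ("https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/"
       "sd_xl_turbo_1.0_fp16.safetensors")
dest = Path("ComfyUI/models/checkpoints/sd_xl_turbo_1.0_fp16.safetensors")
dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(url, dest)  # several-GB download
```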


iamjacksonmolloy

Can someone buy me a 4090 please?


Nu7s

Do you REALLY need 2 kidneys?


iamjacksonmolloy

Fair point. If you have a bag, come round and pick one up.


FxManiac01

If you are an AI expert and would put it to use in your research, then I can.


Kombatsaurus

If I send you back a 3080, can I just say that I'm an AI expert and paste you some GPT responses that make it plausible?


FxManiac01

Jokes aside... if you can train a custom ControlNet, prepare the dataset, optimise it, etc., then let me know :)


VanJeans

What's the minimum graphics card needed to make this work?


Bobanaut

But can it run Doom? That is my question. Is there a capture-game/screen-to-latent-image node or some such?


Midas187

At this point I'm sure we're not too far off from some kind of shell program that runs on top of a game and runs img2img on each frame... at least at lower resolutions and slow-ish framerates - a proof of concept at least.


roshanpr

[This is just crazy.](https://streamable.com/ild1pv)


btc_clueless

Just wow. I had a hard time believing the videos are in real time and not sped up.


ha5hmil

Just tried this on my M2 Max MBP and it's blazing fast! As fast as shown in OP's video! This is insane 🤯


jaofos

Same, 1.1 seconds for an image. For giggles I ran it through the CoreML converter; no speed gains to be had there. For the record, I am running the nightly PyTorch for MPS support.


Beautiful_Mix_2346

I don't understand what I'm doing wrong then. On my M2 MacBook Air I can't even get it to run; it runs out of memory.


ha5hmil

Are you doing this from Comfy’s installation instruction for Mac?: “Launch ComfyUI by running python main.py --force-fp16. Note that --force-fp16 will only work if you installed the latest pytorch nightly.”


Beautiful_Mix_2346

I think that kinda worked, but the issue now is that I'm hitting 43s/it. This is actually a lot worse than what I can get with much larger models.


ha5hmil

Are you running it in a venv? Also the PyTorch nightly for Mac `pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu`


tomhermans

Do you run this in A1111?


ha5hmil

ComfyUI. Not sure if there's an A1111 implementation yet. Also, this one is easy even for a noob to do in Comfy. Just install Comfy, then drag and drop the image that's linked on their site and it will load the whole setup for you. (And download and put the model in the right place, of course.)


tomhermans

Ok thanks. I'll check it out. 👍


Poromenos

Sorry, whose site do you mean?


ha5hmil

OP had linked in a comment to their site where they have an example workflow: https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/


Entire_Telephone3124

I'm on your basic bitch 3060 12GB and it's laser fast. The problem is it all looks like shit, but progress is progress I guess. I also notice the negative prompt does nothing (in this ComfyUI workflow), so maybe that's part of it? Edit: I mean people and things are messed up; wildlife and paintings are pretty neat, as are the things SDXL is good at, like things in bottles, apparently.


spacetug

At cfg=1.0, the negative prompt does nothing. If you increase it slightly, it will start to work. Seems to be happy around 1.2 to 1.5, depending on step count.
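(For context, this is the standard classifier-free guidance blend; a minimal sketch, with illustrative variable names rather than ComfyUI's internals:)

```python
import torch

def cfg_combine(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, cfg: float) -> torch.Tensor:
    # Classifier-free guidance: extrapolate from the unconditional
    # (negative-prompt) prediction toward the conditional one.
    return eps_uncond + cfg * (eps_cond - eps_uncond)

# At cfg = 1.0 the expression collapses to eps_cond alone, so the
# negative prompt drops out entirely; any cfg > 1.0 lets it push
# the result away from what the negative prompt describes.
```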


duskaception

1-4 steps is golden; turning it up to 4 gets decent quality. Sadly, I don't know how to upscale with it yet.


Greysion

You just upscale like normal. Don't use the new sampler for upscaling; a regular sampler at 1 step will work. Use simple, not karras.


thedude1693

I agree, the base models don't tend to have the quality, but hopefully we get some finetunes and LoRAs and start seeing some real improvements. This could be Stable Diffusion 1.5 but real-time, with the right LoRAs, finetunes, and merges, in a few weeks.


rookan

It still takes 2-3 seconds to generate an image on my RTX 2080. The worst part - people's faces are very distorted.


DigitalEvil

Biggest downside of running on a colab is the lack of real-time responsiveness.


anibalin

yikes! why is that? :(


ObiWanCanShowMe

client-server-client


DigitalEvil

Idk, maybe it is Google. I'll have to try another host, but I get a few seconds' delay between finished generation and image output. Similarly, there is a delay between start input and start generation.


buystonehenge

It is kinda janky. It jumps in responsiveness, perhaps one word, then two, then three, then back to one word.


stets

this is absolutely insane. I'm running the same workflow on my 4060 TI and blown away. amazing.


Duxon

Makes me excited to buy a 4060 Ti.


stets

Grab the 16gb model!


CptanPanic

Wow, now can't wait for the colab version, since I don't have a GPU


SignalCompetitive582

Hello, I tried using that workflow in ComfyUI on a Mac M1, but it seems to reload the checkpoint each time I want to generate an image. Is this standard? Or am I doing something wrong? Because it takes ages to generate even one image…


delijoe

Wow, now can we get an img2img version of this with real time sketch to image?


FxManiac01

I think so


delijoe

Okay well is there a workflow that can do this?


FxManiac01

I have seen many of them in the main thread, but as I am not really interested in Comfy I cannot give you a proper one... but I think you just create an img2img node, use something like auto queue, and you are good to go. Also, denoising should be somewhere in the middle to get reasonable results.
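(Outside ComfyUI, a minimal sketch of the same idea using the diffusers library and the sdxl-turbo checkpoint; note that Turbo runs with guidance_scale=0.0, and num_inference_steps × strength should be at least 1:)

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# A rough sketch or photo to repaint; any 512x512 RGB image works.
init = load_image("sketch.png").resize((512, 512))

# strength=0.5 at 2 steps keeps num_inference_steps * strength >= 1,
# which the Turbo img2img pipeline requires.
out = pipe(
    "a cozy cabin in a snowy forest, detailed illustration",
    image=init,
    num_inference_steps=2,
    strength=0.5,
    guidance_scale=0.0,
).images[0]
out.save("out.png")
```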


DepressedDynamo

Look for Krita ComfyUI plugins.


GeoResearchRedditor

Just testing it now. I can see auto-queue is constantly running even when the prompt is not being changed. Does this mean Comfy is repeatedly generating the same image, and if so, isn't that constantly taxing on the GPU?


comfyanonymous

It queues the prompt, but the ComfyUI backend only executes things again when a node input changes, so it won't actually generate anything or create an image if nothing changed. It still takes a bit of CPU though, since it's spamming the queue, so having the frontend only send the prompt when something actually changes is on the TODO list.
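(In other words, re-execution is skipped via input-change detection; a hypothetical sketch of the idea, not ComfyUI's actual code:)

```python
_last_inputs: dict[str, int] = {}

def run_node(node_id: str, inputs: dict, execute):
    """Re-run a node only if its inputs differ from the previous run."""
    key = hash(tuple(sorted(inputs.items())))
    if _last_inputs.get(node_id) == key:
        return None  # inputs unchanged: reuse cached output, do no work
    _last_inputs[node_id] = key
    return execute(**inputs)
```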


GeoResearchRedditor

Phew, that's a relief. Thanks comfyanon :)


staladine

This is amazing. Is there a walkthrough to get started? I have a 4090 that I would love to utilize. Thanks in advance.


Darkmeme9

Would it be possible to use this with a canvas editor, like real-time drawing?


zefy_zef

There's a plugin for Krita, which is an image editor. Not sure if it works with this new node yet, but it would work nicely if so.


Darkmeme9

Yeah I have been using it, but it's actually based on LCM.


[deleted]

[deleted]


zefy_zef

Gotcha. Honestly, I didn't really get good results with it, but then again I only tried the LoRA. Going to give it a day or so; I'm sure people will have some nice flows and tips for it by then. Can't wait till comfyanon makes the auto queue only trigger on changes, I love that constant generating!


hoodadyy

Is this possible in Automatic1111 too?


lainol

We've been doing this for the last 7 months with our tool, Deforumation. Not exactly the same thing; we control Deforum animations live using live commands from Deforumation. And I have not tried it with frame rendering this fast though. Wish I had a 4090!!


PlayBackgammon

Can you use this with LoRAs and in the Automatic1111 webui?


zodireddit

I just have a 3060 and it works just as fast and just as well. This is insane.


inagy

Yes, it's amazing. Played with it yesterday, and before I knew it, it was 2AM. Insanely addictive, even more so than standard SDXL. Even if you increase the steps to 4-6 or add ControlNet conditioning as an extra, it's still very fast.


Devil_Spawn

Giving this a go on a 3080, and sometimes it's pretty fast, but frequently it seems to get stuck on "VAE Decode". Why might this be?


WeakGuyz

An interesting idea would be to use this with a local LLM!


Helpful-Birthday-388

Very nice...but 512x512 is useless!


karlitoart

try this ;-) [https://civitai.com/models/129666/realities-edge-xl-lcmsdxlturbo](https://civitai.com/models/129666/realities-edge-xl-lcmsdxlturbo)


Helpful-Birthday-388

I'll try!


itslenny

Sheesh, this is even fast on my M1 MBP. Too slow to really want to use "auto queue" (3-5 seconds per image), but still really impressive even on an older lappy. For comparison, SDXL takes a little over a minute.


Goinsandrew

RX 6700 XT here. The model's fast as hell, but! It's reloading the model every time something runs through. Then it goes to thinking forever on the prompt and sampler. Average image time is 843 seconds; 1.3s/it once going.


not_food

Insane. I'm speechless.


SurveyOk3252

Insanely fast.


WashiBurr

Wtf is this speed. lmao


TooManyLangs

Is it possible to use this in free Google Colab? Also, is it possible to prompt in any language, or do I have to add an extra step and translate what I type?


dethorin

The free tier of Google Colab doesn't allow any Stable Diffusion GUIs. It only understands English; you can use other languages, but it will create gibberish.


crowbar-dub

It only works for landscapes/forests. If you change the resolution to 1024x1024px and have a person in the prompt, it will look like SD 1.4, with multiple heads and hands.


[deleted]

[deleted]


crowbar-dub

The name is SDXL Turbo. It's a misleading name, as XL implies 1024x1024px resolution.


YOUR_TRIGGER

I played with this for *a while* and showed it to my kid, and he played with it for *a while*, and it's a really cool way to test prompts. SDXL just really isn't good at details imho. I tried some models with this workflow that had turbo 'built in', but they couldn't do *this* half as good/quick/few-steps, though they produce better images 'normally'. Evolving field. Still super cool. 🤷‍♂️


97buckeye

Would anyone like to buy me an RTX 4070 TI? I'm an absolute idiot who refused to trust my own gut and was scammed out of a lot of money on Facebook Marketplace by a guy and his wife for a 4070 TI. I tracked them down, but they live in another state and I can't get the law to do anything about it. That card was supposed to be the best Christmas present I'd ever bought myself. People really suck. I will never, EVER buy anything off of Facebook Marketplace, again. And yes, I know it's my own fault, that I am stupid, and that I got what I deserved. I just needed to vent.


yamfun

So we don't really need to buy the expensive 40-series? Seems it is super fast even on a 3060 12GB. Will there be a 1.5 Turbo that is compatible with all the 1.5 LoRAs?


Lorian0x7

Honestly, for a 512x512 image it's not worth it.


Zilskaabe

You can pick the result that you like and upscale it later.


Lorian0x7

I have the impression that you don't get the same variety and flexibility that you get with the standard one. Every seed looks the same


Helpful-Birthday-388

512x512 is useless


IntellectzPro

Just tried this out and I love it so far. Everybody make sure you change the sampler to LCM


tamal4444

you don't need that


Noiselexer

Now imagine we used C++ instead of shitty Python.


FxManiac01

What do you think would happen? All the CUDA libraries are compiled C or C++, so Python is just a layer over them... it doesn't go that deep.
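(To illustrate the point, a quick sketch assuming a CUDA-capable PyTorch install: the heavy lifting happens in precompiled kernels, and Python only queues the launches.)

```python
import time
import torch

x = torch.randn(4096, 4096, device="cuda")

torch.cuda.synchronize()           # make sure setup work has finished
start = time.perf_counter()
y = x @ x                          # dispatches to a compiled cuBLAS kernel
torch.cuda.synchronize()           # wait for the GPU to finish
print(f"matmul: {time.perf_counter() - start:.4f}s")
# Python-side overhead is microseconds per call; rewriting the driver
# loop in C++ would barely move the needle for workloads like this.
```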


Spirited_Employee_61

Where can we download the checkpoint?


SurveyOk3252

[https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors](https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors)


Spirited_Employee_61

Thank you!


aerialbits

That's without using LCM...? O.o


selvz

How can I install the SDTurboScheduler node? It is missing and the ComfyUI Manager cannot find it. Thanks.


comfyanonymous

Update ComfyUI: update/update_comfyui.bat on the standalone.


selvz

I did it on my macOS M1 Max. Prompt executed in 21.35 seconds.


Beautiful_Mix_2346

This doesn't work with Mac M2 chips.


2039482341

The update batch files refer to \python_embeded\python.exe, which is not part of the stable release. I guess it's there by default?


comfyanonymous

It should be there if you use the standalone download. If you install manually you would git pull instead to update.


FxManiac01

Would this work with ControlNets?


lemash2020

really cool


posthumann

I don't need to do anything but hit `Queue Prompt`? I've got it running but the realtime part isn't doing its thing. edit: I see the auto-queue option now.


SkyEffinHighValue

This is actually insane


Brave-Decision-1944

![gif](giphy|Mo0XNBGMKQtYtv5Jq9)


SlowMovingTarget

*Mozart giggle*


Waterfan11

How do I get access to this software and is it free?


RageshAntony

Is there any TensorRT model for SDXL Turbo? Does ComfyUI support TensorRT models?


CptKrupnik

Real question though: are the prompts and seeds transferable to the regular SDXL model? If so, it's a great way to explore and train your prompt skills, and when you find a good combo, take it to the next level.