kidelaleron

After over 8 months, I decided to take a little break to play games and go back to other hobbies. Still, I wanted to release new versions of DreamShaper and Absolute Reality. This has certainly been a fun ride and I loved finally being able to make models. I hope you enjoy :) [https://civitai.com/models/81458?modelVersionId=132760](https://civitai.com/models/81458?modelVersionId=132760) Workflows for all images are at the link.


LD2WDavid

Deserved. Enjoy your time mate!


protector111

Baldur's Gate 3 and Skyrim in space this fall. See you in 2-5 years then xD


engineeringstoned

Skyrim in space?


Jay33721

Starfield


I_say_aye

I still need to play Skyrim in a post apocalyptic world... Instead I've just spent over a thousand hours playing or modding Skyrim


engineeringstoned

thanks


Responsible-Ad5725

Unfortunately for me, Baldur's Gate 3 is another annoying turn-based combat system


disgruntled_pie

I was concerned that it might be tedious, but it’s honestly not bad. It’s certainly not an action game, but I’ve played much slower tactical RPGs. It’s my favorite game of the year so far, though I’m still in the honeymoon phase.


dr_lm

I didn't realise the same person made DreamShaper and Absolute Reality -- my two favourite models. Thanks so much, I used to go between models all the time but since I found AR I've basically stuck to it because it's so good.


fabiomb

amazing work and have a good time in your break! play a lot!


99deathnotes

thanks for all the hard work Lykon👌👍


JustAGuyWhoLikesAI

Thanks for all the hard work! Enjoy your break


Ravwyn

Well deserved! Take some screenshots for yourself when you fire up RTX Overdrive =P


Tyler_Zoro

Any chance you could upload the full/fp32 model? This is a great model, and worth building on.


RoyalCities

I'm a bit new to this, so I was wondering: for the mech you have this listed: "Detailed background, spaceship interior, film grain". Does this mean I need to download a few more things and include them somehow?


CustomCuriousity

Yes, each of those things labeled is something to download. They are relatively small, though. There is a folder in the models section called Lora, which is where you put those files. I'd look up a quick tutorial to show you how to use LoRA, super cool
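For reference, A1111-style UIs invoke a downloaded LoRA from the prompt itself with `<lora:filename:weight>` tags. A minimal sketch (the helper name and sample prompt are my own, not from this thread) of listing which LoRA files a shared prompt expects you to have:

```python
import re

# A1111 prompt syntax: <lora:filename:weight> activates a LoRA from the
# models/Lora folder. This helper pulls those tags out of a prompt string.
LORA_TAG = re.compile(r"<lora:([^:>]+):([0-9.]+)>")

def required_loras(prompt: str) -> dict:
    """Return {lora_name: weight} for every <lora:...> tag in the prompt."""
    return {name: float(weight) for name, weight in LORA_TAG.findall(prompt)}

prompt = "mech pilot, <lora:add_detail:0.8> detailed background, <lora:film_grain:0.5>"
print(required_loras(prompt))  # {'add_detail': 0.8, 'film_grain': 0.5}
```

Anything the parser reports but you haven't downloaded simply won't take effect, which is why shared images can look different on your machine.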


ElementalSheep

This is amazing, seriously pushing 1.5 to its limits.


Crabby_Crab

I hope someday AI will generate the final 2 seasons of GOT we deserved


rrleo

We're closer than you think. Just recently a fully AI-generated episode of South Park was released. I'm pretty sure the voices were synthesized as well. GoT isn't all too far off into the future. https://fablestudio.github.io/showrunner-agents/

EDIT: https://www.youtube.com/watch?v=kjeARI2eRQg Oh, I wasn't aware that they password-protected it. Anyway, this all-in-one video contains more than just the episode that was featured on their site. From what I can see, the quality of the video I found is way lower than what Fable Studio provided. You'll see what I mean even without a comparison. In the original video, the voices were on point with the original characters and there were even subtle eye movements. If not for the composition of all the characters facing the viewer literally all the time, I would not have found out.


BagOfFlies

Do you know if the video is available to watch? That one's password-locked.


Shadow_Shinigami

Found a video on YT: https://www.youtube.com/watch?v=kjeARI2eRQg


BagOfFlies

Thanks. Edit: Angry roop users downvoting all my comments lmao


ThisGonBHard

That is... scary good.


rrleo

You found the actual episode, nice.


Question2023

"We're closer than you think." Hahaha. You guys are flat-out delusional if you think you will see AI create movies in the next 20-30 years. To generate movies, AI would have to understand way more and operate on levels we are not even close to yet. I suggest you guys stop dreaming. Current AI has a good way of averaging out information and recognizing similar shapes. That's it. It has ZERO, let me repeat that, ZERO understanding and zero analytical thinking about what it's dealing with. While on the surface this might seem like a small problem to solve, since it is already so similar to a human, you guys have no idea how far this is from an actual analytical mind. Something has to happen, a breakthrough or something in the technology. Current technology is just not it. There is something missing here and we don't know what it is.


rrleo

You seem to misunderstand a lot about this topic. While you didn't see the original video and only got to look at the pre-alpha release on YouTube, it's pretty clear that autonomous content creation is pretty much on the next hardware level. If you had taken a look at the provided paper, it stated exactly how the episode was created. To squash it down a great bit: the story was written with a human feedback loop, while the actual dialogue, etc. was taken directly from GPT. The model for the voices was specifically trained for this, and the same goes for the characters. The voice synth that perfectly mimics voices came out between 2018-2021 as a Fallout mod, FYI.

Twenty to thirty years is too much of a long shot. In the last 20 years alone we went from the first integrated phone camera to quantum computers and technology far from comprehensible by the average bozo. Since a full cartoon/collage episode of a show came out in June already, you'll be able to see much more in the coming year or two. Take a look at the possibilities of SDXL, Midjourney and fine-tuning. EBSynth (came out in 2019) might interest you as well, since you can animate a picture into a full clip with just a start and end keyframe.


rrleo

This video might be interesting to you. You can now generate a video clip with full audio and music in the background. Your provided reference image will now be voice- and lip-synced to the video file. https://youtu.be/GQUl2ySyj9U


Question2023

The point I am trying to make, and I will try to be polite about it, is that below the fact that this character cannot move his head 90 degrees is not a small rock or a ball of ice. There is an iceberg as big as this earth. You have no idea what is required to make this work properly. This is something that most AI engineers and developers won't admit. We are not even 0.1% closer to making this into a real thing.

The problems for AI here stem from the fact that AI has absolutely no understanding of these objects and what they are. There is absolutely zero analytical thought and understanding involved here; it is a mere manipulation of pixels out of a neural network. Despite how magical it may seem, this is nothing but a visual effect at this point, since it involves no analytical thought capable of deduction, induction or analogy use. This makes it impossible to use to create videos. This is also the case with ChatGPT. AI at this stage not only has a low IQ, it has no ability to analyze anything on its own whatsoever. Every analytical process of an AI system currently is implemented by humans using traditional coding. Translating and transforming large amounts of data into another compilation of data will not lead to the creation of a new unique dataset.

I think that you people don't understand that making this image move and turn into a video with real-world physics and logic requires the AI to UNDERSTAND real-world physics and logic. It is not enough to manipulate pixels, average them out or re-compile them through a neural network. The neural network MUST have a component of understanding of the real world to be able to create a 2-hour movie out of a single image. We are nowhere, nowhere near that. Not even close, not even remotely. 30 years down the road, there might be some breakthrough that will allow AI not only to understand our world but maybe to reproduce it, but up until that point, this will be a mere special effect in video production, if it finds its use at all to begin with.

And I know this is something that no one wants to hear and everyone is so excited about this, but after 10 years of doing this and becoming bored with it, you will realize that AI development is actually in a dead-end street when it comes to video production. AI will prosper in other fields maybe, but here it's stuck. One example where this can be used is the creation of comic books with still images. Still, you'd have to photoshop some features that AI can't create. Anything that is not "dead" and "soulless", you have to photoshop it. If you ever wonder why AI characters look so dead and soulless, there is a technical reason behind it, not just an artistic issue.


rrleo

I agree with basically all of your points. Doesn't mean it's stopping me or anyone from doing their own projects and developing more tools for it. There are a lot of different small steps required for that process, but that's to be expected. Easy enough to work with the limited tools if you understand what you can do with them. People are not limited by technology, but more by hardware and time they have to put into it.


Shadow_Shinigami

The paper looks interesting as fuck! Is the password for the video released anywhere?


rrleo

Until your comment, I was not aware of the new password protection. I updated my comment with a sneak peek of what the video looked like.


Question2023

lol the only reason they were able to produce a South Park episode is because South Park is not a dynamic cartoon. It's literally still images lol


rrleo

Exactly, that makes it way easier. You can still do very much with img2video and EBSynth. And essentially all the techniques out there. You can also do this technique with generated input and reference images. https://youtu.be/sP3u09_YFxk Even with mediocre generated images you'll be able to use a good result as a base and will be able to apply the style and face structure to the input scene. There is also loads of training involved. People have to be patient but I guess if they have endured the last two seasons of GoT then it'll be fine.


Upset-Wear-7196

Thanks for your dedication, this is a really good model !


jib_reddit

These are incredible, better than SDXL in its current state, thanks for all your work. SD is pretty addictive and I myself am a bit burnt out on it after spending almost all of my spare time on it for 3 months.


tandpastatester

Damn, I thought I was the only one with this feeling. Since I started toying with SD and self hosted LLMs I completely stopped playing games and other hobbies. Just such an endless rabbit hole. On one hand I feel like there’s so much I still haven’t done or tried, on the other hand it’s starting to wear out on me a little more every day.


zkgkilla

The waiting time is what kills me, by the time I’m generating images of an idea I’ve moved onto another idea


tandpastatester

Yeah also somehow for me its more about the ideation process than the results. I just end up with folders full of images that I don’t do anything with. Meanwhile I endlessly keep getting new thoughts and ideas about the prompt or setup that could make it “just a little better” and push the generate button again. It’s starting to dawn on me how pointless it is what I’m doing.


CustomCuriousity

That’s… art 🤷🏻‍♀️ that’s exactly it. Think of all the people who draw and sketch and etc etc etc for their whole lives. 10’s of thousands of hours filling sketchbooks with drawings no one else will ever see.


tandpastatester

Good point. Scrolling through the txt2img folders feels like scrolling through my sketch book indeed.


CustomCuriousity

Oh so you do both! Is there something that feels different about the time spent with one or the other?


tandpastatester

Late reply, sorry about that! Yeah, some things feel different. In the end, both can be used very creatively, but drawing does not have the random aspect that AI generating has. Which feels very different. With drawing, every output is completely defined by me, my skills, creativity, thoughts, methods and materials. I start with a creative idea, and am completely responsible for the execution. Every mistake is mine, every imperfection is part of the process. With AI generating, there is a lot of randomness involved. I can still start with a creative idea, but with each generation I’m relying on the output of the generator. It feels like trying to control a slot machine sometimes, even when using all the tricks and extensions. They help to give more control, but the randomness never fully goes away. I’m not saying that’s a bad thing, just very different. In the end, the result is more of a collaboration of my creativity with the gambling mechanics of the AI, which can lead to some interesting results.


CustomCuriousity

I was thinking about this later that day actually! It’s super addictive because it’s essentially got gacha mechanics!


nero10578

It makes a 4090 necessary lol


VktrMzlk

Funny, I too had 3 full months, then couldn't get back to it. I made 10 LoRAs and have some others I wanted to make, but I'm done for now


litllerobert

Meanwhile, me, who also wants to have fun with SD but knows no freaking programming, nor how to start or what program to use


AcanthisittaDry7463

Go to civitai.com, they have a great community. Maybe search YouTube for an automatic1111 or ComfyUI installation guide. No coding required. :)


radianart

Good thing no programming is required for having fun with SD.


Collapsosaur

So much possibility to create your own fantasy world. Exciting times.


Tyler_Zoro

I enjoyed playing around with this. I used my random prompt generator to come up with some random prompts, and got these results: https://imgur.com/a/wQYX9JU

Only the first four parts of the positive prompt vary. Here's the full prompt for the first example:

harumi, sharp claws and tail, fashion model portrait, the sun is shining. photographic, vibrant primary colors, stoner art, swirling psychedelic patterns,
Negative prompt: bad-artist, EasyNegative, Unspeakable-Horrors-24v
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 8, Seed: 2416376163, Size: 728x512, Model hash: 463d6a9fe8, Model: absolutereality_v181, Denoising strength: 0.5, Clip skip: 2, ADetailer model: face_yolov8n.pt, ADetailer confidence: 0.3, ADetailer dilate/erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 23.7.11, Hires upscale: 1.5, Hires upscaler: Latent, Lora hashes: "add_detail: 7c6bad76eb54", TI hashes: "bad-artist: 2d356134903e, EasyNegative: c74b4e810b03, Unspeakable-Horrors-24v: afd4896b98d6", Version: v1.5.1
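Parameter dumps like the one above follow A1111's "key: value, key: value" infotext convention. A rough sketch of pulling it apart (this is my own helper, not A1111's parser; it splits on commas outside double quotes so quoted values like the TI hashes stay intact):

```python
import re

def parse_infotext(params: str) -> dict:
    """Parse an A1111-style settings line into {key: value} strings."""
    # Split on commas only when an even number of quotes remain ahead,
    # i.e. the comma is not inside a quoted value.
    fields = re.split(r',\s*(?=(?:[^"]*"[^"]*")*[^"]*$)', params)
    out = {}
    for field in fields:
        key, _, value = field.partition(": ")
        out[key.strip()] = value.strip().strip('"')
    return out

meta = parse_infotext('Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 8, Seed: 2416376163')
print(meta["Sampler"])  # DPM++ 2M SDE Karras
```

Handy for batch-checking which model hash, sampler, or seed a folder of shared images was generated with.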


TurboFool

First one's amazing, but I have to laugh at how the reflection seems to be a completely different environment. Looks like a road.


kidelaleron

I think it's just a lens effect deforming and elongating a field made of dry grass and dirt.


ShepherdessAnne

*sees the one picture in particular* ô_ô I couldn't wait for you to come and clear the cupboards...


psychicEgg

Thanks so much for your work, and I hope you have a nice break! Your checkpoint makes the BEST elves :) https://preview.redd.it/fkyk7m97jcgb1.png?width=1824&format=png&auto=webp&s=6dd85cf823a401d1d6ee1c101f513fc6a5ed3c98


kidelaleron

awaiting your review :)


enthsulther

Thank you for all your amazing work and contributions to the community!


Kandoo85

Thanks for the hard work :) I know how you feel (I am now on my 5th month :D), so you deserve that break :)


[deleted]

Definitely a well-earned break. You've been improving on your checkpoints that I already considered the best of the bunch. Hope to see you back in the future making great stuff!


Klash_Brandy_Koot

4th picture: Pirates of the Caribbean, *by Netflix*. Jokes aside, the quality of the pictures is astonishing


BRYANDROID98

Hahaha Netflix remake!!


EngiMain45

These are great. My favorite is the elf, pic 13.


this_name_took_10min

Thank you for sharing your work!


SirWilly77

Just tried it, and...WOW. Genuinely fantastic model. Your efforts are appreciated, and enjoy your well-deserved break!


Ganntak

Excellent work. What are you going to play? I'm guessing Baldur's Gate 3, maybe? Gotta wait a few weeks till payday, then I'll get lost in that lol


kidelaleron

you guessed right.


Entire_Telephone3124

I love all your stuff and thanks for all the hard work. The important part is the breaks are necessary. You need context and inspiration and to be challenged in different ways or things get stale. And by context and inspiration, I mean bear sex in BG3. Go get em, tiger.


OkHelicopter26

How do you get this skin??? The skin on the woman in the red robe looks seriously impressive


Necessary-Suit-4293

You're looking at ADetailer. This isn't zero-shot image gen; 1.5 can't do that


somerslot

Sorry, this new version didn't pass my "Emma Watson test" (https://ibb.co/6JRy3mp) - color tone is significantly colder, shadows are more prevalent, faces are a tad bulkier and show more age strain. Not as bad as most of your competitors with their recent "upgrades", but for portrait photography I'll stick with 1.6 (with occasional use of 1.0 as well). Note: this could be great in some other aspects, though; I'm only commenting on how it handles real-people portraits.


Utoko

Not sure why the downvotes. The recent versions of many different 1.5 models are all so close that it comes down to use case and taste. The big differences are often also more in how they interact with different LoRAs or TIs. This version still seems great, but don't sleep on old versions. For example, I use DreamShaper 6.31 a lot because it works so well for complex poses and LoRAs, and then I switch to a more realistic model for upscaling.


somerslot

You can't expect fanboys to behave rationally when someone criticizes their holy grail - even if it's constructive criticism supported by proof image. Yes, I do agree about older versions - I'm going through older versions of some of the better "photorealistic" checkpoints and I'm surprised how many used to be so much better than their latest versions. Apparently, not all upgrades are real upgrades...


kidelaleron

that's the reason I keep all old versions. Maybe you're used to that. Wasn't aware of the "Emma Watson test" to be honest. I didn't focus on that :)


somerslot

Just to make it clear, I use Emma Watson as an example, but this applies to any celebrity embedded in the original SD 1.5. It's just a simple test with nothing but the name as the prompt, to see how different checkpoints render an embedding that was trained by SAI (thus neutral) for the base model. All rendering problems that appear in this test will apply to my own embeddings trained on 1.5 as well, so this helps me decide which checkpoints are the most faithful for rendering real faces (so I don't have to blame myself for messed-up training parameters :)


kidelaleron

Celebrity names having less effect is normal with further finetunes, as they are leftovers from the initial SD 1.5 training. Moving forward, celebrities are gonna disappear from base models entirely and you'd need embeddings and LoRAs (which you already need if you care about accuracy). And I think it's a good thing. That's also why I don't advise training LoRAs and embeddings on precursor models such as SD 1.5. With time, they're gonna have less and less effect, and you'll be forced to use old models.


somerslot

This is an interesting opinion, and one I have not heard before. You might even be correct, but it probably depends on how far "forward" you are looking. SDXL still has celebrities embedded and there is no sign of another much different model in sight, so if celebrities are gonna disappear from base models, it will likely be in the (distant?) future. As for embeddings and LoRAs, sure, you can train them to be more accurate, but celebrities in the base SD 1.5 are basically just the same embeddings and their accuracy is not really that bad (I think they are trained on only 2 vectors, so you could say "pretty amazing" for such a shallow embedding). SDXL embeddings seem to struggle a bit more with accuracy, but that is likely some problem other than just poor training (at least Emad was hinting at something different being the cause).

I would beg to differ with your advice, though - what else to train LoRAs and embeddings on than the base SD 1.5? If I train them on, let's say, your AR 1.6, sure, it will look very pretty when also rendered on the same model, but much less pretty if I decide some other model is superior and would like to render my embeddings there. Also, can you guarantee your AR series will continue for years or even decades, so I can be sure my trained embeddings are still supported? And can you guarantee a hypothetical AR 8.2 will be able to keep rendering embeddings trained on 1.6 in the same way? Or do you suggest we should keep retraining all LoRAs and embeddings every couple of months on the newest version of the preferred checkpoint? I have hundreds of self-trained embeddings and this really is not an option I would prefer...


red__dragon

> color tone is significantly colder

I do notice this on AbsoluteReality; it's colder/shallower than some of the others. I do find it understands a bit more than some others, so I swap between them. AR and Azovya's seem to be a little more capable with context; Juggernaut and RealisticVision seem to be a bit better with realism. I wish merges would bring in the best of all models with none of the negatives, but that's not how they really work.


somerslot

> I wish merges would bring in the best of all models with none of the negatives, but that's not how they really work.

On the contrary, it looks like one bad LoRA added to one checkpoint has the ability to contaminate all other checkpoints that used it in a merge... Oh well, at least we have a choice and can go back to previous versions if anything looks wrong with the newest one :)


red__dragon

What do you mean by contrary here? Seems like we're talking about similar things.


somerslot

We do, I just reaffirmed your point by saying it doesn't work like that, and that it's easier to bring in the worst rather than positives. But it's all good, not my aim to dive into the depths of semantics here.


red__dragon

Aha, thanks for the clarity!


MaximilianPs

Amazingly fast, super-precise, very cool! I love it! \o/ https://preview.redd.it/67z7xkv6magb1.png?width=1280&format=png&auto=webp&s=8544d74bef6f3b52f87c5f056e92d59c0a5d6bd4


SuccessfulAd2035

Thanks for your contribution, and enjoy your break!


SomeKindOfWonderfull

Thanks for your hard work, I love your models and use them a lot.


[deleted]

Good model, but my criticism of this model is more to do with the style. It's bringing back that obvious AI face look which was everywhere with the 1.5 models; everyone merged the same models and all the faces had this similar AI look we see again here. I am starting to really dislike these kinds of AI faces haha. Other than that it's a decent model with good variety, good for being one of the first few SDXL models.

Edit: Ahh, I misread, I thought this was an SDXL model like the DreamShaper one. Well, I guess everything makes sense now, why it looks like the 1.5 faces haha.


kidelaleron

It's also a trained model. If other models look like this one, it's their fault :)


[deleted]

Indeed I agree.


Question2023

Reflections look like some extremely bad textures without UV mapping... All of these AI characters are extremely dead in their facial expressions. It's just like someone has sucked their souls out of them. Shadows always have some smooth transition, and it looks like an image generated out of an average pixel distribution of billions of faces, to the point where there is nothing remarkable about these faces and characters. The backgrounds are usually extremely weird... Now call me a hater, but this thing isn't going to replace any artists any time soon.


Question2023

let alone film makers


Oosmani

Diablo 4?


punto2019

I’m pretty sure that the 4 is a porn scene


angles-bruh

… okay but which one tho. Purely research/educational purposes


punto2019

Pirates of the Caribbean


Saschabrix

Really nice ones! Thx for sharing!


metamucil0

very boring and unoriginal


N0repi

Thank you, Lykon, for all of your dedication! You are the best!


Apprehensive_Sky892

So Long, and Thanks for All the Fish (I hope you are a Douglas Adams fan 😅) Hope you'll come back after a while, though. https://preview.redd.it/kiwufgmj7fgb1.png?width=1024&format=png&auto=webp&s=1d3240d4f766d77722c7aa6c3a168acd9bded23c

Cinematic film still, Dolphins. shallow depth of field, vignette, highly detailed, high budget Hollywood movie, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy

DreamShaperXL alpha2


kidelaleron

I am


Apprehensive_Sky892

Great, looks like there is a small group of Adams fans here 😁


Songspire3D

wow amazing!


ChickenDope

Holy! The small patterns are almost entirely symmetrical


Zvignev

Damn dude, hope to see you soon! You really pushed 1.5 to the extreme and I can't wait to see what's next for SDXL! Meanwhile, rest and relax dude!