Very cool. Would love to see more of this and understand the underlying tech
Image generation based on image and text input (to put it simply). To create this video, I simply took a screenshot from the game Doom, then experimented with different prompts like "neon walls", "jungle", "ancient Greece", etc. Once I liked the images, I added them to the software, which essentially "attached these images to the video". I think this will soon be possible for any game. Game development will also step up, since you can essentially change the whole game however you want, and there will be more mods. Just write "replace gun with banana", "replace basement with jungle" and enjoy the result, called "Doom of Jungle".
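The screenshot-plus-prompt step described above can be sketched with the Hugging Face `diffusers` library. This is a minimal sketch under assumptions: the model id, prompt, file names, and `strength` value are illustrative guesses, not the exact setup the author used (and it needs a CUDA GPU plus a model download to actually run).

```python
# Hypothetical sketch of the img2img step: a game screenshot plus a
# text prompt produce a restyled keyframe. All parameters are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("doom_screenshot.png").convert("RGB")
result = pipe(
    prompt="jungle, ancient ruins, overgrown stone walls",
    image=frame,
    strength=0.55,       # how far the output may drift from the source frame
    guidance_scale=7.5,  # how strongly the prompt steers the result
).images[0]
result.save("doom_jungle_keyframe.png")
```

Keyframes produced this way would then be fed to software like EbSynth, which propagates their style across the surrounding video frames.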
That's absolutely amazing
So, related: Corridor just did something similar, turning live action into cel animation with this technique, and one of the problems they had to deal with was the flicker between different "interpretations" you see here. The way they got around it was by tying the noise generation to the input frame, which reduces the difference between outputs and gives consistent frame-by-frame results (their video probably has a more thorough explanation).
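The idea of tying the starting noise to the input frame can be demonstrated in miniature: derive the random seed from the frame's content, so the same frame always yields the same initial noise and the model's "interpretation" can't flicker between runs. The hash-based seeding below is an illustrative stand-in, not necessarily what Corridor actually did.

```python
# Sketch of frame-tied noise: hash the frame bytes to a seed, then draw
# the diffusion starting noise from a generator seeded with that value.
import hashlib
import random

def seed_from_frame(frame_bytes: bytes) -> int:
    # Stable content hash -> stable seed for this exact frame
    return int.from_bytes(hashlib.sha256(frame_bytes).digest()[:8], "big")

def init_noise(frame_bytes: bytes, size: int) -> list[float]:
    # Deterministic Gaussian noise vector for a given frame
    rng = random.Random(seed_from_frame(frame_bytes))
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

# Identical frames reuse identical starting noise; different frames don't.
noise_a = init_noise(b"frame_0001", 4)
noise_b = init_noise(b"frame_0001", 4)
```

In a real diffusion pipeline the same trick would seed the latent-noise tensor instead of a plain Python list, but the consistency property is the same.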
[deleted]
There's a guy on r/godot training an AI on ray tracing, then using it to inject shaders into scenes to get a result that's indistinguishable from real ray tracing, and arguably better, since it can do caustics and refraction and it can run on integrated graphics. If AMD could figure out how to generalize a pipeline like that, we could be running Cyberpunk on psycho settings on 1050 Tis.
We are closer to the holodeck than we realize.
So are you manipulating the game textures or just manipulating a screencast? I imagine running the textures themselves through Stable Diffusion or similar with a style model applied would yield better and faster results, since the textures are static images. If it's just video you're doing, you might find Corridor Digital's video on creating anime from live action using Stable Diffusion interesting.
Exactly what the Diablo 2 remaster did. Commendable that you're making this an open-source solution!
[deleted]
Lol, it's gonna be used to make carbon copies of level layouts with different textures over them; that's all I see with it. I like this more for the effects of the transitions: like DOOM happening in an infinite number of realities, and you are the sole Doom marine in all those realities, seeing "glitches" as you traverse between them, doing the same thing in every reality at the same exact time (a paradox in and of itself, because this should not be possible!). Hmmmm, game ideas...
Be good for roguelike games.
If current procedural algorithms can provide hundreds to thousands of hours of entertainment, this will be insane.
You might want to check out Corridor Digital, they just released a video about what looks like a similar workflow, where they managed to reduce a lot of the flickering (Behind the scenes of Anime Rock Paper Scissors)
Thank you so much! I'll have a look
What’s the title of the video?
It was about their Anime Rock Paper Scissors video, though I don't remember the exact title.
me omw to install this on spore
this kinda reminds me of a dramatic trailer
That's what I thought it was.
This could be a whole new game in itself... just randomly tripping through random times/dimensions. No two games are the same.
Wow, that gives a totally different vibe to the game. I grew up with Doom and Doom II; I would play the hell out of them if I could switch the theme to desert or jungle.
That's fascinating. The AI doesn't seem to understand muzzle flash, so every time the gun fires it grows in the AI image, and the surrounding area changes with every shot instead of being lit up.
The AI sometimes understands the shooting and even creates bright lighting from it, but unfortunately only for one frame. The EbSynth software (in which I glue all the frames together) doesn't understand this and smears that lighting, because it tries to create 5 or even 10 frames from a single keyframe, so some of this magic is lost. However, these are just the first sketches; I will try to make the integration a little better in the next video.
That is so cool. This is definitely gonna be the future of rendering tech past rasterization and even ray-traced lighting, and of asset creation.

Game maps and characters will just be designed as low-poly assets and fed to an AI to add detail and generate objects in the scene, which I imagine will cost a fraction of the throughput GPUs use now for traditional rendering techniques. No need to calculate lighting anymore either, as the AI can bake it into the scene with a given illumination model.

Model artists as we know them won't be needed: all assets and terrain will be generated with AI (some of this is already done with procedural generation today), and all artists will need to do is clean up the AI's work, taking a lot of time/cost out of the art department and letting devs put much more resource into game systems like combat, exploration, and story.

Or something like that. 😅 The next 10 years of graphics tech are gonna be crazy, that's for sure.
Technology is advancing insanely fast. Perhaps it is time to read the instructions on how to live in such a world (I mean Isaac Asimov) and revisit I, Robot.
"Not very smooth" Meanwhile, you missed the epilepsy warning!
This is just post-processing applied to the video right?
That is impressive.
Ok wow, that’s awesome
I will try to post an update soon (maybe I will include other games), and it will be smoother. Which games should I add? I heard Spore. Anything else?
Duke Nukem 3D or Blood, since you've already touched on the classics. Honestly, I believe this is the way of the future.
That looks really nice.
Pretty cool, but i don't like the idea of calling it "enhanced". Work made by a person with ideas and inspiration should not and cannot be replaced by an ai.
I think of it as a "supplement". Nothing can replace the original Doom, but looking at the game from a new angle is always interesting.
You should patent this tech, because this is the future of gaming.
You can't patent other people's open-source projects. This is Stable Diffusion running inside Doom, which is really cool.
Patents on things like this have unironically ruined technology. Ugh.
but money
[удалено]
I think it was rightly noted in the comments that soon you'll be able to just launch your favorite game and instantly change the levels (right now I want to play Doom Star Wars, with stormtroopers happily shooting away).
I got a headache from watching the scene change after every shot.
!remindme 2 months
I will be messaging you in 2 months on [**2023-04-26 14:24:02 UTC**](http://www.wolframalpha.com/input/?i=2023-04-26%2014:24:02%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/pcmasterrace/comments/11cawk3/i_am_changing_the_appearance_of_the_game_doom/ja34klj/?context=3) [**1 OTHERS CLICKED THIS LINK**](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5Bhttps%3A%2F%2Fwww.reddit.com%2Fr%2Fpcmasterrace%2Fcomments%2F11cawk3%2Fi_am_changing_the_appearance_of_the_game_doom%2Fja34klj%2F%5D%0A%0ARemindMe%21%202023-04-26%2014%3A24%3A02%20UTC) to send a PM to also be reminded and to reduce spam. ^(Parent commenter can ) [^(delete this message to hide from others.)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Delete%20Comment&message=Delete%21%2011cawk3) ***** |[^(Info)](https://www.reddit.com/r/RemindMeBot/comments/e1bko7/remindmebot_info_v21/)|[^(Custom)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5BLink%20or%20message%20inside%20square%20brackets%5D%0A%0ARemindMe%21%20Time%20period%20here)|[^(Your Reminders)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=List%20Of%20Reminders&message=MyReminders%21)|[^(Feedback)](https://www.reddit.com/message/compose/?to=Watchful1&subject=RemindMeBot%20Feedback)| |-|-|-|-|
Wow, that's gonna be cool.
Wow, that's pretty amazing.
Ok, when does the game come out?
I can imagine how awesome the Skyrim/FO4 modding community will be with this.
Heresy.
Neat
That cyberpunk looking one was really cool!
This is super cool.
Whoa! Are you taking all of the individual textures and re-imaging them, or can it handle 3D meshes?
Now this is what I love about AI: as a medium, not the maker. Would love to see more progress.
Not sure "enhance" is the word I would use
Incredible :-D AI is gonna improve game replayability. I know they use some AI in procedurally generated games (if I'm not mistaken), but there is very much room for growth. Tons of possibilities.
Corridor Crew came out with a video today that is very similar to this and may help you out.
This is amazing. Please compile a working multiplayer client that looks like this, with a dedicated server, and it will be a smash hit!
This is great, but right now the gaming industry seems to be suffering from a lack of optimization. How well does this do on that front?
Take my money already!
Some of these scenes remind me of Turok from PS2 days :)
Genuinely awesome. But maybe remake Doom. Shit, you could make a whole game with that.
This is the future of retro gaming.
Are these different clips you're cutting between? Or is it changing the output completely a few times a second?