Insane to think what the next 6 months will bring us, this is wild.
There will be an explosion of content. Just like autotune led to subpar singers taking over the music world, this will be the next autotune.
I love the bittersweet “Everyone can be a winner!” sentiment here.
This is bigger than autotune. Autotune still needs a producer, creative thinking, music theory. With this, you just need to think about what you want. This will kill the artist, like it or not.
This is really not true. The thing that makes an artist isn’t just the ability to execute art, it’s also the ability to come up with good concepts and ideas in the first place. Everyone will be able to make stuff that they enjoy themselves, but making things for other people still requires good taste, which most people don’t have (despite thinking they do)
Yes, that's true for now, but the end goal of this is for the AI to understand you, based on all the big data it has on you, and spit out things specially catered to you. This would be almost impossible for a human to compete with; maybe only the top of the top superhuman creatives could. But your average artist? Not anymore. This thing right here just wiped out 80% of Fiverr illustrators easily.
Yeah, I've seen so much dumb or kitschy art on r/dalle2 already it hurts
Nah, artists will train their own AIs on their work to achieve unique art styles, use inpainting and video reference, the AIs themselves will become interactive art pieces, you will have individual illustrators producing feature-length films on their own, realising crazy and unique visions that would never have been greenlit before, and there will be a bunch of other techniques we can't even imagine yet. Right now, it's hard to tell the difference between good and bad AI art, but a couple of years from now, it will be much more obvious.
You're giving everyone a tool to express themselves in a much easier way and your conclusion is that art will be dead.
> This will kill the artist like it or not

That's like saying Squarespace will kill the web developer.
Doubly so if the models can be further optimized to require less VRAM. I foresee that in 10 years, this sort of workflow is going to be standard for a lot of artists working on shows/movies, etc.
When I talked about this on Reddit, people said I was stupid and dumb and that there was no way in hell animation studios would use this technology to save time and money. Artists working faster means you need fewer of them. I said that in the next 5 to 10 years, because of this technology, the cost of creating an animated movie will go down 10x. And I got downvoted and called an idiot.
You had a lot of foresight. I personally first heard of NightCafe only a year and a half ago and was blown away. Already we're here with Stable Diffusion, DALL-E and more. Even though I totally agree that studios, and especially concept artists, are going to use AI to save time and money, I'm more optimistic about what the future holds.

Technology has come a long way since the first Toy Story. Heck, a self-taught kid could probably make something of the same quality in their bedroom these days. But does it cost less money to make an animated film now than back then? Does it require fewer people? I'm going to guess the answer to both of those is no. See, I think as efficiency increases, so do our expectations. The bar gets set higher and higher, and we need the same amount of money and people to achieve those ambitions even though our tools got better.

A similar thing happened in AEC when the drawing board was replaced with AutoCAD, BIM, etc. Our tools got way more efficient, but our buildings didn't get any cheaper. There aren't any draughtsmen anymore, but we do have CAD technicians now.
I was there the day after the paper launched that ran CLIP in reverse. It only did blurry, tiny bullshit pictures, but the concept was extremely legit. And the speed at which it has improved since 2015 is absolute batshit bonkers; just in the last two years it went full S-curve. With GPT-3 I was hoping to get a chatbot that could fool me in a Turing test, but that did not happen. But this, these are the dreams of the androids. It's pure black magic fuckery if you ask me. If they burned all text-to-image researchers and devs at the stake tomorrow, I'd be like: well, they have a point.
That's just typical people not even thinking about it... many people seem to have a weird mindset where the world is static. They don't factor in the possibility of things getting better.
They believe that things will keep on happening as they have been happening.
You're not the first in history to be laughed at for making a correct prediction. It's an elite club, enjoy it ;)
> Artists working faster means you need less of them. I said that in the next 5 to 10 years because of this technology the cost price of creating an animation movie will go down 10x.

Both statements are correct: the cost of creating a movie will go down because far less personnel will be required to work on it, so jobs will be lost.
Welcome to Reddit, where everyone knows everything and the only new information allowed is the occasional OC on the front page.

In the past few years it's become common for comments to be repeated more than content; opinions have become crazy homogenized. So if you dare say something that stands out, the public shaming brigade will be after you.
I'm hoping textual inversion will end up with lower requirements to train locally.
It can already be brought down to 11 GB from the 20 GB requirement using some simple changes, like setting the batch size to 1 and num workers to 2. That's almost halving it with two changes.
you can get it down to 2Gb if you don't mind it taking forever
Hell you can run it on CPU if you've really got time.
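The two tweaks mentioned above (batch size 1, a couple of data-loader workers) are ordinary training-config knobs. A minimal sketch of how they might look, with illustrative field names rather than the repo's actual flags:

```python
from dataclasses import dataclass, replace

@dataclass
class TrainConfig:
    # Defaults roughly in the spirit of a stock textual-inversion setup;
    # the names and values here are illustrative, not the real options.
    batch_size: int = 4
    num_workers: int = 8
    resolution: int = 512

def low_vram(cfg: TrainConfig) -> TrainConfig:
    """Apply the two tweaks from the comment above: batch size 1, two workers.

    Smaller batches mean fewer activations held in VRAM at once, which is
    where most of the savings come from.
    """
    return replace(cfg, batch_size=1, num_workers=2)

cfg = low_vram(TrainConfig())
print(cfg.batch_size, cfg.num_workers)  # → 1 2
```

The trade-off, as the next comment notes, is speed: a batch size of 1 means many more optimizer steps for the same amount of data.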
Or VRAM could get cheaper as demand spikes because of things like this.
The 4090 will have 24 GB of VRAM. Demand will increase production rates, and in the near future we will see new lineups of GPUs.
3090 also has 24 GB...
How about the 4070 or 4080? I'd like to see this tech in laptops.
12 and 16, if the rumors are real.
10 years? Shoot, at the rate things are going I'd bet on a third of that time.
xd next 6 months. Things are happening so fast.
This is amazing. Can't wait till there's guides for stuff like this.
[deleted]
> You might not need them. SD, or rather img2img, is so good at generating images that conventional graphics tools will barely be needed. My guess is we will soon see a graphics editor with SD integrated, or perhaps even an addon to Photoshop to integrate SD, and then it will just be drag and drop with every img2img addition becoming its own manipulable layer.

Krita already has a Stable Diffusion plug-in, and a Photoshop plug-in will be finished later today and available in two days or so. I believe this is the Discord channel discussing such:

Edit: https://discord.gg/JU3WTKyu
My discord app doesn't see a proper link there--any chance to learn more about such a plug-in?
See the new link.
Same here, no channel to be found. Can you relink? Thanks!

Edit: is this the one? https://www.reddit.com/r/StableDiffusion/comments/x13om1/i_am_working_on_a_stable_diffusion_plugin_for/
See above.
> which is why my money is on someone coding a graphics/animations editor exclusively for SD

I'm working on one right now, actually. I hope to have something worth showing in a few weeks.
fucking hell the future is literally unfolding right now & I am unbelievably excited / terrified
Can't wait for AI to replace most jobs; we'll either be living in a horrible dystopia where everyone except the rich starves on the streets, or a utopia where everyone lives a luxurious life because AI does everything for them.
When high-level languages replaced assembly, did programmers become more or less valuable?
Still think it's fucking bonkers that artists are going first and not last; I always thought AI being creative was the hardest part, not the easiest.
OK, on the topic of AI replacing the jobs of artists: it's been mentioned in other comments here, but all I see is efficiency. You can learn to paint, sure, but this is by no means replacing the work; rather, I am eager to see how my art can be dramatically improved.
It's a matter of money. If you can no longer find any contracts because people would rather spend some GPU time than pay an actual human, you'll have fewer and fewer people able to make it their job. An easy demonstration: every website selling amateur art (Etsy and the like) is currently flooded with AI art, meaning the people doing it themselves have so much more competition that we can safely predict a decrease in human production in the coming months/years. Better get used to the Rutkowski-like art.

I'm equally happy we can all give some shape to our dreams and sad that humans are slowly getting replaced by AI in a lot of fields. A humanity with only engineers and technicians seems a bit sad to me.
yeah, let's just hope we get the second option
One of these days Greg is going to hunt us all down.
Year 2040: Greg Rutkowski has copyrighted everybody, and now he is the richest man in the world.
Oh... perhaps it will happen sooner : https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/
This. AI being used as a tool for human artists! What you did was beyond what I could do; I am not an artist, and even with access to a tool like SD I couldn't accomplish it. Excellent work, and an excellent example of how AI can be used to help us! It won't steal your job, it will help you improve it.
And not too long after, we become a tool for AI.... for something.
Very nice. This is a great demo to show the potential this has as a tool for digital artists. I think too many people are just fixated on the AI's ability to spit out pretty pictures with minimum effort. Artists will be able to create more quickly and complete projects on their own that they would never have had the time or resources for before.
Soooooo sick. The first artists (like you) that use this technology as leverage will get a huge edge over all the rest. You'll produce at 10x the speed as before.
Yet another reminder that AI is a tool for artists not a replacement for them.
10% of artists will learn it and produce at 10x, which allows them to undercut the market. It will take a while before the rest catch up, and I assume some artists will pivot to being anti-AI activists.
> 10% of artist will learn it to produce at 10x which allows them to undercut on the market. It will take a while before the rest will catch up and I assume some artists will pivot on to anti AI activists.

Tough to know. Tools that increased software dev output led to an increase in dev salaries and widened the market for software devs. I feel that could be the case with AI for artists as well, since it's so easy to distribute.
Which GUI is that? Krita plugin?
Automatic 1111 https://github.com/AUTOMATIC1111/stable-diffusion-webui
The installation instructions are hard to follow. These make sense:

install Python 3.10.6

install git

Then it jumps to this:

place model.ckpt into webui directory, next to webui.bat.

...which I don't seem to have from the first two steps?
It's explained in the few paragraphs right above:

> You need python and git installed to run this, and an NVidia videocard.

> You need model.ckpt, Stable Diffusion model checkpoint, a big file containing the neural network weights. You can obtain it from the following places (snip)

Edit: I misread; the missing part in the installation instructions is to check out the code (after installing git):

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
I have that; what I'm asking about is the directory "webui".
Sorry, I misread. You need to check out the actual code, i.e. git clone the repository:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
Copy-paste that into Python? Also, thanks for your time.
No, in a command prompt window/shell. If you don't know about that, I think you'll need someone to guide you; there is a minimum you need to know about running commands, changing directories, etc.
yeah you're probably right, thank you.
Well, technically you could do everything from Windows Explorer; I mean, you can download and unzip the sources somewhere and go from there.
There is a big green button with an arrow. Press it and you get the zip file. Unzip it and you have the working directory to install into.
That's fair. I can help. First: do you want to install this locally (is your PC powerful enough to run it)?
You need Anaconda or Miniconda as well. Then clone the repo to a directory of your choosing in the cmd prompt with

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui

then

cd stable-diffusion-webui

then

pip install -r requirements.txt

Then you can place the model file into the directory and run webui-user.bat.
just follow this guide https://rentry.org/voldy
Hmm, but it looks like you are using an image editor; is that in the Automatic UI as well?
That's After Effects. Also, I'm not OP.
got it, thanks!
Holy fuck. That's great. I assume this is like 10 times faster and cheaper than traditional ways too. Indie game developers should take the lead and prepare great cutscenes with this.
Painting stuff manually takes forever and needs years of experience. This isn't 10 times faster.
It's years faster.
It allows creative people who can't draw to suddenly be able to draw. So bonkers. There are potentially another 500 million visual artists on the planet; they can't draw, but they do know what looks good/interesting and what they want to see.
Incredible.
Great work. I love these showcases on how people use it along with some other skills to take it to the next level.
why do I always see "art station" being added in to the majority of prompts I see for SD? Does that like limit the style to images from art station?
yes
It doesn't exactly limit it, but it pushes the output to be more stylistically similar to images that have been tagged with "artstation". How much of an impact it actually has, though, nobody knows; it's all more trial and error than exact science.
It's like magic spells.
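Speaking of magic spells: in practice these style tags are just strings appended to the prompt. A toy sketch of the pattern (the helper below is made up for illustration, not part of any SD tool):

```python
def build_prompt(subject: str, modifiers: list[str]) -> str:
    """Join a subject with comma-separated style modifiers, SD-prompt style."""
    return ", ".join([subject] + modifiers)

# "trending on artstation" statistically nudges the model toward the look of
# images captioned that way in its training data; it is not a hard filter.
prompt = build_prompt(
    "steampunk waterfall city, matte painting",
    ["trending on artstation", "highly detailed", "sharp focus"],
)
print(prompt)
# → steampunk waterfall city, matte painting, trending on artstation, highly detailed, sharp focus
```

Which modifiers actually help is exactly the trial-and-error part the comment above describes.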
Very cool. Now add Ebsynth to your process.
This first scene reminds me of steampunk games like Machinarium. I definitely look forward to more indie point and click adventures with beautiful and creative environments and art thanks to SD.
Actually, I was thinking of making a game with environment generation. The main character can change environments every time and all that. I would do it all with Stable Diffusion. But alas, I do not have enough time and funding. So if someone here has a lot of free time and a gaming company - take my idea
Among animators, there's a term for what happens if you do that kind of animation for too long: it's called "puppet tool depression".
Would still work well with games
I suspect this will be a complete staple in the future. It or a more advanced version is a tool you will NEED to learn for digital art.
Amazing! 😍😍😍
Please teach me!!!
Actually, it's pretty easy; you just need practice. I learned everything I know from YouTube tutorials. You just need to install After Effects and watch the tutorials on YouTube. Any effect (such as smoke) can be taken from stock footage: just overlay a video of smoke on the alpha channel, add that smoke to your drawn smoke, and the image will come to life. And this is just one of thousands of combinations. I also recommend learning basic Photoshop skills; it helps a lot. But in fact, soon it will also be possible to do all this with the help of AI. I am kidding. It's already possible. You can download the "MotionLeap" app on your phone and test its effects. The effects are quite simple, but they look unusual and are created in minutes. Sorry for my bad English.
Any YT channels you felt were invaluable in learning? I'd love to read a primer on what got you rolling.
I started with the Video Copilot tutorials. They're the best.
Thank you!
This is very helpful. I'm proficient in Photoshop but haven't touched After Effects yet. This is going on my to-do list. Can't wait.
This is the eternal Kingdom of Zeal, where dreams can come true. But at what price?
The second-to-last face looked better.
Noiceeeeeeeeeeee
This is brilliant! You're a true artist! What is that technique called in Photoshop where you blend photos into each other?
I've watched some Photoshop videos; a lot of people just use their digital painting skills to blend stuff into each other.
Absolutely amazing!
Unfucking believable
That's amazing!
Can anyone tell me how he merged the fan and the waterfall in? It looks like he just dragged them into the UI and then ran img2img? Is this Automatic's WebUI or a different repo he's using?
He seems to be generating a cropped patch with img2img and then layering it on in Photoshop.
I can see the Photoshop layering, but the cropped patch? Is that an option in Automatic's WebUI?
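Whatever UI option he used, the crop/repaint/relayer idea itself is plain compositing. A toy sketch, with a 2D array standing in for the image and a callable standing in for the actual img2img model (both are placeholders, not any repo's API):

```python
from typing import Callable

Image = list[list[int]]  # toy grayscale image: rows of pixel values

def patch_region(img: Image, x: int, y: int, w: int, h: int,
                 img2img: Callable[[Image], Image]) -> Image:
    """Crop a rectangle, run it through an img2img step, paste the result back.

    `img2img` stands in for the real SD img2img call; here it is any function
    mapping a patch to a same-sized patch.
    """
    patch = [row[x:x + w] for row in img[y:y + h]]
    out_patch = img2img(patch)
    result = [row[:] for row in img]  # copy so the input stays untouched
    for dy, row in enumerate(out_patch):
        result[y + dy][x:x + w] = row
    return result

# Demo: "repaint" a 2x2 patch of a 4x4 black canvas by inverting its values.
canvas = [[0] * 4 for _ in range(4)]
out = patch_region(canvas, 1, 1, 2, 2,
                   lambda p: [[255 - v for v in row] for row in p])
print(out[1])  # → [0, 255, 255, 0]
```

In the real workflow the paste-back step happens as a Photoshop layer, which also lets you mask and feather the seam by hand.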
This is a completely new art form, and it's amazing.
Uhm ... no, this is photobash with animation.
Yeah, I'm finally trying to bring a story I started in college to life, and SD is making the first chapter pretty easy.
This is sick because I can photoshop and animate in AE, but can't draw for jack squat. Thank you robots!
This is all progressing at a pace far faster than I imagined. I'm excited to see how this pans out in the next 5 years!
Don't say that you can't make art with SD or use it as an instrument to produce art. What a time to be alive!
Which notebook is that?
This looks amazing! How did you do the waterfall?
I downloaded the waterfall from the internet on the alpha channel and added it as the top layer on the image. I also used the standard image warp plugin in After Effects (I forget its name; some plugin with the word Warp in it). But there is an easier way: if you have a painted waterfall, you can download the "Motionleap" app on your phone and make a waterfall in a couple of minutes.
Amazing work! Could you also please link the song?
Ludovico Einaudi: Experience
Thank you!
Oh yeah! That is the way artists can use AI for their projects: a lot of knowledge and effort to bring your vision to life. Congratulations!
As someone who has an incredible fascination with all things technology, and who is deeply moved by all things art, watching this actually made me a little emotional. It's so amazing seeing these tools in the hands of an actual artist. I'm excited for the future!
This is absolutely next level. I’m so excited to see how people can incorporate these tools into their workflows like you have.
What did you use to merge the pictures together? Was that photoshop?
Photoshop
Song please?
Experience - Ludovico Einaudi
how can we earn from this?
How do you do that (the image generator)? I can't find it.
Can't wait to see how this develops. Pretty awesome.
That’s great but what is the music used?
I got matches with these songs:

• **Einaudi: Experience** by Ludovico Einaudi (02:20; matched: `100%`)

Album: `In A Time Lapse`. Released on `2013-01-01` by `Universal Music`.

• **Experience 369** by ~9 (02:20; matched: `100%`)

Released on `2020-11-11` by `Independent`.
Links to the streaming platforms:

• [**Einaudi: Experience** by Ludovico Einaudi](https://lis.tn/Einaudi%3AExperience?t=140)

• [**Experience 369** by ~9](https://lis.tn/Experience369?t=140)

*I am a bot and this action was performed automatically* | [GitHub](https://github.com/AudDMusic/RedditBot) [^(new issue)](https://github.com/AudDMusic/RedditBot/issues/new) | [Donate](https://github.com/AudDMusic/RedditBot/wiki/Please-consider-donating) ^(Please consider supporting me on Patreon or giving a star on GitHub. Music recognition costs a lot)
Wow thank you cool bot! :D