The network is trained to figure out how to correct a noisy image, guided by our textual input, which references prior training. E.g., we claim that a noisy image is an image of a dog.
The architecture iteratively adjusts the noisy image, and the results are scored against the likelihood that the image compares favorably within an image-recognition network and matches what we told it. It stops when some programmed threshold is reached, e.g. the AI sees a reasonable representation of a dog. This means there can be artifacts, but the image is close enough for a machine (or a human) to recognize.
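The loop described above can be sketched in miniature. This is a toy illustration only, not any real model's implementation: `toy_match_score` is a hypothetical stand-in for the image-recognition network, its gradient is computed analytically here (a real system would backpropagate through a trained network), and all names and thresholds are invented for the example.

```python
import numpy as np

def toy_match_score(image: np.ndarray, target: np.ndarray) -> float:
    """Stand-in for an image-recognition network: higher means the
    image looks more like the claimed concept (hypothetical)."""
    return -float(np.mean((image - target) ** 2))

def iterative_denoise(noisy: np.ndarray, target: np.ndarray,
                      steps: int = 200, step_size: float = 0.05,
                      threshold: float = -0.01) -> np.ndarray:
    """Repeatedly nudge the noisy image in whatever direction raises
    the match score, stopping once a programmed threshold is reached."""
    image = noisy.copy()
    for _ in range(steps):
        # For this toy score the gradient is analytic; a real guided
        # model would obtain it from the network instead.
        grad = -2.0 * (image - target)
        image += step_size * grad
        if toy_match_score(image, target) >= threshold:
            break  # "close enough" -- artifacts may remain
    return image

rng = np.random.default_rng(0)
target = rng.random((8, 8))                      # the "dog" we claim is there
noisy = target + rng.normal(0.0, 1.0, (8, 8))    # heavily corrupted version
restored = iterative_denoise(noisy, target)
```

The point of the sketch is the control flow: iterate, score against a recognizer, stop at a threshold rather than at pixel-perfect reconstruction.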
I was thinking of it more as true artistic chicanery… the identity-less, featureless slaves of the present future… a world of pseudo-realness but vaguely familiar in proportion, color, & form. The antihero to utopian virtual-reality goals. No expansion, just dullness.
@Northern_Loki you have lit the path. I will be using virtual reality to make an art gallery with 360° photo spheres and phasing picture displays with 3D image reliefs to walk around in. All things take time; I understand it's always in short supply. I started working with NightCafe because it's basically free, and the links you provided will definitely increase the knowledge base to learn from. I've got a lot to learn about A.I. art to be able to use it to make specific pictures for my stories.
The rhino with wings holds a puzzle; tried with butterfly wings…
I'm beginning to grasp how teaching the A.I. is the most important part. Turning text ideas into an unseen vision of what can be imagined is a pleasure. Thanks for the new hobby.
i love the art you're creating with the a.i. tools there @Heliosphear! i agree that there's some amazing and new, unique perspectives in creativity that can be discovered here. i like nightcafe as well. they give you enough tools and levers to really play around, although it doesn't seem quite as strong or creative in some ways as dall-e. i made like three accounts on nightcafe so that i can mine those 5 daily credits. that way i feel like i can really play around when the mood strikes me, rather than feeling like i have to be precious with my credits. i can't wait to see more of what you create.
The results are so incredible that I honestly find it a bit terrifying.
The fact that we have gotten to a point where we can teach a machine human concepts, and how to apply them, to this degree makes me nervous about the future.
Who knows how quickly we'll be able to apply the same concepts to weapons or technologies that don't even exist yet.
On top of that, in the shorter term, jobs that seemed fit only for humans will be given more and more to machines.
I agree. I will add that some of what you are seeing tends to portray more than it actually is. Although, the current pace of advancement with the underlying technology is astounding. If that pace continues then significant disruption is on the horizon, in unexpected domains, and potentially applied in ways that cut across moral and ethical boundaries.
For me Dalle-2 is one of those unexpected domains. People tend to think of automation as kiosks in McDonald's taking your order, or a robotic arm in an assembly line, but apparently even domains that were previously occupied only by human creativity (in this case image art) can be taken over by a machine.
Many people who predict a bleak future caused by job loss due to automation have proposed a theoretical solution where income is universal and people are free to pursue creative endeavors like art. However, if Dalle-2 is already capable of this level of simulated creativity, what kind of productivity (creative or otherwise) will be left for human beings?
Yeah, I'm with you guys; I thought about all of that as soon as I saw it. Maybe human input would still be necessary for interesting art, for example.
I was a bit hesitant on that query and figured this is where it would be going. I'm not really interested in going down that rabbit hole, but I can suggest looking at some of the public comments from the researchers regarding how training bias occurs, along with the backgrounders on the source images. That is all I'll comment on that.