Man… crazy stuff.
How about newly born mosquitoes flying around looking for (victims, blood, humans, etc.), as seen through the eyes of the mosquito?
I can send a small donation to you for the fun, next week or the Monday after. I read that other post about the credits. Peace.
1 Like
Whaaaaat? I can make a donation? PM headed your way after this post, @Northern_Loki. The skull bees are incredible and gave me several ideas for my own artwork; the forest-fire runaway squatches invoke childhood hauntings and a work I once painted called Witch Crosses. I haven't had time to stop by, just finished 48 hours of shifts and have another 12 to go.
As I think about where the mind can look to… polar bears on a boat, fishing for people. I'm sure DALL-E will fill in the blank areas.
I wonder about the ghost in the machine, can it see the future… an impressionist painting of the first planet or moon colonized by humans 50 years after first contact.
Rhinos with butterfly wings.
Pirate skeletons with dragonfly wings.
A grey alien and the flying nun riding a tandem bike.
The greatest use for me would be to illustrate my stories. Is it possible to feed a paragraph into the code, or would I need to break a scene down into blocks to help DALL-E understand the end result that is trying to be achieved, putting the words into pictures?
For example.
The quarter moon lit a sandy trail down to the crashing waves. On a midnight adventure to the beach they walked as a pair, looking to the stars for guidance, all of their thoughts and dreams put on hold for the battle in the days ahead that might take away everything they have known as home. Trying to ease the tension with a joke about humanity sprouting a seed on the dark side of the moon, they laugh and smile in the starlight, knowing that the next generation may never know the pleasure of wishing on a star.
In the time before now, when images of a second-earth prototype planet were found within reach of modern science and space travel, we rejoiced at the second-chance probabilities. In the now… coastal cities are mostly abandoned and flooded worldwide; earthquakes, tornadoes, hailstorms, and hurricanes that last for months all over the planet are common, everyday worries. Survival is the only thing that matters and, as always, it is at hand: a seventy-two-hour countdown clock to the meteor storm flashes on their wrist-sleeve comms.
Making their way back to the shuttle craft, they both give a grim look to the sign that reads "Death Valley, it never rains here". Time in the fresh air is always limited when there's a job to be done; a return to Omnihells station is on the schedule because earth needs its best pilots to fight against the waves of inbound meteors threatening to destroy the space stations and moon bases. These farming communities are the last hope for the next generation to survive long-term space flight to anywhere but here; each station and base has become a city where we live and breathe in cubes inside of tubes to survive.
Breaking away from earth's gravity, the shuttle changes bearing and heads towards home. A communication screen flickers to life on the control panel as Commander Chia makes a universal statement as a talking head: "All squadron leaders report to the flight deck for situation report, attack formation protocols, and logistics on the triple!" The landing bay door slides open to reveal a world at work in a three-thousand-meter-long silver tube; with a radius of one thousand meters there are no windows to break, and only maneuvering thrusters and end-to-end connectors are visible on the exterior. For as far as the eye can see, enormous slow-spin silver solar tubes are pulled along in orbit around a planet cleansing itself of the cosmos's most feared bacto-virus… humans.
gotta fly
7 Likes
What does it think God looks like?
Really nice of you to share, @Northern_Loki! Can you try "lost garden" when you have a chance?
1 Like
DALL-E Mini can be accessed here. This is a much simplified version of DALL-E that can be used right away from the web. It is slower, being open source in terms of both software and hardware, but it is still interesting to get to know. The training was also probably done on a reduced set. Some sample prompts and outputs:
These images are unique. That is, they didn't exist before*. But they are based on what has been learned. What does this mean?
A better way to think about this: a combination of a large set of images and the knowledge about those images is used to generate a new image from what amounts to … noise and some sort of supplied direction.
More specifically, humans take images and apply their knowledge to them. For instance, an image of a pepperoni pizza may have tags of: pepperoni, round, cheese, dinner, edible, table, plate, napkin, etc. The knowledge is applied to images in what are termed "tags".
This image is then placed into a hyperspace (a multi-dimensional space, a fictional/mathematical construct stored within a computer's memory). The location in the hyperspace is determined by the knowledge (the tags). Images with similar tags will be close to each other within that hyperspace. Images with dissimilar tags will be far from each other.
When we ask the computer for something, let's say pepperoni pizza, the request is used as a set of parameters to look into that hyperspace. These parameters relate to the "tags". Images near where we are looking will have similarities to the images that have been tagged with pizza and pepperoni. Pepperoni pizza will be very close to what we are looking for. Pizza will also be close, but perhaps further away. Car will be much, much further away.
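To make the hyperspace idea concrete, here is a toy sketch in plain Python/NumPy. It is not DALL-E's actual mechanism (the real model uses learned embeddings, not hand-made tag vectors), and every tag, filename, and vector below is invented purely for illustration.

```python
# Toy illustration only: images are placed in a "hyperspace" as vectors
# derived from their tags, and a prompt is answered by finding the nearest
# stored points. All tags, filenames, and vectors are made up.
import numpy as np

# Pretend each tag corresponds to one axis of the hyperspace.
TAGS = ["pepperoni", "cheese", "round", "bun", "sausage", "wheels"]

def embed(tags):
    """Turn a set of tags into a point in the hyperspace."""
    return np.array([1.0 if t in tags else 0.0 for t in TAGS])

# A tiny "training set" of tagged images.
library = {
    "pepperoni_pizza.jpg": embed({"pepperoni", "cheese", "round"}),
    "plain_pizza.jpg":     embed({"cheese", "round"}),
    "hot_dog.jpg":         embed({"bun", "sausage"}),
    "car.jpg":             embed({"wheels"}),
}

def nearest(prompt_tags, k=2):
    """Return the k stored images closest to the prompt's point."""
    query = embed(prompt_tags)
    dist = {name: np.linalg.norm(vec - query) for name, vec in library.items()}
    return sorted(dist, key=dist.get)[:k]

print(nearest({"pepperoni", "cheese", "round"}))
# ['pepperoni_pizza.jpg', 'plain_pizza.jpg'] -- the pizzas are nearby, car is far
```

Same idea as above: the tags decide where each image lands, and the prompt is just a point we compare distances against.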
For an image that neither we nor the computer has ever seen before, we solve that by finding the minimal distance between the relevant points within the hyperspace, essentially creating a new, imagined point within the hyperspace.
Pepperoni pizza is unique. Hot dog is unique. How about a pepperoni pizza hot dog?
The machine will use those parameters to look into the hyperspace and find … nothing. So, instead, it tries to determine the closest set of parameters within the hyperspace that contains something. That would be pepperoni pizza and hot dogs. It will then try to create an image based on what it has found for both the pizza and the hot dogs. Voilà.
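Continuing the toy sketch from above (still purely illustrative, not the real model's code), the "pepperoni pizza hot dog" case can be pictured as blending the two nearest existing points into a new, in-between point:

```python
# Hand-made toy vectors on the axes [pepperoni, cheese, round, bun, sausage, wheels].
# Blending the two known points gives a new "imagined" point between them.
import numpy as np

pepperoni_pizza = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # pepperoni, cheese, round
hot_dog         = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 0.0])  # bun, sausage

pizza_hot_dog = (pepperoni_pizza + hot_dog) / 2.0
print(pizza_hot_dog)
# [0.5 0.5 0.5 0.5 0.5 0. ]  -> half pizza-ish, half hot-dog-ish

print(np.linalg.norm(pizza_hot_dog - pepperoni_pizza),
      np.linalg.norm(pizza_hot_dog - hot_dog))
# equal distances: the imagined point is close to both "parents", identical to neither
```

The real system then has to render an image for that in-between point, which is where the heavy lifting (and the weirdness) happens.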
There is much more complexity to how all of this is achieved, but that's the general idea. It should also help to explain why simple prompts and/or prompts containing commonly known objects tend to create more realistic-looking images. Tags that are nearby are also likely to be more contextually accurate.
When there are lots of empty spaces in the hyperspace, or larger distances between points, the machine has to do a lot of work trying to merge and create a new image. There are both positives and negatives to such results. I'll try to explain how it does this sometime in the future.
Also, the ability of the machine to create expected results is reliant on accurate knowledge (tags).
If, for instance, someone were to tag images of hot dogs with pepperoni pizza (obviously in error), then when we ask for pepperoni pizza we'll get something back containing … hot dogs instead … oops.
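In terms of the toy sketch from earlier (again, invented names and vectors, not the real pipeline), a single bad tag is enough to pull the wrong image into the neighbourhood we query:

```python
# Same toy setup, but the hot-dog image has been mis-tagged as pepperoni pizza.
# The wrong point now sits where the pizza should be, so the lookup returns it.
import numpy as np

TAGS = ["pepperoni", "cheese", "round", "bun", "sausage"]
embed = lambda tags: np.array([1.0 if t in tags else 0.0 for t in TAGS])

library = {
    "hot_dog.jpg": embed({"pepperoni", "cheese", "round"}),  # mis-tagged!
    "car.jpg":     embed({"round"}),
}

query = embed({"pepperoni", "cheese", "round"})
print(min(library, key=lambda name: np.linalg.norm(library[name] - query)))
# hot_dog.jpg -- garbage tags in, garbage image out
```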
It's really no different than if we were to teach a child that the color "red" is actually the color "green". Without any other knowledge, that child would misidentify the color "red" as being green … the point in the hyperspace is incorrect.
So, our knowledge is key in all of this.
8 Likes
How about… mysterious stranger with a terrible secret?
Maybe too broad…
3 Likes
Very cool and thank you kindly.
1 Like
Ok how much for a link @Northern_Loki? I could spend forever on that. Permanent acid trip.
Socially distant society
Open sesame
Apocalypse now, apocalypse then
Rhizosphere consuming human brain lawnmower man on safari in 1890
The only hard part is staying PG-13.
1 Like
I donât know if we only get one. Do we only get one?
If thereâs some flexibility Iâd like to see a goat riding another goat while eating a submarine in the middle of a hurricane. For craziness sakes.
But itâs cool if we only get one.
2 Likes
https://overgrow.com/uploads/default/original/4X/5/9/7/59742341c9fee4542dff5b1b3817d5b2579baaf5.jpeg
Notice the distorted faces?
we don't need no education…
And the kimono/aloha pantsuit on mama-san is impressive for a computer.
2 Likes
Can confirm this is what @Foreigner looks like in real life.
5 Likes
Visited the mini Dall-E
Rhino with wings
Found lots of other A.I. drawing sites… none as good as DALL-E. Thank you, @Northern_Loki, for the experience. I apologize for over-requesting. Thank you for the valuable information on how it works; I hope I get invited to use the DALL-E A.I. in the future and make use of the heads-up you have given.
5 Likes
Sorry for not being able to respond in detail, somewhat busy on stuff.
There are a number of quality variants out there; some, such as DALL-E, need approval, and that may take a couple of weeks. Midjourney is one alternative. StarryAI as well.
If you have the patience, ambition, and suitable equipment, you can also set up your own DALL-E replica by using their pre-trained models (the training is the heavy lift). You'll need a good PC, a good GPU, plenty of storage, and the ability to manage installing things such as Anaconda. Third-party researchers have created architectures that replicate what the OpenAI team has created. If you're ambitious, check out HuggingFace.co, where things go deep into that world.
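As a hedged sketch of what running an open text-to-image model locally can look like: this is not OpenAI's DALL-E (it uses the HuggingFace `diffusers` library with a different, openly hosted model), and the checkpoint name, precision, and output filename are assumptions made for the example.

```python
# Sketch only: a locally run, open text-to-image pipeline pulled from
# HuggingFace -- NOT OpenAI's DALL-E. Assumes `torch` and `diffusers` are
# installed and a CUDA-capable GPU is available; the checkpoint name is
# illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any hosted text-to-image checkpoint
    torch_dtype=torch.float16,          # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")                  # move the model weights onto the GPU

# Generate one image from a text prompt and save it to disk.
image = pipe("a rhino with butterfly wings").images[0]
image.save("rhino_butterfly_wings.png")
```

Expect the first run to download several gigabytes of model weights, which is part of why the good GPU and plenty of storage mentioned above matter.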
4 Likes