My love has always been art, but with kids and everything that comes with them, I worked as a Linux system architect for over twenty years. The open-source world has given us so many things we take for granted, and I could not pass up the extraordinary work being done in generative images. Even in the short time I have been working with it, AI has moved extremely quickly.
The story of an AI apple
Two Stable Diffusion AI apples were upscaled to a larger image and imported into the Photoshop beta. Using the new AI Generative Fill, I replaced the background and edited the image as I would a normal photo.
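The Photoshop half of that is point-and-click, but the upscaling step can be scripted. As one possibility (not necessarily how I ran mine), here is a minimal sketch using the Stable Diffusion x4 upscaler from Hugging Face's diffusers library; the file names and prompt are illustrative:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# Illustrative input file; the upscaler works best on small inputs
# (roughly 128-256 px per side) or GPU memory becomes a problem.
low_res = Image.open("apples.png").convert("RGB")

# The upscaler is itself prompt-guided, so describe the image being enlarged.
upscaled = pipe(prompt="two red apples with natural skin blemishes",
                image=low_res).images[0]
upscaled.save("apples_4x.png")  # ready to import into Photoshop
```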
But how did I get here…
It started here.
It took multiple iterations, with progressively more detailed instructions, before I was generating apples with blemishes and lighting that looked somewhat natural. I didn't worry about the background, as it would be replaced later in Photoshop.
Between generations, I made small changes to the Stable Diffusion prompt to nudge it toward a more natural apple.
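For anyone who would rather script that loop than click through a UI, here is a minimal sketch of the same idea using the diffusers library. My own runs were interactive, and the model choice and prompts below are illustrative, not my exact ones:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any Stable Diffusion model works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Each pass adds a little more instruction to the prompt.
prompts = [
    "a red apple on a table",
    "a red apple with small skin blemishes on a table",
    "two red apples with small skin blemishes, natural daylight",
]
for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(f"apple_{i}.png")  # compare the runs side by side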
This was the original apple to get the process started.
I changed the prompt to include blemishes on the skin and ran some more steps. It kept producing a strange lighting effect where one half of the apple was lighter than the other half. It did look more natural, though.
This went too far: it gave me a nicely lit set of apples with bruises, but the apple at the back was deformed and had sprouted leaves. I decided that two apples might be simpler.
I picked this one, as the shape and skin texture looked the most natural. By this point I had also introduced the following photographic parameters into the prompt (see the sketch after this list):
Lens type
Depth of field
Lens model
Lens error
Lighting type
Lighting intensity
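Each of those parameters becomes a fragment of the prompt text. A hypothetical example of how such a prompt might be assembled (the wording here is illustrative, not my original prompt):

```python
# Hypothetical prompt built from the photographic modifiers listed above.
subject = "two red apples with small natural skin blemishes on a table"
modifiers = [
    "shot on an 85mm prime lens",    # lens type / lens model
    "shallow depth of field",        # depth of field
    "slight chromatic aberration",   # lens error
    "soft diffused studio lighting", # lighting type
    "low-key lighting",              # lighting intensity
]
prompt = ", ".join([subject] + modifiers)
print(prompt)
```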
It would have been faster to just take the photograph, but there is an interesting side effect.
I can apply the same prompts to entirely different subjects.
Using the photo above as input, I changed only the name of the "thing" in the prompt.
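A rough sketch of that swap using diffusers' img2img pipeline; the new subject, strength value, and file names are assumptions for illustration:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the finished apple render as the starting image.
init_image = Image.open("apples_final.png").convert("RGB").resize((512, 512))

# Same photographic prompt; only the subject word changes.
prompt = ("two ripe pears with small natural skin blemishes, "
          "shot on an 85mm prime lens, shallow depth of field, "
          "slight chromatic aberration, soft diffused studio lighting")

# strength controls how far the output may drift from the input image.
result = pipe(prompt=prompt, image=init_image,
              strength=0.65, guidance_scale=7.5).images[0]
result.save("pears.png")
```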