Imagine yourself drawing out the following scene: “a red furry monster looks in wonder at a burning candle.” How would you start? You might see yourself picking up a pen and paper, sketching out an outline of the monster, and perhaps gradually adding in some details of the face, hair, legs, and so on.
These words (and a description along the lines of “in 3D animation style”) were what Chad Nelson, 52, a San Francisco-based digital creator, typed into the prompt bar of DALL-E 2, an AI system from OpenAI that turns natural language into art and images. Within seconds, the generator produced four versions of the illustration, showing a little red creature, jaw dropped and eyes wide, in front of a burning candle.
The red furry monster is one of Chad’s first experiments with DALL-E 2, which he began using after signing up as a tester before the program opened to the public. Last week, on April 6 (exactly one year after DALL-E 2’s release), he released an animated short film made entirely from AI-generated visuals, featuring more furry monsters that he calls “critterz.”
“I felt like I was talking to some sort of magic mystical box, like the Wizard of Oz,” said Chad over a video call that weekend. “I felt like I had superpowers, because now I can work through ideas so much faster than I had ever been able to before.”
He is not alone in feeling this way. Artists are jumping on board with text-to-image models like DALL-E 2 (among others, including Midjourney and Stable Diffusion) to create generative art: art made using a predetermined system such as a computer program. To do so, they type a prompt into the AI system, describing their subject matter, desired attributes and styles, and sometimes camera and lighting conditions, all under 400 characters, and let it work its magic. They say that instead of replacing them, the new tool is enhancing them as creatives.
Image generated by Chad Nelson using the prompt “a red furry monster looks in wonder at a burning candle.” OpenAI.
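For readers curious what this workflow looks like in practice, the prompt-and-generate loop described above can be sketched with OpenAI’s Python library. This is an illustrative sketch, not the artists’ actual code: the model name, image size, and the guard around the API key are assumptions for the example, and the 400-character check mirrors the limit mentioned above.

```python
# Sketch of generating images from a text prompt via OpenAI's Python
# library (pip install openai). Illustrative only; parameters are
# assumptions, not the artists' actual settings.
import os

subject = "a red furry monster looks in wonder at a burning candle"
style = "in 3D animation style"
prompt = f"{subject}, {style}"

# Artists keep prompts under the character limit described above.
assert len(prompt) <= 400, "prompt too long"

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    result = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=4,              # four variations, as the article describes
        size="1024x1024",
    )
    for image in result.data:
        print(image.url)  # links to the generated images
else:
    # Without a key, just show what would be requested.
    print(f"(no API key set) would request 4 images for: {prompt}")
```

Running it without a key prints the prompt that would be sent; with a key set, it prints four image URLs returned by the service.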
“I absolutely love the technology that is coming out these days,” said Jacque Charak, 37, a 2D artist from Illinois, over the phone last week. Her artworks show street scenes in purple, blue, and orange brushstrokes, as if they came out of a utopian fantasy. She creates her AI artwork by taking inspiration from illustrations, film and television, role-playing games, and occasionally prompts written by others. “The ability to use things from different programs and what I do in real life in my artwork brings it to an entirely new level,” she said.
For Aaron Tang, 43, a part-time artist from Boston, AI systems like DALL-E 2 make possible projects he could not have done before: close-up portraits with floral makeup, realistic commercial photography of fast food in flames, infused cocktails in mist. His most recent project is more abstract, showing waves of hands in pastel-colored rubber gloves that circle and flow.
“All my projects are things that I was never trained on, like toy design, fashion design, and high-speed photography,” he said over a video call. With a background in industrial design, Aaron described himself as “pigeonholed” in the past. “The fascinating thing with AI is that you can do a new thing, and that within an hour, you can be pretty good at it.”
AI has also helped artists visualize their ideas immediately. “When I have a thought or question, I just type and generate,” said Aaron. “In five months, I have 70 projects, which would have probably taken me two years.”
A Different Creative Process
To be sure, some artists remain skeptical of these AI models, criticizing them for taking away the time and experience that make something a true work of art. For those who have embraced the new technology, though, the creative process is not unlike photography.
“You can shoot a thousand photos and maybe get two or three good iterations. Generative art, I find it the same thing,” said Jacque. “Not everything is going to be a winner. It takes a lot of finessing.”
“AI is really enabling people,” said Aaron. “I met so many people that could not do art anymore because of age, health, or funds, but now they can do art again. That’s incredible.” Like many other AI artists, he echoed the view that this technology will give people—especially non-visual creatives—the tool to create the images in their minds.
The Future of Generative Art
Challenges remain for the future of generative art. Because the AI systems are neural networks trained on databases of images and their text descriptions, they can reproduce aesthetic biases, such as generating only images of a certain race or gender for certain prompts. The technology can thus replicate and amplify social inequity.
There is also the question of AI infringing on artists’ copyrights. Some training databases were built without artists’ consent, and artists have already seen how art produced in their styles undercuts their own business opportunities.
“Artists need to be compensated if we are using their work,” said Jacque, who credits artists every time she uses a database trained on their works. After she tagged the artist Misstigri in the description of one of her posts last year, the artist responded appreciatively: “I love it. Amazing.”
“As an artist, I want to make sure that my creations are my own. I would hate the idea that I’m just ripping off someone else without my knowledge,” said Chad.
Edited by Samuel Blackburn