The Robots Have Landed

Creative destruction of cultural production is near.

Chris Perry
Jul 14, 2022 · 7 min read

About a month ago, a new Cosmopolitan “hit the newsstands” with a greeting from the future. The publication featured no celebrity news, dating, or relationship advice. Just a simple message with material subtext.

Meet the World’s First Artificially Intelligent Magazine Cover → It took only 20 seconds to make.

Cosmopolitan’s stroke of genius is a ground-breaker. The prevailing narrative on automation is that it’s the domain of repeatable, routine work. The cover proposes we update this story.

Robots are here to exert artistic muscle too. They’ll change the way we look at, think about, and rationalize creativity in the future. Our perspective on cultural production will expand in the age of AI. The robots have landed.

WARNING FOR THE UNPREPARED

For me, creative robots trigger unpleasant flashbacks. I lived through the first wave of creative automation in the early 1990s. Despite the promise evangelized by technologists, the real-life impact was devastating.

My first job out of school came via a family business, a graphic arts shop. We worked in the shadows of the GM Tech Center, one of the most prolific design campuses in the world. We sold commercial art that sold the automotive dream.

Back then, ad giants like Campbell-Ewald, Ogilvy & Mather, and J. Walter Thompson outsourced creative services. We were their back room to concept, design, and produce the work. It was largely done by hand, by artists, in a super-charged, deadline-driven environment. Our graphic house made craftwork by craftspeople.

And then, out of the blue, life as we knew it changed.

Steve Jobs’ bicycle for the mind propelled automation. Desktop publishing running on his Macs brought the house down. Software ate the commercial art world and our business with it.

Desktop publishing — PageMaker and Illustrator running on Macs — turned production craft into computer code in the early 1990s

DESTRUCTION AGAIN, ON A BIGGER SCALE

I discovered early in my career that it’s a bad place to be on the wrong side of a technology movement. The lesson frames the way I see the world and consult with teams on innovation. Yes, new things bring new potential. Then there’s the flip side. It sucks to do work that’s on the way out. Adapting is the only way to survive.

Writing this, I feel a strong sense of déjà vu. Creative destruction looms again. This time it’s coming for more than commercial art or social content. AI will alter most forms of cultural production and the sensibilities of those who make it.

It might feel like a non-sequitur to reference a political economist, but Joseph Schumpeter’s theory of creative destruction warrants a double-click.

Schumpeter is famous for recognizing that capitalism is never stationary and constantly evolving. He said: “Situations emerge in the process of creative destruction in which many firms may have to perish that nevertheless would be able to live on vigorously and usefully if they could weather a particular storm.”

To weather disruptive change, one must adopt new technologies. The significance of Cosmopolitan’s cover is not in the presentation. It’s the source behind it.

There you see an automation storm on the horizon.

AN UNUSUAL COLLABORATION

The editorial team at Cosmopolitan joined forces with artificial intelligence research lab OpenAI to create the cover. They used DALL-E 2 to produce it. DALL-E 2 turns written descriptions into pictures. It’s an AI trained on hundreds of millions of images. On command, the system builds images pixel by pixel. Each rendering is unique.

According to the accompanying cover story, Cosmopolitan’s team fed the system prompts like:

“A young woman’s hand with nail polish holding a cosmopolitan cocktail.”

“A fashionable woman close up directed by Wes Anderson.”

“A woman wearing an earring that’s a portal to another universe.”

And the one that determined the final image:

“Wide-angle shot from below of a female astronaut with an athletic feminine body walking with swagger toward camera on Mars in an infinite universe, synthwave digital art”

DALL-E 2 is in what OpenAI calls a “preview” phase — a limited release to a thousand users a week. Throttling use lets engineers make tweaks and smoke out problematic use cases. As the OpenAI CEO’s tweet shows, it’s scaling fast.

NEW POSSIBILITIES UNLOCKED

Casey Newton calls DALL-E 2 one of the most disruptive new products he’s seen since he began covering tech. He likens it to “a before and after moment.”

Writing in Platformer, he said:

I remember the first time I Shazam’d a song, summoned an Uber, and streamed myself live using Meerkat. What makes these moments stand out, I think, is the sense that some unpredictable set of new possibilities had been unlocked.

It’s been a few years since I saw the sort of nascent technology that made me call my friends and say: you’ve got to see this.

Newton adds:

Imagine using the Google search bar like it was Photoshop — that’s DALL-E. Borrowing some inspiration from the search engine, DALL-E includes a “surprise me” button that pre-populates the text with a suggested query, based on past successes. I’ve often used this to get ideas for trying artistic styles I might never have considered otherwise — a “macro 35mm photograph,” for example, or pixel art.

A sample of images produced by DALL-E’s neural networks

Beyond robotic magic tricks, the intelligence advance should give creators pause. The cost-benefit model doesn’t favor flesh and blood in current practice.

From ARK Investment’s research team:

A human graphic designer would take more than 5.25 hours to recreate those images at a cost of ~$150 based on an average hourly wage of ~$29. DALLE-2 hasn’t announced its commercial pricing yet but, given 22 seconds of compute on an A100, we estimate that the inference cost for a single image could be ~$0.01 — more than 99.99% lower than human labor cost. As a result, we have been debating whether or not powerful AI image models might displace human graphic designers.

Where things go from here is hard to predict. They added:

AI training costs should decline 60% per year, suggesting that at today’s cost, researchers should be able to train models 100x larger within the next five years. ARK’s Director of Research, Brett Winton, tweeted that scaling a model from 350 million to 20 billion parameters has re-created language. If DALLE-2 were to scale 100x, we wonder what other tasks it might be able to accomplish.
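ARK’s back-of-the-envelope numbers are easy to verify. The sketch below reproduces both claims — the 99.99%+ labor-cost gap and the “100x larger in five years” projection — using only the figures quoted above (hourly wage, hours per image, per-image inference cost, and the annual cost-decline rate are ARK’s assumptions, not measurements).

```python
# Checking ARK's estimates. All inputs come from the quoted research
# note and are assumptions, not measured data.

HOURLY_WAGE = 29.0        # ~$29/hr average graphic-designer wage
HOURS_PER_IMAGE = 5.25    # human time to recreate the images
INFERENCE_COST = 0.01     # ~$0.01 per image (22 s on an A100)

human_cost = HOURLY_WAGE * HOURS_PER_IMAGE      # ≈ $152, i.e. "~$150"
savings = 1 - INFERENCE_COST / human_cost       # fraction cheaper than labor
print(f"human: ${human_cost:.2f}  AI: ${INFERENCE_COST:.2f}  "
      f"savings: {savings:.4%}")                # comes out above 99.99%

# Scaling claim: if training costs fall 60% per year, the same budget
# buys a model roughly 100x larger after five years.
ANNUAL_DECLINE = 0.60
cost_multiplier = (1 - ANNUAL_DECLINE) ** 5     # 0.4^5 ≈ 0.01
print(f"5-year cost multiplier: {cost_multiplier:.4f} "
      f"(~{1 / cost_multiplier:.0f}x larger model for the same budget)")
```

Running it shows the human-vs-machine gap is about 99.993%, and a 60% annual decline compounds to roughly a 98x capacity gain over five years — close enough to ARK’s rounded “100x.”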

What looms raises plenty of questions, depending on where you sit. For the:

  • Student: Is graphic design or photography an obsolete career path?
  • Information analyst: Is this image real or fake (or does it matter anymore)?
  • Lawyer: Do robots have copyright protection or rights?
  • Human rights advocate: Can you curb sexually explicit images, new forms of hate speech, or lack of diversity in output?
  • Culture watcher: Will automation retrieve craft and creative communities?
  • All of us: How will we ever know what is real or rendered?
DALL-E makes photo-realistic images of people who don’t exist.

DALL-E has already pushed boundaries in creative use. It’s a collaborative, high-velocity meme machine. A way for chefs to imagine new dishes. An engine to populate the metaverse. A potential replacement for Photoshop.

Perhaps what DALL-E means is best captured by Karen X Cheng. She’s an engineer, designer, videographer, and entrepreneur. Her work has racked up more than 75 million views on the Internet. Cheng designed the Cosmopolitan cover.

She also used DALL-E to create a Nina Simone video, posted here. More than the video, what struck me was her mixed message on what creative automation means.

Karen X Cheng also produced an AI-generated music video: Nina Simone’s “Feeling Good,” interpreted by AI.

Cheng says:

The invention of the camera didn’t mean the death of painting. Rather, it forced painting to evolve to other styles — more abstract and impressionist, since cameras could capture photorealism so well. Eventually, the invention of social media coupled with cameras gave rise to an explosion of ways for a new generation of painters to share their art.

With AI, we are about to enter a period of massive change in all fields, not just art. Many people will lose their jobs. At the same time, there will be an explosion of creativity and possibility and new jobs being created — many that we can’t even imagine right now.

Developments like DALL-E will force creatives to collaborate with AI. The robots have landed. Those who wait or turn a blind eye will see careers and creative potential eaten by software.

I’ve seen it happen before. The storm clouds of disruption are forming again.


Chris Perry

Innovation Lead @ Weber Shandwick. Start-up board adviser. Student mentor.