The Year That A.I. Came for Culture

This March, news broke that the latest artificial intelligence models
could pass the LSAT, SAT, and AP exams. It sparked another round of A.I. panic. The
machines, it seemed, were already at peak human ability. Around that time, I
conducted my own, more modest test.
I asked a couple of A.I. programs to “write a six-word story about baby shoes,” riffing on the famous
(if apocryphal) Hemingway story. They failed but not in the way I expected.
Bard gave me five words, and ChatGPT produced eight. I tried again, specifying
“exactly six words,” and received
eight and then four words. What did it mean
that A.I. could best top-tier lawyers yet fail preschool math?

A year since the launch of ChatGPT, I wonder if the answer
isn’t just what it seems: A.I. is simultaneously impressive and pretty dumb. Maybe
not as dumb as the NFT apes or Zuckerberg’s Metaverse cubicle simulator, which Silicon Valley also promised would revolutionize all aspects
of life. But at least half-dumb. One day A.I. passes the bar exam, and the next, lawyers are being fined for citing
A.I.-invented laws. One second it’s “the end of writing,” the next it’s recommending recipes for “mosquito-repellent roast potatoes.” At best, A.I. is a mixed bag. (Since “artificial intelligence” is an intentionally vague term, I should specify that I’m discussing “generative A.I.” programs like ChatGPT and Midjourney that create text, images, and audio.
Credit where credit is due: Branding unthinking, error-prone algorithms as
“artificial intelligence” was a brilliant marketing coup.) 

The flaws are amusing and a relief to many artists
who—when ChatGPT was released—feared their profession might be over. If a
computer program could produce a novel or painting at the press of a button, why
were we spending countless hours torturing ourselves in cafés and studios for little
recognition and even less pay? Yet as the limitations became apparent, artists’
despair was replaced with anger. Visual artists learned A.I. was being used to copy their work. Actors realized
Hollywood studios wanted to use A.I. recreations of them for
eternity. And authors discovered their books had been pirated by corporations
whose billionaire investors cried poverty at the suggestion they should pay even a penny in
compensation. Artists’ anger turned to strikes and lawsuits.

The legal questions will be settled in court, and the
discourse tends to get bogged down in semantic debates about “plagiarism” and
“originality,” but the essential truth of A.I. is clear: The largest corporations
on earth ripped off generations of artists without permission or compensation
to produce programs meant to rip us off even more.

I believe A.I. defenders know this is unethical, which is why
they distract us with fan fiction about the future. If A.I. is the key to a
gleaming utopia or else robot-induced extinction, what does it
matter if a few poets and painters got bilked along the way? It’s possible a
souped-up Microsoft Clippy will morph into Skynet in a couple of years. It’s also
possible the technology plateaus, like how self-driving cars are perpetually a
few years away from taking over our roads. Even if the technology advances, A.I. costs lots of money, and once investors stop subsidizing its use, A.I.—or at least quality A.I.—may prove cost-prohibitive for most tasks.

Instead of guessing at the future, what about the problems that the existing technology, wielded by real humans, poses today? As funny as it may be to cite A.I.’s flaws—did you hear about the chatbot that suggested killing the Queen of England?—even mediocre A.I. can cause a lot of problems.

One threat is what science fiction author Ted Chiang called A.I. as the new
McKinsey. That is the possibility that A.I. will serve as a tool to further
enrich the rich while immiserating the rest of us. Imagine bosses using A.I. to
fire unionizing workers, police citing A.I. to defend racist profiling, or health
insurance companies blaming A.I. for denying needed treatments. Right now, UnitedHealth is being sued for allegedly using an A.I. with a “90% error rate” to deny
coverage. What’s notable is that A.I. being dumb is a benefit. The higher the
error rate, the more money saved.

This was a central issue in the Writers Guild of America strike. While A.I. boosters mocked screenwriters as snobs or technophobes, the WGA had quite practical concerns. In Hollywood, credit determines payment, and
screenwriters knew studios could underpay them by making them rewrite ChatGPT
output, even if the pages were unusable and no time was saved. Thankfully,
the WGA won. But this same
threat looms over every creative industry. Novelists get paid less and retain
fewer rights for ghostwriting than for selling an original book. Music
industry payments are based on credit. And so on. This isn’t about whether A.I. may or may not aid human creativity—that’s a distraction the tech companies
want us to focus on. It’s about material concerns: credit, contracts, and
control.

To put it another way, the problem is human greed and
capitalism. Same as it ever was. A.I. is merely the newest excuse. 

I think there’s another threat A.I. poses to the arts, and to
everything, that’s been under-discussed. Let’s call it the pipeline problem. A.I. fans argue that if artists ignore ethical questions and embrace A.I., we could
produce more. “A.I. could help you write 100 books in a year!” they say. My
question is always: Will A.I. produce 100 times as many readers? There are already
far more books published than there is a readership to support them. And the
books published are a fraction of the manuscripts written. The same is true in
any artistic field. So a series of pipelines winnows this content down and lets people discover work. A.I. may produce garbage, but it can
produce an enormous amount of it. I fear our metaphorical pipelines are not
equipped to handle this garbage any more than our physical pipes could handle a
hundredfold increase in sludge. A.I. could clog up everything. 

Earlier this year, several science fiction magazines closed submissions after being overwhelmed with A.I.-written stories. The
trend was possibly caused by TikTok hustle influencers claiming easy money in
selling short stories. (Cue every short story writer laughing, crying, throwing
up.) The editors said these stories were poor and unoriginal. Many had the same
dull title: “The Last Hope.” But it didn’t matter that the stories were bad. Even if it takes only a few minutes to read and reject a story, those minutes add up when multiplied by a thousand submissions or more.

What happened to
literary magazines can happen to newspapers, agencies, publishers, grants, and really
everything. A.I. fans like to say the technology is democratic, allowing anyone
to be an artist without time or effort. But the result may be the opposite. I
fear that in pipelines with gatekeepers, the gates will slam shut. Connections
will matter even more than they do now as editors and agents avoid a torrent of
A.I. gunk. Independent platforms may fare even worse. A.I. books have already clogged up Amazon’s Kindle store so much that Amazon had to limit self-published e-book uploads.

Outside of the arts specifically, how will social media or internet searches handle increasing volumes of search engine optimization–rigged A.I. content and A.I.-plagiarized articles? Platforms will have to invest heavily in moderation or else drown in A.I. sludge. The tech industry’s only solution is more A.I.: A.I. editors editing A.I. writers for A.I. magazines. But who is going to read any of this?

A year into the ChatGPT era, I’m less concerned that A.I. will replace human artists anytime soon. Some artists enjoy using A.I. themselves, but I’m not sure many people want to consume (much less pay
for) A.I. “art” generated by others. The much-hyped A.I.-authored books have been
flops, and few readers are flocking to websites that pivoted to A.I. Last month, Sports Illustrated was so embarrassed by a report that it had published A.I.-generated articles that it apologized and promised to investigate. Say what you want about NFTs, but at least people were
willing to pay for them.

But I’m also haunted by something I saw in Google’s A.I. demo. The video featured the A.I. briefly summarizing emails someone hadn’t read, then generating new emails to reply with. It’s
easy to extrapolate. The recipients will use A.I. to avoid reading that email and
generate new A.I. replies for others to avoid reading. How soon until everyone’s
inbox is overflowing with emails no human has read or written? Why stop at
emails? A.I. can write book reviews no one reads of A.I. novels no one buys, generate playlists no one listens to of A.I. songs no one hears, and create A.I. images
no one looks at for websites no one visits.

This seems to be
the future A.I. promises. Endless content generated by robots, enjoyed by no one,
clogging up everything, and wasting everyone’s time.
