I’ve avoided this topic because I didn’t think I had anything to add that wasn’t already being loudly expressed across the Internet. But lately, I’ve seen friendships fall apart, bullying, and shaming over this, and it occurs to me that some things have been missing from much of the conversation.
I don’t know if I have the right arguments, but I’m doing my best to find a nuanced perspective on this because if there’s one thing I can say with relative certainty, it’s that this isn’t going away. So, we need to decide what life with AI should look like before it becomes so integrated into people’s daily lives that it’s too late for discussion.
What is art, and why let AI take that from us?
I define art as humans creatively communicating some part of their humanity to other humans. Those other humans might relate (and feel understood) or find their minds expanded by a new perspective.
I understand the temptation to use AI for content creation. It saves time and money, and you can count on a good (even if not great) result. But for writing fiction? For screenwriting? For composing music? For visual art? For any form of real human expression?
To let AI do the creating is to rob us of that communication. This will inevitably isolate us from one another even more than we already are. Before AI, art was often formulaic or created with a cynical drive to squeeze money from consumers. But there was some humanity in even the most cynical creative works, and AI removes any last vestige of that humanity. Art created by algorithm misses the entire point. At best, it could become riveting but empty entertainment—a dopamine dispensary to keep us from ever facing reality or looking more deeply within. And because AI output is, by construction, a statistical average of everything it was trained on, it is a product of the consensus; anything that challenges the consensus gets smoothed away.
I’m not ready to give art up—none of it. Not the struggle, the craft, the learning, the practice, or the slow molding of an idea through dozens or hundreds of iterations until it’s the expression I intended. All of this matters. All of it is intrinsically fulfilling, especially at its hardest. To remove any part of that from the process is to rob us of the full power of our creativity.
People should have a craft
What do the most fulfilled people tend to have in common? They have some craft that took time to practice, something they’ve worked hard to become good at, something they can do for hours without ego and without pervasive thoughts reminding them of the complications in their lives. Psychologists call this the flow state.
This isn’t a distraction. This makes life worth living, and we should all find some creative outlet that we enjoy. Not everybody will be an artist, nor should they be, but there are other ways to be creative in your work and life.
The years of hard work are what make getting good at something so rewarding and satisfying. If I could go back and make things easier for myself, I wouldn’t. Every hardship and every hour of grueling practice I’ve put into my writing and music has shaped me into the artist I am. What we’re doing with AI could strip away a central component of human development in youth and an important source of meaning in adulthood.
Wabi-Sabi
I learned of this concept (the Japanese aesthetic that finds beauty in imperfection and impermanence) some years back and felt an immediate affinity with it because I’d been practicing it in my work for years, particularly in my music. I don’t usually enjoy perfectly quantized, pitch-corrected music—it sounds robotic. But since I made my music on a computer (as almost everyone does now), it sometimes sounded that way. So I looked for ways to make it feel more human, because it’s in those imperfections that the music becomes beautiful. Most would agree that beauty is organic.
In a world of AI art, those human imperfections will no longer exist. That doesn’t sound interesting to me.
Is it just a tool?
Many argue that AI is just a tool. They compare it to Grammarly and Photoshop. This is the most disingenuous of all the arguments. Before AI, Photoshop was only a medium, like a canvas and brush. That’s a tool. Writers need extensive proofreading, and Grammarly helps catch the last bits an editor may have missed. That’s a tool. And I’d concede that adding AI makes those tools more capable.
Generative AI is more than a tool because it does the part that has always been uniquely human: it makes creative decisions, and it can make them without much input. Using writing as an example, it can help you come up with a topic, provide as many narrative options as you ask for, outline your story using any established story structure, write the entire draft, offer solutions when you get stuck, and entirely remove the need to learn the craft at all. Then, it can help name your story, write your descriptions, make a book cover, and so on. What’s left for the human creator? Will future writing be choose-your-own-adventure?
Can it just be a tool?
I’ve resolved to keep AI separate from my core writing process. Writing is not just about stringing words together; it’s a personal expression. When others suggest that AI is simply a tool, I concede to a point. I could use AI to transcribe my voice notes and punctuate them accurately. It could be a practical aid for tedious tasks that distract me from the actual art. It could be used for note organization, marketing copy, or synopses. It can help write emails to agents. None of this is part of the craft, and I have no issue with people using AI to make those tasks easier.
Who is the creator?
I may wind up using it for those boring tasks, but there’s a line. If an AI influences narrative direction or refines my prose too heavily, it’s no longer an assistant—it becomes the writer. I’ve always believed that while ideas initiate the process, they’re just the starting point. The real essence of a story lies in its execution. The subtle ways characters interact, the emotional beats, and the tiny but meaningful details—these can’t be algorithmically generated with the same depth a human can provide. An algorithm may be able to fake it, but I ask again: if it’s not from a human mind, what’s the point?
If you have AI write a book, I think that’s ok. Just be sure to list the author as “ChatGPT and those whose work trained it.” Since you entered a prompt, you can list yourself as a contributor in the acknowledgments.
Is AI theft?
I’m not a law expert, so I won’t get into the legalities. But it’s not just about what’s legal, is it?
These AI programs had to learn how to write by reading the works of human authors. If an AI program writes a book in the style of a human author whose works it ingested, should that author get credit or royalties?
I do have to concede that no creativity happens in a vacuum. We’re all inspired by the artists who came before us, and uniqueness comes from taking little bits from disparate inspirations. You could argue that what AI is doing is no different. But it’s different because it’s AI. It’s automated and industrialized. As a musician, if I cut my teeth by learning some classic rock songs, then some blues songs, then some classical songs, and a personal style emerges from that mix of influences, I’m carrying on a tradition that has existed since art was first invented. But writing a program to take bits from other artists and pump out replicas at rates humanity could never achieve threatens to put the people whose work trained the AI out of business entirely.
And that’s my issue: human beings can’t pump out new material at a rate that threatens the livelihood of their mentors and inspirations. AI can and will. And let’s not forget the primary beneficiaries: the corporations who own the AI software.
AI shaming
This is where we come to my main motivation for writing this: AI shaming. I may disagree with letting AI take our art from us, but I’m equally against public shaming. Any time someone on social media is suspected of even thinking about using AI, the pile-on begins. I get that people feel it’s their only available recourse, but it only benefits the corporations. Of course it does, because this is the same thing they always do: they fuck us over and get away with it by using the media to persuade us to point our fingers at one another instead of at them. I’ll never support mob shaming of individuals who mean well and are just trying to find a way to make it through life, even if I disagree with how they’ve chosen to go about it. It’s disproportionate, misses the point, and absolves the real culprit.
Consider a small publisher: it could be a modest operation with only a handful of employees struggling to stay afloat. An AI cover could be a desperate attempt to cut costs. This doesn’t necessarily justify it, but is that really who we want to persecute?
I get why we do this. How the hell can we go after the corporations? At least if we go after some random indie author, there’s no PR machinery defending them, and we can be satisfied watching the destruction of that person’s career prospects. Going after corporations is much harder and less immediately effective. But going after the individual or small company does nothing to help the problem in the long run. It only hurts the person.
Opportunistic outrage
My other issue with online shaming is how much of it is driven by people weaponizing it to build their brands. Social media algorithms reward outrage, and many indie creators have realized that. So when they see an opportunity to shame an individual for using AI, they jump on it because they know it will get lots of engagement.
I’m confident that those doing the shaming have their own skeletons they’d prefer the rest of us not discover. So maybe it’s time we did less finger-pointing and turned our attention to the real problem.
A life dictated by algorithms
We already use machine learning in search engines, social media sites, and elsewhere. Social media algorithms, in particular, have worried me for years. Their primary function is to increase engagement: the more people engage, the more ads the platform can sell. Engagement also builds a habit of continuously checking and scrolling, which generates more ad revenue and collects more data about each user’s interests, habits, and preferences.
If an impersonal machine learning algorithm is directed at increasing user engagement and content generation, it will quickly discover that outrage, fighting, and trolling are the best drivers. You can see that for yourself just by scrolling for a few minutes. You’ll find far more back-and-forth on controversy, arguments, and posts with heightened emotions than any other type of post.
Look at Twitter/X and see who and what tends to rise to the top. You’ll mostly find people stirring the pot. Some are just caught up in the algorithm’s manipulation, while others actively use it to increase visibility. They know what the algorithm is looking for, and that’s what they give it. Outrage. Anger. Conflict. In politics, compromise doesn’t get engagement, so it doesn’t get promoted, so politicians don’t bother much with it. But hateful hyperbole gets engagement, so it gets rewarded, and politicians who understand the algorithms exploit this. Outside of politics, we see the same pattern repeated everywhere.
These things erupt like online gang wars, even over the most absurd things. Zack Snyder versus James Gunn. iPhone versus Android. Famous ex-boyfriend versus famous ex-girlfriend. Sometimes, it gets so heated that it leads to doxxing and death threats and damages our collective mental health.
If you engage with conflict, you’ll see more of it, because the algorithms show you more of whatever you engage with (positively or negatively…the algorithm is indifferent to that). People tend to return to a thread more often during an active argument. They feel compelled to log back in repeatedly to see the replies and come up with a demeaning comeback. They have to end it feeling like they won. And the algorithm is watching all of it. It sees the increased engagement and feeds you more of the same.
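To make that mechanism concrete, here’s a minimal, purely illustrative sketch (in Python, and not any real platform’s code, just the logic described above): posts are ranked by predicted engagement, and the score uses the absolute value of your past reactions, so a furious reply counts exactly as much as a like.

```python
# Illustrative only: a toy feed ranker that promotes whatever you have
# engaged with before, ignoring whether that engagement was positive.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str

@dataclass
class Reaction:
    topic: str
    strength: float  # negative = angry reply, positive = like

def rank_feed(posts: list[Post], history: list[Reaction]) -> list[Post]:
    def predicted_engagement(post: Post) -> float:
        # abs() is the whole story: a heated argument scores as high
        # as genuine enthusiasm, so conflict floats to the top.
        return sum(abs(r.strength) for r in history if r.topic == post.topic)
    return sorted(posts, key=predicted_engagement, reverse=True)

# One angry thread now outranks the quiet post you merely liked.
history = [Reaction("celebrity feud", -5.0), Reaction("gardening", 1.0)]
feed = rank_feed([Post("gardening"), Post("celebrity feud")], history)
print([p.topic for p in feed])  # ['celebrity feud', 'gardening']
```

The real systems are vastly more sophisticated, but the incentive is the same: the ranker has no concept of whether your engagement made you miserable.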
Similarly, people are less likely to engage with dry reporting on important topics from an investigative journalist than with a conspiracy theorist who has made all the connections to reveal the hidden truth. We all know this, and so do the algorithms. It’s no wonder we’re collectively drifting so far from reality.
Bots
On top of this, we have bots. I use that word loosely to describe automated programs designed to push an agenda or people hired to do the same. When you’re having those arguments online with people you don’t know, there’s a good chance you’re arguing with a bot. And those bots know how to work the algorithm, which brings us back to outrage, anger, conflict, and conspiracy theories—now fully automated.
AI-powered bots
Now consider what those bots will become with the latest AI advancements…and the improvements to come. Soon, knowing what’s real and what’s not will be impossible. AI can fake images, videos, and voices and mimic writing styles. These AI-driven bots will be able to create outrage and disinformation at scale and with a precision not seen before. They will be effective because the AI gets constant feedback in the form of engagement stats and can learn in real time what’s working and what’s not. Even if we know we’re being manipulated, that won’t give us the power to stop it. It will know us too well. Just look at the research on manipulation tactics of years past (read Robert Cialdini’s Influence). Not even researchers who knew they were being manipulated were always able to resist.
Imagine a company selling survivalist gear to paranoid people who believe the government is out to get them. The AI will poke at that paranoia to move products, bombarding those people with just the right personalized warnings to increase their fear until they buy the company’s products…which could be guns, gold, or anything else conspiracy theorists are already using fear to sell.
This is already happening, but I’m worried about scale. If you think it’s bad now, it will get a lot worse.
The AI utopia
Imagine an AI utopia from the tech industry’s perspective: Devices monitor our brain activity, and AI uses that data to optimize every experience. Imagine a game where an AI can make fine adjustments as we play to keep our dopamine levels high. It becomes an addiction we can’t escape, and we wouldn’t want to because we’re always happy. No more discomfort, no more need for human expression, just constant entertainment and distraction. The ultimate Brave New World. Is this the sort of world we want to move toward?
Now imagine another utopia from the perspective of regular people: AI has replaced our shitty, dangerous, unfulfilling jobs. Meanwhile, we humans spend our time socializing, building bonds, struggling to learn crafts, helping one another through hard times, embracing our imperfections, and making art for one another.
Both futures are possible. One strikes me as ideal. The other strikes me as inevitable if we don’t start talking about what kind of world we want to live in.
Let’s think about it
I suspect the issues and concerns with AI I’ve listed here aren’t even the beginning of the unintended consequences ahead. Nobody anticipated how social media algorithms would reach into every aspect of our lives, and AI will be the same.
We can disagree over whether the AI-powered forces for good will outweigh the loss of so much of what we’ve always defined as human. I think we should argue over that. Arguments are healthy when they work toward figuring out what direction we want humanity to take. But let’s agree that we have to talk about this, and, maybe more than ever, that we have to listen to each other’s concerns. If we don’t, I believe we’re going to regret handing over to AI everything that used to give us a sense of meaning.