AI and the Sin of Sloth

Why did the thank-you email from my high school principal last month sound exactly like the one from the sophomore in my fourth period who always turns in quizzes half blank? The first time I knew I was reading something with a human’s name on it that was actually generated by artificial intelligence, it was a Substack post about being your authentic self without making anyone uncomfortable. It read as if written by a woke-but-chill frat boy who is oddly obsessed with HR rules and proper grammar. Did you know that “real authenticity is when your inner monologue and your outward deliverables finally align”?

This isn’t just my inbox. A Wired analysis, which examined AI-detection scans of posts from Substack’s top newsletters, estimated that about 35 percent likely use AI in some capacity, and about 20 percent rely on it heavily. Another Wired analysis of more than 274,000 Medium posts in 2024 estimated that over 47 percent were likely AI-generated. And a November Brookings Institution survey found that 57 percent of participants used generative AI for personal communication.

Why do I find this so irritating? Perhaps it’s because I feel like I’m being conned. There’s something that seems unfair about spending more energy reading an article than the supposed “author” spent writing it. Or maybe it’s because it’s stealing. The programmer Simon Willison called the training of large language models (LLMs) “money laundering for copyrighted data.” Or maybe I’m just jealous that anyone can now vomit out in seconds something that takes me hours. This article took me two weeks. But still, who cares if a robot wrote something if it communicates the point? It matters because it’s a symptom of a broader cultural trend: outsourcing anything that takes time, energy, or focus.

Writing Without a Writer


Any writing that deserves your attention is the result of effort expended by the person who wrote it. I still treasure handwritten letters from my grandmother, Dorothy, with her distinctive D’s. These are specific and personal; they’re one-to-one. Today, our connections are increasingly one-to-many: Updates are posted for an often anonymous, potentially limitless audience with no responsibility to respond. We can reach everyone without connecting with anyone.

We’ve gained efficiency while losing the personal—trading individuality and authenticity for speed and convenience. But relationships aren’t simply exchanges of information; they can’t be transmitted like data. They must be felt. When my sister texts “How’s it going, fuck-o?” she’s not asking for a detailed update on my latest spat with a neighbor so much as she’s just checking in. Intent matters more than content. AI is streamlining depersonalization with products like Apple’s iOS 26, which can summarize messages.

Using AI to communicate with friends isn’t merely impersonal. It’s also rude. As Dan Brooks wrote in The Atlantic, sending ChatGPT-generated messages is like playing a recording of yourself saying “Oh, that’s interesting” every time someone speaks. It makes others do more, so we do less. A recent 4,500-word “personalized” email from my financial advisor bore the hallmarks of ChatGPT. It took seconds to generate, but it would have taken me 45 minutes to read had I not tossed it. (The same goes for the other AI products, like Anthropic’s Claude, Microsoft’s Copilot, and Google’s Gemini, not just OpenAI’s ChatGPT.)

Where effort goes, intent goes with it. One of the most meaningful messages I received this school year was a card from my eight-year-old daughter, Phoebe, on my first day of teaching. It was a drawing of me labeled “Dad,” surrounded by hearts, with the message, “Hope you have a good time being a teacher.” Its significance came from the effort and care she put into it, not its eloquence. When you read a sentence written by a human, you assume there’s a mind behind it with a belief and a purpose. When you read a sentence generated by an LLM, that connection evaporates. The words still exist, but the intention and sentiment that give them personal and moral weight are gone. That’s why I found it so depressing when people offered “condolences” when my dad died.

When I prompted ChatGPT to write a note like Phoebe’s, it gave me 10 options ranging from “simple and warm” to “more heartfelt.” The “more heartfelt” option was “I hope you enjoy your first day teaching, Dad. It’s a big step, and I’m really proud of you for doing it.” It might have been “more heartfelt,” but my daughter no longer wrote it.

[Phoebe’s card. Credit: Courtesy of the author.]

If Phoebe didn’t come up with these new words, who or what did? When you type 100 words into a generative AI prompt and ask for a 1,000-word story, the model must fill the extra 900 words. There are a few ways it can do this. One is to average the decisions other writers have made; another is to mimic a specific writer, such as Ernest Hemingway, if prompted. In the former case, we get bland, formulaic writing, and in the latter, we get derivatives of Hemingway’s BrainyQuote.com page. AI can imitate a writer, but it can’t be Papa himself—because what made Hemingway wasn’t just output. It was a rare judgment and point of view. It’s not something that can be reproduced by jamming copies of The Sun Also Rises through a probability-weighted shredder.

Good writing is almost always hard work. AI makes it look easy only through the accumulated labor of those who fuel it. When that effort is hidden, we start to mistake competence for frictionless ease. As David Chase, the creator of The Sopranos, put it, “Hard work looks like magic.” Chase once made his crew idle for two hours under the Manhattan Bridge because a tugboat he wanted in a five-second shot was late. Like great television, the magic of generative AI isn’t that it’s effortless; it’s that it appears effortless.

The Law of Least Effort


My mother was raised in a Mennonite immigrant household where laziness was a moral failing. When my 80-year-old aunt got COVID-19 and stopped working, she said her sister was “always quick to take to the bed.” I learned sloth-as-sin from my mother, who learned it from hers.

From an evolutionary perspective, laziness is a survival strategy. Biologists call it the law of least effort. When rewards are equal, evolution favors organisms that conserve energy. In other words, we’re wired to be lazy because organisms that picked the path of least resistance had an edge.

For most of evolution, however, effort was inseparable from survival. Animals that sat on their ass all day would be underfed, outcompeted, and unlikely to reproduce. At some point, humans crossed a threshold where effort became less a condition for staying alive than a matter of choice. At first, that choice belonged to a small group of elites: aristocrats, landlords, courtiers—people whose food and shelter were produced by others’ hands. As societies grew affluent, that option became available to an increasing number of people.

The Meaning of Life


The human brain was built for scarcity and necessity, not abundance and endless choice. Skills such as tracking animals, hunting, finding a mate, and raising children require practice. They are hard, repetitive, and come with a lot of failure, which is why evolution coded them to feel fulfilling—tying effort to feelings of meaning and value. This is why activities with immediate rewards (e.g., shooting heroin, scrolling TikTok) can leave you feeling empty over time, while activities with delayed rewards (e.g., going to the gym) can provide a sense of satisfaction. James Clear, in his bestseller Atomic Habits, defines a “good habit” as one in which the cost is in the present and a “bad habit” as one in which the cost is in the future, which explains why good habits are hard to start and bad ones are difficult to stop.

The daily tug-of-war between comfort and effort creates conflict. By avoiding effort to save energy, we miss out on the things that give meaning to our lives. If we follow the least-effort instinct too far—doomscrolling, excessive TV watching—we avoid discomfort but also stymie growth, competence, and pride.

Daniel Kahneman, the famed psychologist and author of Thinking, Fast and Slow, distinguishes between the experiencing self, which lives in the present moment, and the remembering self, which lives in the past. While the experiencing self mostly tracks pain vs. pleasure, the remembering self reflects and decides whether life was meaningful. For the experiencing self, effort is a bad deal. It’s uncomfortable, frustrating, or boring, and often fails. Avoid effort when there’s an easier option available, it tells you. Why stare at a blank page when ChatGPT can write the essay for you?

But the remembering self plays by different rules. When you ask people what makes their life meaningful, they rarely mention playing Candy Crush. They talk about raising kids, getting sober, and finishing a degree. These things aren’t always pleasurable in the moment, but they are the things that make life worth living. For the remembering self, effort isn’t a cost but proof that something mattered.

The more we optimize life around the experiencing self’s bottomless craving for another hit of dopamine, the more we starve the remembering self of meaning and self-satisfaction. The first dopamine hit—a text notification, porn, Netflix, junk food, a perfect ChatGPT essay—feels great. The tenth is fine. The hundredth barely registers. Psychologists call this insatiable need for greater intensity hedonic adaptation.

Immediate gratification characterizes modern life. Our economy runs on one-click purchases, same-day shipping, and fast-lane checkouts. Consumers have become so twitchy, impulsive, and allergic to lag that adding 100 milliseconds to an Amazon order can reduce sales by 1 percent (or $6.38 billion), and the average American checks their phone every five minutes. Students have stopped reading books, and teachers have stopped assigning them; one survey found that the number of books assigned annually by high school English teachers has fallen by a third in the last 15 years. Even dating has become too much trouble for some: 57 percent of single adults now say they aren’t even looking for a relationship.

A life built around minimizing discomfort and maximizing pleasure flattens experience. Silicon Valley’s wares over the last two decades have raised the baseline of stimulation so high that doctors are increasingly diagnosing anhedonia (chronic joylessness) among those most immersed in digital life.

I’m not a Luddite, and I don’t begrudge clever tools that spare us genuinely menial or mindless work. The washing machine turned laundry from an all-day ordeal into a short chore, freeing women to join the labor force or do something more rewarding than removing stains. Outsourcing mental labor with calculators, calendar apps, and search engines (what psychologists call cognitive offloading) saves time and increases efficiency, but it also comes with drawbacks. Tools create dependencies: calculators made us less numerate, Google weakened our ability to remember, and the fears that too much television would make us dumber and less happy proved true. The history of technology isn’t one of progress without trade-offs; it’s progress that shifts costs—often to places we don’t notice until long after the fact. One widely used life-events scale found that survey participants rated the same disruptions—foreclosure, divorce, or moving—as about 28 percent more stressful in 2023 than in the late 1960s, suggesting a broad decline in resilience over the past half century.

The distinction between a technology that complements our skills and one that substitutes for them is critical. Tools that complement our skills still leave the user doing the work, whereas tools that replace them destroy competence. As AI becomes increasingly capable of doing intellectual work—drafting emails, writing essays, summarizing vast troves of information—the tendency to label tasks “mindless” to justify skipping them becomes greater. Is reading a book, instead of skimming bullet points, a waste of time? Is forcing yourself to birth an original sentence mindless? Generative AI isn’t merely a labor-saving device—it’s becoming the writer and thinker. And AI doesn’t just eliminate “drudgery”; it increasingly competes with us in the cognitive tasks it frees us to do. Today, generative AI can not only write students’ papers: It can also complete online courses.

AI isn’t making tasks easier; it’s making them optional—and slowly convincing us that doing them is pointless.

Getting Stupider


I can’t navigate my own neighborhood without Google Maps. I still don’t know how to get to the tennis courts I play at every week, and I’ve lived here for five years. I know the places I lived before navigation apps far better than I’ll ever know my home, Greensboro, North Carolina, because once GPS became available, I stopped building a mental map and just followed the machine’s directions. When ChatGPT was released, I found myself reaching for it in the same way—letting it answer questions for me and outsourcing the effort of thinking.

I don’t ask my students to write essays because I need to read more. I do it to sharpen their critical thinking. An MIT study found that students writing essays with ChatGPT showed less brain engagement during the task than when they wrote unaided (or even when they used Google as a research tool). Struggling with sentences is also called thinking, and the less tolerance students have for this effort, the less able they are to solve problems. In an essay for The New Yorker, the award-winning science fiction writer Ted Chiang described using ChatGPT to complete assignments as “bringing a forklift into the weight room.” What’s the point? I’ve noticed an erosion of my own ability to concentrate and struggle with words in my writing. The more I rely on the machine for synonyms or alternate phrasing, the less patience I have for the discomfort of working out the next sentence by myself. Ironically, I’m feeling that impatience right now.

The more I use generative AI, the more I feel the important things in life are being stolen from me. We are approaching a world where college essays are written and read by LLMs with little human involvement; students ask AI to write essays that their instructors ask AI to grade, producing comments that never get read. For the low, low tuition price of $60,000 a year, you too can minimize effort and learn as little as possible! If AI saves time, it raises the question: What are we saving it for? More episodes of 90 Day Fiancé?

We live in an era where appearing to do things matters more than actually doing things. But people who stop doing things eventually become people who can’t do things. The high school students I teach use their phones to add 12 + 15. If a chatbot writes my essays, emails, texts, and thank-you notes, I lose the ability to think, and I become a spectator to my own life. What we used to call “thinking” gets swapped for supervising outputs we didn’t create and signing our names.

Silicon Valley has become an unlicensed pharmacist, handing out joy like junk food. And in our haste to arrive—in the age of instant everything—we’ve erased the very gap that made getting there mean anything at all.

There’s widespread concern about the existential, catastrophic risks posed by AI, from terrorists building bioweapons to machines enslaving humans. But the biggest risk is happening now. AI is doing what we want it to do, making us comfortable and idle—sedentary sacs devoid of confidence, competence, and meaning, for whom effort itself has become a kind of torture.

The post AI and the Sin of Sloth appeared first on Washington Monthly.
