THE TOP TEN OBJECTIONS TO USING AI FOR WRITING (AND WHAT TO SAY WHEN THEY COME UP)
You can accept the bad—and still get the best out of AI.
Last week, I spoke to a group of writers about using AI in the creative process. Afterward, one person thanked me by saying:
“Thanks for getting into the ring with us.”
Yeah, that’s what it felt like… And I get it. The concerns are real: Theft. Bias. Environmental harm. Job loss. Creeping creative mediocrity.
But here's the problem: pretending AI isn't here won't make it go away. And using it without engaging the hard questions? That’s going to hurt us in the long run.
Estimates of how many writers currently use AI in one form or another range from 45% (from a survey of 1,200 authors) to around 80% (per Erik Barmack in The Ankler). But just because all the other kids are doing it doesn’t make it right. So I made this list, a Top Ten countdown, in the proud tradition of my Letterman days, for anyone who wants to push back on the pushback. (Okay… not really in the tradition… it’s not funny. I owe you a funny list…)
THE TOP TEN LIST!
10. "Professionals HATE AI. Publishers won’t even look at AI-assisted manuscripts."
Some do. Some don't. The landscape is shifting.
The Copyright Office has ruled that works generated *entirely* by AI aren't copyrightable. But if you write a manuscript using AI as a drafting or idea tool—and you control the results—it's still your work. (Not legal advice. But I DID talk to a lawyer.)
Publishers are wary because they don’t want AI-generated junk. Fair. But that doesn’t mean they’ll reject your work if you used Claude to brainstorm a character arc or outline a few scenes.
Try this instead: Keep your drafts. Document your process. Be transparent about your workflow.
9. "When I try to sell my work, I’ll be accused of using AI—whether I did or not."
The irony? Many of the detection tools are AI themselves.
Reality check: If you use AI ethically, you can show your receipts. Keep outlines, drafts, and even screenshots of your sessions. Build a digital paper trail.
And if you’re working with AI in interesting, intentional ways? Say so. It’s a story. And stories sell.
8. "AI is terrible for the environment."
Yes, AI has a significant carbon footprint. I’ve read the stories about towns being drained of water by data centers. It's a serious issue.
But here’s the thing: every creative tool we use has an impact. What matters now is how we use that tool and what we demand from those who build and deploy it.
Don’t just wring your hands—get involved. Join conversations on AI ethics, support organizations pushing for greener infrastructure, and amplify voices calling for sustainability in tech. Use the tool and push for better policy.
7. "AI output is biased."
Correct. Because the internet is biased. And because models reflect the assumptions of their creators.
But here’s the flip side: AI can also help detect bias. I’ve used it to identify blind spots in my own writing—and to pressure-test characters, tropes, and assumptions.
AI won’t fix systemic bias. But it can help you interrogate your work more rigorously.
6. "AI makes writers lazy."
Only if they were lazy to begin with.
Seriously—lazy writers don’t need AI to cut corners. They’ll do that all by themselves.
But writers who care deeply about their work? Who obsess, revise, cut, and polish? They’ll use AI the same way they use any tool: to go deeper, not faster.
The best writing comes from our passions and obsessions. AI can help you understand the audience that shares your passions and obsessions.
AI won't replace your process. But it might extend it.
5. "AI is going to put writers out of work. Why would I enable my own destruction?"
I hear this one a lot. And it stings.
Yes, some jobs will disappear. Others will change. But writing—real, intentional writing—isn't going anywhere.
The thing that makes us valuable—the thing in our heads and hearts that says: "There’s something out there that has never been expressed, or quite expressed in this form, that will grab people and help them understand..." That’s the thing AI cannot do!
By design, AI cannot know that it doesn't know something. It doesn't feel the itch that says *this can be better*—especially when "better" can’t even be defined yet.
4. "The writing from AI is flat and mediocre."
You’re not wrong. When you let it write for you, it usually is.
But when you write with it? When you stay in charge and treat it like a collaborator or assistant? You can steer it toward something more useful, surprising, even... good.
As for "not good enough"... remember MP3s? Remember when people said, "Who’s going to listen to this crap?" Technology gets better. And the artists who thrive in this new era will be the ones who know how to pull the new levers.
We’re in the skeuomorphic phase of AI writing. Still mimicking old forms. But just wait. The artists who thrive will be the ones who bend the tools to their voice, their story, their structure.
> Related reading: “ChatGPT Never Went to Fat Camp,” a post about character development and AI.
3. "AI will train on your data. There's no privacy."
This risk is real. But it varies by platform.
- Claude (Anthropic): says it doesn’t train on user data.
- ChatGPT (OpenAI): offers a "temporary chat" mode and a setting to prevent training on your conversations.
- NotebookLM: doesn’t train on uploaded materials, though conversations may still fall under Google’s broader AI terms.
- Gemini: follows Google’s policies.
Pro tip: Don’t upload sensitive work. Use these tools for early drafts, ideation, or structural analysis. When something’s valuable, protect it.
2. "LLMs are full of hallucinations and errors. You spend as much time correcting them as you would writing."
Yes. They get things wrong. That’s why you fact-check, just as you would with a flaky intern or a half-remembered Wikipedia page.
Don’t treat LLMs like search engines. Treat them like smart but error-prone collaborators. And if you're writing fiction? Sometimes the hallucination is the prompt you didn't know you needed.
1. "LLMs were built on theft. It’s unethical to use them."
This is the core debate. And it matters.
Many models were trained on copyrighted work without consent. That’s not okay. Writers should be compensated.
A judge recently sent the Anthropic case to a jury. That’s a big deal. I hope the courts make it untenable for AI companies to keep stealing our work.
There are some terrific companies (like Vermillio) working on tracking and tracing systems that will make it possible for writers to get paid when our work is used to train models. That matters. And it’s coming soon.
Meanwhile, clean models are emerging—like Marey from Asteria—trained only on licensed or generated content.
If you're using AI and recognize something derivative or lifted? Buy the book. Credit the source. Be better than the bots.
Final Thought
As we used to say at Microsoft: AI is a COPILOT! It does great things when YOU pilot the plane. Using AI responsibly and ethically goes hand in hand with using it creatively. Your voice, your vision, and your creative integrity are enhanced by AI.
This isn't a pitch for blind optimism. It's a call for active engagement.
Because the only thing worse than a biased, thieving, error-prone robot?
A biased, thieving, error-prone robot that was trained without us and replaced us because we never showed up.
💬 Your Turn: Which objection have you heard most? What’s missing from this list? Leave a comment or share this post with a skeptic.


