In August 2023, longtime publishing professional and author Jane Friedman published a blog post that struck a nerve across the literary world. Titled “I Would Rather See My Books Get Pirated Than This,” the post (janefriedman.com/i-would-rather-see-my-books-pirated) detailed how AI-generated books—misleadingly attributed to her—had been uploaded to Amazon and Goodreads, exploiting her name and reputation to sell low-quality content to unsuspecting readers.
“I know my work gets pirated and, frankly, I don’t care,” she wrote. “But here’s what does rankle me: garbage books getting uploaded to Amazon where my name is credited as the author.”
Today, nearly two years later, Friedman says that specific ordeal has passed, but the broader challenges facing writers have only grown more complex.
“That was a fairly unique, one-off occurrence,” she told us in April 2025. “The specific problem of false attribution has melted away. The problem we must now deal with is AI-generated copycat titles that release prior to the real thing … in an effort to skim off sales, especially during those first weeks of publicity.”
The Rise of Copycats and Platform Responses
Since 2023, generative AI tools have only grown more powerful and more accessible. That has opened the floodgates to a new wave of content that blurs the lines between original and derivative. Friedman has observed a shift in tactics from AI-generated titles falsely attributed to real authors to more sophisticated copycat books that mimic the titles, topics, and even author names of popular or upcoming releases.
“I see copycats pop up all the time in Bookstat data … trying to take advantage of new releases or popular titles; they’re so obvious it’s laughable,” she says. “For a while, copycat books would misspell the author name or make it mimic the real name—e.g., Samantha Gathrie instead of Savannah Guthrie or Frank Gioia instead of Ted Gioia (real examples). That has become harder to pull off, but there remains a big market of copycats.”
In response to mounting pressure, some major platforms have implemented modest reforms. Amazon now allows only three book uploads per day per account and has introduced author verification tools. Friedman notes that other retailers—including Draft2Digital, Barnes & Noble, and Ingram—have taken a more aggressive stance against AI-generated junk, likely due to the direct reputational risk they face with consumers and booksellers.
“In my conversations with Draft2Digital, they said it’s glaringly obvious when someone is trying to take advantage of their system and publish/distribute AI work, so it’s not hard to block that activity,” she says. “In other words, such behavior stands out when compared to your average author who’s treating writing and publishing as their profession and trying to build a name for themselves, and a readership, over time.”
Still, Friedman remains skeptical that platforms like Amazon are doing all they can. “While Amazon operates on a vastly different scale than Draft2Digital, they also have more resources. I have to assume there is much more they could do, but choose not to do, to stop copycat work.”
A Pragmatic View on AI’s Role in Publishing
Despite her firsthand experience with AI misuse, Friedman doesn’t believe the solution is to reject the technology outright. “I’m not anti-AI,” she says. “I worry about writers or publishers who automatically adopt an anti-AI attitude. I fear this just puts them at a great disadvantage compared to those who employ it ethically and effectively.”
She uses AI daily—often to brainstorm, summarize information, or boost efficiency. “Right now, it’s akin to having a really great intern,” she says. “Or—for some specific tasks—a clone of yourself.”
What’s more concerning to her than AI itself is the lack of transparency and evolving gray zones around attribution. “Purely AI-generated work should be labeled as such,” Friedman says. “Where I have a lot of skepticism is with these ‘human-authored’ certifications. I don’t think technology has advanced to a point yet where any tool can say for sure that something is 100% human-authored. What if someone uses AI to copy edit and proofread? To brainstorm an outline? To help them come up with different phrasing? To write a summary of something? Does that revoke the human authorship? What if people aren’t honest about their AI use in the first place?”
Instead, she advocates for honest disclosure. “What’s more important to me is knowing what human being or what business takes responsibility for the credibility or creation of the work, and that transparency is offered about that work. I don’t want to feel tricked or misled in my assumptions about the work and author’s involvement with it.”
A Cautious Hope for the Future
Even as misinformation, misattribution, and AI-generated clutter continue to muddy the waters of the publishing world, Friedman believes the industry will find its footing. “We’re in a bad place right now,” she says. “There’s so much uncertainty and lack of transparency. … The situation may get worse before it gets better. But eventually, some norms will be established. The industry will develop best practices and have some guardrails. I have to believe that. I have to believe the people who choose to work in writing and publishing ultimately care deeply about the written word and do collectively have integrity.”
Ultimately, she remains hopeful that the human drive for creativity and connection will prevail. “I think AI is a greater existential danger to fields that are quantitative and have more formulas associated with them, more black-and-white or right-and-wrong answers. As I see it, AI poses less of a threat to creative fields and other areas of human idiosyncrasy. Humans will not stop creating or connecting with each other. They’ll use AI to support greater creativity and connection, to explore their own weird obsessions. I look forward to seeing how great minds use it.”
Alexa Schlosser is the managing editor of IBPA Independent. Have you dealt with copycat titles of your books? Email her at alexa@ibpa-online.org.