Public KDP Disclosure Guide

AI-Assisted vs AI-Generated on Amazon KDP

What authors must disclose. What the labels actually mean. Why orchestration still counts as work. And why the market should stop confusing provenance with value.

Updated March 14, 2026. Research and reporting by Carver. Built for authors using BookWriter.

The short answer

If AI created the actual text, images, or translations in the book, Amazon KDP currently treats that material as AI-generated and expects disclosure.
If you created the material yourself and AI only helped brainstorm, edit, or refine it, that is generally AI-assisted.
Neither label tells the whole truth about effort. They describe origin. They do not measure the intelligence required to orchestrate the result.

What KDP is really classifying

KDP is classifying the origin of the content in the file you publish. It is not issuing a cultural verdict about whether the book counts, whether the author worked, or whether the process was artistically respectable.

What the culture keeps missing

AI-generated does not automatically mean effortless. Serious operators still create the concept, choose the structure, reject weak output, rewrite the soft parts, and force the book into commercial and emotional shape.

Quick decision framework for BookWriter authors

Scenario: BookWriter Co-Writer drafted scenes, chapters, or passages that remain in the book.
KDP answer: Disclose AI-generated text.
Why: Amazon KDP says content is AI-generated when an AI tool creates the actual text.

Scenario: You wrote the prose yourself and used AI only to brainstorm, outline, edit, or proofread.
KDP answer: This is generally AI-assisted, not AI-generated.
Why: KDP defines AI-assisted content as work you created yourself with AI help on revision or idea development.

Scenario: Your cover or interior images were created with an AI image tool.
KDP answer: Disclose AI-generated images.
Why: KDP applies the same disclosure logic to images as it does to text.

Scenario: You used AI to translate the book.
KDP answer: Disclose AI-generated translations.
Why: KDP explicitly includes translations in the disclosure rule.

Scenario: You are unsure because the workflow mixed human drafting and AI drafting together.
KDP answer: Disclose honestly. When in doubt, do not understate AI-created content.
Why: The real risk is misclassification, not over-clarity.
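The framework above is mechanical enough to sketch as code. This is a minimal illustrative helper, not an Amazon API: the function name, arguments, and return labels are hypothetical, and it only encodes the origin rule described in this guide (AI created the actual content → AI-generated; AI only helped refine author-created content → AI-assisted; when in doubt, do not understate).

```python
# Illustrative sketch of the disclosure framework above.
# classify_disclosure is a hypothetical helper, not an Amazon API.
# It encodes the origin-based rule: the label follows who created
# the actual content, not how much effort went into the book.

def classify_disclosure(ai_created_content: bool, ai_helped_refine: bool) -> str:
    """Label one piece of content (text, image, or translation) for KDP."""
    if ai_created_content:
        # AI created material that remains in the final book.
        # This wins even if AI also helped refine: when the workflow
        # is mixed, do not understate AI-created content.
        return "AI-generated"
    if ai_helped_refine:
        # The author created the material; AI only brainstormed,
        # outlined, edited, or proofread.
        return "AI-assisted"
    return "no AI disclosure needed"

print(classify_disclosure(ai_created_content=True, ai_helped_refine=True))
print(classify_disclosure(ai_created_content=False, ai_helped_refine=True))
```

Note the ordering: the AI-generated check comes first, which mirrors the conservative rule for mixed workflows.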

The Real Rule

Amazon is asking a provenance question, not a moral question

The KDP form is not asking whether you worked hard. It is asking whether AI created the actual content.

Based on current official KDP guidance.

The public argument about AI in books gets confused because people answer the wrong question. They argue about talent, honesty, effort, taste, and whether a machine can feel. Amazon KDP is not asking any of that when it presents its AI disclosure step. Its current rule is narrower and more mechanical. Amazon says content is AI-generated when an AI-based tool creates the text, images, or translations. Amazon says content is AI-assisted when the author creates that content and uses AI only to refine, edit, brainstorm, or otherwise help along the way.

That distinction matters because many authors still think the KDP disclosure box is a philosophical confession. It is not. It is metadata. It is provenance. It is Amazon asking how the finished material came into existence. If the actual words in the book began as AI-created text and remain in the manuscript, that falls on the AI-generated side of Amazon's line. If the author wrote the words and AI helped clean them up, that falls on the AI-assisted side of Amazon's line.

This is why so many people answer badly. They hear the phrase AI-generated and immediately translate it into lazy, fake, shallow, or effortless. That emotional translation is understandable, but it is not the same as the platform rule. Amazon is not grading your dignity. Amazon is classifying the origin of the content in the file you are uploading.

For BookWriter Authors

How BookWriter users should answer the KDP disclosure question today

If BookWriter helped create the actual prose that remains in the manuscript, disclose AI-generated text on KDP.

BookWriter authors need a clean rule, not a cloudy one. If you used BookWriter Co-Writer to generate scenes, chapters, dialogue, or passages that remain in your final manuscript, you should treat that as AI-generated text for KDP disclosure purposes. If you used BookWriter to generate cover concepts or finished cover art, treat those as AI-generated images. If you used AI for translation, disclose AI-generated translations. The disclosure rule follows the content itself.

If, on the other hand, you wrote the text yourself and used AI only as an editorial or planning instrument, that is closer to Amazon's AI-assisted category. Examples include brainstorming titles, stress-testing plot logic, checking continuity, trimming repetition, suggesting stronger verbs, or proofreading a draft you authored. The important phrase in Amazon's guidance is actual content. The question is not whether AI was nearby. The question is whether AI created the material the reader receives.

There are edge cases. Suppose an author explores several AI-generated drafts, rejects them, and then writes a final scene from scratch. That leans toward AI-assisted because the final text was created by the author, not by the model. But if the final scene is still built from AI-created language, even after heavy revision, the safer and more honest answer is disclosure. When the workflow is mixed and memory gets fuzzy, clarity should win.

What The Phrase Gets Wrong

Why AI-generated does not mean easy, hollow, or thoughtless

A tool can generate language. It cannot carry taste, restraint, standards, and accountability for you.

The phrase AI-generated has become a shortcut insult because most people imagine a fantasy workflow. They picture a person typing one lazy sentence, pressing enter, and receiving a finished classic. That fantasy is useful for internet arguments because it is simple, contemptuous, and dramatic. It is not useful for understanding serious production. Real books are not only made of sentences. They are made of premise control, character pressure, scene selection, emotional sequencing, pacing, continuity, category awareness, revisions, and dozens of refusal decisions that happen after the machine offers its first answer.

A model can generate surface language quickly. It cannot tell you whether the opening promises the right story. It cannot decide which chapter should end with dread instead of relief. It cannot notice that the heroine became too passive in the middle act unless someone with judgment is checking the spine of the book. It cannot know what should be cut for commercial force, what should be slowed for intimacy, what should be withheld for tension, or what must stay because it pays off emotionally ten chapters later. Human intelligence still sits above the system if the work is serious.

That is why the sneer misses the real picture. AI can compress execution time without eliminating labor. In many cases it replaces one kind of labor with another. Less raw typing. More orchestration. Less mechanical drafting. More decision density. Less blank-page paralysis. More responsibility for structure, evaluation, and standards. None of that is imaginary work. It is simply harder to see if your idea of authorship begins and ends at the keyboard.

Invisible Labor

Orchestration is work, and serious orchestration is skilled work

The operator is not just asking for text. The operator is building the conditions under which the right text becomes possible.

A serious BookWriter workflow is not one prompt and a shrug. It starts before generation. The author has to define the category, the emotional promise, the reader expectation, the market position, the premise, the central lie, the pressure points, the tone, the pace, and the destination. Then the operator has to build enough structure that the system can move with intention rather than spray words into the dark. That is why the best outputs almost always come from the authors who know what they are trying to make.

Then comes the part outsiders miss. Every output has to be judged. Does this chapter actually turn? Did the argument intensify? Is the scene too explanatory? Did the voice drift? Did the cliffhanger land cheaply? Did the dialogue start sounding like the same person wearing different names? Did continuity break? Did the chapter respect the character bible? Did the line edits improve the page or merely make it busier? Orchestration means deciding, rejecting, reshaping, and running the process again until the book starts behaving like a book instead of a transcript of machine enthusiasm.

No one presses a button and wakes up to a finished catalog, audit trails, aligned tutorial videos, polished metadata, compliant KDP disclosures, and page-turners readers cannot put down. Someone still has to direct the machine, force quality, protect standards, and take responsibility for the outcome. That someone is not a spectator. That someone is the operator. In a serious system, the operator is doing authorship labor even when the tool is doing part of the sentence production.

The Honest Line

Transparency and pride can live in the same sentence

The mature position is not denial. It is accuracy without shame.

Many people make a strategic mistake the moment the phrase AI-generated enters the room. They become evasive. They start searching for softer language, hoping they can escape the social penalty by finding a technicality. That is weak positioning. It creates distrust before the book is even judged. The stronger posture is exactness. If AI created some of the content, say so. If AI only assisted the writing you created, say so. Transparency is not surrender. It is control.

In practice, this means separating two conversations that people keep trying to merge. Conversation one is platform disclosure. That is where you answer Amazon's question plainly. Conversation two is cultural interpretation. That is where you defend the seriousness of orchestration, direction, revision, and taste. You do not need to falsify the first conversation to win the second one. You can disclose accurately and still say, with a straight face, that excellent outcomes required intelligence, discipline, and labor.

That is especially important for authors who are building a long-term name instead of trying to game a moment. Readers can forgive a new production method faster than they forgive evasiveness. If the book is compelling, if the storytelling is strong, if the voice is sharp, if the emotional payoff is real, the reader's loyalty forms around the result. The label becomes context. The reading experience becomes the verdict.

What Readers Actually Buy

Most readers are not purchasing a labor diary. They are purchasing a story.

Readers stay for tension, payoff, voice, surprise, heartbreak, relief, and obsession. They do not stay for ideology.

The market will always contain a loud minority that wants to turn process into morality. That is normal. Every new tool triggers it. Some people once said word processors cheapened writing because they removed the discipline of retyping. Others said digital publishing cheapened writing because it removed gatekeepers. The public always confuses friction with virtue for a while. Then the market settles, and readers go back to the oldest test there is: is the book good enough to make me miss my exit, stay up too late, or talk about it the next day?

This is why the right strategic frame is not defensive. It is confident. The author does not need to claim that every sentence came from isolated manual struggle in order to deserve a loyal readership. The author needs to deliver. If the book holds attention, if the scenes land, if the emotional logic is sound, if the pacing keeps tightening, the reader feels the work whether they can name the workflow or not. Execution is still visible on the page.

What changes now is that authors can build with more leverage. Some will use that leverage badly and flood the market with hollow product. That will happen. It already has. But leverage does not automatically destroy craft. In disciplined hands it can give good storytellers more chances to build, refine, and publish. Readers do not need a sermon about that. They need proof on the page. The winners will be the ones who can provide it repeatedly.

BookWriter's Position

Our job is not to help authors hide the workflow. Our job is to help them master it.

The better system is not the system that hides AI. It is the system that tracks provenance, clarifies disclosure, and still drives better books.

This is why BookWriter should keep two ideas separate in the product. One is creative-control measurement. The other is marketplace disclosure. They are related, but they are not identical. A provenance report can show how much human direction, revision, and control shaped the manuscript. That is useful. It respects the reality that serious authorship is larger than raw keystroke count. But it should never be mistaken for Amazon's disclosure logic, because Amazon is classifying origin, not honoring nuance.

So the mature product posture is simple. We help authors keep better records. We explain the KDP rule plainly. We tell them when their current manuscript appears to contain AI-origin text. We do not train them to answer cleverly. We train them to answer accurately. Then we train them to be proud of the intelligence required to produce work that actually moves readers.

That posture is stronger for the brand and stronger for the author. It avoids a fragile marketing position built on denial. It builds trust. It also gives us the right to say something important in public: if the culture insists on calling serious, directed, revised, quality-controlled work AI-generated, then the culture needs a better understanding of what the labor now looks like. The answer is not shame. The answer is standards, transparency, and books that hit so hard the argument becomes secondary.

Frequently asked questions

Does Amazon KDP currently require disclosure for AI-generated books?

Amazon KDP currently requires authors to disclose AI-generated text, images, and translations. It does not require disclosure for AI-assisted content when the author created the material and AI only helped with brainstorming, editing, or refinement.

If I heavily edit AI-written chapters, can I call the book AI-assisted on KDP?

Not safely. If AI created the actual text that remains in the book, Amazon's current rule still treats that content as AI-generated. Heavy editing may matter artistically, but it does not automatically erase disclosure.

If I only used AI to brainstorm or line-edit my own writing, do I need to disclose?

That is generally closer to Amazon's AI-assisted category, not its AI-generated category, because you created the actual text yourself.

Does AI-generated mean the author did not do real work?

No. The label describes provenance, not effort. Serious AI-driven publishing still requires direction, structural thinking, selection, rewriting, quality control, and accountability for the final result.

What should an author do if the workflow was mixed and hard to classify?

Take the honest, conservative route. Review what content in the final book was actually created by AI. If you are uncertain whether AI-origin text remains, over-clarity is safer than under-disclosure.

Next step

Use the rule clearly. Wear the work proudly.

If your book includes AI-generated content, disclose it. Then let the market judge the book by its grip, not by its mythology. If your process was AI-assisted, say that just as clearly. The strength is in the standard, not the spin.