Intellectual Property Law Archive

Tuesday, 7 November 2023

Question: Is It Legal to Use AI to Write a Book?

Posted in Copyright Law, Intellectual Property Law


Answer: Yes, it is legal. There are no specific laws prohibiting the use of AI to write and publish books. The legality of using AI to write a book in the United States depends primarily on copyright and intellectual property law.

Relevant US Law

  • Copyright Law: Under US Copyright Law, works created by AI might not qualify for copyright protection as they are not created by a human author. However, the person programming or instructing the AI can claim some rights. In my opinion, most traditional authors such as novelists would be very unlikely to disclose any use of AI as it may tarnish their reputation.
  • Intellectual Property Rights: If the AI uses existing copyrighted material to create a book, it might infringe upon the copyright holder’s intellectual property rights. The onus, however, would be on the infringed party to spot this and take action. In my experience, ChatGPT, and especially Bing Chat (powered by OpenAI’s models), is quite notorious for plagiarizing small chunks (1-5%) of its answers, oftentimes taking 1-2 sentences verbatim from the source and including them in its output. This could lead to being caught if someone used Copyscape.com or another plagiarism detection tool that flags these standalone plagiarized sentences (see the sketch after this list).
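
As an aside, here is a minimal, hypothetical Python sketch of how a plagiarism checker might flag those standalone verbatim sentences. It simply normalizes sentences and looks for exact matches against a small set of reference texts; real services such as Copyscape compare against a large web index, so treat this purely as an illustration of the principle.

```python
# Toy illustration: flag sentences in a draft that appear word-for-word in a
# reference text. The sample texts below are invented for demonstration only.
import re

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter: break on ., !, or ? followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def normalize(sentence: str) -> str:
    """Lowercase and strip punctuation so trivial edits don't hide a verbatim match."""
    return re.sub(r"[^a-z0-9 ]", "", sentence.lower()).strip()

def verbatim_matches(draft: str, sources: list[str]) -> list[str]:
    """Return sentences in the draft that also appear verbatim in any source text."""
    source_sentences = {normalize(s) for src in sources for s in split_sentences(src)}
    return [s for s in split_sentences(draft) if normalize(s) in source_sentences]

if __name__ == "__main__":
    source = "The quick brown fox jumps over the lazy dog. It never looks back."
    draft = ("Here is my chapter opening. The quick brown fox jumps over the lazy dog. "
             "The rest of the paragraph is new material.")
    for hit in verbatim_matches(draft, [source]):
        print("Possible verbatim copy:", hit)
```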

Hypothetical Example

Imagine an AI that writes a novel based on input from an author. The author provides themes, character ideas, and plot points, and the AI generates the text. In this scenario, the resulting book may be considered a collaborative work.

In all likelihood, however, the author’s technical and creative engagement with the AI system, alongside careful vetting for originality, positions them to claim copyright ownership of the work under current interpretations of U.S. Copyright Law. Publishing and selling the book on Amazon, contingent on policy adherence and full disclosure, is an exercise of the author’s right to distribute their original work.

Is it legal to sell AI written books on Amazon?

Answer: Yes, it is legal to sell AI-written books on Amazon and Kindle. However, in September 2023 Amazon introduced new rules and guidance for Kindle books generated by artificial intelligence on its Kindle Direct Publishing (KDP) portal. There is already quite an influx of AI-generated content on Kindle (such as self-help books, as well as coloring books and other visual media). This is because of the low barrier to entry, so it makes sense that Amazon decided to clamp down here specifically.

The new rules require authors to inform Amazon when content is AI-generated. The company has also added a new section to its content guidelines focused on AI, which includes definitions of “AI-generated” versus “AI-assisted” content. Importantly, sellers are not required to disclose when content is AI-assisted.

In my view, what would most likely lead to legal trouble is using AI to mimic the style of a specific author or to replicate elements from their copyrighted works. This can lead to accusations of intellectual property misappropriation, with the rights holder claiming that you are diminishing the market value of their original works. That could create quite the legal headache for you, not least of which is potentially being banned from the KDP platform, or from Amazon itself.

Selling AI-generated books that closely imitate another author’s style, especially if marketed as original works, can be considered false advertising under U.S. law. It would be akin to selling deep fakes in the art world, which is a pretty low-brow thing to do and a scheme I would not recommend. “In the style of” writing is growing in popularity, but I’m not sure what the long-term prospects are there.

Can you detect AI-generated text?

Answer: No, current public tools can’t reliably detect AI content. Even OpenAI’s own AI detector didn’t work, and the tool was removed from their website within several months. Originality.ai also lost the public’s trust when Bible verses and other ancient works were flagged as 100% AI-written. 02/12/2024 update: Originality.AI published their latest accuracy study, and we’ve played with it; it’s quite a bit better than before. It seems to do quite well at correctly identifying 100% boilerplate informational AI content. However, if the content is tweaked a bit with examples or second-person language, Originality’s accuracy falls apart in our tests.

Detecting AI-generated content, including text and images, is possible to a certain extent. However, it’s important to note that AI detection is a complex and evolving field, and no method is foolproof. AI-generated content, especially from advanced models, can be very sophisticated and challenging to distinguish from human-generated content.

Cuppa, for instance, is a well-regarded AI writing tool that uses the latest GPT-4 Turbo API and whose output isn’t detectable with any statistical significance. It is a popular starting point for people interested in writing long-form content with AI, thanks to its proprietary SEO algorithm, data integration, and dynamic prompting tools.

For text, certain indicators can suggest AI authorship, like repetitive phrasing, unusual word choices, or a lack of deep contextual understanding. For images, signs such as unusual patterns, artifacts, or inconsistencies in the image might indicate AI generation.
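
To make the “repetitive phrasing” point a bit more concrete, here is a toy Python sketch (my own illustration, not the method of any actual detector) that computes two crude statistics: how often three-word phrases repeat and how varied the vocabulary is. As noted above, nothing like this is reliable on its own; it only makes the kind of signal such tools look at more tangible.

```python
# Crude proxies for "repetitive phrasing": repeated trigram rate and type-token
# ratio. The sample text is invented; these numbers are rough signals, not proof.
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Split on whitespace, strip surrounding punctuation, and lowercase."""
    return [w.strip(".,!?;:\"'()").lower() for w in text.split() if w.strip()]

def repeated_trigram_rate(words: list[str]) -> float:
    """Share of 3-word phrases that occur more than once (higher = more repetitive)."""
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

def type_token_ratio(words: list[str]) -> float:
    """Distinct words divided by total words (lower = less varied vocabulary)."""
    return len(set(words)) / len(words) if words else 0.0

if __name__ == "__main__":
    sample = ("It is important to note that writing a book is a journey. "
              "It is important to note that every journey begins with a single step. "
              "It is important to note that consistency matters most of all.")
    words = tokenize(sample)
    print(f"Repeated trigram rate: {repeated_trigram_rate(words):.2f}")
    print(f"Type-token ratio:      {type_token_ratio(words):.2f}")
```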

It’s also useful to consider the context in which the content appears, as AI-generated content often lacks the nuanced understanding or personal experiences that a human might convey.

[CONTACT THE ATTORNEY WHO ANSWERED THIS QUESTION]