Also, I followed
@Faolyn's link, and it has nothing to do with AI being able to recreate existing books.
What is the story told in the aforementioned link?
They explain that scams have existed in the book industry for a long time, with low-quality writers trying to piggy-back on the success of other authors by writing "companion books" or "scam books". Now, with AI, the barrier to entry has dropped: one doesn't need to type a fake book oneself, one just has to ask an AI to type it.
The practice they complain about is illustrated by someone publishing a "workbook" for a noted manual, hoping that people who enjoyed the manual would order the workbook, assuming it's of the same quality, and be disappointed. Or someone writing a book called 40 shades: the making of a BDSM master, hoping that people would buy it after reading 50 shades of grey, thinking it's a prequel.
Since I am ethically bankrupt and morally corrupted, I'll stoop so low as to copy a sentence from the article for review and commentary purposes:
Ultimately, every new book is at risk of having several AI-generated biographies, copycat books, summaries, or workbooks meant to divert sales posted right alongside the book.
None of the examples they mention involves copying the initial book. If someone likes The Hobbit, sure, they might be disappointed to find that Biographies of The Shire, a great add-on to the Lord of the Rings, is just some random and pointless stories about gnomes, unrelated to Tolkien's book. Or a book about the epic story of a diminutive hero going off to fight a dragon and finding a magical ring along the way. Or a CliffsNotes (i.e., AI-Notes) version of the Lord of the Rings, without enough copied text from the book to infringe copyright, but with summaries and analysis (of bad quality, I presume) of a newly published book.
Instead, the scam companion books simply regurgitate the key points of the original work in a condensed form, which is clearly infringing.
Except that it isn't. If it's written differently, it's a summary. Since the idea of copyright was never to protect ideas, a summary, if well done, isn't infringing
at all. A bad summary (excerpts thrown together) would be infringing, though.
They go on to say that it's sad that an analysis that took someone 13 years of research can be plundered by regurgitating its conclusions in another, much less researched, book. It is, however, outside the scope of copyright.
Also, the problem they mention is that AI can write more cheaply than a writer from a cheap-labour country, and more quickly. So these long-existing scams can be produced faster and cheaper, lowering the barrier to entry and making them potentially more lucrative. Since the article is talking about newly released books having companions on day 1, the AI engine obviously wasn't trained on them (if only because training takes time). It's the author of the "summary" who scans the book, drops it into an AI and asks "write a 50-page summary of this book", much like he'd scan the book, mail it to an English student in, say, Cambodia, and ask him to write a summary as soon as possible.
Which is using an AI to do a task that one might find morally abhorrent, and blaming the AI instead of the person who did the thing. It is like blaming copiers because people copy books with them -- a reasonable stance, but removing copiers because of that limited use case wasn't attempted (as far as I know).
The article also mentions other practices, like trying to sell a book under a name that is the same as a popular writer's with just one letter changed. That is unrelated to AI (one could do it with traditionally written books, and it could even happen by chance; we were happy to never have two Steve Jacksons in the hobby) and may or may not be fraudulent, but in any case it has nothing to do with how the content is produced.
Then there's a section about lobbying Congress to mandate that books carry a label saying they may contain nuts, eggs or other allergens. Erm, I mean, that they may contain AI-edited content.