How an AI-written book shows why the tech 'terrifies' creatives


For Christmas I received a fascinating gift from a good friend - my very own "best-selling" book.

"Tech-Splaining for Dummies" (great title) bears my name and my image on its cover, and it has glowing reviews.

Yet it was entirely written by AI, with a few simple prompts about me supplied by my friend Janet.

It's an intriguing read, and very funny in parts. But it also meanders rather a lot, and sits somewhere between a self-help book and a stream of anecdotes.

It mimics my chatty writing style, but it's also a bit repetitive, and very verbose. It may have gone beyond Janet's prompts in collating data about me.

Several sentences begin "as a leading technology journalist..." - cringe - which could have been scraped from an online bio.

There's also a mysterious, repeated hallucination in the form of my cat (I have no pets). And there's a metaphor on almost every page - some more random than others.

There are dozens of businesses online offering AI book-writing services. My book came from BookByAnyone.

When I contacted the president, Adir Mashiach, based in Israel, he told me he had sold around 150,000 personalised books, mainly in the US, since pivoting from compiling AI-generated travel guides in June 2024.

A paperback copy of your own 240-page best-seller costs £26. The firm uses its own AI tools to generate them, based on an open-source large language model.

I'm not asking you to buy my book. In fact you can't - only Janet, who created it, can order any more copies.

There is currently no barrier to anyone creating a book in anyone's name, including celebrities - although Mr Mashiach says there are guardrails around violent content. Each book contains a printed note stating that it is fictional, created by AI, and designed "entirely to bring humour and pleasure".

Legally, the copyright belongs to the firm, but Mr Mashiach stresses that the product is intended as a "personalised gag gift", and the books do not get sold on.

He hopes to broaden his range, producing different genres such as sci-fi, and perhaps offering an autobiography service. It's designed to be a light-hearted form of consumer AI - selling AI-generated products to human customers.

It's also a bit scary if, like me, you write for a living. Not least because it probably took less than a minute to generate, and it does, in parts at least, sound a lot like me.

Musicians, authors, artists and actors worldwide have expressed alarm about their work being used to train generative AI tools that then produce similar content based on it.

"We should be clear, when we are talking about data here, we actually mean human creators' life's works," says Ed Newton Rex, founder of Fairly Trained, which campaigns for AI companies to respect creators' rights.

"This is books, this is articles, this is photos. It's works of art. It's records... The whole point of AI training is to learn how to do something and then do more like that."

In 2023 a song featuring AI-generated voices of Canadian singers Drake and The Weeknd went viral on social media before being pulled from streaming platforms because it was not their work and they had not consented to it. That didn't stop the track's creator trying to nominate it for a Grammy award. And even though the artists were fake, it was still hugely popular.

"I don't think using generative AI for creative purposes should be banned, but I do think that generative AI for these purposes that is trained on people's work without permission should be banned," Mr Newton Rex adds. "AI can be very powerful but let's build it ethically and fairly."


In the UK some organisations - including the BBC - have opted to block AI developers from trawling their online content for training purposes. Others have decided to collaborate - the Financial Times has partnered with ChatGPT maker OpenAI, for example.

The UK government is considering an overhaul of the law that would allow AI developers to use creators' content on the internet to help build their models, unless the rights holders opt out.

Ed Newton Rex describes this as "madness".

He explains that AI can make advances in areas like defence, healthcare and logistics without trawling the work of authors, journalists and artists.

"All of these things work without going and changing copyright law and wrecking the livelihoods of the country's creatives," he argues.

Baroness Kidron, a crossbench peer in the House of Lords, is also firmly against scrapping copyright law for AI.

"Creative industries are wealth creators, 2.4 million jobs and a lot of joy," says the Baroness, who is also an adviser to the Institute for Ethics in AI at Oxford University.

"The government is undermining one of its best-performing industries on the vague promise of growth."

A government spokesperson said: "No move will be made until we are absolutely confident we have a practical plan that delivers each of our objectives: increased control for right holders to help them license their content, access to high-quality material to train leading AI models in the UK, and more transparency for right holders from AI developers."

Under the UK government's new AI plan, a national data library containing public data from a wide range of sources will also be made available to AI researchers.

In the US the future of federal rules to govern AI is now up in the air following President Trump's return to the White House.

In 2023 Biden signed an executive order that aimed to boost the safety of AI, with, among other things, firms in the sector required to share details of the workings of their systems with the US government before they are released.

But this has now been repealed by Trump. It remains to be seen what Trump will do instead, but he is said to want the AI sector to face less regulation.

This comes as a number of lawsuits against AI firms, and particularly against OpenAI, continue in the US. They have been brought by everyone from the New York Times to authors, music labels, and even a comedian.

They claim that the AI firms broke the law when they took their content from the web without their consent, and used it to train their systems.

The AI firms argue that their actions fall under "fair use" and are therefore exempt. There are a number of factors which can constitute fair use - it's not a straightforward definition.