WotC: 'Artists Must Refrain From Using AI Art Generation'

WotC to update artist guidelines moving forward.

After it was revealed this week that one of the artists for Bigby Presents: Glory of the Giants used artificial intelligence as part of their process when creating some of the book's images, Wizards of the Coast has made a short statement via the D&D Beyond Twitter (X?) account.

The statement is in image format, so I've transcribed it below.

Today we became aware that an artist used AI to create artwork for the upcoming book, Bigby Presents: Glory of the Giants. We have worked with this artist since 2014 and he's put years of work into books we all love. While we weren't aware of the artist's choice to use AI in the creation process for these commissioned pieces, we have discussed with him, and he will not use AI for Wizards' work moving forward. We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D art.


-Wizards of the Coast​




Ilya Shkipin, the artist in question, talked about AI's part in his process earlier in the week, but has since deleted those posts.

There is recent controversy on whether these illustrations I made were AI generated. AI was used in the process to generate certain details or for polish and editing. To shine some light on the process, I'm attaching earlier versions of the illustrations before AI had been applied to enhance details. As you can see, a lot of painted elements were enhanced with AI rather than generated from the ground up.

-Ilya Shkipin​

 


Umbran

Mod Squad
Staff member
Supporter
The Jesuits? We’ve moved just a bit past the Jesuits. Respectfully, you are missing my point. My point is that AI is raising foundational questions about both how human minds work, and how we should be training them.

While ChatGPT may inspire us to ask questions about how our own minds work, it does not inform us about how our own minds work.
 

Umbran

Mod Squad
Staff member
Supporter
Fun new information about the system collapse. While ChatGPT is now less than 30% accurate, people are 77% likely to prefer its answers because it states them formally and authoritatively.

Wave of the future.

Not a new wave, though. That people tend to respond positively to formal, authoritative presentation is old news. It isn't like government and corporate leaders wear suits because they are more comfortable.
 

That's not what I am talking about. Whether it's a link to an image or an embedded image, the dataset is still being used to create commercial content by referencing the dataset (see using artists' names as prompts).

Yes, but you're conflating several very distinct steps here, each of which can have its own ethical and legal hurdles to clear. Note that I don't intend to convince you on ethical grounds, as I think your stance is already decided, and you'd also be against generative AI like Firefly, whose authors have given their rights to Adobe for the training of the AI, because the outcome (potentially reduced income for artists) would be the same. Or if there were a magic item called the Canvas of Creating that allowed anyone to just ask for a picture using a short description (or even better, by reading your brainwaves) and get a magnificent, breathtaking work of art in return, for free, available on the Internet. I think you're making your decision based on the outcome of AI, not AI by itself, but I'd be delighted to be proven wrong.


STEP ONE: collating a database of URLs of potential training images.
STEP TWO: having the training program collect the images and use them to generate the model.
STEP THREE: distributing the model.
STEP FOUR: using the model to guide denoiser programs (generators) toward an image that satisfies the prompt entered by a human.
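
To make these four steps concrete, here is a minimal, purely illustrative Python sketch of the pipeline. Everything in it is a hypothetical stand-in, not anyone's real code: the dataset is two fake rows, the "model" is a single number, and the caption conditioning real systems rely on is omitted. Actual pipelines (web-scale crawlers, billion-parameter networks, sampling schedulers) are vastly more complex, but the division of labor between the steps is the same.

import numpy as np

rng = np.random.default_rng(0)

# STEP ONE: the dataset is just (image URL, caption) rows, not the images.
dataset_rows = [
    ("https://example.com/dragon.png", "a red dragon"),
    ("https://example.com/giant.png", "a hill giant"),
]

def fetch_image(url):
    # STEP TWO: the training program downloads each URL at training time.
    # Stand-in: return random pixels instead of making an HTTP request.
    return rng.random((64, 64, 3))

def train_model(rows):
    # STEP TWO, continued: learn from the fetched images. A real model learns
    # to predict the noise added to an image, conditioned on its caption;
    # this stand-in just stores one summary statistic as its "weights".
    images = [fetch_image(url) for url, caption in rows]
    return {"bias": float(np.mean(images))}

def predict_noise(model, image, prompt):
    # Stand-in for the trained network; the prompt conditioning is omitted.
    return (image - model["bias"]) * 0.1

def generate(model, prompt, steps=50):
    # STEP FOUR: start from pure noise and repeatedly subtract the model's
    # noise estimate, steering the picture toward something fitting the prompt.
    image = rng.standard_normal((64, 64, 3))
    for _ in range(steps):
        image = image - predict_noise(model, image, prompt)
    return image

model = train_model(dataset_rows)          # steps one and two
# STEP THREE is publishing `model` (the learned weights), not the images.
picture = generate(model, "a hill giant")  # step four
print(picture.shape)

Note what changes hands at each step: step one distributes URLs and captions, step three distributes learned weights, and only step two ever touches the images themselves. That separation is why each step raises its own distinct legal and ethical questions.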

You asked why LAION-5B had copyrighted works; that is a question about step one. Your question now is "why are people creating commercial content after using LAION-5B, whose licence prohibits commercial use?". That's about step two, and a different thing.
Let's see.

First, from reading LAION-5B's website, they say: "The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes." That reads as a recommendation that it be used for research, not a literal licensing restriction against for-profit use. They'd prefer their database to be used that way, much like some nuclear researchers might have preferred their work be put to civilian and peaceful use rather than killing people. But it doesn't seem to be mandatory as a licensing term.

Second, Stability AI has made its Stable Diffusion models free and open-source, so it is respecting the "non-profit" use even if that is read as an actual licensing term. I am not sure whether Adobe used LAION-5B, but they don't claim to, and they have their own databases to generate models with.
 

Not a new wave, though. That people tend to respond positively to formal, authoritative presentation is old news. It isn't like government and corporate leaders wear suits because they are more comfortable.

Indeed. Unfortunately, people will shock people to death with electricity if a lab-coat-wearing scientist tells them to (this might be a gross exaggeration of reality, and Milgram's experiment is frightening, but still).
 

Snarf Zagyg

Notorious Liquefactionist
Supporter
While ChatGPT may inspire us to ask questions about how our own minds work, it does not inform us about how our own minds work.

Are you sure?

Let me give you two famous examples of earlier AI screwing things up.

One was for cancer screening. Skin cancer. Anyway, the AI was trained on images of skin cancer. The AI was as accurate as a human dermatologist at diagnosing malignant lesions. The problem was this: the image set it was trained on had rulers next to the malignant lesions, so the main factor the AI was using was, you guessed it, the presence of a ruler. So this tells us very little about how people think.

Second use. AI was trained to select resumes after combing through a data set. An audit of the AI found that the two factors that were most important for the AI were, wait for it, being named "Jared" and playing high school lacrosse. Because, again, it was trained on a data set of resumes that had been accepted by people.

... I would say the second example, unfortunately, gives us an unfortunate amount of insight into how people think.
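
A toy illustration of that failure mode, with entirely synthetic data (every number and name below is made up for the sketch): if a confound like the ruler perfectly tracks the label in the training set, a simple classifier will weight the confound far more heavily than the genuine signal.

import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Synthetic training set. Feature 0 is a weak genuine signal (lesion
# irregularity); feature 1 is the confound: photographers only put rulers
# next to the malignant lesions, so it matches the label exactly.
labels = rng.integers(0, 2, n)
irregularity = labels * 0.3 + rng.normal(0, 1, n)
ruler = labels.astype(float)
X = np.column_stack([irregularity, ruler])

# Fit a plain logistic regression by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))        # predicted probability of malignancy
    w -= 0.1 * X.T @ (p - labels) / n   # gradient step

print("learned weights [irregularity, ruler]:", w.round(2))
# The ruler weight dwarfs the medical one: the model "diagnoses" rulers.

Nothing about the training procedure is broken here; the data simply made the shortcut the best available predictor, which is exactly what happened in both examples above.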
 

FrogReaver

As long as i get to be the frog
Are you sure?

Let me give you two famous examples of earlier AI screwing things up.

One was for cancer screening. Skin cancer. Anyway, the AI was trained on images of skin cancer. The AI was as accurate as a human dermatologist at diagnosing malignant lesions. The problem was this: the image set it was trained on had rulers next to the malignant lesions, so the main factor the AI was using was, you guessed it, the presence of a ruler. So this tells us very little about how people think.

Second use. AI was trained to select resumes after combing through a data set. An audit of the AI found that the two factors that were most important for the AI were, wait for it, being named "Jared" and playing high school lacrosse. Because, again, it was trained on a data set of resumes that had been accepted by people.

... I would say the second example, unfortunately, gives us an unfortunate amount of insight into how people think.
I think that’s just a case of correlation not equating to causation.
 

Snarf Zagyg

Notorious Liquefactionist
Supporter
I think that’s just a case of correlation not equating to causation.

...as pithy as that saying is, I think the fact that the AI found these to be the salient factors is quite notable.

To give you an easier-to-grasp example, there is a reason that orchestras went to "blind auditions" (performance behind a curtain).

That the AI determined that this is what was most important ... should likely make us reconsider our own processes.
 

Art Waring

halozix.com
Yes, but you're conflating several very distinct steps here, each of which can have its own ethical and legal hurdles to clear. Note that I don't intend to convince you on ethical grounds, as I think your stance is already decided, and you'd also be against generative AI like Firefly, whose authors have given their rights to Adobe for the training of the AI, because the outcome (potentially reduced income for artists) would be the same.


STEP ONE: collating a database of URLs of potential training images.
STEP TWO: having the training program collect the images and use them to generate the model.
STEP THREE: distributing the model.
STEP FOUR: using the model to guide denoiser programs (generators) toward an image that satisfies the prompt entered by a human.

You asked why LAION-5B had copyrighted works; that is a question about step one. Your question now is "why are people creating commercial content after using LAION-5B, whose licence prohibits commercial use?". That's about step two, and a different thing.
Let's see.
Thank you for the explanation, this does help to separate out the steps. However...

First, from reading LAION-5B's website, they say: "The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes." That reads as a recommendation that it be used for research, not a literal licensing restriction against for-profit use. They'd prefer their database to be used that way, much like some nuclear researchers might have preferred their work be put to civilian and peaceful use rather than killing people. But it doesn't seem to be mandatory as a licensing term.

Second, Stability AI has made its Stable Diffusion models free and open-source, so it is respecting the "non-profit" use even if that is read as an actual licensing term. I am not sure whether Adobe used LAION-5B, but they don't claim to, and they have their own databases to generate models with.
Just because it's not mandatory doesn't mean it's legitimate to use for commercial purposes (which is why the law currently holds that AI-generated content cannot be copyrighted). Laws are in the wild-west phase regarding datasets and training models, so we don't actually know if this argument will hold up in court.

As for Stability AI making their training models open source, here is a direct quote from Stability AI regarding their AI music generator:
"Dance Diffusion is also built on datasets composed entirely of copyright-free and voluntarily provided music and samples. Because diffusion models are prone to memorization and overfitting, releasing a model trained on copyrighted data could potentially result in legal issues."
Their words, not mine.

Why are Stability AI's datasets and models for music entirely copyright-free and voluntary, when (many) AI image-generator datasets and models don't have this requirement?

Edited for clarity.
 

Umbran

Mod Squad
Staff member
Supporter
Indeed. Unfortunately, people will shock people to death with electricity if a lab-coat-wearing scientist tells them to (this might be a gross exaggeration of reality, but still).

Hyperbole does not increase the validity of a point.
 
