WotC: 'We made a mistake when we said an image was not AI'


[Attached: Screenshot 2024-01-07 at 18.38.32.png — the marketing image in question]

It seems like AI art is going to be a recurring news theme this year. While this is Magic: the Gathering news rather than D&D or TTRPG news, WotC and AI art has been a hot topic a few times recently.

When MtG community members observed that a promotional image looked like it was made with AI, WotC denied that was the case, saying in a now-deleted tweet "We understand confusion by fans given the style being different than card art, but we stand by our previous statement. This art was created by humans and not AI."

However, they have just reversed their position and admitted that the art was, indeed, made with the help of AI tools.

Well, we made a mistake earlier when we said that a marketing image we posted was not created using AI. Read on for more.

As you, our diligent community pointed out, it looks like some AI components that are now popping up in industry standard tools like Photoshop crept into our marketing creative, even if a human did the work to create the overall image.

While the art came from a vendor, it’s on us to make sure that we are living up to our promise to support the amazing human ingenuity that makes Magic great.

We already made clear that we require artists, writers, and creatives contributing to the Magic TCG to refrain from using AI generative tools to create final Magic products.

Now we’re evaluating how we work with vendors on creative beyond our products – like these marketing images – to make sure that we are living up to those values.


This comes shortly after a separate controversy in which a YouTuber accused them (falsely, in that case) of using AI on a D&D promotional image, after which WotC reiterated that "We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products."

The AI art tool Midjourney is currently being sued in California by three Magic: The Gathering artists who discovered that their work, along with that of nearly 6,000 other artists, had been scraped without permission. That case is ongoing.

Various tools and online platforms are now incorporating AI into their processes. AI options are appearing on stock art sites like Shutterstock, and creative design platforms like Canva now offer AI features. Tools within applications like Photoshop are also starting to draw on AI, with the software intelligently filling in the space left where an object is removed, and so on. As time goes on, AI will creep into more and more of the creative processes used by artists, writers, and video-makers.

[Attached: Screenshot 2024-01-07 at 19.02.49.png]
 


Blue

Ravenous Bugblatter Beast of Traal
I think lumping everything labeled "AI" under the same umbrella is going to be problematic. A tool for filling in blank spaces where something was selected and deleted is not the same as generating an entire image in the style of a particular artist. The AI training for the first type of tool likely depends on a completely different method than scraping images off the web.

Maybe. Some sort of certification process might be needed for "ethically trained" AI.
No, it's the same technology behind it. It exists right now, it's called inpainting, and it uses the same models and such. It is the same technology.
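For what it's worth, the distinction being argued here can be made concrete. Before diffusion models, "fill in the blank space" tools could work with purely local, training-free methods, whereas model-based inpainting denoises the masked region with the same trained network used for full image generation. Here is a toy sketch (illustrative only, not how Photoshop actually works) of a training-free fill that repeatedly averages each masked pixel's neighbours:

```python
import numpy as np

def naive_fill(img: np.ndarray, mask: np.ndarray, iters: int = 200) -> np.ndarray:
    """Fill masked pixels by iteratively averaging their 4-neighbours.

    A training-free, purely local method: no model, no scraped data.
    Diffusion-model inpainting, by contrast, fills the hole using the
    same trained generative model used to create whole images.
    """
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()          # crude initial guess for the hole
    for _ in range(iters):
        # average of the four neighbours (edges handled by padding)
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]              # only masked pixels change
    return out
```

On a smooth gradient this converges to values interpolated from the surrounding pixels, which is roughly what classical "content-aware" fills did; it plainly cannot invent a new object in someone's style, which is the capability the trained models add.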
 


Mercador

Adventurer
I mean the debates around this will be entertaining to say the least.
Yeah, it will be interesting to live through, but I'm not sure it will be an easy ride. Morrus wrote about creative jobs, but I can assure you that we are going full throttle on AI in several other sectors too. I haven't seen this kind of craziness since the arrival of mobile phones; it even reminds me of the first years of the Internet...
 

I'm pretty sure EN World doesn't get much say in what sorts of ads run on the browsers of visitors, but it did amuse me to see this pop up while reading this thread:
[Attached: screenshot of an Adobe Photoshop ad featuring an AI-generated image]

Man, the world is so weird now, I'm almost wondering, "Is this a fake ad meant to discredit Adobe? Because that looks pretty mid compared to what I normally see Photoshop used for."
 



I think lumping everything labeled "AI" under the same umbrella is going to be problematic. A tool for filling in blank spaces where something was selected and deleted is not the same as generating an entire image in the style of a particular artist. The AI training for the first type of tool likely depends on a completely different method than scraping images off the web.

Maybe. Some sort of certification process might be needed for "ethically trained" AI.
Yeah, I mean, based on the very vague definition being used, the "clone stamp" is not far off being "AI". I think as long as it's not stealing another artist's work (i.e. the training data), then it should be legit.

The real solution to this, of course, is to force the AI companies to pay (ongoing) licensing fees for each and every piece of art or creative work their tools are "trained on". A side effect might be completely destroying these AI generators as financially viable, but...
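To put rough numbers on why per-work licensing would be existential for these companies: LAION-5B, the dataset behind several popular image generators, contains roughly 5.85 billion image-text pairs. A back-of-envelope sketch (the fee amounts here are hypothetical, chosen only to show the scaling):

```python
# Back-of-envelope: cost of a per-image licensing fee at training-set scale.
# Dataset size is LAION-5B's published figure; the fees are made-up examples.
LAION_5B_IMAGES = 5_850_000_000

for fee_usd in (0.001, 0.01, 1.0):
    total = LAION_5B_IMAGES * fee_usd
    print(f"${fee_usd:>6} per image -> ${total:,.0f} total")
```

Even a tenth of a cent per image works out to several million dollars, and a dollar per image to several billion, per licensing period if the fees are ongoing. Whether that is a bug or a feature depends on which side of the lawsuit you are on.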
 

Bloomberg recently reported that Adobe was likely to spend $6 billion on AI next year instead of buying Figma. I'm pretty sure they could have used a tiny part of that money to put a better AI-generated jaguar in their ad...
 

GreyLord

Legend
It will be impossible to determine in short order. Again, I remember the 'art' we generated via these tools over a year ago. It was a hallucinatory mess of color, and while interesting, was not all that impressive.

Fast forward to today, and we could already improve on a vast amount of WotC/D&D art with AI-generated pieces.

Fast forward another year? Good luck telling anything apart.

There are ways, but considering the current trends in the art community (as I understand them) they will be seen as increasingly hostile to artists seeking work or contracting.

The easiest way is to require a hard copy from the artist (i.e. an actual physical painting, drawing, or other work) as validation that they actually did it.

With many artists working only digitally, that would instantly be seen as controversial.

The problem with works done only on a computer, however, is that it is getting more and more difficult to determine whether there was any AI involvement. As AI gets further integrated into various programs, I expect this will only get harder to detect or stop.

The easiest approach, requiring a hard copy in visible physical materials such as paint, is probably the one that will meet the most hostility in the coming days. Ironically, much of that hostility will come from artists who say they want no AI in artwork but who work only on computers, which makes it harder to tell whether a piece is truly just the artist or the artist with a little AI help, rather than from people opposed solely to AI.

Right now the tide is strongly against AI, but I expect that over the next decade (it could be sooner) AI-generated or AI-assisted artwork will become more acceptable, and companies that require a hard copy to verify that a piece is not AI-made or AI-assisted will be treated as taboo by the very artists who are currently angry about AI.
 

Yeah, I mean, based on the very vague definition being used, the "clone stamp" is not far off being "AI". I think as long as it's not stealing another artist's work (i.e. the training data), then it should be legit.

The real solution to this, of course, is to force the AI companies to pay (ongoing) licensing fees for each and every piece of art or creative work their tools are "trained on". A side effect might be completely destroying these AI generators as financially viable, but...

You can't really prevent artists from managing their own IP by forbidding deals where they sell all rights to their creations for a lump sum. Even if you could, it wouldn't change the situation much except in the very short term: the existing datasets wouldn't be affected (so the Adobe situation wouldn't change), and progress in AI is showing that the size of the dataset matters less than the quality of the captioning. Dall-E 2 used a smaller dataset than Dall-E 1, and while not much information on Dall-E 3 has been published, a few papers suggest that its ability to follow prompts comes from better (AI-generated) captioning.

This is a big problem with datasets scraped from the Internet: they lack good captions, and using the alt tags isn't sufficient. Take an example: look at the image used on this board to illustrate the topic. What do you see? I'd say "a steampunk workshop with light bulbs, a gauge from an unknown machine, and a box on the table at the front of the image; a window opening onto a drab street; a bookshelf in the background. At the centre of the image are five colourful Magic cards leaning on a cylindrical box. The lighting is warm and comes both from the lamps and from behind the viewer." That's a basic description that could be used in training.

If this site were scraped to train an AI, the scraper would get the image and its alt text, which is currently "screenshot-2024-01-07-at-18-38-32-png". That's not very useful as an alt (if you're blind, it tells you nothing about what is displayed), but it reflects the poor state of alt text on the Internet: few people know what alt is for, and at best they use it to show pop-up text when the mouse hovers over the image, providing additional information like "buy one, get one free until January 31st". Most images in scraped datasets are tagged like this one, and they won't really help the training.

Dall-E 3 was trained on an undisclosed image database, but a process was used to provide accurate captions (my guess is a vision model like GPT-4's, though they could have paid for distributed work from a low-wage country), greatly reducing the dataset size the training needed and improving quality, because even detailed alt text rarely provides positional details or things a human reader takes for granted.
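The alt-text point is easy to verify on any page: a naive scraper typically pairs each image with whatever landed in its alt attribute. A minimal sketch using only the standard library (the alt value below is this thread's own attachment filename; the snippet of HTML is a made-up stand-in for a scraped page):

```python
from html.parser import HTMLParser

class AltCollector(HTMLParser):
    """Collect the alt attribute of every <img> tag (empty string if absent)."""
    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.alts.append(dict(attrs).get("alt", ""))

html = '<img src="/attachments/1.png" alt="screenshot-2024-01-07-at-18-38-32-png">'
parser = AltCollector()
parser.feed(html)
print(parser.alts)  # the "caption" a naive scraper would pair with the image
```

The collected "caption" describes nothing about the picture's content, which is exactly why recaptioning scraped datasets with a vision model improves training so much.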

And public artwork databases are being released: Europeana, a project from the EU Commission, released a database of 25 million high-quality images of artworks in European museums under a licence that does allow training, and they are aiming for 58 million. That's a quarter of what Adobe has, but you can get a decent experimental model with as few as 14 million images. If a worldwide regulation somehow disallowed flat sales of IP, it would just prompt training on datasets like this after a captioning pass. There are community efforts to train on this right now, but the computing costs are high for enthusiasts, so it won't happen overnight, unfortunately. On the other hand, I doubt it would stop Adobe, Microsoft, Amazon, or Alibaba from literally doing it overnight.
 

Vincent55

Adventurer
A.I. art saves me a lot of time in many cases: mostly landscapes and other general scenes, simple NPC character types, buildings, and so on. I don't think you can stop companies from using it, and it will only get better, to the point that you won't know whether a piece is AI or not. Art in general is great because of the passion put into it, and that can't be duplicated by A.I., because it lacks imagination and emotion. Those who are average or subpar artists will suffer, that's for sure, but maybe it will push them to get better or to do something else. Anyway, being an artist myself, I for one am OK with it; I use it to inspire my creations, and best of all I can take an AI-generated idea and do my own version of it, using it as a base.
 
