WotC Hasbro CEO optimistic about AI in D&D and MTG’s future

GreyLord

Legend
I for one cannot wait for my post-scarcity Star Trek vineyard.

Whom should I contact about the land I want?

Not the capitalist...and especially not the mega-corps.

I think you are pretty screwed right now... as even those who are opposed to these things are usually even worse off.

Which is why in Star Trek it didn't happen until after World War 3, an apocalyptic event, and then being uplifted as a race by a bunch of pointy-eared space elves.
 


Anon Adderlan

Adventurer
It's no accident that #PeppaPig was mentioned in the same breath as D&D, as #Hasbro's plans are the same for both: generating content on demand. They have all the media they'll ever need to train with, so they don't need to hire anyone. The results don't need to make sense, as the audience won't care. The biggest hurdle is getting past discerning parents, but given how easy that is these days it's not much of one. And they're already filing claims against shows like #Wolfoo for grokking their style, so they'd love nothing more than laws which protect such concepts.

Meanwhile their most profitable brand deal to date has been #MonopolyGo, a Skinner box which has made $2.7 billion to date, with a bigger marketing budget than the $220 million used to develop #LastOfUs2. For comparison, #BaldursGate3 has made about $712 million to date, with #Hasbro getting about $90 million of it. This is why major studios are shutting down their AAA studios and/or moving to mobile. And if all that wasn't depressing enough, here's a #Reddit post about how adding game mechanics is a financial liability, as it actually distracts players from spending money.

The biggest threat from AI isn't that it will take our jobs, but that it will end up in the hands of the few. Make no mistake, a future where any media you desire can be generated on demand is less than a decade out, and corporations will do everything in their power to maintain total control over this technology, which will not be to the benefit of independent artists.

Seriously - a plush Peppa Pig that actually holds conversations with a kid in Peppa's voice? Might as well let them print money. I can see how that would be on his mind.
Bold of you to assume they'll produce a physical product.

The more they push AI like they've been doing, the more I see something like Frank Herbert's original idea for the Butlerian Jihad (not Brian Herbert and Kevin J. Anderson's version) becoming appealing.
Which is why I'm so pissed at them dumbing it down to just another #Terminator.

Until we get some form of protection for artists (including writers here), I'm not a fan of AI integration with anything.
What forms of protection do you suggest which wouldn't end up benefiting corporations more?

He’s just coming right out and saying that they’re changing models from one that engages in enlightened self-interest, to one that ignores the expertise of their marketing people in favor of throwing caution to the wind in order to sell more product.
Games like #DiscoElysium, #BaldursGate3, and every indie RPG ever created were all produced under the classic studio model, and would never have seen the light of day under the 'expertise' of marketing.

My kid is a competitive Irish Dancer, a type of dance which has some (but not all) objective judging elements to it. There are quiet rumors of a company maybe making an AI tool which can analyze 3d imaging of a dancer and then suggest improvements to the dance, like a dance teacher would do, but from your home while you are practicing.
AI is a fantastic teaching tool as long as it's trained on the right data, and a fantastic conditioning tool if you don't consider the results.

AI is bad. I don’t care how Hazbro decides to use it, it’s a technology that is deeply harmful to human society. The monetization comment, I personally took a wait-and-see attitude towards. But the track record has shown that it caused huge problems for Magic: the Gathering when they shifted to this product-led model, so I think my suspicion of their stated intent to take a similar approach with D&D is pretty justified.
In the mean time it continues to meaningfully increase my quality of life, and should continue to do so as long as folks don't act on such reductive takes and corporations don't keep gimping the output.

I mean, controlling it by laws would be great.
And what laws would you suggest that wouldn't end up giving corporations more power and costing creatives even more jobs?

I mean, even the companies 'pioneering' 'AI' at the moment requested to be regulated.

Even they know both the risks and that they legally can't stop themselves.
Luckily this is starting to happen, and the EU just recently made it illegal for robots to read emotions. And while I agree with their reasoning, I can't shake the feeling that it's going to backfire sooner rather than later.

We face significant systemic problems, and we will not be able to solve them without thinking systematically.
If only there were some advanced information processing tool which could correlate millions of datapoints in a fraction of the time humans can, and which could assist us in addressing such issues.

The global information literacy crisis is a complex topic, but in brief, the ease of access to information coupled with the difficulty of evaluating the validity of that information is how we end up with the recent explosion of conspiracy thinking.
Information literacy begins with approaching complex topics with the nuance they deserve, and not just making dogmatic conclusions about it.

I didn't make that comparison, but I do put AI at the level of nukes. I do believe we are walking down a road that leads to our extinction.
Ours as in 'humans' or 'humanity'? Because there's a difference.

We still don't have anything like actual AI. We're not even heading in that direction in any real or meaningful sense. The success of crappy trend-copying algorithms we have now and pretend are making art (success at making tech execs money, not success at doing anything useful or worthwhile) is already stifling real research into actual AI. Anyone that's interested in the concept of AI for any purpose other than stealing art should really be the ones shouting the loudest about the not-actual-AI that's getting all the press these days.
I can confirm without a shadow of a doubt that every claim made here beyond the first is #Misinformation, and the first one only might be.

You know, the optimist in me really wants to believe that you are right here, and that AI could one day do the math that proves (as I have always believed) that short-term gain always leads to long-term loss, and that everyone loses under our current greed-first mentality (including those at the top, they just gain more in the short-term, which is what they care about now).
Math cannot 'prove' this as it isn't a theorem but an optimization problem which acts more like a game than a formula. And such algorithms have already 'proven' that sharing always wins over greed when given perfect knowledge and an infinite amount of time, which is useless as we neither know everything nor have that kind of time.
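The kind of result being alluded to here is the iterated prisoner's dilemma, where cooperative strategies like tit-for-tat famously dominate tournaments over many rounds (Axelrod's experiments). A minimal sketch, using the standard textbook payoff values; the strategy names and round count are illustrative assumptions, not anything from this thread:

```python
# Standard prisoner's dilemma payoffs: (my score, their score) per round.
PAYOFF = {
    ("C", "C"): (3, 3),  # both cooperate
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # both defect
}

def tit_for_tat(opponent_moves):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    # Pure short-term greed: defect every round.
    return "D"

def play(strategy_a, strategy_b, rounds):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees only the opponent's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Over 100 rounds the defector narrowly beats tit-for-tat head-to-head (104 vs. 99), but a pair of cooperators scores 300 each while a pair of defectors scores only 100 each, which is exactly the "sharing wins given enough time" result the post describes, along with its fragility in any single pairing.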

I don't personally worry about AI, I worry about humans.
Which ironically is one reason I'm such a big proponent of AI.

By feeding it new biased information until it produces a result that they want to hear?
Which is the direction most major institutions are taking because that's both what they and their customers want to hear.

We could have easily incorporated Asimov’s Three Laws of Robotics into these “AI” programs, but if we did, then they wouldn’t have been able to perform their basic tasks. They were built to replace human labor under capitalism. The mere existence of these “AI” programs causes harm to some humans.
Congratulations, you've discovered the point Asimov was trying to convey with those three laws in the first place.

AI is a threat to humanity because humans are. AI is a tool, used by humans, and humans will use it in ways that are tremendously harmful to other humans, in the interest of profit. It’s an absurdly powerful force multiplier to the exploitative and self-destructive behaviors we as humans are already engaging in.
Which is why it's so important it doesn't end up in the hands of the few.

Should probably have clarified 'internet' as social media.
I think they meant exactly what they said.

They interviewed a man who operated a cotton farm, and he was descended from a slave who worked on a cotton plantation. He said that the combine he operated would harvest more during the time he ate his lunch in the air conditioned cab (He could eat while it harvested, as it was GPS guided and essentially ran itself) than his great grandfather would harvest in an entire week manually.
Most don't seem to accept or realize that industrialization played a far bigger role in the abolition of slavery than moral outrage did.

While it's amazing to think about the fact that these machines are so advanced and efficient now, on the flip side these farmers are essentially held hostage by the technology and the company that licenses it to them.
And then industrialization enslaved us all over again, which is why it's so important this technology be available to everyone and not in the hands of the few.

Actual update on this topic from an artist who works for WotC.
Don't you believe it. If it's not in the contract it isn't binding, and if it is expect a fight.

when the world where no one has to pay for anything comes to pass, we can make all the AI we want.
We don't get to the former before implementing the latter.

Anything you plug into any of their products is theirs, and I believe will be fed to an AI, and mined for new products.
Same case for #Meta, #Alphabet, and #Microsoft, who have been given so much content that they never need to pay for training data ever again, so they won't be the ones affected by laws which prevent the use of training data without consent.

I know someone who was a webpage programmer and was replaced by someone using AI, who could program dozens more pages than one person could manage in the same time.
Now imagine how much more productive a skilled programmer would be if they used the technology.

I used to be a lawyer focused on contracts, and I had a large hard drive full of the best clauses for any type of issue, which I had written or essentially traded with other lawyers to aid in writing contracts. If I were doing that job now, it could mostly be done by AI, and I've seen that contract-lawyer positions are drying up.
Good. Hopefully accounting is next. For those who don't know, the only reason we still do our taxes instead of the government simply sending us a bill is that the tax-preparation industry lobbied for it to preserve their jobs.

We already have that tool. It's called "the internet."
And before this we had a tool called "the library".

When I google "Who is the god of Shadows in the Greyhawk setting?"--yes, in plain English--I get numerous links to sites made and maintained by humans,
And by 'internet' you apparently mean search engines, software which takes the work of humans without their consent and transforms it with algorithms before presenting it to users.

Now why does that sound familiar?

and then when I'm in those sites, a control-F will take me directly to her entry, or to other entries that have words like "shadows" or "darkness" in them. The only benefit of AI here is that you don't have to control-F.
As someone who values their time and whose job involves a lot of research I can tell you those small intermediary steps add up very quickly.

Folks who are defending "legitimate" uses of generative AI or lumping it in with all AI, as though it's inevitable or part of some natural give-and-take...you should really look into how gen AI works. Like the actual process behind text-to-image generators and especially LLMs.
I am both familiar with the processes as well as working on their development, yet fail to see the point you're trying to make.

Just like NFTs and crypto aren't measures of progress in digital currency, gen AI isn't a measure of progress toward better creative tools.
Meanwhile the value of crypto has skyrocketed, and many marginalized industries still use it as currency.

we really do not need one to replace 90% of jobs either…
Given the competency of the so-called 'professionals' I've had the misfortune of dealing with, I'm not sure I agree.

you need to be able to develop styles and techniques in creative fields that you don't need to develop in a mathematical field (and IIRC, creative accounting is generally not a good thing).
You have a very reductive view of mathematics, and even accounting has different styles.

Too bad it's wrong so much. Getting the wrong answer quickly is worthless. Worse than worthless. It gives the recipient a false sense of the answer being correct despite it being false. Taking a few minutes to get the right answer is a good investment.
Get the 'right' answer where exactly, and how do you know when you have it?

How on earth does being diverse--and with it, welcoming a wider range of potential customers--not add value?
Because it's not about being diverse, but enforcing a singular ideology. And quality has never gone up as a result of mass marketing. Popularity does not equal value, and this premise has been one of the most toxic beliefs of the 20th century.

And, the more the LLMs fill the internet with wrong information, the more wrong information is out there for them to get tripped up on, making them more likely to spread said wrong information, creating a feedback loop that results in ever-degrading slop.
There's no such thing as 'wrong' information, merely contradictory information. And the biggest problem here is not that these systems will be 'incorrect', but that they won't present anything meaningful at all, which is exactly what's happening now. Meanwhile smaller scale curated AIs are doing just fine and are typically quite a bit more accurate than their human counterparts.

Heh, in my uses of ChatGPT when trying to use it for factual stuff, after questioning it, it often ends up apologizing for misinformation.
That was added by developers because most folks won't engage an AI which is blunt, rude, or ultimately honest about the things they disagree with.

With regard to AI such as ChatGPT, the "hallucination" rate of false information is roughly 20%.
Given the 'professionals' I've dealt with I'll take those odds.

It also means that one must already be knowledgeable about the subject matter to gain any benefit from it.
Which has always been the case with information technology.

that is why it will pass the Turing test, not so much because AI got better, but because humans got worse
I wonder why that is.
 

ECMO3

Legend
you are so close to understanding this, your problem is you see the Uber drivers as an isolated singular case when they will be close to the norm

Uber drivers were the specific example used that I replied to but it is the same across the board. The population as a whole will be better, more comfortable and with a better standard of living due to technological advances including AI.

There will be some people that are worse for it (some Uber drivers in this isolated singular case), but the vast, vast majority of people will be better for it, and I would argue even among the isolated singular case (Uber drivers) most will be better off as they move on to something else.

This has been true with every single disruptive technological advance throughout history and is true for AI too.

There are a lot of reasons people have an irrational fear of technology, but IMO greed is the underlying root cause, there are those who want more for themselves and are willing to sacrifice the good of society to protect their own personal livelihood.
 

Vaalingrade

Legend
Uber drivers were the specific example used that I replied to but it is the same across the board. The population as a whole will be better, more comfortable and with a better standard of living due to technological advances including AI.

There will be some people that are worse for it (some Uber drivers in this isolated singular case), but the vast, vast majority of people will be better for it, and I would argue even among the isolated singular case (Uber drivers) most will be better off as they move on to something else.
I think more research on Uber and its impact is in order.

Or a ride in an Uber late at night during a peak.
 

mamba

Legend
Uber drivers were the specific example used that I replied to but it is the same across the board. The population as a whole will be better, more comfortable and with a better standard of living due to technological advances including AI.

There will be some people that are worse for it (some Uber drivers in this isolated singular case), but the vast, vast majority of people will be better for it, and I would argue even among the isolated singular case (Uber drivers) most will be better off as they move on to something else.
I doubt that very much; the problem is that the vast majority of people will be on the Uber-driver side of things, not on the side benefiting from cheaper cab services.

There are a lot of reasons people have an irrational fear of technology, but IMO greed is the underlying root cause, there are those who want more for themselves and are willing to sacrifice the good of society to protect their own personal livelihood.
because the rich never ruined anyone's lives to get even more money... society is being ruined by the billionaires, not the Uber drivers
 
