Ryan Dancey & AEG Part Ways Following AI Comments

COO says that AI could make any of the company's games.

Ryan Dancey, the Chief Operating Officer of boardgame publisher Alderac Entertainment Group, no longer works for the company, following statements on social media in which he claimed that AI could make most of the company's board games, and that D&D and Magic: the Gathering were the only new forms of gameplay created in his lifetime. After another poster on LinkedIn claimed that "AI wouldn't come up with Tiny Towns or Flip Seven or Cubitos because it doesn't understand the human element of fun", Dancey responded that he had zero reason to believe that AI could not do such a thing.

"I have zero reason to believe that an Al couldn't come up with Tiny Towns or Flip Seven or Cubitos. I can prompt any of several Als RIGHT NOW and get ideas for games as good as those. The gaming industry doesn't exist because humans create otherwise unobtainable ideas. It exists because many many previous games exist, feed into the minds of designers, who produce new variants on those themes. People then apply risk capital against those ideas to see if there's a product market fit. Sometimes there is, and sometimes there is not. (In fact, much more often than not).

Extremely occasionally (twice in my lifetime: D&D and Magic: the Gathering) a human has produced an all new form of gaming entertainment. Those moments are so rare and incandescent that they echo across decades.

Game publishing isn't an industry of unique special ideas. It's an industry about execution, marketing, and attention to detail. All things AIs are great at."
- Ryan Dancey​

The Cardboard Herald, a boardgame reviews channel, responded yesterday on BlueSky that "As you may have seen, [AEG] CEO Ryan Dancey stated that AI can make games “just as good as Tiny Towns or Flip 7 or Cubitos”, completely missing the inexorable humanity involved. We’ve spent 10 years celebrating creatives in the industry. Until he’s gone we will not work with AEG."

Today, AEG's CEO John Zinser stated "Today I want to share that Ryan Dancey and AEG have parted ways. This is not an easy post to write. Ryan has been a significant part of AEG’s story, and I am personally grateful for the years of work, passion, and intensity he brought to the company. We have built a lot together. As AEG moves into its next chapter, leadership alignment and clarity matter more than ever. This transition reflects that reality. Our commitment to our designers, partners, retailers, and players remains unchanged. We will continue building great games through collaboration, creativity, and trust."

Dancey himself posted "This morning [John Zinser] and I talked about the aftermath of my post yesterday about the ability of AI to create ideas for games. He's decided that it's time for me to move on to new adventures. Sorry to have things end like this. I've enjoyed my 10 years at AEG. I wish the team there the best in their future endeavors.

I believe we're at a civilizational turning point. That who we are and how we are is going to change on the order of what happened during the Agricultural and Industrial Revolutions; and it's past time we started talking about it and not being afraid to discuss the topic. Talking about AI, being honest about what it can and cannot do, and thinking about the implications is something we have to begin to do in a widespread way. Humans have a unique creative spark that differentiates us and makes us special and we should celebrate that specialness as we experience this epic change.

For the record: I do not believe that AI will replace the work talented game designer/developers do, nor do I think it is appropriate to use AI to replace the role of designer/developers in the publication of tabletop games. During my time at AEG I developed and implemented policies and contracts that reflect those views. It's important to me that you know what I believe and what I don't believe on this particular topic, despite what you may have read elsewhere."

Whatever your position on generative LLMs and the like, when the COO of your company announces publicly that all of the company’s games could have been made by AI, it’s a problem. UK readers may recall when Gerald Ratner, CEO of major jewelry chain Ratners, famously announced that the products sold in his stores were “total crap”, instantly wiping half a billion pounds from the company’s value back in the early 1990s. The company was forced to close stores and rebrand as Signet Group. At the time, the Ratners Group was the world's biggest jewelry retailer. Ratner himself was forced to resign in 1992. The act of making a damaging statement about the quality of your own company’s products became known as “doing a Ratner”.

Dancey was VP of Wizards of the Coast when the company acquired TSR, the then-owner of Dungeons & Dragons. He is also known for being the architect of the Open Game License. Dancey served as Chief Operating Officer of AEG for 10 years, responsible for the day-to-day operations of the company and second-in-command to the CEO, John Zinser.
 


Right. The point is that they're aware of the reliability issues and think there are ways to address them even for high-stakes work. Therefore, the fact that someone saw a factual error in a chatbot doesn't tell us much about the path of the technology as a whole.

So, unless you look at the fundamental differences in what is being asked of the technology in these different use-cases, you will fail to address the issues that actually concern people about the technology, and will do nothing to alleviate those concerns. While "the technology as a whole" may be important to you, to most folks that is a deflection and distraction from how it may impact them.

We aren't driving on Mars, so that really doesn't make us feel better about it.

If the technology is being presented to us in a situation where we should have a reasonable expectation of factual accuracy, it ought to deliver that. If it can't, it shouldn't be presented as an option.
 

"should eventually drive support for all other game systems to the lowest level possible in the market, create customer resistance to the introduction of new systems" seems straight-up like a plan to bring embrace and extinguish to RPGs.
it’s more a side effect than the main goal.

The main goal was selling more D&D and bringing more 3pps onto D&D (and making sure that D&D can endure regardless of what happens to the company publishing it).

That economies of scale then meant that D&D products could be cheaper than the competition is just a consequence of their increased market share.
 


The 'ways to address them' might be hoping the AI continues to get better, or not relying on AI at all for the parts where it is not sufficiently reliable or that are critical, while using it in places where it is 'reliable enough'. That really is more working around the unreliability than addressing it.
The problem is that current AI needs more compute power and energy to make the next leap, and that capacity does not yet exist.

This is why we have a consumer electronics apocalypse right now.

The hardware needed for the next big leap is not ready so we’re brute forcing it with tons of existing hardware to try and make up for it.

The net effect is that AI is rushed, and it is making most people's lives worse.

I am involved in an AI project right now for the editorial office of the future but my main goal is human preservation so I am focusing on what work people will do now that the crap tasks are handled by AI. So many companies are looking at AI and not asking themselves how to expand their business now that a person’s time has been freed.
 

Imagine how much the 'nailgun' did for wooden house building and renovation; give that same tool to the average person, and they will probably nail themselves to their car with it...

Just because you don't know how to use a tool or how to choose the right tool doesn't mean the tool is useless. Not all LLMs are created equal for the same tasks; different providers often have different versions available. And just like with every other product, people sell it by telling lies (or do you really think dish-soap X will be so much better than dish-soap Y, like they show in the commercials)...
The assumption that I'm an idiot that doesn't understand the tools is a failure on your part, not mine and not the AI in this situation.
I work for a company that uses neural nets and regularly hosts classes on how to use gen AI within the industry we sell to. I just hosted a course on how to structure reports so that AI summarizes them accurately.

Whenever an LLM fails, its acolytes want the general public to think that it is the public's fault rather than the model's fault. They do this without introspection or question, because advocacy for AI in all situations is now the default within certain circles.
 

So, unless you look at the fundamental differences in what is being asked of the technology in these different use-cases, you will fail to address the issues that actually concern people about the technology, and will do nothing to alleviate those concerns. "The technology as a whole" may be important to you, but to most folks that is a deflection and distraction from how it may impact them.
Maybe it came across wrong--I'm not trying to convince people their concerns are not valid, because I think those concerns are valid. AI will hammer the labor market, make jobs redundant, and displace workers. It is going to make mistakes, consequential ones, that will get people killed.

My interest in the topic is that the anti-AI push runs the risk of being wrong, scientifically, about the nature of what it is criticizing. It's much easier to oppose AI if you think the output is always slop and it doesn't help productivity, because nothing much is lost by axing it.

But the world I am seeing is one where experts (many with no stake in AI's success) are finding it beneficial. Ignoring these examples to focus only on the failure cases runs the risk of being anti-scientific and anti-expert. Ultimately it will be unsuccessful as a case against AI. That worries me, because it leaves the anti-AI position in a poor place to counter AI risk.
 

It wasn't a side-effect, because it didn't actually happen.
3e was pretty dominant, and ‘lowest level possible’ is not zero. That WotC then dropped the ball and released 4e without the OGL didn’t help that side effect, though.

It was, however, his stated goal (at least as per the interview quoted). I'm just happy that he miscalculated.
No, it is an expected side effect of increasing the popularity of D&D, even in his telling. Here is a less selective quote:

“If you accept (as I have finally come to do) that the theory is valid, then the logical conclusion is that the larger the number of people who play D&D, the harder it is for competitive games to succeed, and the longer people will stay active gamers, and the more value the network of D&D players will have to Wizards of the Coast.

In fact, we believe that there may be a secondary market force we jokingly call "The Skaff Effect", after our own Skaff Elias. Skaff is one of the smartest guys in the company, and after looking at lots of trends and thinking about our business over a long period of time, he enunciated his theory thusly:

"All marketing and sales activity in a hobby gaming genre eventually contributes to the overall success of the market share leader in that genre."
In other words, the more money other companies spend on their games, the more D&D sales are eventually made. Now, there are clearly issues of efficiency - not every dollar input to the market results in a dollar output in D&D sales; and there is a substantial time lag between input and output; and a certain amount of people are diverted from D&D to other games never to return. However, we believe very strongly that the net effect of the competition in the RPG genre is positive for D&D.

The downside here is that I believe that one of the reasons that the RPG as a category has declined so much from the early 90's relates to the proliferation of systems. Every one of those different game systems creates a "bubble" of market inefficiency; the cumulative effect of all those bubbles has proven to be a massive downsizing of the marketplace. I have to note, highlight, and reiterate: The problem is not competitive product, the problem is competitive systems. I am very much for competition and for a lot of interesting and cool products.

So much for the dry theory and background. Here's the logical conclusions we've drawn:

We make more revenue and more profit from our core rulebooks than any other part of our product lines. In a sense, every other RPG product we sell other than the core rulebooks is a giant, self-financing marketing program to drive sales of those core books. At an extreme view, you could say that the core book of D&D -- the PHB -- is the focus of all this activity, and in fact, the PHB is the #1 best selling, and most profitable RPG product Wizards of the Coast makes year in and year out.

The logical conclusion says that reducing the "cost" to other people to publishing and supporting the core D&D game to zero should eventually drive support for all other game systems to the lowest level possible in the market, create customer resistance to the introduction of new systems, and the result of all that "support" redirected to the D&D game will be to steadily increase the number of people who play D&D, thus driving sales of the core books. This is a feedback cycle -- the more effective the support is, the more people play D&D. The more people play D&D, the more effective the support is.”

 

From a 2000 interview with Dancey here at EN World:

"The logical conclusion says that reducing the "cost" to other people to publishing and supporting the core D&D game to zero should eventually drive support for all other game systems to the lowest level possible in the market, create customer resistance to the introduction of new systems, and the result of all that "support" redirected to the D&D game will be to steadily increase the number of people who play D&D, thus driving sales of the core books."

He actively bragged about plans to destroy non-D&D games in pursuit of the almighty dollar. I'm cool calling his work "actively damaging" to the industry.

The word "destroy" does not appear in the quote you present.

Instead, he refers to "the lowest level possible" but does not define that level.

You take it to mean "BWAHAHAHA! We will destroy the competition!" But that's an assumption/interpretation on your part, and it doesn't align with how corporate players typically talk.

Corporate folks hedge. A lot. They use phrases like "lowest possible level" not because they mean "destroy" but are unwilling to say so. They use that kind of phrasing because:

1) They want to sound like their work will be impactful in a way their bosses like, but

2) They don't know what the lowest level is, and they want to still be able to say they were right a year or two down the road.

He may well have known that it wasn't actually going to drive down other games much, and could later still make it sound like he was prophetic.
 

My experiences discussing AI in RPG forums have led to increasing empathy for people who think chainmail bikinis are OK, or that "orcs aren't black people". Not that I agree with their arguments/views, but I'm realizing what it's like to try to engage in a community where any attempt to offer counterpoints to the prevailing groupthink labels you a heretic.
 

