Ryan Dancey & AEG Part Ways Following AI Comments

COO says that AI could make any of the company's games.


Ryan Dancey, the Chief Operating Officer of boardgame publisher Alderac Entertainment Group, no longer works for the company, following statements on social media where he claimed that AI could make most of the company's board games, and that D&D and Magic: the Gathering were the only new forms of gameplay in his lifetime. After another poster on LinkedIn claimed that "AI wouldn't come up with Tiny Towns or Flip Seven or Cubitos because it doesn't understand the human element of fun", Dancey responded that he had zero reason to believe that AI could not do such a thing.

"I have zero reason to believe that an AI couldn't come up with Tiny Towns or Flip Seven or Cubitos. I can prompt any of several AIs RIGHT NOW and get ideas for games as good as those. The gaming industry doesn't exist because humans create otherwise unobtainable ideas. It exists because many many previous games exist, feed into the minds of designers, who produce new variants on those themes. People then apply risk capital against those ideas to see if there's a product market fit. Sometimes there is, and sometimes there is not. (In fact, much more often than not).

Extremely occasionally (twice in my lifetime: D&D and Magic: the Gathering) a human has produced an all new form of gaming entertainment. Those moments are so rare and incandescent that they echo across decades.

Game publishing isn't an industry of unique special ideas. It's an industry about execution, marketing, and attention to detail. All things AIs are great at."
- Ryan Dancey​

The Cardboard Herald, a boardgame reviews channel, responded yesterday on BlueSky that "As you may have seen, [AEG] CEO Ryan Dancey stated that AI can make games “just as good as Tiny Towns or Flip 7 or Cubitos”, completely missing the inexorable humanity involved. We’ve spent 10 years celebrating creatives in the industry. Until he’s gone we will not work with AEG."

Today, AEG's CEO John Zinser stated "Today I want to share that Ryan Dancey and AEG have parted ways. This is not an easy post to write. Ryan has been a significant part of AEG’s story, and I am personally grateful for the years of work, passion, and intensity he brought to the company. We have built a lot together. As AEG moves into its next chapter, leadership alignment and clarity matter more than ever. This transition reflects that reality. Our commitment to our designers, partners, retailers, and players remains unchanged. We will continue building great games through collaboration, creativity, and trust."

Dancey himself posted "This morning [John Zinser] and I talked about the aftermath of my post yesterday about the ability of AI to create ideas for games. He's decided that it's time for me to move on to new adventures. Sorry to have things end like this. I've enjoyed my 10 years at AEG. I wish the team there the best in their future endeavors.

I believe we're at a civilizational turning point. That who we are and how we are is going to change on the order of what happened during the Agricultural and Industrial Revolutions; and it's past time we started talking about it and not being afraid to discuss the topic. Talking about AI, being honest about what it can and cannot do, and thinking about the implications is something we have to begin to do in a widespread way. Humans have a unique creative spark that differentiates us and makes us special and we should celebrate that specialness as we experience this epic change.

For the record: I do not believe that AI will replace the work talented game designer/developers do, nor do I think it is appropriate to use AI to replace the role of designer/developers in the publication of tabletop games. During my time at AEG I developed and implemented policies and contracts that reflect those views. It's important to me that you know what I believe and what I don't believe on this particular topic, despite what you may have read elsewhere."

Whatever your position on generative LLMs and the like, when the COO of your company publicly announces that all of the company’s games could have been made by AI, it’s a problem. UK readers may recall Gerald Ratner, CEO of major jewelry chain Ratners, who famously announced in the early 1990s that the products sold in his stores were “trash”, instantly wiping half a billion pounds from the company’s value. The company was forced to close stores and rebrand as Signet Group. At the time, the Ratners Group was the world's biggest jewelry retailer, and Ratner himself was forced to resign in 1992. Making a damaging statement about the quality of your own company’s products became known as “doing a Ratner”.

Dancey was VP of Wizards of the Coast when the company acquired TSR, the then-owner of Dungeons & Dragons, and is also known as the architect of the Open Game License. He worked as Chief Operating Officer at AEG for 10 years, responsible for the day-to-day operations of the company and second-in-command to CEO John Zinser.
 


I'm not convinced the type of acceptance matters, not in the long run. AI seems well on the path to general acceptance, whether people actively embrace it or it is forced upon them. Kind of like the old "fake it until you make it."

The boycott and actions of WotC... maybe it will be successful (in that only human art will be used by the major RPG publishers). But I'm not convinced that model of success is actually good for the RPG market long term.

I'm not sure that is true. There are people right now (I'm not part of the group, but I know of them, and they are not small in number) betting on AI being a lot less useful than the people at the top are pushing. In fact, they are putting their money on a massive market crash occurring in the next few years (some call it an AI bubble that's going to burst, just as other bubbles have burst before).

They are probably seeing what I am seeing: a LOT of money is being invested in this AI stuff, but not a lot of actual use is coming out of it. And it's being pushed, REALLY pushed hard. Microsoft is pushing it despite a LOT of people pushing back; in fact, some say Microsoft is absolutely out of touch with what its customers actually want at this point, to the point of putting its fingers in its ears and singing "LaLaLaLa" really loudly. They just tried to ban the term "Microslop" because they don't want to face how or why that term has gained steam in the past few months!

However, when I look at employees below the exec level, I see a LOT of dissatisfaction with what is being pushed on them by the higher-ups. I see a lot of them making things up or exaggerating to keep their bosses happy, but the reality is that AI is causing more problems than it solves for most of them (there are a few industries where it is boosting productivity).

I think there is a grim chance that the AI bubble is a reality, and I dread that it may pop, but if it is a bubble then pop it eventually will. The reason it will pop is that the execs at the top of some tech companies have their heads so far up a cave that they can't see anything beyond their own eyelids. Part of it is that, for most companies, AI offers no use more effective than what they were already doing; part of it is that employees find it causes more problems than it solves; and part of it is that people don't want AI forced on them when they see no real purpose behind it and have no real use for it themselves, certainly not to the degree that many tech execs want them to.
 

The Dot-Com bubble is a very apt comparison: clearly LLMs have some use, as the Internet did in the 90s, but right now it isn't clear to what extent they are marketable or can be effectively monetized. Even in a rosy scenario for future LLM usage, there is no way the current investment can pay off.
 



Oh, I am betting against it, but modestly. As I have learned, bubbles can stick around much longer than I would expect them to. See Tesla and Bitcoin, both of which are still ridiculously overvalued to this day.
 

I don't believe that people who are pushing against AI are doing so effectively. I've asked on these boards how we can come up with effective actions to help. None of the anti-AI folks had any interest in practical actions to reduce the harm that will be caused by AI.

All I see from folks who are anti-AI is "evil evil bad!", and the most effective actions they support are complete bans and boycotts. With the current growth and acceptance of AI in the RPG market and the world at large, it's obvious such tactics are not working.

So in effect, the anti-AI movement is not going to achieve what its members want, and they are unwilling to do anything that actually will. All they are doing is voicing their opinions so they can be "right" and say "I told you so".

If it's not too late already, what are the anti-AI people going to do that is effective?

Really? Not washing your hands carries a risk of evolving a human-ending virus or bacterium. That risk is near-infinitesimally tiny, so we all generally ignore it. With the development of nuclear weapons, the risk has generally been considered high, high enough that society has taken action. AI sits at the same level of risk (to me), but "we" didn't just assume the world was united in accepting that level of risk. Humans did not simply take the risk as universally accepted; parts of society worked on providing data and on supporting, educating about, and discussing the risks associated with the technology.

Sure, go ahead and assume; go ahead and refuse to provide data or support for your views. But don't be surprised when your voice as anti-AI advocates is ignored and dismissed, and your actions are ineffective in reaching your anti-AI goal.

To me, this reply is not about supporting the pro or anti side, but about pointing out that when someone wants to resist common momentum, they need to supply data, rationale, and so on. It doesn't matter how "obvious" the logic is to you, or how apparent it "should be" to everyone; since you want to change the default societal behavior, you're going to need to do something effective.
Pushing back means figuring out the best method for AI to enhance human work rather than replace it.

We need models on how AI frees people to expand rather than allowing a company to replace.
 
