Ryan Dancey & AEG Part Ways Following AI Comments

COO says that AI could make any of the company's games.


Ryan Dancey, the Chief Operating Officer of boardgame publisher Alderac Entertainment Group, no longer works for the company, following statements on social media where he claimed that AI could make most of the company's board games, and that D&D and Magic: the Gathering were the only new forms of gameplay in his lifetime. After another poster on LinkedIn claimed that "AI wouldn't come up with Tiny Towns or Flip Seven or Cubitos because it doesn't understand the human element of fun", Dancey responded that he had zero reason to believe that AI could not do such a thing.

"I have zero reason to believe that an AI couldn't come up with Tiny Towns or Flip Seven or Cubitos. I can prompt any of several AIs RIGHT NOW and get ideas for games as good as those. The gaming industry doesn't exist because humans create otherwise unobtainable ideas. It exists because many many previous games exist, feed into the minds of designers, who produce new variants on those themes. People then apply risk capital against those ideas to see if there's a product market fit. Sometimes there is, and sometimes there is not. (In fact, much more often than not).

Extremely occasionally (twice in my lifetime: D&D and Magic: the Gathering) a human has produced an all new form of gaming entertainment. Those moments are so rare and incandescent that they echo across decades.

Game publishing isn't an industry of unique special ideas. It's an industry about execution, marketing, and attention to detail. All things AIs are great at."
- Ryan Dancey

The Cardboard Herald, a boardgame reviews channel, responded yesterday on BlueSky that "As you may have seen, [AEG] CEO Ryan Dancey stated that AI can make games “just as good as Tiny Towns or Flip 7 or Cubitos”, completely missing the inexorable humanity involved. We’ve spent 10 years celebrating creatives in the industry. Until he’s gone we will not work with AEG."

Today, AEG's CEO John Zinser stated "Today I want to share that Ryan Dancey and AEG have parted ways. This is not an easy post to write. Ryan has been a significant part of AEG’s story, and I am personally grateful for the years of work, passion, and intensity he brought to the company. We have built a lot together. As AEG moves into its next chapter, leadership alignment and clarity matter more than ever. This transition reflects that reality. Our commitment to our designers, partners, retailers, and players remains unchanged. We will continue building great games through collaboration, creativity, and trust."

Dancey himself posted "This morning [John Zinser] and I talked about the aftermath of my post yesterday about the ability of AI to create ideas for games. He's decided that it's time for me to move on to new adventures. Sorry to have things end like this. I've enjoyed my 10 years at AEG. I wish the team there the best in their future endeavors.

I believe we're at a civilizational turning point. That who we are and how we are is going to change on the order of what happened during the Agricultural and Industrial Revolutions; and it's past time we started talking about it and not being afraid to discuss the topic. Talking about AI, being honest about what it can and cannot do, and thinking about the implications is something we have to begin to do in a widespread way. Humans have a unique creative spark that differentiates us and makes us special and we should celebrate that specialness as we experience this epic change.

For the record: I do not believe that AI will replace the work talented game designer/developers do, nor do I think it is appropriate to use AI to replace the role of designer/developers in the publication of tabletop games. During my time at AEG I developed and implemented policies and contracts that reflect those views. It's important to me that you know what I believe and what I don't believe on this particular topic, despite what you may have read elsewhere."

Whatever your position on generative LLMs and the like, when the COO of your company announces publicly that all of the company’s games could have been made by AI, it’s a problem. UK readers may recall when Gerald Ratner, CEO of major jewelry chain Ratners, famously described some of his stores’ products as “total crap”, instantly wiping half a billion pounds from the company’s value in the early 1990s. The company was forced to close stores and rebrand as Signet Group. At the time the Ratners Group was the world's biggest jewelry retailer. Ratner himself was forced to resign in 1992. The act of making a damaging statement about the quality of your own company’s products became known as “doing a Ratner”.

Dancey was VP of Wizards of the Coast when the company acquired TSR, the then-owner of Dungeons & Dragons. He is also known for being the architect of the Open Game License. Dancey has worked as Chief Operating Officer for AEG for 10 years, and was responsible for the day-to-day operations of the company, second-in-command after the CEO, John Zinser.
 


Right. The point is that they're aware of the reliability issues and think there are ways to address them even for high-stakes work. Therefore, the fact that someone saw a factual error in a chatbot doesn't tell us much about the path of the technology as a whole.
The 'ways to address them' might be hoping the AI continues to get better, or not relying on AI at all for the parts that are critical or where it is not sufficiently reliable, while using it in places where it is 'reliable enough'. That is really more working around the unreliability than addressing it.
 


The 'ways to address them' might be hoping the AI continues to get better, or not relying on AI at all for the parts that are critical or where it is not sufficiently reliable, while using it in places where it is 'reliable enough'. That is really more working around the unreliability than addressing it.
Yeah, no disagreement from me
 


He actively bragged about plans to destroy non-D&D games in pursuit of the almighty dollar. I'm cool calling his work "actively damaging" to the industry.
That's one way to parse what he said.

Dancey worked for a game company that wanted to sell more games. He came up with a way to do that, which also preserved D&D 3E (and eventually 5E) as an open game. How terrible.

The OGL put D&D 3E into the community as a game anyone could design content for, even if WotC went out of business, or sold D&D to somebody else, or something else done by future management. The OGL was also intended to put D&D 3E even more in the center of the hobby than it already was going to be and offload low-yield (but important) products to other companies.

The creation of the OGL wasn't an evil plan by Dancey to destroy non-D&D games. The later attempt to revoke the OGL by WotC decades later was an "evil" plan that was negative for the publishing community and the fan community.

The OGL helped raise the tide on the entire industry, which resulted in an explosion of content for D&D . . . but also an explosion of non-D&D games. It took time, and there were obviously other factors, but D&D being an open game is an important reason for D&D's continued success and the overall rise of the industry at large.

Dancey has put his foot in his mouth multiple times over the years, but this was not one of them.
 

Wait, before AI you were trusting stuff that comes across your social media feed? Seriously?
Yes.

Obviously, there was a lot of human-generated slop before the rise of our AI overlords. Since the dawn of time actually, before even the rise of social media.

But you could follow trusted sources and feel reasonably certain the news you were reading was written by a journalist.

I don't feel that's true anymore.

Not that there aren't still organizations out there who still hire journalists and strive to put out actually factual information. But who to trust? In my younger years, my faith in humanity wasn't super high, but these days, it's never been lower.
 


I asked an LLM to help me find 10 places to host work dinners, with specific conditions. Three of them are permanently closed and another two do not meet the conditions.

Yet, this "great technology" is supposed to make a board game?
Imagine how much the 'nailgun' does for wooden house building and renovation, if you give that same tool to the average person, they will probably nail themselves to their car with it...

Just because you don't know how to use a tool, or how to choose the right tool, doesn't mean the tool is useless. Not all LLMs are created equal for the same tasks; different providers often have different versions available. And just like with every other product, people sell it by telling lies (or do you really think dish-soap X will be so much better than dish-soap Y, like they show in the commercials)...
People aren’t reliable either.
No, but people are different in their unreliability. People either lie or make mistakes; in most cases people leave hints in their lying to a greater or lesser extent. And people make mistakes a bit more predictably than LLMs. You know from experience that people are weaker in certain areas, and sometimes there are also additional signs when people make a mistake.

The problem with AI (LLM) is that they do not lie and they do not make mistakes; the answer might be faulty, but that isn't a lie or a mistake. They just follow their design, making a correct answer virtually indistinguishable from a faulty one, unless you know the material very well and are paying attention.

A further issue is that humans can be confronted with their mistakes and learn from them, the same with a lie. And if they do not learn from that behaviour, they will be replaced. The same isn't true of the current AI (LLM) products: they are the same not just within one organization, but for everyone in the world who uses the same supplier. They do not learn from their faults or lies, nor can they be removed as easily as humans.

People aren't reliable either; that's why many processes use the four-eyes principle, where multiple people check each other's work, and in certain cases quite a few more people look at the thing that needs to be checked.

Sure, people can check the AI (LLM) work, but that requires discipline, knowledge, and experience. And as we've seen in the news, it happens that these checks haven't been done, either due to folks being lazy/lax or folks doing things they have no knowledge of in the first place. These things happen in purely human situations as well, but those tend not to make the news anymore unless they are extreme cases. Currently, reporting on the successes and failures of AI (LLM) is 'sexy' for the media...

Some really old people around here might remember how administration or finances worked before computers and Excel; it required a LOT more people at the office. Getting a loan at a bank wasn't dependent on your actual income, other debts, dependability, etc., but on how well the branch manager knew and liked you... And if you fell outside the 'norm' at that time or didn't go to the same church (or any church), you were SOL...

AI (LLM) will fill a similar niche, imho. It won't replace everyone, and it's just a tool amongst many (just like Excel). And sure, there will be folks who use it for things it isn't suited for, just like they've been using Excel as a shadow database (something it was never intended to be) without telling anyone in IT about it... AI (LLM) will get misused, just like every other human invention...
 

I'm not sure he's wrong, or if he is today, based on the rate of advancement, I don't think he will be for long.
I think the problem Dancey doesn't see is that without people constantly learning by making games that build on each other's work, we don't get to the examples he cited. No human gets to learn how to make the next D&D or the next Magic. There's no viable training model, because no one can choose to starve to make a game. You need an industry for things to develop.
 


The creation of the OGL wasn't an evil plan by Dancey to destroy non-D&D games. The later attempt to revoke the OGL by WotC decades later was an "evil" plan that was negative for the publishing community and the fan community.

"should eventually drive support for all other game systems to the lowest level possible in the market, create customer resistance to the introduction of new systems" seems straight-up like a plan to bring embrace and extinguish to RPGs.
 
