Ryan Dancey & AEG Part Ways Following AI Comments

COO says that AI could make any of the company's games.


Ryan Dancey, the Chief Operating Officer of boardgame publisher Alderac Entertainment Group, no longer works for the company, following statements on social media where he claimed that AI could make most of the company's board games, and that D&D and Magic: the Gathering were the only new forms of gameplay in his lifetime. After another poster on LinkedIn claimed that "AI wouldn't come up with Tiny Towns or Flip Seven or Cubitos because it doesn't understand the human element of fun", Dancey responded that he had zero reason to believe that AI could not do such a thing.

"I have zero reason to believe that an Al couldn't come up with Tiny Towns or Flip Seven or Cubitos. I can prompt any of several Als RIGHT NOW and get ideas for games as good as those. The gaming industry doesn't exist because humans create otherwise unobtainable ideas. It exists because many many previous games exist, feed into the minds of designers, who produce new variants on those themes. People then apply risk capital against those ideas to see if there's a product market fit. Sometimes there is, and sometimes there is not. (In fact, much more often than not).

Extremely occasionally (twice in my lifetime: D&D and Magic: the Gathering) a human has produced an all new form of gaming entertainment. Those moments are so rare and incandescent that they echo across decades.

Game publishing isn't an industry of unique special ideas. It's an industry about execution, marketing, and attention to detail. All things AIs are great at."
- Ryan Dancey​

The Cardboard Herald, a boardgame reviews channel, responded yesterday on BlueSky that "As you may have seen, [AEG] CEO Ryan Dancey stated that AI can make games “just as good as Tiny Towns or Flip 7 or Cubitos”, completely missing the inexorable humanity involved. We’ve spent 10 years celebrating creatives in the industry. Until he’s gone we will not work with AEG."

Today, AEG's CEO John Zinser stated "Today I want to share that Ryan Dancey and AEG have parted ways. This is not an easy post to write. Ryan has been a significant part of AEG’s story, and I am personally grateful for the years of work, passion, and intensity he brought to the company. We have built a lot together. As AEG moves into its next chapter, leadership alignment and clarity matter more than ever. This transition reflects that reality. Our commitment to our designers, partners, retailers, and players remains unchanged. We will continue building great games through collaboration, creativity, and trust."

Dancey himself posted "This morning [John Zinser] and I talked about the aftermath of my post yesterday about the ability of AI to create ideas for games. He's decided that it's time for me to move on to new adventures. Sorry to have things end like this. I've enjoyed my 10 years at AEG. I wish the team there the best in their future endeavors.

I believe we're at a civilizational turning point. That who we are and how we are is going to change on the order of what happened during the Agricultural and Industrial Revolutions; and it's past time we started talking about it and not being afraid to discuss the topic. Talking about AI, being honest about what it can and cannot do, and thinking about the implications is something we have to begin to do in a widespread way. Humans have a unique creative spark that differentiates us and makes us special and we should celebrate that specialness as we experience this epic change.

For the record: I do not believe that AI will replace the work talented game designer/developers do, nor do I think it is appropriate to use AI to replace the role of designer/developers in the publication of tabletop games. During my time at AEG I developed and implemented policies and contracts that reflect those views. It's important to me that you know what I believe and what I don't believe on this particular topic, despite what you may have read elsewhere."

Whatever your position on generative LLMs and the like, when the COO of your company announces publicly that all of the company’s games could have been made by AI, it’s a problem. UK readers may recall when Gerald Ratner, CEO of major jewelry chain Ratners, famously announced that the products sold in his stores were “trash”, instantly wiping half a billion pounds from the company’s value back in the early 1990s. The company was forced to close stores and rebrand as Signet Group. At the time, the Ratners Group was the world's biggest jewelry retailer, and Ratner himself was forced to resign in 1992. Making a damaging statement about the quality of your own company’s products became known as “doing a Ratner”.

Dancey was VP of Wizards of the Coast when the company acquired TSR, the then-owner of Dungeons & Dragons. He is also known for being the architect of the Open Game License. Dancey has worked as Chief Operating Officer for AEG for 10 years, and was responsible for the day-to-day operations of the company, second-in-command after the CEO, John Zinser.
 


There is a lot of work being done by very clever people who are using AI productively in science and mathematics. Folks at NASA have tested a workflow where Claude plans the routes for Perseverance, for example. They found the results encouraging.
We’re talking about LLMs creating board games. That’s the subject of the thread. But yeah, you’re here stanning as usual. ;)
 



You don't see why people using models for tasks where reliability is very important is relevant to your point about reliability?

My work-paid-for Gemini Pro subscription prominently displays a line at the bottom of each window to the effect of "replies by this service may not be entirely accurate; if you need accuracy, be sure to verify the response."

When we tested some routine business-process queries, 3.0 Pro was very confidently almost correct, citing docs that almost said what it summarized, in such a way that a human would grab the answer, call us, and be completely wrong.

(happening a lot to mechanics right now apparently)

It's superb at coding help though! Definitely eating the lunch of a lot of entry-level code/analysis/etc jobs if given enough data (alas, I cannot feed it much of my data due to business restrictions, as another poster noted...).
 

The OGL was a license which, upon creation, became used widely across the industry, not just for D&D, but by hundreds of other games over the following decades. It ushered in the era of open gaming to an industry which had no tradition of that previously. It changed the environment completely.

Sure, there were open licences before and after the OGL, and in industries other than ours. None of them transformed our little industry in the way the OGL did.

I'm not sure about that one.

It may not have affected home users as much, but the open license on the Linux kernel and the ensuing Linux OS has had a rather large impact overall.

It's one reason why we have so many Android devices, as well as the entire Apple ecosystem descended from the older ideas and kernels.

Obviously, it's a completely different field and a completely different arena of focus, but in general, when talking about open licenses, the OGL had a big impact on gaming, but there are others out there that have had some impact that are not related to TTRPGs and such.
 



You don't see why people using models for tasks where reliability is very important is relevant to your point about reliability?
are they using them or are they taking a look at what the AIs can and cannot do...

There is a lot of work being done by very clever people who are using AI productively in science and mathematics. Folks at NASA have tested a workflow where Claude plans the routes for Perseverance, for example. They found the results encouraging.
none of this means the AI was reliable and that its results were used unchanged
 

Seems a crazy thing to yell out to the world considering his position. While I think AI could come up with board games, I'm not sure how good they'd be, maybe 1 in 1,000 would be worth playing, might be able to bounce ideas off it though.

I do see his point; a lot of games seem to be spin-offs of others, but often with a twist that makes it interesting. You see it heaps with video games; someone has a great idea and others copy it, like all of the survival village builders that followed Banished (I'm not even sure that was the first, just one of the earliest I know of).
 

none of this means the AI was reliable and that its results were used unchanged
Right. The point is that they're aware of the reliability issues and think there are ways to address them even for high-stakes work. Therefore, the fact that someone saw a factual error in a chatbot doesn't tell us much about the path of the technology as a whole.

From the anthropic article:
As with any AI output, it’s important to check Claude’s work. The waypoints drawn by Claude were run through a simulation that Perseverance uses every day to confirm the accuracy of the commands: over 500,000 variables were modeled to check the projected positions of the rover and predict any hazards along its route.

When the JPL engineers reviewed Claude’s plans, they found that only minor changes were needed. For instance, ground-level camera images (which Claude hadn’t seen) gave a clearer view of sand ripples on either side of a narrow corridor; the rover drivers elected to split the route more precisely than Claude had at this point. But otherwise, the route held up well. The plans were sent to Mars, and the rover successfully traversed the planned path.

The engineers estimate that using Claude in this way will cut the route-planning time in half, and make the journeys more consistent. Less time spent doing tedious manual planning—and less time spent training—allows the rover’s operators to fit in even more drives, collect even more scientific data, and do even more analysis. It means, in short, that we’ll learn much more about Mars
 

