Ryan Dancey & AEG Part Ways Following AI Comments

COO says that AI could make any of the company's games.


Ryan Dancey, Chief Operating Officer of boardgame publisher Alderac Entertainment Group, no longer works for the company following statements on social media in which he claimed that AI could make most of the company's board games, and that D&D and Magic: The Gathering were the only new forms of gameplay in his lifetime. After another poster on LinkedIn claimed that "AI wouldn't come up with Tiny Towns or Flip Seven or Cubitos because it doesn't understand the human element of fun", Dancey responded that he had zero reason to believe that AI could not do such a thing.

"I have zero reason to believe that an Al couldn't come up with Tiny Towns or Flip Seven or Cubitos. I can prompt any of several Als RIGHT NOW and get ideas for games as good as those. The gaming industry doesn't exist because humans create otherwise unobtainable ideas. It exists because many many previous games exist, feed into the minds of designers, who produce new variants on those themes. People then apply risk capital against those ideas to see if there's a product market fit. Sometimes there is, and sometimes there is not. (In fact, much more often than not).

Extremely occasionally (twice in my lifetime: D&D and Magic: the Gathering) a human has produced an all new form of gaming entertainment. Those moments are so rare and incandescent that they echo across decades.

Game publishing isn't an industry of unique special ideas. It's an industry about execution, marketing, and attention to detail. All things AIs are great at."
- Ryan Dancey​

The Cardboard Herald, a boardgame reviews channel, responded yesterday on BlueSky that "As you may have seen, [AEG] CEO Ryan Dancey stated that AI can make games “just as good as Tiny Towns or Flip 7 or Cubitos”, completely missing the inexorable humanity involved. We’ve spent 10 years celebrating creatives in the industry. Until he’s gone we will not work with AEG."

Today, AEG's CEO John Zinser stated "Today I want to share that Ryan Dancey and AEG have parted ways. This is not an easy post to write. Ryan has been a significant part of AEG’s story, and I am personally grateful for the years of work, passion, and intensity he brought to the company. We have built a lot together. As AEG moves into its next chapter, leadership alignment and clarity matter more than ever. This transition reflects that reality. Our commitment to our designers, partners, retailers, and players remains unchanged. We will continue building great games through collaboration, creativity, and trust."

Dancey himself posted "This morning [John Zinser] and I talked about the aftermath of my post yesterday about the ability of AI to create ideas for games. He's decided that it's time for me to move on to new adventures. Sorry to have things end like this. I've enjoyed my 10 years at AEG. I wish the team there the best in their future endeavors.

I believe we're at a civilizational turning point. That who we are and how we are is going to change on the order of what happened during the Agricultural and Industrial Revolutions; and it's past time we started talking about it and not being afraid to discuss the topic. Talking about AI, being honest about what it can and cannot do, and thinking about the implications is something we have to begin to do in a widespread way. Humans have a unique creative spark that differentiates us and makes us special and we should celebrate that specialness as we experience this epic change.

For the record: I do not believe that AI will replace the work talented game designer/developers do, nor do I think it is appropriate to use AI to replace the role of designer/developers in the publication of tabletop games. During my time at AEG I developed and implemented policies and contracts that reflect those views. It's important to me that you know what I believe and what I don't believe on this particular topic, despite what you may have read elsewhere."

Whatever your position on generative LLMs and the like, when the COO of your company announces publicly that all of the company’s games could have been made by AI, it’s a problem. UK readers may recall when Gerald Ratner, CEO of major jewelry chain Ratners, famously described the products sold in his stores as “total crap”, instantly wiping around half a billion pounds from the company’s value in the early 1990s. The company was forced to close stores and rebrand as the Signet Group; at the time, the Ratners Group was the world's biggest jewelry retailer. Ratner himself was forced to resign in 1992. Making a damaging public statement about the quality of your own company’s products became known as “doing a Ratner”.

Dancey was VP of Wizards of the Coast when the company acquired TSR, the then-owner of Dungeons & Dragons, and is also known as the architect of the Open Game License. He worked as Chief Operating Officer at AEG for 10 years, responsible for the day-to-day operations of the company and second-in-command after CEO John Zinser.
 

It’s obvious that this was a deeply foolish and also deeply unkind thing for him to say, though I think the latter was unintentional. It was also unnecessary given that he could have emphasized the positive potential of his thesis.

I think he is right that a thing AI does really well is iterate on existing ideas. If what you want to market is basically another flavour of X, then you can see why the MBAs are all going hmmm…

But he also pointed out that truly disruptive ideas are where the most potential value resides. If AI can be harnessed to take care of the iterative tasks, then it should be done in the interest of unleashing your creatives to focus on creativity. Unfortunately…the market might not support this, as we humans tend to stick with what is comfortable.

I don’t want to jump all over the guy for speaking honestly, even if his remarks were uncomfortable and ill chosen.
 

But he also pointed out that truly disruptive ideas are where the most potential value resides. If AI can be harnessed to take care of the iterative tasks, then it should be done in the interest of unleashing your creatives to focus on creativity. Unfortunately…the market might not support this
I agree with you, but I think Dancey, and folks like him, don't seem to understand that when they say "it'll free creatives up," what that's already working out to in practice is "free most of you up to apply for unemployment."

It's kind of Thanos logic operating here: "I'll do this super-disruptive thing that I feel will lead to a better world, but largely because I haven't actually asked anyone who will be affected by it if there are any problems with the idea."

I just saw an article where Computer Science, of all majors, is one of the ones that produces the worst return on investment for students, after decades of us all pushing our kids to major in it, often taking on major debt to do so. Entry level jobs for those kids, who still have that debt, are gone. Will the world be a better place for them when they're in their 30s or 40s? Maybe, but they're probably irreparably harmed by the transition.

I don't think most of the AI talking heads are actually malicious -- although I can certainly think of a few that clearly are -- but they're almost all thoughtless about how ordinary people will be affected for years, maybe decades. If they wanted to pledge a percentage of their revenue towards job training, that'd go a long way to making them seem less hostile to everyone who's not them.
 

I agree with you, but I think Dancey, and folks like him, don't seem to understand that when they say "it'll free creatives up," what that's already working out to in practice is "free most of you up to apply for unemployment."
I feel this is the potential end result of the “Move Fast and Break Things” philosophy. In the course of breaking things, a lot of people can be put out of work, and whereas before this logic was applied internally within a CEO’s own company, it’s now stretching out across a lot of areas impacting society at large.

Nobody is taking ownership of the thing that is broken.
 


But moving fast and breaking things already happened when manufacturing was offshored, with similar results: tons of people lost work, good-paying jobs went away, etc. We are now just seeing the same thing play out in another work sector (creative and knowledge-based jobs).

It's not like we haven't already seen this play out in other sectors.
I agree with you, but I think Dancey, and folks like him, don't seem to understand that when they say "it'll free creatives up," what that's already working out to in practice is "free most of you up to apply for unemployment."
I've been telling people for years to avoid tech and computer science because:
  1. "They" have been pushing everyone towards it for years and that never works out well
  2. Even before AI you were competing with the best in the world that can live and work in much cheaper countries
  3. The best of the best are even willing to move to NA and it's cheap for companies to onshore them.
  4. And now AI.....
All of your points apply to wiping out manufacturing jobs and, possibly soon, driving jobs. It doesn't appear to be motivated by evil, just profit.

I've been working with an American vendor over the last two years and it's been really depressing. Either I deal with someone in India, Mexico, or America. But everyone from America is being shadowed by someone from either Mexico or India, so it's clear all the tech jobs are moving to one of those places. I don't know if I could sit and perform my job while obviously training my cheaper replacement, but I'm sure they have families to feed while they can.
 


It is, however, no better at determining the truth value of statements it presents, because it doesn't actually test for "truth".

It turns out, if you write a blog post, you can get an LLM to say all sorts of crazy stuff.

Nice article. Checked some of the fake seeds he planted in my browser’s “AI Assisted” mode, and got confidently incorrect disinfo.
 


Yesterday Charles Urbach made a post about this, and I think he makes a very strong observation. This isn't just Ryan. There are several leaders of the industry making these pro-AI comments, and that's just... pretty depressing, actually. We were an industry founded on human creativity.
 

