Mentioning "net good" is the key here. In the past, scribe jobs were destroyed by the printing press (bad, because it forced people to change jobs, which they didn't necessarily want), but it fostered an era of scientific progress that benefitted everyone (good), while also leading to wars of religion that killed millions of people (bad), and so on. Then you add up all the bad and the good and you decide that yes, it was a net good, even if it was detrimental to scribes and extremely detrimental to the people killed in religious wars. It would have been very bad to suppress the printing press out of consideration for the damage caused to scribes.
So right now, we have jobs that may be removed by AI (bad, same as the scribes) and we try to evaluate the other factors to determine whether the result will be an overall good or an overall bad. The fact that we currently live to witness one factor in the "net good" or "net bad" equation doesn't change the equation. We don't want our descendants, a century down the line, to suffer from the lack of AI's benefits, if it turns out to be a net good, because we overweighted our own present comfort.
Right now, we protect whales and other marine species from overfishing. That destroys jobs for fishermen and causes them economic duress even when they don't lose their job outright (because of regulations saying you can't catch certain species below a certain size). If we considered only this bad, and not the overall good that the survival of those species will bring in the future (if nothing else, for further fishing by the next generation), we would eat whales quickly while they last. Generally, we don't, precisely because we try to discern whether something will be a net good or a net bad, rather than making a short-sighted, self-centered evaluation.
It's difficult to weigh all the factors: the benefits (free art for the billions of people who couldn't afford to commission someone to draw their characters, or scenes from their campaign, or a painting to adorn their house: a very small comfort added to a lot of people); the economic impact (doing things at work in 20% less time [number invented] means a shorter workweek, higher productivity, or fewer of the tasks that David Graeber famously described, in reference to the excrement of the male ox: a large benefit for many people); the derivative benefits (encouraging AI in general may allow breakthroughs in other fields where AI has an impact, an AI diagnostic unit at home for example: benefits that are not certain but might be big for a lot of people); the detriment (some artists who don't embrace AI to gain productivity will have to switch careers: a larger detriment to a small number of people, offset by a benefit to the subset of artists who, by integrating AI into their workflow, can produce more art in less time); and the extreme detriment (some artists live in countries where losing your job means becoming homeless and dying of cold and hunger: an extreme detriment to a handful of people). So calculating the net good or bad is complicated. I can see different people valuing the different factors differently, possibly ending up with opposed conclusions on whether AI will be a net good or a net bad. But I don't think it's logical to take into account only the current detriment to one category of people and discard all the other elements, declaring it a net negative on the grounds that "we're currently experiencing this negative, so we should disregard all other factors".