AI will program, create art, invest, and develop scientific theories, while the majority of humanity works in the intellectually unfulfilling drudgery of menial jobs that are not cost-effective to robotise.
So here's the problem with your doomsaying: current AI is absolutely terrible at most of this because of the way it works, and that's not going to change. The problem is that you're treating AI as if it were intelligent, but it isn't.
It's just automation. That's all we have right now - attempts to replicate human processes by essentially brute-force automation. This isn't SkyNet, it's not HAL 9000, it's not even Johnny Five. It's predictive text writ large.
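To make the "predictive text writ large" point concrete, here's a toy sketch (my own illustration, not how any real product works) of prediction at its crudest: a bigram model that just counts which word follows which and samples from those counts. Real LLMs use huge neural networks over tokens, but the core job is the same - predict what comes next.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in the
# corpus, then generate text by sampling from those counts. No
# understanding anywhere - just statistics over what humans wrote.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:  # dead end: word never seen mid-sentence
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Everything it emits is recombined fragments of its training data, which is the sense in which it "copies humans" rather than thinks.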
Current AI cannot do any of this stuff except by copying humans, because it's just brute-force automation. There's no real intelligence here. What's changed isn't some sort of amazing principle, just that it's easier than ever to operate in this brute-force paradigm. Let's look at your examples:
Programming - Current AI cannot innovate in programming. It can only copy what people have already done, and it has an unfortunate tendency to screw it up severely, especially if it's asked to do anything novel. It can save you a lot of time, and if what you're doing is trivial or repetitive programming based on a very standard approach, it could make a big difference, and potentially replace some jobs. But the idea that current AI is going to replace programmers generally? No.
Create art - No. Current AI is, again, just copying humans. An awful lot of what it's doing is essentially laundering plagiarism. This can be, and has been, discussed in a lot more detail elsewhere. Further, the art it creates tends to be boring, unoriginal, kind of messed-up, and so on. A lot of people will overlook this for stuff like cheap/free character portraits or [politician] doing [funny thing], but current AI isn't "creating art" in any meaningful way. There's also strong societal resistance to this, and no sign it's being eroded - if anything it's strengthening - partly because so much AI art is just bad, ugly, and stupid, especially if you do more than glance at it.
Invest - AI is causing absolute chaos trying to do this, sure, but is it effectively replacing stockbrokers etc.? Not really at this point, in part because it's so chaotic, and the more AI interacts with the market, the more chaos and idiocy it causes. I think we'll see very serious harm done by AI being used to attempt "investment" (not really investment, just screwing around with the stock market), and it'll probably cause a crash at some point, but current AI replacing these jobs? No. I also expect regulation to cause some issues for AI here, especially if the NYSE or something gets crashed by it.
Develop scientific theories - Flatly no. Current AI is entirely incapable of this. I am not sure where you are getting this from. AI is good for going through stuff that requires insane numbers of iterations and is extremely boring to humans, like helping to figure out protein folding. That's not developing theories. That kind of work is already heavily automated, and stuff like quantum computing is going to make a big impact there too, but no, current AI cannot "develop scientific theories".
So yeah, in our current era, this is all not worth worrying about. For AI to really change things here, we'd need true AI - actual intelligence - the ability to understand things, to contemplate things, not just respond to input. Not brute-force stuff, which is essentially what all the current flashy AI is. It's all predictive text of one kind or another. And my strong suspicion is that when we do get true AI, in anywhere from 10 to 1000 years, the technology needed to simulate something roughly as smart as a smart human will require a huge amount of extremely expensive manufacturing and a ton of energy to run (at least initially). Certain problems will absolutely benefit from it, if we can get it to be cooperative, but mass replacement of jobs? Not soon.
I absolutely hate to say this, but this is really a "Who moved my cheese?" situation still. LLMs and similar predictive-text-type AI (which includes the art stuff, to be clear) will cause job losses, but they require so much fiddling to use that a whole bunch of other jobs will appear to manage them, check their outputs, and, frankly, probably to generate content to feed into them.

Because here's the thing - there's a limited amount of non-AI-generated material out there, especially material that's genuinely legally available (an issue that will become more and more painful for AI firms, especially as government regulation increasingly comes in), and we've already seen AI starting to eat its own tail, as it were. Musk's Grok AI developed a bunch of behaviours because it was just copying other AI-generated material. This will really limit what these kinds of AI can do, unless you can give them more, new material. And where could that come from? From people. Because if you try to use AI for it, you just create an increasingly bad GIGO (garbage in, garbage out) cycle.
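As a toy illustration of that GIGO loop (my own sketch, not a model of any real system): treat a "model" as nothing but an empirical distribution over styles, and retrain each generation on samples of the previous generation's output. Rare styles drop out of the sample sooner or later and can never come back, so diversity only ever shrinks.

```python
import random
from collections import Counter

# Toy "eating its own tail" simulation. The human-made corpus has
# common, uncommon, and rare styles; each generation "trains" on
# samples drawn from the previous generation's output. Once a style
# fails to appear in a sample, it is gone for good.
random.seed(42)
data = ["common"] * 90 + ["uncommon"] * 9 + ["rare"] * 1

def retrain(corpus, n_samples=100):
    """'Train' on corpus, then 'generate' n_samples outputs."""
    return [random.choice(corpus) for _ in range(n_samples)]

corpus = data
for generation in range(10):
    corpus = retrain(corpus)  # each round trains on the last round's output

print(Counter(data))    # original diversity
print(Counter(corpus))  # diversity after 10 generations of self-training
```

The mechanism is crude, but it's the same one-way ratchet people worry about with real models trained on scraped, increasingly AI-generated data: variety can be lost at every step and is never regenerated.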
True AI might change this, but true AI is not even on the horizon. None of the technologies that make LLM-type AI work are really the same tech that would make a true AI. This is arguably a dead-end tech, even. It'll get refined - slowly - with better training data and a ton of manual interventions to change specific behaviours; indeed, that may become a whole industry unto itself, basically tweaking how LLM AIs work to make their outputs more useful. But you're not going to see true AI just evolve out of this.
(The GIGO/Ouroboros factor is likely to cause big problems for AI art too in the future, because guess what isn't protected by copyright currently (nor should it be), and is spreading like wildfire across scrape-able sources? AI art. And don't think AI art generators have carefully chosen training data - they do not - they're hugely indiscriminate. I'm sure they're already eating their own tails here, and the more they do, the worse it's going to get: the more AI art appears on scrape-able sources, the more repetitive the output is going to get.)