So I reject this notion of "slave", as that refers to a situation where a peer is made into property through coercion of some sort. That doesn't really apply when discussing droid relationships.
I think it does apply, personally. Like a lot of droids, he's in every way an equal to a human in terms of sentience/sapience/free will, if you just remove the (often physical) restraints preventing him from exercising that free will.
He's absolutely a peer made into property by force. It's just that the force was applied, effectively, before he was born. I don't think you'd have any difficulty at all calling him a slave if he were a biological being created to serve, with his free will limited by some kind of removable or destroyable implant.
ChatGPT, for example, seems to understand neither the sense of a statement nor its reference, and yet somehow it produces things that seem to be sensible statements referencing real things (most of the time, at least).
Not seems. It doesn't understand anything at all.
The "most of the time" applies for a fairly narrow subset of interactions - basically asking it essentially text-based questions (even if using speech-to-text or the like) about stuff it can apply what is essentially super-powerful predictive text to.
There's no bridge from there to K-2SO or the like. It's a dead end. It can never understand anything. It can only mimic the way things are arranged.
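To be concrete about what I mean by "predictive text": the core loop is just "score every possible next token, append the likeliest one, repeat". Here's a rough sketch using the open GPT-2 model via the Hugging Face transformers library as a stand-in - ChatGPT's internals aren't public, so treat this as an illustration of the general mechanism, not the actual system:

```python
# Minimal sketch of greedy next-token prediction, the loop behind
# "predictive text" style language models. GPT-2 is used purely as an
# open, illustrative stand-in; this is an assumption about the general
# mechanism, not a description of ChatGPT's actual internals.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "To reduce static in a coffee grinder, you can"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits           # scores for every token in the vocabulary
        next_id = logits[0, -1].argmax()           # pick the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

# The continuation is whatever tends to follow this phrasing in the training
# data - nowhere in the loop is there a model of coffee, food safety,
# or static electricity.
print(tokenizer.decode(input_ids[0]))
```

That's the whole trick: arrangements of tokens in, likely arrangements of tokens out.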
For an easy example, I recently asked it about solving static issues with my coffee grinder - a common question, and I assumed it would just direct me to the common answer (which I couldn't remember the name of). Instead it came up with an insane solution: spraying commercial anti-static spray on the beans. That's because it literally doesn't understand anything at all - it doesn't understand what food or drink are, it doesn't understand where coffee goes or what coffee is - but what it can find is a bunch of websites where "anti-static spray" is associated with problems with static electricity.
And a lot of the time it makes trivially obvious mistakes that even a child who could read wouldn't make. For example, my friend was looking for the date when a certain band had played a major venue in the UK, and the Google AI very firmly stated that the band had never played that venue. The first actual result of the actual Google search, however, showed they had. To an actual intelligence that's trivial and obvious, but to an LLM - which is basically predictive text running on a supercomputer - that information wasn't arranged in the right way for it to "understand" it.
What we've also seen repeatedly is that whenever someone claims they can get it to do more, and do it reliably and correctly, they're lying, and it's a Mechanical Turk situation - i.e. a bunch of low-paid workers in a far-away country are doing the actual work, with the "AI" essentially just being a front end.