"but I also think they are very intelligent beings, so I see no reason why some of them can't decide to be good or evil or whatever alignment the MM says they're supposed to be."
This statement was worth treating separately.
At the risk of jumping to a conclusion, I think you are using the word "intelligent" as a synonym for "human". That is to say, you are supposing that anything with this quality "intelligence" must be more or less human, because humans are more or less the only intelligent thing you can think of, and so you assume that every intelligent thing will think and behave in a human fashion.
This is a very common science fiction trope and a very natural conclusion, but I think if you spend a few more minutes thinking about it, you'll realize it is a bit ridiculous. To point you toward that conclusion, let's consider an example of this human intuition that is obviously ridiculous. In almost every science fiction show featuring an AI, if the AI encounters a beautiful human female, it will fall in love with her and attempt to romance her. Now, leaving aside that we could probably think of a few intelligence failure modes in a created AI that would produce this behavior, the idea that this would always and inevitably arise, especially in 'naturally arising' AIs, is an obvious failure of imagination. Most people, when they first notice this failure of imagination, hit upon the idea that the AI wouldn't naturally be attracted to a human female because they don't look alike, and so it wouldn't necessarily find the human female beautiful and attractive.
But that's yet more failure of imagination. The truly ridiculous underlying assumption is that an AI would experience a sexual impulse, or even a desire for companionship, at all. Feelings of arousal, desires for intimacy, and even loneliness are all modes of behavior that humans have to fulfill specific purposes. They are part of our 'design', as it were - whether you believe it is behavior by design or evolved fitness-increasing behavior doesn't matter. The point is that there is no particular reason the AI would have those emotional needs or emotional contexts, much less that they would show up through the emotive displays (like frowning, tears, crossed arms, etc.) by which humans communicate these feelings to other humans (which is also 'designed' behavior).
So no, Wall-E, upon seeing a curvaceous robot, would not show feelings of attraction for 'her'. And even if we imagined Wall-E experiencing some sort of bizarre intelligence failure mode arising from centuries of isolation and semi-random inputs, there is absolutely NO REASON Eve would ever respond to the now hopelessly dysfunctional Wall-E, nor is there really any reason for Eve to learn, or want to learn, Wall-E's emotional context. That entire subplot depends on the natural but entirely wrong assumption that intelligence implies humanity.
Ultimately, so does your argument about the dragon.
To understand why, let's first discuss yet another completely stupid trope, one that results from the first leap of imagination humans make with respect to intelligent machines - that they have no emotions. This is ever so slightly more imaginative than just assuming they have the full human emotional context, but not much. The problem here is a failure to understand what emotion is. Humans are typically taught to think of emotion as something different from, and separable from, reason. This is in fact a very natural result of the experience of being human, and in particular of the way the human brain is wired up. Given that wiring, it often feels like reason and emotion are competing with each other for the attention of the human consciousness. But all of that has to be remembered as yet another aspect of being human, not something general to all intelligence.
In fact, I put forward that it is impossible to be intelligent and not have emotions. It's just that those emotions do not in any fashion have to be like human emotions. Each intelligent thing is likely to have its own distinctive emotional context. To understand what I'm saying here, you have to look again at that human wiring and try to understand why humans experience emotion and what would happen if you took that emotion out of the reasoning process. In other words, what is the role of emotion in all forms of reasoning? Humans have a massively parallel processing mode that is the result of attempting to compute with chemical signals in a highly energy-efficient process while still maintaining high throughput. As such, humans separate the channels for 'logical' and 'emotional' processing and run them in parallel. The logical process addresses the question, "What am I experiencing?" and can tell you the difference between food, a lion, and your mom. The emotional process addresses the question, "What does this experience mean?" In other words, emotion is the part of reasoning that is goal-driven. Whatever goals an intelligence has will set its emotional contexts. The emotions tell the being what things mean, and how they should be valued.
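To make the two-channel picture concrete, here is a toy sketch of it. This is purely illustrative - every name and value in it is invented for this example, not a model of real neural wiring: one function answers "What am I experiencing?" while a separate, goal-driven valuation answers "What does this experience mean?"

```python
# Toy sketch of the two-channel idea. All names and values are invented.

def classify(stimulus):
    """'Logical' channel: answers 'What am I experiencing?'"""
    known = {"bread": "food", "lion": "predator", "mom": "kin"}
    return known.get(stimulus, "unknown")

# 'Emotional' channel: a goal-driven valuation answering
# 'What does this experience mean?' A being with different goals
# would have a different table, i.e. a different emotional context.
GOAL_VALUES = {"food": +1.0, "predator": -1.0, "kin": +0.5, "unknown": 0.0}

def appraise(stimulus):
    category = classify(stimulus)    # parallel in brains; sequential here
    valence = GOAL_VALUES[category]  # goal-driven valuation = "emotion"
    return category, valence

print(appraise("lion"))   # → ('predator', -1.0)
```

Note that stripping out `GOAL_VALUES` would leave the system able to label a lion but unable to say whether a lion matters - which is the sense in which emotion is part of reasoning, not separate from it.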
People often mistake "emotions" for the emotional experience, or "feelings". This is a natural aspect of being human, since "feelings" are the reinforcing feedback loop of the emotional processing context. It's how the system reinforces the goal-driven behavior. You can, within some limits, take control of your emotions as a rational being, but there are limits to that, and what you are probably actually doing is just re-calibrating after realizing that some feedback loop is getting in the way of your own goal.
The point is that Data and Spock actually are experiencing emotions all the time. What they are not doing is making the emotional displays, nor are they under any compulsion to make emotional displays in order to communicate emotional information to the other primates watching them. The emotions they have are not entirely human emotions, and they can't be communicated very easily to anyone, any more than it's easy for you to communicate your feelings to someone who doesn't share them. But they are certainly there, and we know they are there because Data and Spock can assign meaning to things and make value judgments. No matter what Spock may tell you, these value judgments are not wholly rational. We don't even live in a universe where you can make a mathematical system that doesn't depend on unprovable axioms, much less one that can make value judgments based wholly on logic. There has to be something that makes you do your homework even though you know the heat death of the universe is inevitable in a scant few billion years.
As for the dragon: if he has a set of values congruent with destructiveness, if he cannot change those values and has no desire to, and if every built-in emotional feedback loop makes him wholly miserable when he tries and happy when he doesn't, then it doesn't matter how intelligent the dragon is; he's still going to behave according to his very dragon-ish nature.
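The same toy framing shows why raw intelligence doesn't change this. In the sketch below (again, every name and value is invented for illustration), making the dragon "smarter" - here, able to consider more options - only makes him better at maximizing his fixed values; it never alters what those values are:

```python
# Illustrative only: intelligence as search over a FIXED value system.
# The values are the dragon's nature; intelligence just optimizes them.

DRAGON_VALUES = {"show_mercy": -3.0, "burn_village": +1.5, "hoard_gold": +2.0}

def choose_action(actions, depth):
    # A smarter dragon (greater 'depth') evaluates more of the available
    # options, but every option is still scored by the same dragon-ish values.
    considered = actions[:depth]
    return max(considered, key=lambda a: DRAGON_VALUES[a])

actions = ["show_mercy", "burn_village", "hoard_gold"]
print(choose_action(actions, depth=1))  # a dim dragon takes the first option it sees
print(choose_action(actions, depth=3))  # a brilliant dragon still maximizes the same values
```

Nothing in the search loop can rewrite `DRAGON_VALUES`; by construction, wanting to rewrite them would itself be scored by them - which is the dragon's predicament in prose form.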
And that gets us to intelligence. Intelligence isn't what most people think it is either. But this essay is long enough already, so let me just say there is no such thing as "hard intelligence" or "general intelligence". (Or if there is, we have no examples of it.)