It's possible that an AI's goals will emerge from how it was trained, and that training will have been on human data. You could posit that it picks up some of those human egoistic or self-preservation goals. You could argue that it exceeds human intellect (at least in some domains), and that the goals that emerge...