While I'm getting geared up for a good rant, let me say that most people have really naive ideas of what 'intelligence' constitutes.
In brief, intelligence constitutes the ability to act appropriately in a particular situation. What we generally think of as intelligence is actually 'strong intelligence', that is, the ability to act appropriately in any situation. The problem is, we have no actual examples of strong intelligence. We humans aren't strong intelligences either. We are just a hierarchy of specialized algorithms that most of the time can behave appropriately in the sorts of situations we most commonly find ourselves in on the planet Earth, and which can approximate solutions to most problems we'd encounter there. That's it. Everything else is fudged together from those algorithms, and in general works very badly, except in a few autistic savants who otherwise can't function very well in most of the situations you find yourself in on a planet filled with people.
I've no reason to think that AIs will work any differently, except that they'll probably have a different tool set, designed to cope well with the situations they're expected to find themselves in.
When the ACM or the IEEE or OSHA or some combination finally decides that it needs to regulate and certify machines or programs as intelligent, it's not going to sit down with a fully conversational machine, ask "How's the weather?" and "What do you think of Shakespeare's sonnets?", and put a check mark on it if it seems human enough. Instead, they are going to work up a battery of tests in particular fields of behavior and knowledge, which will work something like highly specialized graduate entrance exams. Machines will be rated as 'Turing certified' not generally, but according to the number of fields they can handle and the degree to which they succeed. So a machine won't pass 'the Turing test'. It will obtain a particular score in one or more Turing tests. For that matter, so might the machine your mind runs on.
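The per-field certification scheme above could be sketched as something like the following. To be clear, everything here is invented for illustration: the field names, the scores, and the passing threshold are all hypothetical, not part of any real standard.

```python
# Hypothetical sketch of per-field Turing certification: a machine
# isn't "Turing certified" in general, it holds a score in each field
# it was tested in, and is certified only in fields where the score
# clears the bar. Threshold and field names are made up.

PASSING_SCORE = 0.90  # assumed per-field passing threshold

def certified_fields(scores):
    """Return the fields in which a machine's Turing test score passes."""
    return sorted(field for field, score in scores.items()
                  if score >= PASSING_SCORE)

machine_scores = {
    "accounting": 0.97,
    "casual conversation": 0.62,
    "tax law": 0.91,
}

print(certified_fields(machine_scores))  # ['accounting', 'tax law']
```

So this hypothetical machine would be Turing certified as an accountant and in tax law, while its middling conversation score wouldn't earn a certification at all, which is exactly the point: the rating is a profile across fields, not a single pass/fail verdict.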
So, for example, a program might be a Turing certified accountant. It might be a passable conversationalist so long as you stick to accounting or things related to accounting, but it will be at least as capable as a human accountant. It will not only be able to balance your checkbook, but will be able to advise you on what sort of accounting practices to adopt given your personal or business needs, and to spot things that seem fishy in the books (such as when someone is embezzling from you). But although it will probably be capable of responding in a friendly way if you need it to act friendly, it won't need you to be friendly. It won't care. It's a machine accountant. It likewise isn't going to care about politics, about being independent of you (though it's likely to be very protective of you, at least when it comes to the money it's entrusted with), or about whether you like and appreciate it (as long as it's doing its job as an accountant, it's happy). If you trade it in or erase it or anything else, it doesn't really care, and why would you want it to?