On whether trees are intelligent, I think that we are simply using the word "intelligent" in different senses...
Yes, we are. And I'm saying that "intelligent" is a word like "magic": in its most common everyday usage it refers to something that doesn't exist, and which people can therefore only poorly define. The sense in which you are using the word "intelligent", the common-sense way, refers to something that doesn't exist, and which I think most AI researchers no longer pursue, largely because they assume either that it doesn't exist or that, if it does, it is so far beyond our understanding that there is no point in pursuing it. Or to put it another way, we humans are not ourselves "intelligent" in the sense you use it.
Or to put it another way, you are still trying to divide the world into things that are "intelligent" or "not intelligent", when in fact it's not binary, and indeed not even a single continuous scale from "less intelligent" to "more intelligent". It's more like a multi-dimensional array, and humans are really biased about which parts are significant and probably blind to others. Or to put it in the language of AI, "All intelligence is soft intelligence."
...as I would apply "intelligent" only to something that was, if not self-conscious, at least conscious to some degree, able to have experiences of some kind, and I see no reason to think that trees have genuine experiences.
The problem with "conscious" is that it is, first, something we can't yet define, and second, a subjective experience. We use it to refer to our own internal experience of being a being. But we have to admit that we can't really prove anyone else has the same experience, and so we can't know whether anything else is conscious or merely acting as if it were conscious. Actually, we can't even know enough to know whether we are deceiving ourselves. Some people think that in fact we are, and that humanity isn't actually genuinely conscious.
So it's a terrible and largely useless indicator of whether something has some particular rights. You might have noticed that I defined consciousness earlier as an algorithm, based on my best guess as to how human consciousness works: consciousness exists when the organism has the ability to receive input from multiple intelligent internal 'critics' and then weigh which of them to listen to. Defined that way, consciousness is a scale. The more critics you have, the more complex your critics, the better you are able to weigh the input of each critic, and the better you are able to resolve all of that into a single decision-making process for your entire person - what humans experience as consciously thinking, what people who are not deaf experience as 'hearing themselves' inside their own head - the more conscious you are.
I don't think trees are conscious, because I don't think they have a method for aggregating and weighing their inputs like that. They can, however, receive inputs, respond to those inputs, and even form something akin to memories. They can even chemically communicate these experiences - "I'm on fire!" - to nearby trees, which then respond to that stimulus by preparing themselves for fire. They have, as it were, experiences - but not, so far as we can tell as yet, conscious experiences.
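The "weighted critics" picture above can be sketched as a toy program. Everything here is an illustrative invention, not a claim about how minds actually work: each hypothetical critic scores the available options, the organism trusts each critic to a different degree, and the "conscious" decision is whatever wins the trust-weighted vote.

```python
def resolve(critics, options):
    """Weigh every critic's advice and resolve it into a single decision.

    critics: list of (critic_fn, trust_weight) pairs, where critic_fn
             maps an option to a score in [0, 1].
    options: candidate actions to decide between.
    Returns the option with the highest trust-weighted total score.
    """
    totals = {opt: sum(weight * critic(opt) for critic, weight in critics)
              for opt in options}
    return max(totals, key=totals.get)

# Two toy internal critics with conflicting priorities (names are hypothetical).
hunger = lambda opt: 1.0 if opt == "eat" else 0.0
caution = lambda opt: 1.0 if opt == "hide" else 0.2

# With hunger trusted more than caution, the weighted vote favors eating.
print(resolve([(hunger, 0.6), (caution, 0.4)], ["eat", "hide"]))  # -> eat
```

On this sketch, a tree would lack the `resolve` step entirely: it can register stimuli (the critics' raw inputs) but has no mechanism for aggregating them into one decision.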
Someone being a person wouldn't entail that we accept their moral judgements, but if we could establish communication with them, then it would at least make sense to consider their moral arguments.
Yes, but presumably, we might decide - just as we do with children - that we are morally justified in overriding their moral decisions. We wouldn't necessarily decide that they have an inherent and unalienable right to liberty and the pursuit of their own happiness.
How do you know that I am not simply a good simulation of consciousness?
I don't. I reason, however, from our close kinship as beings with the same form, same heredity, and apparently similar capabilities, that you probably have the same traits I have myself. If, on the other hand, I found you had a different form, different origin, different capabilities, and so forth, then I might not know. However, as I indicated earlier, just because you didn't have this very human subjective experience of being "conscious" - whatever the hell that really means - wouldn't mean that I would necessarily decide you were a thing with no more rights than a paperweight or a laptop.
I grant that my concept of personhood begins with humans. I don't see how it could be otherwise.
I grant that, as of yet, it couldn't be otherwise. I'm now asking you to imagine things that aren't human. And in that, what I'm suggesting is that your concept of "personhood" is no more a real thing than your concepts of "intelligence" and "consciousness". As you use the word, it has no definite meaning. We don't live in a world that neatly divides into persons and non-persons, and very soon now that's hopefully going to be self-evident. My fear, however, is that because humans are notoriously defective in their reasoning about things that didn't exist in their evolutionary context (i.e., something else as intelligent, as self-aware, and as capable of long-term planning as they are), their reasoning will commonly fail, and they'll resort to erroneously treating non-human things as human in a vain effort to understand them.
For me, the question begins with in virtue of what do humans have rights, and then turns to the question of what other beings share or might share those properties.
And my answer is beings that share most of the properties of being human. But an AI is an alien thing that does not share most of the properties of being human. It is therefore wrong to treat them as if they were.
A very simple computer program could tell me that it was a person. That would mean nothing. But even the most sophisticated AI would be "deciding" only analogically. It would actually just be following its algorithm.
So is the most sophisticated person. You and I are just following our algorithms. They are very sophisticated algorithms, but they are fundamentally just algorithms. The fact that we are following an algorithm doesn't mean we aren't conscious. It may not even mean we lack free will - although exactly what free will means is also an open question, and certainly many people think we don't have it.
What do you think it is that makes slavery wrong in the case of humans?
Heh. I notice you are starting to move away from answering my questions and hitting back with questions.
To begin with, I never asserted that slavery was absolutely wrong. I will, however, say that slavery is relatively wrong: while there may be worse things, and while it may be the case that slavery could be conducted in a way that was fair to all parties, it is not ideal. If we look at slavery as humanity actually conducted the institution, we see institutions that are at best concessions to other evils of the world, chosen as the lesser of several evils. For example, in an early Bronze Age society, the society as a whole was so poor, and life so insecure, that it may have been a reasonable choice for that society to consider slavery the lesser of several evils. Slavery provided a high-trust, high-security relationship with a wealthier provider, and for the slave, being treated as a second-class citizen in a tribal band might still have been better than not being treated as a member of the tribe at all. We have to understand that in the vast majority of slave institutions in human history, slaves had some rights rather than no rights. Our perception is skewed by the fact that our most recent experience of slavery was one of the least regulated and cruelest versions ever practiced.
But in terms of why slavery is wrong at all, the problem as I see it is that humans are inherently peers to each other, and slavery could only be justified where the partner with superior rights was actually fully superior in some way to the other partner. (Hence the reason slave owners frequently tried to deceive themselves about their own superiority.) Being peers, the best relationship we can adopt is the natural relationship of peers: treating each other as we would want to be treated. But that empathy would fail if extended to something that was not a peer, and in fact we intuitively know not to literally treat all sentient beings as peers. For example, we know to depart from that literal peer relationship in the case of children, because as a race of beings that 'grows up', we know that treating children exactly like adults would be unfair - even though both are human and deserving of certain 'human rights'. We don't think of the status of a guardian over a minor as 'slavery', even though serfdom offered comparatively more rights.
And of course, another reason for believing slavery is wrong is that all too often even the best-case justifications simply ignore the evils of slavery as it was actually practiced. Quite often, indeed probably most of the time, slavery as an institution wasn't motivated by good will, even if in theory it could have been. Nor was it ever motivated by the best of wills, the fullest trust, and the greatest of compassion; even in cases where those motives existed, it was just leveraging a familiar but flawed institution for a good purpose.
Now, consider what you mean by 'human rights' or 'the rights of being a person'. You are implicitly saying that among those rights is the right to self-governance and self-determination, if not absolutely, then at least largely. Otherwise, slavery would be just fine. Can that be applied to the general case of artificial beings, even if those beings are "intelligent" or "conscious" (however you think you can determine that, which you haven't made clear)?