Except that human beings are capable of feelings and perceptions over which they do not have control. So, it is in theory possible for us to "feel" like we have will, but have that feeling be merely one more automatic response, an illusion.
Given current technology, I expect it is not testable.
We are at the point that we can tell that sometimes (scarily often) our decision process goes through what amounts to emotional processing before it ever hits logical processing. That emotional processing is not conscious - it generally produces results that we then attach logical reasons to after the fact. But, that doesn't mean it isn't "free will" - there may still be a personal choice buried in there, rather than what amounts to emotional algorithmic processing.
It sounds like you're agreeing with my opening argument that we are moist robots and do not have free will....
To me, whether we are or aren't moist robots isn't really up for discussion. Once we invent dry robots based on identical modelling of the human brain, we are, by definition, moist robots (as in, the opposite of dry; the same noun applies because the dry version is a replica of the wet version).
So that mostly leaves Free Will: whether we have it, and, extending the argument, whether we will recognize it in other alien/manufactured entities when we meet them.
The Turing test is really elegant and simple. If a human talks to an AI in a chat room and can't tell it's an AI, then the program counts as genuinely intelligent, rather than just a conglomeration of algorithms.
Given how dumb some chat rooms can be, I think mastering the art of conversation doesn't fully prove the program is Intelligent, or Willful (as in having Free Will the way a human thinks he has it).
But I like the setup of the Turing Test. It would seem that philosophically (rather than technologically), a test can be defined that determines whether the test candidate has this Free Will stuff or not.
To me, I suspect that Intelligence and Willfulness would be tied to problem solving, in the sense of solving a problem without actually knowing how to solve it first.
Given stimuli, like the house is on fire, humans, dogs, and robots all execute pretty much the same code. There's not exactly thinking going on that illustrates Free Will, as in, my dog and a non-Free-Will robot can do the same thing you can do.
But setting up the participant to make some choices and come up with a new solution that isn't "pre-programmed" seems like that might be the key.
I believe that if you have this mysterious Free Will, it means you are able to consider a situation, identify the obvious optimum choice, and come up with new, non-obvious choices.
Computers and Neural networks are always making choices, and assessing priority, and usually choosing the pre-programmed path (get food, get to safety).
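In code terms, the kind of "choice" a pre-programmed agent makes might look like this trivial sketch (the action names and priority values here are invented for illustration, not from any real system):

```python
# A toy "pre-programmed" agent: it always picks the highest-priority
# action from a fixed table, so it can never produce a choice that
# wasn't already in its programming.
PRIORITIES = {"flee_fire": 3, "get_food": 2, "rest": 1}

def choose_action(available):
    """Return the available action with the highest fixed priority."""
    return max(available, key=lambda a: PRIORITIES[a])

print(choose_action(["get_food", "rest"]))               # get_food
print(choose_action(["rest", "flee_fire", "get_food"]))  # flee_fire
```

However sophisticated the priority table gets, the agent is only ever selecting among options its programming already ranked; that's the sense in which it never "does something new."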
Somebody with Free Will can transcend that programming and do something new.
Like the veteran who shoved his wife off the parade float in Midland yesterday and saved her life when the train hit. He died.
A dumb robot programmed for self-preservation (or my dog*) would have moved itself out of the way. It took higher-level thinking (free will?) to choose an alternate plan and save another person instead.
I'd hate to get mushy, but self-sacrifice might be a demonstration of Free Will, in that the entity is bypassing a default behavior (self-preservation) in favor of another choice.
*I like dogs. Dogs being able to save other people at risk to themselves may qualify for this Free Will status as well. It's a grey area, as animal software does cover protecting their young/pack/territory, which may not count as "going beyond their programming."