Each page is more than 6 million pixels. Are PCs robust enough for this? Assuming software exists for the job.
Each pixel would be at least a byte*, right? So are the all-image pages of your pdfs really 6 MB a page or more (even with medium compression, that's 1.2 MB a page)? Your monitor can't display that many pixels anyhow, and although print quality is different, the pdfs they sell aren't print quality, surely? How many of us have a top-quality printer anyhow, that it'd be worth selling print-quality pdfs, with all the extra hassle and cost that'd entail?
However, the pages aren't all images; it must be more efficient to store text as text.
Anyhow, even if they are using 6 million pixels per page (and they could be; I don't know anything about pdf or publishing or even images, really, other than as binary files; I'm a coder), I don't think finding the differences between two of the files is a computationally tricky problem. However, if they are changing the location of the naughty pixel (and I assume they are), then the number of different copies you'd need to buy to be confident you'd found them all, assuming they're being clever and throwing in some other random naughty pixels as decoys, is perhaps too large to organise. That's a different issue, though.
Given the software, detecting a single-pixel difference between PDFs could be done in a trivial amount of time. Consider the fact that your computer needs to process all those pixels anyway in order to put them on the screen, after all.
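For what it's worth, a rough sketch of that comparison, assuming the pages have already been rendered to same-sized bitmaps (the file names and the greyscale conversion are just my placeholders, not anyone's actual tooling):

```python
# Hypothetical sketch: compare two rendered copies of the same page, pixel by pixel.
from PIL import Image
import numpy as np

def differing_pixels(path_a, path_b):
    # load the two rendered pages as greyscale arrays
    a = np.asarray(Image.open(path_a).convert("L"))
    b = np.asarray(Image.open(path_b).convert("L"))
    if a.shape != b.shape:
        raise ValueError("pages must be rendered at the same resolution")
    ys, xs = np.nonzero(a != b)  # coordinates where the two copies disagree
    return list(zip(xs.tolist(), ys.tolist()))

# e.g. differing_pixels("copy1_page1.png", "copy2_page1.png")
# -> [(1042, 731)] if exactly one naughty pixel differs between the copies
```

That's one pass over 6 million values per page, which is nothing for a modern PC.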
However: Even if this particular system can be circumvented easily (and that may not necessarily be the case), I'm sure there are several possible ways to have a steganographic watermarking system that's quite robust - there's a pretty major field of research dedicated to this very sort of thing, after all.
You can, according to a paper I read a while back, encode the watermark in the Fourier amplitudes: you take a spatial Fourier transform of the image, doctor some of the amplitudes a tiny bit, then transform back. This can be undetectable to the naked eye because it turns out that Fourier phases matter more to us, visually. You can then recover that information, should the image turn up somewhere illicitly, by comparing a Fourier transform of the evildoer's image with a Fourier transform of the original.
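Very roughly, and glossing over whatever that paper actually did (the coefficient positions and the tweak strength below are placeholders of mine, and I'm ignoring the conjugate-symmetry bookkeeping a real scheme would handle), the embed/extract round trip looks something like this:

```python
# Hypothetical sketch of amplitude-domain watermarking using numpy's FFT.
import numpy as np

def embed(image, bits, strength=0.01):
    f = np.fft.fft2(image.astype(float))
    amp, phase = np.abs(f), np.angle(f)
    # nudge the amplitude of a few arbitrary coefficients up or down, one per bit
    for i, bit in enumerate(bits):
        r, c = 20 + i, 20 + i            # made-up coefficient positions
        amp[r, c] *= (1 + strength) if bit else (1 - strength)
    # keep the phases, rebuild the image from the doctored amplitudes
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

def extract(original, suspect, nbits):
    # compare amplitudes of the suspect image against the pristine original
    fo = np.abs(np.fft.fft2(original.astype(float)))
    fs = np.abs(np.fft.fft2(suspect.astype(float)))
    return [int(fs[20 + i, 20 + i] > fo[20 + i, 20 + i]) for i in range(nbits)]
```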
I am sure that's more hassle than it's worth, mind.
Ironically, I looked into this only after one of my legitimate pdfs ended up somewhere it shouldn't have.
*The pixel storing the customer number would have to be the same size as any other pixel, and one byte isn't enough to store an appropriate number of customer numbers; of course, you could combine the stored value with the pixel's location, and that combination would be big enough for all customer numbers.
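To put numbers on that footnote (mine, not theirs): a 3000 x 2000 page gives about 6 million possible locations, and an 8-bit pixel value multiplies that by 256, so the pair (location, value) can distinguish roughly 1.5 billion customers:

```python
# Hypothetical back-of-envelope: encode a customer number as (pixel location, pixel value).
# Page dimensions and the encoding itself are placeholders, not the publisher's scheme.
WIDTH, HEIGHT, LEVELS = 3000, 2000, 256   # ~6 million locations x 256 values

def encode(customer_id):
    value = customer_id % LEVELS
    location = customer_id // LEVELS
    x, y = location % WIDTH, location // WIDTH
    return (x, y, value)

def decode(x, y, value):
    return (y * WIDTH + x) * LEVELS + value

assert decode(*encode(123_456_789)) == 123_456_789
print(WIDTH * HEIGHT * LEVELS)            # 1,536,000,000 distinct customer numbers
```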