We've been having trouble, my managers and I, with a method being validated by an outside lab. Briefly, when they run our HPLC assay, they get a lower result than we do for the same sample.
Our instruments are pretty comparable, and we've tried different instruments at both labs to rule out instrument error. We use a Waters UV detector; they've tried both the PDA and the UV detector. We tested samples before we shipped them for their work, and when they had problems, we had them return the same sample for analysis. We're both using the current USP standard, and they even sent us an aliquot of their supply for us to evaluate. But still, we get slightly higher assay results, no matter whether we use our standards or theirs.
Their peak shapes are a little less symmetrical, but they did try new columns and (I'm pretty sure) the usual system checks. And yes, their areas are lower than ours. What I don't understand is why the standard and sample wouldn't scale together, despite these differences.
Consider: we both prepare a standard targeting 0.110 mg. We expect our sample to fall within the range of 0.105-0.115 mg. Say our system gives a standard peak of 200,000 AU (arbitrary units, as I like to always say) and theirs gives 180,000 for the same standard. The response should still scale for the unknown, giving the same answer.
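To make the point concrete, here's a quick sketch of the external-standard arithmetic using the hypothetical numbers above (the function name and the 0.9 response ratio are just illustration, not our actual data). If the response difference between the two systems is uniform, it divides out:

```python
def assay(std_amount_mg, std_area, sample_area):
    # External-standard calculation: derive a response factor from the
    # standard injection, then convert the sample's area back to amount.
    response_factor = std_area / std_amount_mg  # area units per mg
    return sample_area / response_factor        # mg

# Our lab: standard at 0.110 mg gives 200,000 AU; a matching sample
# also gives 200,000 AU.
ours = assay(0.110, 200_000, 200_000)

# Their lab: same standard gives only 180,000 AU. If their lower
# response applies equally to the sample, its area drops by the
# same 0.9 factor.
theirs = assay(0.110, 180_000, 180_000)

print(ours, theirs)  # both come out ~0.110 mg; the difference cancels
```

So a uniform detector or injection-volume difference can't explain a biased result; something would have to affect the standard and the sample unequally.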
*EDIT -- changed level*