In c555400c @gavinandresen added the fee-per-kilobyte computation with the following logic:
```cpp
// This is a more accurate fee-per-kilobyte than is used by the client code, because the
// client code rounds up the size to the nearest 1K. That's good, because it gives an
// incentive to create smaller transactions.
double dFeePerKb = double(nTotalIn-tx.GetValueOut()) / (double(nTxSize)/1000.0);
```
While the logic given in the comment is sound, there is a counter-argument: it can be productive for the network to make transactions a little larger, e.g. by spending additional dust inputs. With the quantized fee logic, nodes could costlessly add dust inputs whenever they had room before the next fee increment; the continuous calculation removes that free room, so cleaning up dust now reduces a transaction's effective fee rate.
I think we should either add a specific incentive for reducing the txout set size or restore the quantized behavior.
Thoughts?