Re: Randomness of MD5 vs. SHA1
- From: Tom St Denis <tom@xxxxxxx>
- Date: Wed, 10 Feb 2010 14:03:25 -0800 (PST)
On Feb 10, 1:59 pm, "Scott Fluhrer" <sfluh...@xxxxxxxxxxxxx> wrote:
I'd personally write off the 16-byte block size since the calling
overhead is non-trivial at that point.
Why? Doing hashes of small blocks isn't that uncommon...
Because if you wanted to optimize MD5 for 16-byte blocks [why?] you'd
re-write the compress function so that some of the M[0..15] message
words are constants [e.g. 12 of the 16]. You could write 15 versions:
14 for messages of 1..14 32-bit words and a 15th for 15+ words. MD5
hashing of anything less than 56 bytes involves a single compression,
so you can avoid all the normal overhead by computing the entire hash
in a single call.
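The 56-byte threshold falls out of MD5's padding rule: a 0x80 byte, then zero padding, then an 8-byte length field, rounded up to the 64-byte block size. A quick sketch of how many compression calls a message of a given length costs (the function name is mine):

```python
def md5_compressions(msg_len: int) -> int:
    """Number of MD5 compression-function invocations for a
    message of msg_len bytes.

    Padding adds at least 9 bytes (0x80 plus the 8-byte length
    field), and the padded total is a multiple of 64 bytes.
    """
    return (msg_len + 8) // 64 + 1

# Anything under 56 bytes fits, with its padding, in one
# 64-byte block -- exactly one compression.
print(md5_compressions(55))  # 1
print(md5_compressions(56))  # 2
```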
If all you have is a generic routine [which is normally ideal], the
calling overhead of the typical API [e.g. init/process/done] will
obliterate any useful performance gain inside the compression function.
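For reference, the init/process/done pattern being described is the same one exposed by, say, Python's hashlib [shown here only to illustrate the API shape, not the C-level overhead]:

```python
import hashlib

msg = b"a short message well under 56 bytes"

# Generic three-call API: init / process / done
h = hashlib.md5()                 # init
h.update(msg)                     # process
digest_generic = h.digest()       # done

# One-shot form; same result, fewer calls across the API boundary
digest_oneshot = hashlib.md5(msg).digest()

assert digest_generic == digest_oneshot
```

For a <56-byte message both paths perform exactly one compression internally; the difference is purely the per-call overhead around it.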
So in short, if performance of <56 byte messages matters you'd not use
a typical hash implementation as it'd be sub-optimal.