Re: Question about hashing algorithms
Date: Mon, 29 Aug 2005 14:33:23 -0500
On 2005-08-27 12:54:16 -0500, Peter Pearson <email@example.com> said:
> Are you trying to protect against an intelligent adversary, or
> only against the possibility of two chunks accidentally having
> the same hash? If the latter, what is the size of the universe
> of chunks among which hash collisions would cause trouble?
The idea would be to prevent an intelligent adversary from flooding the
network with invalid data blocks. Large data files, which are made up of
many smaller blocks, should ideally not be corruptible by an attacker
exploiting some easy flaw in the hashing algorithm to find colliding
data.
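As a rough sketch of the scheme being described: each block of a large file is hashed individually, and a received block is accepted only if its digest matches the trusted one, so an attacker who cannot find collisions cannot substitute bogus blocks. The names, block size, and choice of SHA-256 here are illustrative assumptions, not part of any particular protocol.

```python
import hashlib

BLOCK_SIZE = 16384  # hypothetical fixed block size


def block_digests(data: bytes) -> list:
    """Split the file into fixed-size blocks and hash each block."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]


def verify_block(block: bytes, expected_digest: str) -> bool:
    """Accept a received block only if its hash matches the trusted digest."""
    return hashlib.sha256(block).hexdigest() == expected_digest
```

A downloader would obtain the digest list from a trusted source, then check `verify_block` on each block as it arrives, discarding any that fail.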
... Although after a point you would have to start using B-Trees as it would not all fit in memory at anyone time. ... table in the *Universe* at any one time! ... 4E79 hydrogen atoms. ... the normal 32bit hash value is more than enough, i was just wondering for hash tables that are larger. ...