Re: Cohen's paper on byte order
From: Brian Gladman (fake_at_nowhere.org)
Date: Tue, 29 Apr 2003 12:54:40 +0100
"Mok-Kong Shen" <firstname.lastname@example.org> wrote in message
> > Now we all agree on how to cut these sequences into 8 bit groups so lets
> > just concentrate on the first such group:
> > 01234567 <- bit number
> > 00000001 <- bit value
> > Now let us assume that it is going to be treated as a little endian
> > number, that is, a number in which low-numbered bits are least
> > significant. In that case we obtain the number 10000000, which is
> > 0x80 or 128 decimal.
> > But if we assume that it is a big endian number, with low-numbered
> > bits more significant, we obtain 00000001, which is 0x01 or 1 decimal.
> I think I see that this is the 'basic' point of the debate.
> Now, if we see on 'paper' a bit sequence 00000001 and
> look at it as a binary number, wouldn't we humans
> (according to common convention, cf. the writing of
> decimal digits) interpret it as the number with the
> value 1 and hence, in hex notation, 0x01? This 'fact'
> indicates that, if there are 'now' two possibilities
> offered to us for interpreting the same thing, because
> we 'now' have a machine at our disposal, we should
> prefer the interpretation that 'also' conforms to our
> human interpretation, 'unless' there were very
> 'essential' reasons for the opposite, e.g. a large
> efficiency difference between the two (which I don't
> think could be the case). Is this argument
If you are saying that there are good reasons to stick with established
human conventions when writing things down, I agree. The FIPS does this by
requiring that AES bit sequences are big-endian when interpreted as 8-bit
byte integer values.
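For illustration (this sketch is mine, not part of the original exchange), the two readings of the example byte above can be written out in Python. "Little endian" here means bit numbering where bit i carries weight 2**i; "big endian" means bit 0 is most significant, matching ordinary positional notation:

```python
# The example byte from above, written left to right as bits 0..7.
bits = "00000001"  # only bit number 7 is set

# Little-endian bit numbering: low-numbered bits are least significant,
# so bit i contributes 2**i to the value.
little = sum(int(b) << i for i, b in enumerate(bits))

# Big-endian bit numbering: bit 0 is most significant, which is the
# ordinary human convention for reading a written binary number.
big = int(bits, 2)

print(hex(little))  # -> 0x80 (128 decimal)
print(hex(big))     # -> 0x1  (1 decimal)
```

The same eight written bits thus yield 0x80 or 0x01 depending solely on which numbering convention is assumed, which is exactly the ambiguity the FIPS resolves by fixing the big-endian reading.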
The FIPS makes use of the well established conventions for writing down
hexadecimal numbers to ensure that most people who read the FIPS are guided
to the desired AES version without even realising that another version might