Re: Encrypting control channel



Hi Andrew,

On 1/12/2012 5:34 PM, Andrew Swallow wrote:

that this exact problem also exists when using an asymmetric digital
signature scheme, since they also use hashes.

It would also have to be amended to guard against replay attacks (?)

Replay protection isn't inherently provided by either digital
signatures or HMACs, so you would need to make it a separate part of
the protocol. Quite trivially, a large counter (just go with 64 or 128
bits so you don't have to worry about overflow, if you can afford
it in
bandwidth) attached to each packet will solve this; the sender starts
at zero and increments the counter for each packet while the
recipients
refuse to accept a packet unless it has a sequence number greater than
any they've seen before (this also lets you notice if a packet was
dropped since there will be a sequence number hole; if you don't care
about that just ignore it).
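
To make that counter check concrete -- a minimal receiver-side sketch,
assuming a 64-bit counter, names of my own choosing, and a sender that
numbers its packets from 1:

  #include <stdbool.h>
  #include <stdint.h>

  struct replay_state {
      uint64_t highest_seen;    /* highest counter accepted so far */
  };

  /* Run this *after* the HMAC/signature check so a forged packet
     can't advance the counter.  Accept only if the counter exceeds
     everything already accepted; otherwise treat it as a replay
     (or duplicate) and drop it. */
  static bool
  replay_check(struct replay_state *st, uint64_t counter)
  {
      if (counter <= st->highest_seen)
          return false;                 /* replayed/duplicated */
      st->highest_seen = counter;
      return true;
  }

(Initialize highest_seen to 0; the strictly-greater test is what makes
a captured packet useless when it is re-sent later.)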

I'd have to tweak this since packets *can* arrive out of order (within
constraints) in the normal course of operation. And, you'd also have
to consider the case of re-requesting a "lost packet" (what counter
value does it merit?).

Use the same counter value it was originally sent with.

Which means the "counter value" (avoiding the use of the term
"sequence number") checking has to be done upstream of the
"replay prevention" logic.

In simple terms they are the same test.

No. They address different issues.

The "sequence number" of a packet determines its place in the
"source stream". I.e., the contents of packets bearing the
sequence numbers N-1, N, N+1 are "played"/"displayed" in
exactly that sequence. (similar arguments apply to the
sequence of "control messages")

The "counter value(s)" exist solely to prevent replay attacks.

In a simple system you are
probably discarding packets after the missing packet; more sophisticated
systems can queue the packets.

I *never* drop a packet -- unless its deadline has passed
(stale data). If I receive 3, 8, 5, 7, 2... then I enqueue
them so that *when* they are supposed to be "played", they
are ready and waiting.

The senders' emission rates are syntonous with the receivers'
consumption rates -- data arrives at a client because (and *when*)
it needs that data (this isn't a desktop player/STB, etc. that
can buffer huge amounts of data and request whatever it might
*decide* it needs at a later time).

[look to the boombox analogy]
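
A rough sketch of that "enqueue by sequence number, discard only when
stale" behavior (slot count and structure names are invented for
illustration):

  #include <stdbool.h>
  #include <stdint.h>

  #define SLOTS 64                 /* how far ahead we will buffer */

  struct slot {
      bool     filled;
      uint64_t seq;
      /* payload would live here */
  };

  struct playout {
      uint64_t    next_seq;        /* next sequence number to play */
      struct slot ring[SLOTS];
  };

  /* File a packet wherever its sequence number says it belongs;
     arrival order is irrelevant.  Only a packet older than next_seq
     has missed its deadline and gets discarded. */
  static bool
  playout_insert(struct playout *p, uint64_t seq)
  {
      if (seq < p->next_seq)
          return false;                    /* stale -- deadline passed */
      if (seq >= p->next_seq + SLOTS)
          return false;                    /* too far ahead to hold */
      struct slot *s = &p->ring[seq % SLOTS];
      s->filled = true;
      s->seq    = seq;
      return true;
  }

  /* Called at the playback rate: hand back next_seq's slot if it
     arrived in time, else report the hole and move on. */
  static bool
  playout_pop(struct playout *p, struct slot *out)
  {
      struct slot *s    = &p->ring[p->next_seq % SLOTS];
      bool         have = s->filled && s->seq == p->next_seq;

      if (have)
          *out = *s;
      s->filled = false;
      p->next_seq++;
      return have;
  }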

Also repeat
packets if they are not acknowledged within a reasonable amount of time.

Packets are never acknowledged. That adds lots of extra traffic.
Instead, you *assume* the packets get to their destinations
and *they* (i.e., the "destinations") react if they don't.

What's a "reasonable amount of time"? Only the particular destination
knows that, for sure (of course, you *could* centralize this information
but that increases the burden on the servers/senders and doesn't scale
well).

OTOH, a destination can more readily decide the chance of it being
able to successfully re-request a lost packet and alter (or, *begin*
altering) its behavior, accordingly.
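
That judgment call is cheap to make locally -- roughly: given when the
missing packet is due to be played and a guess at the round-trip time
to whoever can resend it, is a re-request still worth making?  A
sketch (the margin figure is arbitrary, my choice):

  #include <stdbool.h>
  #include <stdint.h>

  static bool
  worth_rerequesting(uint64_t now_ms, uint64_t deadline_ms,
                     uint64_t est_rtt_ms)
  {
      const uint64_t margin_ms = 5;    /* processing slack -- a guess */

      if (deadline_ms <= now_ms)
          return false;                /* already stale */
      return (deadline_ms - now_ms) > est_rtt_ms + margin_ms;
  }

If this says "no", the receiver skips the re-request and goes straight
to its graceful-degradation path instead.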

Also, you would have to maintain such a "counter" for each
(sender,receiver) pair, right? (though, presumably, one per *sender*
could suffice if you had other mechanisms to track missing packets)

That only needs a small amount of ram.

Yes -- for each counter. But, you need one for each *sender*
(which each "recipient" must track). If you don't want the
counters to wrap (replay), then they need to be wide (since
a session can be of indefinite length).
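
Just to put numbers on "wide" (mine, purely for scale): at 10,000
packets per second, a 32-bit counter wraps in 2^32 / 10^4 ~= 430,000
seconds -- about five days -- while a 64-bit counter covers
2^64 / 10^4 ~= 1.8 * 10^15 seconds, tens of millions of years, which
is "indefinite" for any practical session.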

The receiver can test for gaps but does not know if it has received the
last packet.

There is never a "last packet" (as in last = final). The data
stream continues as long as the device is powered -- it just
is encoded as "silence" when no audio/video is intended to
be played/displayed (assuming the device isn't powered down
at that time).

Each receiver can examine the "sequence numbers" of the (validated)
packets that it has accumulated at any given time. From this, it
knows when -- and for how long into the immediate future -- it
can play/display the "source content". And, what it can *expect*
to arrive (assuming no problems) hereafter. If its inspection
of its current state and expected future state leads it to
suspect that it might NOT have a piece of that source material,
it can (the inspection itself is sketched below):
- request it (from anyone who can provide it!)
- begin taking measures to gracefully degrade performance
  to minimize the impact of the *apparently* upcoming dropout.
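
A sketch of that inspection step -- walk the sequence numbers expected
next and report the first one that hasn't shown up, plus how long
until it is needed (names and the per-packet duration are mine):

  #include <stdbool.h>
  #include <stdint.h>

  /* have[i] == true means the packet due i slots from now is on hand.
     Returns true if a hole was found within the lookahead window. */
  static bool
  find_first_gap(const bool have[], uint64_t lookahead,
                 uint64_t ms_per_packet,
                 uint64_t *gap_index, uint64_t *ms_until_needed)
  {
      for (uint64_t i = 0; i < lookahead; i++) {
          if (!have[i]) {
              *gap_index       = i;
              *ms_until_needed = i * ms_per_packet;
              return true;     /* caller decides: re-request or degrade */
          }
      }
      return false;            /* everything expected so far is on hand */
  }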

128 bits per subscriber, which is 16 bytes. For anything other than an
8051 microcontroller that is trivial. RAM is cheap these days.

You're thinking of small numbers of clients and servers.
If the numbers are small, there is no need for so elaborate
a distribution system!

Imagine dozens of "source providers" and *hundreds* of clients.
Every "communication pair" -- i.e., (sender,receiver) -- that
is active (or POTENTIALLY active) would need such a counter
in each receiver (I think you can lump all of the "receivers"
for a particular sender into a single "sender-side" counter).

E.g., if a particular client subscribes to two source streams
provided by two different servers, then it must maintain a
"replay counter" for each of those "connections" -- in
addition to the two "sequence counters" for those two *streams*.

If control is provided by another server, then that adds
a third replay counter that the client must track.

Additionally, any peer relations that it relies upon would
require counters -- one for each peer. (I *think* you
only need to have a "receiver" counter for each such
relationship... I suspect a single "sender" counter could
be shared among all OUTBOUND connections from the client).
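
To make that bookkeeping concrete, the state a client carries per
inbound pairing looks something like this (struct and field names are
mine, just for illustration):

  #include <stdint.h>

  struct inbound_connection {
      uint64_t replay_highest;  /* anti-replay: highest counter accepted */
      uint64_t next_seq;        /* playback: next sequence number due */
      /* keying material for the HMAC would live here as well */
  };

  /* The example above: two subscribed streams, a control server, and
     some number of peer relationships -- one entry apiece. */
  struct client_state {
      struct inbound_connection stream[2];
      struct inbound_connection control;
      struct inbound_connection peer[8];     /* arbitrary cap, my choice */
  };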

Without the security burdens, I currently can accommodate
an unlimited number of peers -- I don't have to maintain
*any* state for these connections as I can consider them
"temporary"/transitory (i.e., I don't even have to
track them in ARP cache). All I have to do is make sure
my "sequence count" can't wrap in the TTL of a packet on
the wire...

If I have to carry lots of state for each potential connection,
then I need to develop a protocol whereby each client can
establish -- and register -- the peer relationships that
it will track going forward. And, a mechanism for reseeding
those relationships over time (e.g., when clients are powered
up/down or move to alternate "source subscriptions").


