Re: The Huge Significance of the Newline Character in modern Cryptography.

There is nothing preventing you from applying your ideas about using
the Vigenère square in a 256 x 256 form so it can cover all possible
bytes, not just a subset of them.  As it is, your system is not
guaranteed to reproduce what was initially put into it.  For example,
here is a short Java program:

public class MyMain {
  public static void main(String[] args) {
    System.out.println("Hello world!");
  }
}
This program compiles and runs correctly.  Since your system does not
recognise linefeeds during encryption and, on decryption, inserts new
linefeeds (IIRC every 77 characters), the output from your system would
look like:

public class MyMain {  public static void main(String[] args) {    System.ou
t.println("Hello world!");  }}

That no longer compiles correctly because "System.ou" is not
recognised.  Your proposal cannot guarantee that what emerges is
identical to what is put in.  That is a major failing in a proposed
cryptography system.

Using a 256 x 256 Vigenère would avoid this problem.
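To make that concrete, here is a minimal sketch of what a full 256 x 256 tableau amounts to in practice (Java is used only because a Java example already appears above; the class name and key are made up). Row i of such a square is just the 256 byte values shifted by i, so the whole tableau reduces to addition modulo 256 — and every byte value, newlines included, round-trips exactly:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of a byte-wide Vigenère: a full 256 x 256 tableau is
// equivalent to addition modulo 256, so every possible byte value
// (including newlines) survives encryption and decryption unchanged.
public class ByteVigenere {
    static byte[] encrypt(byte[] plain, byte[] key) {
        byte[] out = new byte[plain.length];
        for (int i = 0; i < plain.length; i++)
            out[i] = (byte) ((plain[i] + key[i % key.length]) & 0xFF);
        return out;
    }

    static byte[] decrypt(byte[] cipher, byte[] key) {
        byte[] out = new byte[cipher.length];
        for (int i = 0; i < cipher.length; i++)
            out[i] = (byte) ((cipher[i] - key[i % key.length]) & 0xFF);
        return out;
    }

    public static void main(String[] args) {
        byte[] key = "SECRET".getBytes(StandardCharsets.US_ASCII);
        byte[] plain = "line one\nline two\n".getBytes(StandardCharsets.US_ASCII);
        byte[] roundTrip = decrypt(encrypt(plain, key), key);
        System.out.println(Arrays.equals(plain, roundTrip)); // prints "true"
    }
}
```

Because the arithmetic is over all 256 byte values, there is no "unrecognised" character and nothing to strip or re-insert.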

This is an ongoing albeit unimportant issue in your previous posts
that might benefit from open discussion. To the best of my

It is certainly NOT an unimportant issue!

knowledge the compiler instructs the operating system of any computer
to automatically limit line lengths (internally) to 255 characters and
marks them with some character from the full set of ASCII as End-of-Line.

The portion of the operating system dealing with files normally has
no idea what a line or a line length *IS*. Bytes is bytes. If some
userland program wants to interpret the file as a series of lines,
that's fine, but as far as the file system code is concerned, a
newline is just another character.
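To make the point concrete, here is a small sketch (in Java, purely for illustration; the file and its contents are made up) that reads a file back as raw bytes — the newline comes back as just another byte, 0x0A, with no special treatment by the file system:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The OS stores a file as uninterpreted bytes; a newline is just one
// more byte in the stream, not a structural marker.
public class BytesIsBytes {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("demo", ".txt");
        Files.write(p, "two\nlines\n".getBytes());
        byte[] raw = Files.readAllBytes(p); // no line interpretation at all
        System.out.println(raw.length);     // prints "10": 8 letters + 2 newline bytes
        Files.delete(p);
    }
}
```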

There might be a limit of 255 characters on input from a terminal.
That limit is commonly bypassed by using character-by-character I/O
without waiting for a complete line. And humans don't often type
lines that long directly into a program, unless it's an editor so
they can easily correct errors.

Attempting to regulate line lengths externally has no effect
on what the computer does internally.

So take Set_Line_Length(X) and shove it where the sun don't shine.
It might be good for reports with columns of data, but very little else.

Don't use the Ada Text_IO package; use one that treats a file
as a bunch of bytes. However, if you *MUST* use Text_IO,
realize that if you read a line, there's a line ending after
it (even if it doesn't appear in the result of reading the line),
and on the decrypt end you can use New_Line to output one.
Internally, you could use "ASCII code" 257 for newline to make
absolutely sure it does not collide with anything else in the
message. You can also make use of End_Of_Line. It *IS* possible
to copy a file using Text_IO and preserve line lengths.
You just have to learn how to use it correctly.
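Sketching the same idea in Java (the paragraph above concerns Ada Text_IO, but the principle is language-independent; the class and method names here are made up): represent each newline internally as a symbol outside the 0..255 byte range, so it cannot collide with any byte of the message, and turn it back into a newline on output:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "code 257 for newline" idea: line boundaries become an
// internal symbol outside the 0..255 range, so they survive a
// line-oriented read/write cycle exactly.
public class LinePreserve {
    static final int NEWLINE = 257; // internal symbol, deliberately not a byte value

    static List<Integer> toSymbols(String text) {
        List<Integer> syms = new ArrayList<>();
        for (String line : text.split("\n", -1)) {
            for (char c : line.toCharArray()) syms.add((int) c);
            syms.add(NEWLINE);                // a line ending follows each line read
        }
        syms.remove(syms.size() - 1);         // no newline after the final fragment
        return syms;
    }

    static String fromSymbols(List<Integer> syms) {
        StringBuilder sb = new StringBuilder();
        for (int s : syms) sb.append(s == NEWLINE ? '\n' : (char) s);
        return sb.toString();
    }

    public static void main(String[] args) {
        String src = "public class MyMain {\n  ...\n}\n";
        System.out.println(src.equals(fromSymbols(toSymbols(src)))); // prints "true"
    }
}
```

The encryption layer would then operate on the symbol stream; because 257 is not a byte, nothing in the ciphertext machinery can confuse it with message content.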

If the character set really is ASCII, then encoding an end of line
as a line-feed character for the purpose of encryption shouldn't
cause clashes with other line-feed characters in the message (since
there won't be any).

End-of-File is also marked

End-of-File is normally *NOT* marked by special characters since that
spoils the ability to store arbitrary data in a file. There are some
exceptions, such as old CP/M text files which could only end on a
sector boundary and ^Z was used to mark the end of the actual text.
Modern Windows systems can have files sized to a byte boundary and
don't need this. UNIX systems never needed it.

I am sure that these terms, i.e. End-of-File and End-of-Line,
are properties of the compiler and not the operating system.

Why? A file stored by the OS has an end regardless of the compiler
that compiled the program that wrote it.

If the compiler doesn't select the same choices as the OS tools,
then Ada files are not inter-operable with anything else, and
that pretty much dooms that Ada implementation to obscurity.

Two things follow:

Firstly, I cannot and must not use the full set of ASCII to populate
the Vigenère square as you suggest, because the computer may then end
the line or the file prematurely if it reads the terminating marker
among the ciphertext.

Then stop using the Ada Text_IO package to read ciphertext files.

All OSs have a way of treating a file as a series of uninterpreted
bytes. Don't use the Ada Text_IO package on these files.

There is no rule that says you can't (possibly optionally) use
ASCII-armoring of ciphertext (say, base64) after encryption. (PGP
does). It adds nothing to the strength of encryption, but makes
it easier to send in, say, email messages. I would have no objection
if you formatted *ciphertext* with a fixed line length on output
and ignored newlines in *ciphertext* on input if you use some
encoding like base64. Otherwise, treat ciphertext as raw binary
and use an appropriate I/O method to treat bytes as bytes.
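As a sketch of that armoring step (using Java's standard Base64 MIME variant, which happens to wrap at 76 columns and to ignore line breaks on decode; the variable names are made up):

```java
import java.util.Arrays;
import java.util.Base64;

// ASCII-armoring sketch: base64-encode raw ciphertext with a fixed
// line length for transport, and ignore the newlines again on input.
// This adds no cryptographic strength; it only eases transmission.
public class Armor {
    public static void main(String[] args) {
        byte[] cipher = new byte[100];              // stand-in for raw ciphertext
        for (int i = 0; i < cipher.length; i++) cipher[i] = (byte) i;

        String armored = Base64.getMimeEncoder().encodeToString(cipher);
        byte[] recovered = Base64.getMimeDecoder().decode(armored);

        System.out.println(armored.contains("\r\n"));      // prints "true": lines were wrapped
        System.out.println(Arrays.equals(cipher, recovered)); // prints "true": exact round trip
    }
}
```

Note the asymmetry with the plaintext case: here newlines are pure formatting inserted after encryption, so stripping them on input is harmless; mangling newlines inside the plaintext is not.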

Secondly, in my cryptography I use any line length (I have said 77
characters in previous posts but that is just a nominal figure; it
may be anything else within reason) to limit the displayed output line

Terabytes are within reason on a system that has this much storage.
I've seen terabyte drives for PC on sale for under $100.

length with the standard command in Ada "Set_Line_Length(X)". The

Shove that stupidity where the sun don't shine.

If you insist that plaintext be printable on an 80-column-wide
printer, you could be a lot less robotic-minded about it and change
the last space in the plaintext before 80 characters are reached
to a newline. That means you only break words if the words are
longer than what fits on a line (this is pretty unusual for ordinary
text, unless it's already base-64 encoded or something similar).
This does require a bit more coding effort. Have you noticed that
this is how word-processing programs do it?
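A rough sketch of that wrapping rule (a hypothetical helper, not anyone's actual code): break at the last space before the column limit, and split a word only when it is longer than a whole line.

```java
// Word-boundary wrapping sketch: break at the last space before the
// column limit instead of mid-word; only words longer than a line
// are ever split.
public class WordWrap {
    static String wrap(String text, int width) {
        StringBuilder out = new StringBuilder();
        int lineStart = 0;
        while (text.length() - lineStart > width) {
            int brk = text.lastIndexOf(' ', lineStart + width);
            if (brk <= lineStart) brk = lineStart + width; // word longer than a line
            out.append(text, lineStart, brk).append('\n');
            // skip the space we broke at, if any
            lineStart = (brk < text.length() && text.charAt(brk) == ' ') ? brk + 1 : brk;
        }
        out.append(text.substring(lineStart));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(wrap("the quick brown fox jumps over the lazy dog", 15));
    }
}
```

Unlike a fixed cut every N characters, this keeps every ordinary word intact, so the decrypted plaintext stays readable without any post-processing.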

computer then gives me that line length but admittedly it means broken
words at the ends.

That's supposed to be user-friendly???!!!??? It can change the
meaning of messages, perhaps fatally in a military situation, such
as messing up target coordinates or recognition codes. You'd have
to teach *everyone* reading messages from your encryption about
this stupidity so they don't make fatal errors interpreting the output.

I am not bothered about this because I can
immediately open the file of messagetext in any word processor, when
it will immediately justify all words to be complete and align them
either by the right edge or the left edge or symmetrically about the
centre line, as you well realise I am sure.

This does not fix the words you broke in two.

Word processors often preserve "hard" newlines in text. And encrypted
messages are not always intended to be used in a word processor
(and a word processor may ruin them for their intended use). Some
of them are images, audio, video, source code, executables, etc.

It is futile to
expect or even try to make the compiler or the operating system do
this when it is so easily done in other software that is designed
specially for that purpose.

It is futile to bother using cryptography that breaks source code
in arbitrary ways to transmit source code. Putting it in a word
processor will *NOT* fix it.

I am open to correction here if I am wrong but please explain - cheers
- adacrypt

You are very, very wrong. Programming for the convenience of the
programmer, rather than the convenience of the user, is not very
well accepted.
