Re: [Lit.] Buffer overruns
From: Tom Linden (tom_at_kednos.com)
Date: Wed, 02 Feb 2005 20:48:59 -0800
On Wed, 02 Feb 2005 12:28:02 -0700, Anne & Lynn Wheeler <firstname.lastname@example.org>
> "Tom Linden" <email@example.com> writes:
>> Was it possible with appropriate permission to modify that page?
> or are you talking about the page(s) in memory instead?
No, I was specifically wondering about the zero-page, since it could
host a trojan.
> default in cp67 was to not have sharing in memory (except for
> a hack i'll mention) ... the virtual address space technology
> provided a very high degree of isolation (and assurance). some
> places like commercial time-sharing services used it
> as well as some gov. TLAs.
> 360/67 had added virtual memory and features like segment sharing to
> basic 360/65 hardware ... but no additional memory protection
> features. If you were to provide common, concurrent access to the
> same virtual page resident in real memory and still provide
> protection ... the only store protect feature available on all
> 360s was the storage-key based protection mechanism ... extended
> discussion on the subject earlier in this thread
> http://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
> http://www.garlic.com/~lynn/2005.html#6 [Lit.] Buffer overruns
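As a toy illustration of that storage-key mechanism (the model and names are mine; the 4-bit key per 2K block, and key 0 acting as the master key, follow the 360 principles of operation): a store succeeds only when the running PSW key is 0 or matches the key on the target block.

```python
# Toy model of 360 storage-key store protection (a sketch, not real
# hardware behavior): real memory is divided into 2K blocks, each
# tagged with a 4-bit key; a store is allowed when the PSW key is 0
# (the supervisor master key) or equals the target block's key.
BLOCK_SIZE = 2048

def store_allowed(psw_key: int, block_keys: list, addr: int) -> bool:
    """Return True if a task running with psw_key may store at addr."""
    block_key = block_keys[addr // BLOCK_SIZE]
    return psw_key == 0 or psw_key == block_key
```

The "hack" is visible here: the key is the only lever, so using it for system page protection collides with any application that wants the same keys for its own purposes.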
> this was a problem in cp67 ... as per various previous posts, cp67
> attempted to faithfully implement the 360 principles of operation.
> Fiddling the storage keys for (system) page protection could interfere
> with some application use of the same storage key facility in the
> virtual address space. fetch protection wasn't as much of an issue,
> since with virtual address space architecture fetch protection can be
> achieved by just not mapping the pages into the virtual address space
> (if you can't address the data, then the usual implementation is that
> you also can't fetch the data). In any case the original cp67 made
> very little use of sharing the same real pages across multiple
> different address spaces.
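The fetch-protection point above can be sketched as a toy translate step (all names are mine): if a page is simply absent from the address space's page table, a reference faults before any data can be fetched.

```python
# Sketch of fetch protection falling out of virtual addressing: a page
# that is not mapped cannot even be addressed, so it cannot be fetched.
PAGE = 4096

class PageFault(Exception):
    pass

def translate(page_table: dict, vaddr: int) -> int:
    """Map a virtual address to a real address, or fault if unmapped."""
    vpn, offset = divmod(vaddr, PAGE)
    if vpn not in page_table:
        raise PageFault(vpn)   # unmapped: can't address, hence can't fetch
    return page_table[vpn] * PAGE + offset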
> Now there were 1200 people that had been working on tss/360 (the
> official corporate operating system for 360/67) that did a page mapped
> filesystem and various virtual address sharing paradigms. The people
> upstairs on the 5th floor were also doing similar stuff with multics.
> In any case, i was a brash young programmer ... and I figured that
> anything anybody else could do ... i could also do. So I designed
> and implemented a page mapped filesystem
> and various virtual address space facilities ... although i was forced
> to deal with some number of widely used conventions that bound
> executable code to specific (virtual or otherwise) address locations
> and for a little crypto topic drift ... some people may remember
> email addresses with the hostname dockmaster from the 90s (and
> some may even remember them from the 80s):
> other drift:
> http://www.garlic.com/~lynn/2001m.html#12 Multics Nostalgia
> http://www.garlic.com/~lynn/2001m.html#15 departmental servers
> now one of the things that had deviled tss/360 was that they had laid
> out applications in a very flat structure ... and when the application
> was invoked it just mapped in the whole structure and pretty much
> relied on single page faults, one at a time, to fetch the
> pages. Something like a fortran compiler could be a couple
> megabyte file ... and running on a 768k real machine with possibly
> only 60-80 pages left after fixed kernel requirements ... there was a
> huge amount of page wait in the infrastructure.
> cms had essentially borrowed most of the major os/360 applications
> which had been segmented to fit in small real storage environments
> ... with transitions between phases that would do block reads for
> 60kbytes-80kbytes at a time. translating this into a page mapped
> infrastructure ... the block read requests were translated into page
> mapped operations with hints for doing block page fetch for all pages
> in the phase (single page fetch for possibly 10-20 pages at a time).
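The phase-transition translation above can be sketched as follows (names and the toy arithmetic are mine): the 60-80kbyte block read becomes a single request naming every page of the phase, so the paging system can fetch them all in one operation instead of 10-20 individual faults.

```python
# Sketch of turning a phase's block read into a block page fetch hint:
# compute all page numbers covering the phase so they can be requested
# in one operation rather than one fault at a time.
PAGE = 4096

def phase_fetch_list(phase_offset: int, phase_length: int) -> list:
    """Page numbers covering one phase of the page-mapped file."""
    first = phase_offset // PAGE
    last = (phase_offset + phase_length - 1) // PAGE
    return list(range(first, last + 1))
```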
> Now the generalized virtual memory architecture for 370 added a bunch
> of stuff learned from 360/67 experience (and others). There was a
> bunch of protection features (especially for shared environments) and
> various other things like hardware selective invalidates. There was
> this series of product consensus meetings in pok involving business
> people, software people, and hardware people. As mentioned in several
> other posts ... the 370/165 engineers were claiming that if they had
> to implement various of the new features (protection mechanisms,
> selective invalidates, etc), it would delay the announcement and
> of 370 virtual memory by six months. Eventually the business decision
> was made to drop those additional features from the 370 virtual memory
> architecture and go with the earlier announcement.
> http://www.garlic.com/~lynn/2005b.html#62 The mid-seventies SHARE survey
> the morphing of cp67/cms to vm370/cms was going to rely on all the new
> protection mechanisms replacing the storage-key based protection hack
> that had been used in cp67. however, when it was decided to go across
> the product line with the 370/165 virtual memory subset ... vm370/cms
> was forced to revert to the storage-key based protection hack.
> we scroll forward a little bit and i've converted my cp67 page mapped
> file system and enhanced virtual memory management facilities to vm370
> ... and the vm370 product group decided to pick up and ship a subset of
> my virtual memory management facility. In parallel with this, somebody
> else came up with an alternative shared page protection design. Using
> the storage-key based protection hack cost cms some performance that
> could be gained back if it was eliminated. One mechanism was to allow
> processes to run w/o protection and between task switches
> ... determine if the active task had corrupted any "protected" pages.
> If any corrupted protected pages were found ... they were discarded
> and the system would revert to the uncorrupted copies on
> disk. Overall, the overhead of this alternative implementation was
> slightly less than the performance gain from eliminating the
> storage-key protection hack (when limited to checking 16 virtual
> shared pages).
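The check-at-task-switch scheme above can be sketched like this (a hedged toy model; the checksum approach and all names are my assumptions, not the actual implementation): tasks run with no store protection, and at each task switch every nominally protected shared page is compared against a saved value; modified pages are discarded so the next reference refetches the clean disk copy. The cost of the sweep scales with the number of shared pages, which is exactly why doubling 16 pages to 32 tipped the balance.

```python
# Toy model of "fix up corruption after the fact": compare each shared
# page against a checksum saved when the page was known clean; discard
# any page the task scribbled on (next touch pages in the disk copy).
import hashlib

def checksum(page: bytes) -> str:
    return hashlib.sha256(page).hexdigest()

def discard_corrupted(shared_pages: dict, saved: dict) -> list:
    """At task switch: return ids of modified 'protected' pages and
    drop them from memory, simulating revert to the disk copies."""
    corrupted = [pid for pid, data in shared_pages.items()
                 if checksum(data) != saved[pid]]
    for pid in corrupted:
        del shared_pages[pid]
    return corrupted
```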
> The problem was that they shipped this brand new "protection"
> (actually fix up corruption after the fact) at the same time they
> shipped a subset of my expanded virtual address space use (which at a
> minimum doubled the typical number of shared pages to 32 ... and
> frequently to a lot larger number). At checking 32 shared pages
> (instead of only 16 shared pages), the alternative protection
> mechanism cost more than was gained from eliminating the storage key
> based protection hack.
> Now we scroll forward a little bit ... and we come to the original
> relational database implementation.
> all this work was going on in a vm370/cms based operating system
> environment. You would have a user process address space and a system/r
> shadow address space of the user process. The shadow process had the
> protected database stuff that ran on behalf of the user ... but the
> user didn't have any direct control or access to the shadow. All the
> shadows could have code that was nominally read-only shared and
> portions of the shadow address spaces that were read/write shared
> across all database processes (caching, serialization, commits,
> locking, etc). This sharing of read/write address space areas was much
> more of a permission issue than a protection issue (i.e. you only
> wanted to have trusted processes with sharing of the read/write
> database virtual memory areas).
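The permission-vs-protection distinction above might be sketched like this (all names are mine): the shared read/write database segment is never given out with some protected view; a process is either on the trusted list and maps it fully writable, or is refused the mapping outright.

```python
# Toy sketch: mapping the shared r/w database segment is a permission
# decision -- only trusted (database) processes get the mapping at all.
def map_shared_rw(mappings: dict, proc: str, trusted: set) -> dict:
    """Map the shared read/write segment into proc, if proc is trusted."""
    if proc not in trusted:
        raise PermissionError(f"{proc} may not map the shared r/w segment")
    mappings[proc] = "shared-db-rw"   # every trusted shadow shares one copy
    return mappings
```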
> So eventually you roll forward to 3033 ... and for the first time you
> see some mainframe model implementing even a piece of the original 370
> virtual memory hardware protection specification.
-- Using Opera's revolutionary e-mail client: http://www.opera.com/m2/