Re: [Full-disclosure] Why Vulnerability Databases can't do everything

From: Jason Coombs (jasonc_at_science.org)
Date: 07/16/05

  • Next message: petefran_at_gmail.com: "Solaris Runtime Linker - Exploit Detection"

    To: "Georgi Guninski" <guninski@guninski.com>
    To: "Steven M. Christey" <coley@mitre.org>
    Date: Sat, 16 Jul 2005 17:42:15 +0000


    Do either of you seriously believe that it will ever be safe to use a software programmable CPU to automatically process data that originates from some place other than your fingertips?

    The entire personal and business computer industry is producing broken and dangerous products, yet it would require only a few fundamental changes to fix everything. Instead of putting effort and capital into this objective, everyone is squabbling over changing software vendors' behavior, or fretting over individual bugs and whether they should be disclosed and, if so, how and when.

    1) Stop loading machine code from the same data storage devices to which user/application data is saved.

    2) Don't allow machine code to be written to program storage unless it has first been enciphered with a key assigned to the computer.

    3) Stop buffering runtime I/O within the same physical memory as machine code.

    4) Put a stop to the silly belief that software must be executed in the same form in which it is delivered. (I.e., just because the programmer wrote code that made Win32 API calls on the development box, the OS deployment that hosts the execution of that code need not support Win32 at all -- we need to insert a machine code transformation step prior to deployment of code, combining the principles of address space layout randomization with new approaches to API obfuscation/reassembly, down to the level of customized/reassigned interrupts and CPU registers.)

    5) Etc.
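    Proposal 2 -- enciphering machine code with a per-computer key before it reaches program storage -- can be sketched in a few lines. This is a toy illustration only: the hash-chain keystream and the function names (`keystream`, `encipher_for_program_store`) are my own inventions for the sketch, and a real design would use an authenticated cipher in dedicated hardware, not a software XOR stream. The point it demonstrates is that code enciphered under one machine's key is meaningless ciphertext everywhere else.

```python
import hashlib
import os

def keystream(machine_key: bytes, length: int) -> bytes:
    """Derive a toy keystream from the machine key by hash chaining.
    (Illustration only; not a vetted cipher construction.)"""
    block, counter = b"", 0
    while len(block) < length:
        block += hashlib.sha256(machine_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return block[:length]

def encipher_for_program_store(code: bytes, machine_key: bytes) -> bytes:
    """Encipher machine code with the per-computer key before it is
    written to program storage. XOR is its own inverse, so the same
    function deciphers at load time."""
    ks = keystream(machine_key, len(code))
    return bytes(c ^ k for c, k in zip(code, ks))

machine_key = os.urandom(32)          # stands in for a key burned into the CPU
code = b"\x55\x48\x89\xe5\x90\xc3"    # some machine code bytes
stored = encipher_for_program_store(code, machine_key)
loaded = encipher_for_program_store(stored, machine_key)
assert loaded == code and stored != code
```

    Injected shellcode written to the data store never passes through this step, so it can never appear in program storage as valid, executable plaintext.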

    Do these things, some of which require modifications to the present fixed-opcode structure of the programmable CPU, and all outsider attacks against software will fail to accomplish anything other than a denial of service.
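    The machine-code transformation step in point 4 can be caricatured just as briefly. This is a toy sketch under invented names (a pretend two-field instruction format, `make_machine_permutation`, `transform_for_deployment`), not a real binary rewriter: it shows only the principle that a per-machine secret remapping -- here of register numbers -- makes a payload written against the published architecture address the wrong resources on any particular deployment.

```python
import random

# Toy ISA: instructions are (opcode, register) pairs. A real transformer
# would rewrite actual machine code, relocate addresses (ASLR), and remap
# API entry points; this sketch only permutes register numbers.
NUM_REGS = 8

def make_machine_permutation(seed: int) -> list:
    """Per-machine secret register renaming (the seed stands in for a
    hardware secret)."""
    rng = random.Random(seed)
    regs = list(range(NUM_REGS))
    rng.shuffle(regs)
    return regs  # regs[i] = this machine's local name for architectural reg i

def transform_for_deployment(program: list, perm: list) -> list:
    """Rewrite a program at deployment time so it runs only under this
    machine's register mapping."""
    return [(op, perm[reg]) for op, reg in program]

program = [("load", 0), ("add", 1), ("store", 0)]
perm = make_machine_permutation(seed=1234)
deployed = transform_for_deployment(program, perm)
# An exploit payload hard-coded against the architectural register names
# would hit the wrong registers on this machine.
```

    Legitimate code goes through the transformation at deployment; an attacker's injected code does not, so it executes (if at all) against a register and API layout it was never written for.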

    With a little common sense applied to the design of computers, the only threats anyone would have to worry about are data theft, physical device tampering/hacking, and insiders.

    The company that achieves objectives like these will own the next 100 years of computing. Everyone who believes that security flaws in software are worth the effort to discover and fix is very badly confused.

    Solving present-day systemic defects in the design of computing architectures, now that's important.

    Software bugs are a plague that infects every computer from the moment it is first assembled, but not because of software vendors' mistakes, rather as a direct result of genetic defects in the computer genome.

    Windows will no longer exist within 10 years because everyone will have realized that it was built on a flawed premise around defective hardware.

    Will Microsoft be the architect of the first non-defective programmable computer? No way. They only know how to profit by exploiting short-term opportunity.

    More likely, some microprocessor vendor will bring to market a machine architecture that the best and brightest programmers around the world will rally around to birth the neocomputer industry.

    Meanwhile, your continued loyalty to, investment in, and commercial exploitation of the computer technology we have today is obviously nothing more than an attempt to increase your own importance at the expense of others. Get over it. If you're a decent human being you will neither buy nor encourage the purchase of a single computing device other than the Nokia 770 Linux Internet Tablet until the neocomputer industry emerges.

    Regards,

    Jason Coombs
    jasonc@science.org
