How von Neumann still controls the desktop

When John von Neumann first wrote up his notes about the logical design of the EDVAC computer on a train journey to Los Alamos in 1945, it is unlikely that he fully appreciated the impact they would have. For all their complexity, their cores and threads, their caches and bus architectures, modern computers still follow the design he sketched, which is …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Go

    "business" users

    I guess the kind of people who enter something in a form, send it to a server and then look at some kind of list that comes back are "business users". These users also hack up crazy spreadsheets, write funny text documents and make strange presentations.

    All of that can nowadays be done in a browser with JavaScript. No more issues with distributing the right Visual C++ runtime DLLs, fiddling with the Registry, or having an IT guy running from PC to PC.

    Just give them a Linux box with Firefox or Chrome. No more manual updates. Ubuntu does it Out Of The Box, so to speak. If something needs to be fixed, the admins can do it over a narrowband terminal connection from a GPRS phone while on vacation in Tobago.

    Tools are Google Web Toolkit (to generate the JavaScript) and RESTful programming styles. Secure everything with SSL/TLS.

    Try Google Docs if you are in doubt about a JS office package. Most people don't need much more. Whether you use Google or a competitor is another issue.

  2. Anonymous Coward
    Go

    Google Style Software installation

    So if 10,000 "dumb" Linux terminals need Chrome in addition to Firefox, the admin will issue this over her narrowband link:

    i=0
    while [ $i -lt 10000 ] ; do
        let i=$i+1
        echo $i
        machineNameAndUser=root@dumbterminal_$i
        scp chrome_installer.sh $machineNameAndUser:/tmp
        ssh $machineNameAndUser "sh /tmp/chrome_installer.sh"
    done

    The admin will certainly have her ssh key installed on all dumb terminals, so she won't need to type the password each time.

    And that's just one specific example. If you wanted to add a user to all these machines you would run

    i=0
    while [ $i -lt 10000 ] ; do
        let i=$i+1
        echo $i
        machineNameAndUser=root@dumbterminal_$i
        ssh $machineNameAndUser "useradd -m -d /home/UserJoanna UserJoanna"
    done

    Another special case would be changing routing info, requiring a complex script to be executed on all dumb terminals:

    i=0
    while [ $i -lt 10000 ] ; do
        let i=$i+1
        echo $i
        machineNameAndUser=root@dumbterminal_$i
        scp routeChangerHackedUpByAdmin.sh $machineNameAndUser:/tmp
        ssh $machineNameAndUser "sh /tmp/routeChangerHackedUpByAdmin.sh"
    done

    I am not writing about the fine points of issuing parallel ssh/scp requests here, but I think you get the point that Google-style system administration is vastly superior to the Windoze stuff.

    1. Anonymous Coward

      Well, speaking as the guy who does the IT for a small business...

      I have no idea what you just typed.

      But I find the wizards and Active Directory in SBS 2003 easy to use, thank you.

      1. Anonymous Coward

        @AC

        So your employer saves a ton of money on your wage while shelling out lots of pounds for specialised tools.

        How much money does your company spend on software licences each year?

  3. Jamie Craig
    WTF?

    Two articles for the price of one?

    Utterly misleading title on this article - we get a few paragraphs on von Neumann, then it veers off topic into a how-to-spec-your-business-desktops piece!

    Combining computing science theory with business advice seemingly doesn't work very well.

    1. jolly
      Unhappy

      I agree

      I was hoping for something more interesting - from a technical point of view, that is.

  4. Mike 137 Silver badge

    A much more serious aspect of von Neumann architecture

    A fundamental attribute of the von Neumann architecture this paper doesn't mention is that a common memory array contains both instruction codes and data. The decision as to whether a word fetched by the processor is to be interpreted as an instruction or as data depends entirely on the previous state of the machine - if the last fetch was the parameter of an instruction, this fetch is an instruction and so on. This represents a huge security vulnerability that has been systematically exploited in many ways for many years - "buffer overflow" and "stack overflow" attacks that cause maliciously injected data to be interpreted as machine instructions dominate the professional attack space. But even accidental loss of instruction pointer integrity can be extremely damaging - causing uncontrolled execution of arbitrary instructions, and it does happen, as in "hey, my machine locked up!".
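
    To make the overflow point concrete, here is a minimal C++ sketch of the class of bug being described (the function name and buffer size are invented for illustration):

    #include <cstdio>
    #include <cstring>

    // A classic unchecked write: buf holds 16 bytes, but strcpy copies
    // until it finds a NUL byte, knowing nothing about the destination
    // size. Longer input spills into adjacent stack memory: saved
    // registers, the frame pointer, the return address. On a von
    // Neumann machine the CPU will then happily fetch and execute
    // whatever the overwritten return address points at.
    static void vulnerable(const char *input) {
        char buf[16];
        std::strcpy(buf, input);   // no bounds check: this is the bug
        std::printf("copied: %s\n", buf);
    }

    int main() {
        vulnerable("fits fine");   // well-behaved input
        // An attacker-controlled string of 200 bytes here would smash
        // the stack; the sticking plasters mentioned below only try to
        // turn that into a crash rather than code execution.
        return 0;
    }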

    The major contender architecture - Harvard - has separate instruction and data memories, and is widely used in industrial controllers, for the very reason that they have to be robust. Harvard architecture didn't take off in the office computer space due to the initially high cost of memory, but that's not been a major consideration for some time. I've been waiting for years for a Harvard-architecture PC CPU, but in vain. Even a dual-stack operating system that segregated function call and return addresses from function parameters would be a huge step forward, even if it ran on a vN CPU. But nothing's being done. Instead we have numerous questionable sticking plasters such as randomised memory allocation, stack validation et al, which regularly prove their ineffectiveness due to the extent of the underlying festering wound - an almost unsecurable architecture. von Neumann was not considering security when he came up with his computing model.

    1. Anonymous Coward
      FAIL

      Nope

      The problem's name is "C/C++". There exist safe programming languages: Java, C#, Pascal, Ada, many others.

      It's just that broken "industry standard" C/C++ which creates trouble.

    2. David Beck
      Boffin

      Typed storage is a half-way house

      The west-coast-designed Burroughs machines were vN with the addition of a type tag on each 48-bit word. I can't remember (or be arsed to check) the number of bits, but the firmware knew if the word contained an instruction, a number (all or part of an integer or float), characters or a pointer. If you tried to perform the wrong operation you got a fault. Storage was allocated in segments with defined sizes, with bounds enforced by the hardware, so a buffer overflow generated a similar fault. The systems programming language was a modified Algol. But of course all of this was 30 years ago, and we all know how well we have moved on to far better designs now, both in hardware and software. How else could we keep the CompSci grads employed? A B6700 running MCP needed a guy once a month to sweep out the ashes.

      PS. The other Burroughs designs of the time were classical vN with some other odd characteristics. They were decimal machines (banking was a big part of Burroughs' business), so decimal that even the execution engine used decimal addresses, and you bought memory in round decimal amounts, 100,000 bytes for example. Odd.

      PPS. Finally, the other design was for the 1700 series, even odder as it was opcode-agile: part of the program header declared which instruction set the prog needed, and the hardware set itself up to use that set while executing the prog, switching between firmware sets if necessary for multiple tasks. The opcode sets were optimized for the languages of the day: Fortran, COBOL, ...

      The microcode engines of the day were probably a bit too touchy (lots of timing involved, as in: you can't read that register for two cycles, so do something else now) to compile Fortran or Cobol directly to the microcode, but okay if you are decoding an instruction set.

  5. Graham Bartlett

    @Admiral

    Bullshit.

    Those languages basically stop you shooting yourself in the foot by adding a safety catch to the gun. But ultimately, everything needs to shoot something at some point, and at that point the safety catch has to come off. The only benefit of those languages is trying to keep the safety catch on as long as possible. And even then, bugs can still get you - C# and Java will happily let you add elements to an array until the machine runs out of RAM and throws an exception, for example.
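
    A quick C++ analogue of that array-growth point, sketched for illustration with std::vector standing in for the C#/Java collections (note that on systems which overcommit memory, the OS may kill the process before bad_alloc is ever thrown):

    #include <cstdio>
    #include <new>
    #include <vector>

    // The "safety catch" container stops out-of-bounds access, but
    // nothing stops you growing it until allocation itself fails.
    // Don't run this on a machine you care about: it eats RAM.
    int main() {
        std::vector<long> v;
        try {
            for (;;) v.push_back(0);   // grows until the allocator gives up
        } catch (const std::bad_alloc &) {
            std::printf("out of memory after %zu elements\n", v.size());
        }
        return 0;
    }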

    They can also introduce new problems. Garbage collection is a nice idea, but it can actually be the *cause* of failure. I'm currently working on an embedded multimedia system (WinCE, C#) where we were persistently running out of memory at startup because the garbage collector waited too long before cleaning up. Whilst bugs in C++ can give you memory leaks, C++ rarely fails in this way, and memory leaks are not usually too hard to catch. (In fact most of them are statically detectable.)

    It's very common for people who've got past "hello world" but not as far as real industry experience to slate languages. But people who've actually done real work in software know that the problem is the engineer, not the language.

  6. Anonymous Coward
    Stop

    @Graham Bartlett

    The post I replied to was referring to the problem of C/C++ programs accidentally overwriting the stack and thereby allowing dangerous exploits. This is due to the lack of runtime checks in C/C++ array access semantics.

    So in the context of "Mike 137"'s post I do think it makes sense to blame C/C++.

    I also dispute the claim that there exist programmers who will write thousands of LOC of C/C++ without array access bugs or unsafe casts. All C/C++-based systems have had this issue and still have it.

    The guys who wrote the HP-UX network stack probably weren't beginners, yet you could shoot down the HP-UX kernel with an oversized ping packet (ca. 1995). And I could go on listing examples here.

  7. Graham Bartlett

    @Admiral

    Back in 1995, I'm not entirely surprised. I suspect the guys who wrote the HP-UX stack weren't complete beginners, but they were almost certainly self-taught, and almost certainly worked in an environment populated entirely by other self-taught coders. That was the software world in 1995. Things have moved on a bit since then.

    If you're working in C++, you've got the STL for bug-free array/list/dictionary implementations, and you've got C++ generally for putting an error-checking wrapper around anything accessing arrays or data. It's trivially easy in C++ to do safe array accesses - so much so that the only people making these mistakes *are* beginners.
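
    As a concrete sketch of that checked access (container and values invented for illustration): std::vector offers both the unchecked operator[] and a bounds-checked at() which throws instead of corrupting memory.

    #include <cstdio>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};

        // v[10] would be undefined behaviour: an unchecked read past the
        // end of the buffer, the same class of bug discussed above.
        // at() performs the bounds check and throws instead:
        try {
            int x = v.at(10);
            std::printf("%d\n", x);
        } catch (const std::out_of_range &e) {
            std::printf("caught: %s\n", e.what());
        }
        return 0;
    }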

    And sure, C# gives you garbage collection and all that. But they still give you "unsafe" operations. For all that the language has a number of defects, they were still smart enough to know that sometimes there's no option but to access memory directly - either for memory-mapped operations that require a fixed address space, or simply for operations that need maximum speed.

  8. Michael Dunn
    Coat

    @Mike137

    "von Neumann was not considering security when he came up with his computing model."

    No, he was thinking in a similar way to Thomas Watson ("I think there is a world market for maybe five computers"). For von Neumann, there would have been fewer than 10 computers in the world, all at Los Alamos.

    They were still using valves (tubes).

    Just picked up a load of 6SN7GTs
