359 posts • joined 29 Apr 2007
Never use RAND()
There is a long history of bad random number generators in operating systems. At Bell Labs, Jim Reeds and I implemented a series of statistical randomness tests (Knuth's plus others) to explore this problem. BSD 4.something was out then, and my lab was using it. The random generator was a linear feedback type, and it failed the tests badly. Jim and I discovered that there was an off-by-one mistake in the position of the feedback tap, but when we contacted the student at UCB who wrote it, he told us to fuck off. Even today, you can find both the fixed and the broken version of this generator in UNIX distributions. Of course the random generator in windows was always terrible (16 bits, like this apple subroutine).
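For the curious, the simplest member of such a test battery is a chi-square frequency test. A minimal sketch in C, with a deliberately weak RANDU-style LCG standing in for a broken generator (the multiplier here is illustrative, not the actual BSD code):

```c
#include <stdio.h>

/* A deliberately poor LCG (RANDU-style multiplier, no additive term).
 * Illustrative only -- this is not the BSD generator discussed above. */
static unsigned weak_state = 1;
static unsigned weak_rand(void) {
    weak_state = weak_state * 65539u;
    return (weak_state >> 16) & 0xFFu;  /* take 8 middle bits */
}

/* Chi-square frequency test: bucket n samples into 256 bins and compare
 * each bin count against the expected uniform count n/256. For a decent
 * generator the statistic should sit near the degrees of freedom (255);
 * a badly biased generator drives it far higher. */
double chi_square_256(unsigned (*gen)(void), int n) {
    long counts[256] = {0};
    for (int i = 0; i < n; i++)
        counts[gen() & 0xFFu]++;
    double expected = n / 256.0, chi2 = 0.0;
    for (int b = 0; b < 256; b++) {
        double d = counts[b] - expected;
        chi2 += d * d / expected;
    }
    return chi2;
}
```

Knuth's battery goes far beyond this, of course (runs, poker, spectral tests), but frequency counting is where you start.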
A very good generator is Marsaglia's multiply-with-carry. Fast, simple, and it passes every test I've ever tried.
unsigned ML_nMarsagliaX = 886459;          // generator state
unsigned ML_nMarsagliaC = 361290869;       // carry
unsigned __int64 n;
n = __emulu(1965537969, ML_nMarsagliaX);   // full 32x32 -> 64-bit multiply
n = n + ML_nMarsagliaC;
ML_nMarsagliaX = unsigned(n & 0xFFFFFFFF); // low word: new state (the output)
ML_nMarsagliaC = unsigned(n >> 32);        // high word: new carry
These are not cryptographically strong generators, and I'm not sure what iOS is doing with them. They are appropriate for things like Monte Carlo calculations and simulations, but not for cryptography.
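For what it's worth, here is the same generator in portable C99, for anyone without the MSVC __emulu intrinsic (a plain 64-bit multiply replaces it; the constants are the ones from the fragment above):

```c
#include <stdint.h>

/* Marsaglia multiply-with-carry, portable C99 version of the MSVC
 * fragment above. Same constants, same state update. */
static uint32_t mwc_x = 886459;     /* state */
static uint32_t mwc_c = 361290869;  /* carry */

uint32_t mwc_next(void) {
    uint64_t n = (uint64_t)1965537969u * mwc_x + mwc_c;
    mwc_x = (uint32_t)n;         /* low 32 bits: new state (the output) */
    mwc_c = (uint32_t)(n >> 32); /* high 32 bits: new carry */
    return mwc_x;
}
```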
The web is hugely important, but its origin was a quick hack. TBL's success is sort of accidental, like Zuckerberg, except without the billion dollars. They'd both be mid-level developers or IT staff somewhere in any alternate history. He's not a dummy of course, but he's also not an industry-wide visionary or "the greatest genius since Einstein" as was said when he won a recent award.
Write Facebook: $billion idea
Claim to have invented Facebook first: $60 million idea
Re: About time
That's a pretty delusional history of OS/2 and Windows NT, Uncle Ron.
The history of software may never be written accurately, because there is so much politics and zealous fandom. A typical amateur opinion runs something like: "I hate Microsoft, I hate Windows, I hate Bill Gates...now let me tell you an objective history of computer software." You can't just take Microsoft's version of the story, or IBM's, or Kildall's. All these people have a bias, and a real historian would have to dig very deep for records and talk to a lot of different people.
But I think it is safe to say that IBM did not write Windows NT. I'd love to be there if you told that to Dave Cutler. There wouldn't even be a wet spot on the floor where you were standing.
Search Paradigms that Google Doesn't Control...Yet
Google has the major share of online advertising locked up. It's their cash cow. A couple years ago, a high level tech-firm manager told me that Google only feared two things: Facebook and Wikipedia. Not just because they had ads, but because they represented two information-search paradigms that Google didn't control. So this article should come as no surprise.
Re: ARM in the data centre is a certainty unless Intel can find a way to kill it.
Intel has a more sophisticated form of parallelism, using parallel functional units. So you get parallel operations, and even room to run hyperthreads, where two threads make interleaved use of the functional units. This seems inherently more efficient than duplicating whole single-thread CPU cores.
Intel's mistake was not having a low-wattage part for phones. But the economics of servers is about watts per operation.
An Ellsberg or a Rosenberg?
It is obvious that the NSA would investigate quantum computing, since code breaking is a major part of their charter and history. I'm sure the British and Russians are doing the same.
This latest leak is not a case of "whistleblowing". Snowden has revealed activities that bring up important 4th amendment issues, but he also indiscriminately leaked secret information about legal and sensitive activities of the NSA. Thus many of my American friends have questions about his character.
I consider Daniel Ellsberg to be a hero, while I consider Julius Rosenberg to be a traitor. Snowden is probably at neither end of that range, but time will tell where he falls on it.
Re: Not all darkness
Ken Thompson proved that you can insert exploits into software without having it appear in the source code. It would be especially easy for open source software.
Re: So if we award all desktops to Microsoft then I make it...
LOL "chromebook deniers".
Re: Back in the 70s
IBM did a cipher called Lucifer which used a larger block and key size, but was actually weaker than DES. The NSA knew about so-called "bent functions", s-boxes designed to be extremely nonlinear and therefore more resistant to linear-approximation attacks.
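To make "nonlinear" concrete: for a small s-box you can brute-force the bias of every linear approximation a·x = b·S(x) over all inputs; the worst-case deviation from half is what linear attacks exploit. A hedged sketch in C -- the 4-bit s-box in the test below is the PRESENT cipher's, used purely as an illustration (it is not a Lucifer or DES s-box):

```c
#include <stdlib.h>

/* GF(2) dot product mask.value: parity of the bitwise AND. */
static int dot(unsigned mask, unsigned v) {
    unsigned x = mask & v;
    int p = 0;
    while (x) { p ^= 1; x &= x - 1; }
    return p;
}

/* Maximum linear bias of a 4-bit s-box: over all nonzero mask pairs
 * (a, b), count inputs x where a.x == b.S(x); the bias is |count - 8|.
 * Lower is better (more nonlinear); 8 means a perfect linear relation
 * holds, which is what a "bent function" design tries to avoid. */
int max_linear_bias(const unsigned sbox[16]) {
    int worst = 0;
    for (unsigned a = 1; a < 16; a++)
        for (unsigned b = 1; b < 16; b++) {
            int count = 0;
            for (unsigned x = 0; x < 16; x++)
                if (dot(a, x) == dot(b, sbox[x]))
                    count++;
            int bias = abs(count - 8);
            if (bias > worst) worst = bias;
        }
    return worst;
}
```

Real DES s-boxes map 6 bits to 4, but the counting is the same idea, just over bigger tables.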
Princeton University had few Macs or PCs in 1994, it was really a UNIX workstation shop. But one grad student had a Windows PC for some project, and he showed me DOOM. I went and got Prof. Pat Hanrahan and showed him. Pat and I tried to figure out, how could a 486 processor be doing real-time 3D? It seemed impossible. The perspective mapping requires a 1/distance division operation at each pixel, and that is totally beyond the computational power of the PCs of that time.
A couple years later, I got to meet Carmack and talk with other people in the game development field who knew how the software worked. Visibility in DOOM was calculated in the 2D floor-plan projection. Polygons were rendered in vertical or horizontal scans of constant distance, so only one division per line, not per pixel. Really an elegant solution.
Later the brilliant Ken Silverman wrote probably the first (and probably the only robust) "free-direction texture mapper", which could scan-convert polygons along lines of constant direction at any angle, not just vertical or horizontal rows of pixels. So you were not limited to DOOM's vertical walls and horizontal floors; you could render general polygons in space. That's tricky to get right with no pixel glitches, and among game developers he was much admired. But free-direction mappers never really got deployed; there was just Ken's unreleased neo-Build engine that used it experimentally.
Eventually people just learned how to interpolate the 1/distance division across horizontal spans of pixels, and they got that code to run fast enough that the "British Three" could render arbitrary textured polygons on a PC (the three major British rendering engines, Renderware and two others I forget now). Microsoft bought one of those British companies and formed the beginning of Direct3D. Not very successful, and it was all rewritten a few versions later.
Hardware acceleration eventually made all this clever work obsolete.
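The constant-distance trick is easy to sketch: for a flat floor, every pixel on a given screen row below the horizon lies at the same depth, so one division recovers the whole row's distance and the texture coordinate then steps linearly across the row. The camera constants below are invented for illustration; this is not id's actual code:

```c
#define SCREEN_W 320
#define FOCAL    160.0f  /* illustrative focal length in pixels */
#define EYE_H    41.0f   /* illustrative eye height in world units */

/* Floor-casting a single screen row (row_below_horizon >= 1): one
 * division gives the depth of the entire row, then the world-space u
 * coordinate steps linearly, so there is no per-pixel divide. */
void map_floor_row(int row_below_horizon, float u_out[SCREEN_W]) {
    /* The only division for the whole row. */
    float z = EYE_H * FOCAL / (float)row_below_horizon;
    float u  = -z * (SCREEN_W / 2) / FOCAL;  /* u at the leftmost column */
    float du =  z / FOCAL;                   /* constant step per pixel */
    for (int x = 0; x < SCREEN_W; x++) {
        u_out[x] = u;
        u += du;
    }
}
```

The per-pixel formula u = z * (x - W/2) / FOCAL gives the same values, but costs a multiply-divide at every pixel -- exactly what a 486 could not afford.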
RedneckMBA, if we wanted real information about computer technology we'd be reading Anandtech.
It's risky for Microsoft to be running these scroogled ads, since they are not without sin. But I am glad that people are made more aware of practices like email scraping. Google has built a surveillance machine that must make the NSA envious.
I know Morris and his dad (his father worked in dept 1127 at Bell Labs, where UNIX was created, before moving to the NSA). He was a bright kid, and I'm sure he intended no real harm, but it was certainly poor judgment. The whole affair seems to have been very traumatic for him, and he's never discussed it or responded to any questions or comments about it.
Several years before he wrote the worm, Morris was a summer intern at the Labs, and he helped me convert a fast DES subroutine I had written to perform the UNIX password hashing operation. As I understand it, he incorporated that code in the worm, and Spafford wondered in his report where it came from.
The worm is also a tribute to the crappy programming and systems design that seems to go into all email software. Sendmail was a never-ending source of security problems in the early days of BSD UNIX, and it was an unbelievable resource hog. When it first came out, it had a remote root-shell feature, which the author put in to make it easier for him to troubleshoot it. Thoughtless. We called it "mailer science". When BSD 4.3 came out, sending an email to all 40 or so people in our division would take 5 or 10 minutes of processing time; the whole system was brought to its knees.
Just more leftist hot air from Charlie Stross. His fascination with free software is certainly about his anti-business political views, not a knowledgeable technical opinion. Who cares what he thinks about technology or politics?
It's a pity that young people do not see more science fiction that is about adventure and science. Jules Verne inspired generations of men to become scientists and engineers. One can only imagine Charlie Stross inspiring a generation of young people to be cynical and pessimistic.
Bill & Burgers
OK, as a former Microsoft Research employee, I can't let this terrible inaccuracy stand. Dick's is in Seattle; billg and his buddies ate at Burger Master. Today most burger fans eat at Five Guys or Wibbley's.
When I was still at AT&T Bell Labs in the 1980s, the company met with Microsoft to discuss some business conflict (I think it involved Cingular, or some satellite cellular scheme, but I don't recall the details now). AT&T rented a room in one of New York City's most exclusive restaurants, and billg and some other Microsoft execs joined them. When the waiter came to take Bill's order, he said, "I don't see anything on this menu I like, but I noticed there's a Burger King across the street. Would you bring me a whopper with cheese?". The waiter was unfazed, but the AT&T execs were completely floored. True story...my boss Arun Netravali was there at that dinner meeting and told me about it.
NT had a good solid API for systems programming
There's a nice little book by Johnson Hart, "Win32 System Programming", that shows off the kernel API. When NT was being designed, UNIX in its various inconsistent forms was a mess, and still had very poor features for memory allocation, threads, concurrency, etc. What was needed was for somebody with a professional understanding of operating systems and their requirements to sit down and do it over again, and NT was exactly that. Asynchronous I/O, threads, multiple heaps, completion ports, a real event mechanism (UNIX signals were really useless in the old days), a wide variety of concurrency mechanisms, choices to wait for events, poll for events, etc. Windows was also modular, with well-defined interface mechanisms, dynamic linking, and device-driver interfaces. All things UNIX did not have in the 1980s (it was just a monolithic C program in those days; you had to recompile the OS to insert a new driver). So NT was a systems programmer's dream. Today, Linux has most of these features, but NT deserves a nod of respect from those of us who remember how backwards UNIX was when NT came out.
If he had invented the internet, he would have invented the internet.
Re: Windows Design After Gates
Our group at MS Research had a former Apple designer, and I remember she thought Win95 was rather nice. But generally even inside Microsoft we all thought Apple did better design (but not such great software engineering). Buying an Apple product was like buying a Volvo, buying a Windows product was like buying a Ford pickup truck. Each has its advantages, if you want to look affluent and cool, or you want to get work done.
As for tablets, I worked for a while in the eBooks project (I'm one of the inventors of ClearType, check the patents if you don't believe me). I don't know if the tablets would have failed due to their design; the group did spend a fortune doing readability and font research. But the telling fact is that higher executives chickened out and cancelled the project before it ever had a chance.
Re: Bleeding Edge
I knew Ritchie and Thompson at Bell Labs. If you said that to Ken, he would have turned and walked away without saying anything. Ritchie would have laughed.
Re: With a little help from my friends
It's not accurate to say the US rocket program was based on von Braun. The V-2 was a fascinating rocket, because of its large size, but it was not a new concept. In the mid 1930s, Goddard was launching liquid-fuel rockets with gyro guidance, the Russians built small rockets and the BI-1 rocket plane, there was a lot of experimentation.
After the war, the US had three rocket programs. Von Braun's people worked for the Army and built the Redstone, a modernized version of the V-2. The Navy had a program that began with RMI, Curtiss-Wright and Goddard, building JATO engines during the war and the Viking and Vanguard rockets in the 1950s. The Air Force contracted with its partners in the aviation industry to build the first long-range intercontinental rockets and cruise missiles (Navaho, Atlas, etc), which bore little resemblance to the German work.
It's not that the Germans didn't do a brilliant job with the V-2, it just was not unique knowledge or beyond the understanding of other engineers. It's become a cliché to talk about "our Germans and their Germans", but that is a naïve picture of the history of rocketry in the 1950s and afterwards.
LOX/Ethanol. The V-2 engine was not very well cooled, so they had to use dilute alcohol (75%) and low chamber pressure. The Soviets got that back up to 90% ethanol and doubled the thrust of the engine for their R-5 rocket, by doing a lot of tinkering with the cooling. The fuel traveled around a spiral of pipes outside the thick steel walls of the combustion chamber, but that was insufficient. So they borrowed a trick from Goddard (who was actually being spied on by the Germans), and sprayed fuel onto the inside of the chamber.
The V-2 was a fascinating rocket, because it was an order of magnitude bigger than previous ones. You can debate whether rocketry really starts there, because it didn't have any new conceptual ideas. People had built liquid-fuel rockets, used gyro stabilization, etc. in America, Russia and probably a few other places. Tsiolkovsky in Russia first proposed space rockets using liquid fuel, and Goddard built the first one; Glushko in Russia had the best engines, but they were much smaller than what the V-2 needed.
Re: "bell x1" BAH!
The X-1 was a rocket powered plane, not a turbojet. But like the M52, it had two wings and was pointy on one end, so I can understand your confusion.
I get the feeling Win8 is trying to minimize UI computation. Do they think only ARM processors are going to exist in the future? It's just odd, because those processors will eventually have some muscle. As one of the inventors of ClearType, it's a bit annoying if they are abandoning it.
Can we send Bart Sibrel to Mars? "I'm in a television studio! This is fake!"
Re: 20 goto 10 - MS GOTO FAIL
This is a classic example of an operations-management failure. It's got nothing to do with the OS. And if you think only Microsoft has failures, you probably missed hearing about when Amazon's load balancing system brought its cloud service down a while ago.
Amazon in general has done a great job at operations. One manager there told me, "We are experts at dealing with emergencies, because we use Linux." The thing that is more difficult for Amazon, because they lack the systems engineering culture, is to develop complex software systems. So they provide the number 1 cloud service, but they can't offer higher-level services like instant e-commerce packages (like MS Dynamics).
Eadon, if you think Dave Cutler doesn't know how to design an operating system, or if you think Linux never fails without a lot of tweaking and patching, then I gotta wonder why you feel so passionate about a subject you don't actually know much about?
1. IE kinda sucks. Probably true.
2. Other browsers are flawlessly programmed, totally secure, and not in need of the same intensive testing and patching. Probably false.
Re: Now approaching ludicrous processing speed, Captain!
The issue is operations per second per watt. How much electricity do you consume to perform a computation, is the question. ARM doesn't use much power, because it's not very powerful. They're trying to solve partial differential equations, not play Angry Birds.
IBM, UBM, We all bm for IBM
IBM might have been a "powerhouse" of a corporation, but nobody thought they were software development wizards. Read "The Mythical Man-Month". They were famous for bloated, contorted systems. Having used their so-called timesharing system at Caltech, I can personally testify to how utterly horrible it was -- worse than UNIVAC EXEC, worse than DEC's TOPS-20 or RT-11, certainly worse than UNIX. You just can't imagine how backwards IBM was in its thinking...virtual card decks and job control language. They couldn't have done OS/2 without help from Microsoft.
IBM's strength was marketing and customer support. If you had money coming out of your ears, you hired IBM to build and run your computer center. Their support contracts were legendary. I know one case where a university's computer center burned down (U. Toronto I think), and within 24 hours IBM had a mobile center housed in trucks up and running in the parking lot. But the glorification of OS/2, that's just funny. Remember, it was not the flakey consumer Win95 that Microsoft brought out to compete with OS/2, it was Dave Cutler's NT operating system. That's what freaked out IBM, and caused one of their VPs to say he wanted to put an ice pick in Bill Gates' head.
Also, nobody was trying to write a "UNIX killer" back then. UNIX was something that ran on $50,000 workstations or $500,000 minicomputers, at universities flush with grant money. It wasn't a threatening personal computer operating system in the late 1980s.
I was in MS Research when Bob was still being developed. Supposedly, billg was given a demo, and after a few minutes he said, "Get that f****** clown off my screen". The hungarian prefix for the active agent object was subsequently changed to tfc.
Academic UI Theories
Another funny example is the Nass & Reeves theory that Microsoft got suckered into. That was the basis for projects like "Bob" and their hated paperclip character. During one demo to billg, he burst out "Get that ****ing clown off my screen!". After that, the agent objects were given the hungarian prefix tfc, which indicated how the developers felt about the idea.
I hear nothing but horror stories about the fragility of Linux from my friends at Amazon. On my workstation PC with ECC memory, neither XP nor Win7 has ever blue screened. I reboot my system a couple times a month, usually just when an update requires it (and yes I know Linux likes to hot-slide DLLs into a running system without rebooting, but don't assume that is a safe practice). There was a study done about ten years ago that showed that 50% of windows crashes were caused by parity errors in memory hardware. Does Linux just ignore or not detect memory errors?
XP was a version of NT, not a version of Windows 9x like ME. It was meant to merge all the functionality of NT and 9x into one system, and the technical challenge was that 9x was very permissive with apps and games. XP had to be "apps compatible", while maintaining the safer, more formal system interface of NT.
For a few cents, you can buy a brick and just pretend the net connection is down.
The NT microkernel architecture was rejected very early. It was going to support Win16, Win32 and even POSIX, but no such system was ever released. NT and even DOS/Windows 9x were still much more modular than UNIX. Microsoft used DLLs, COM interfaces and device-driver interfaces (DDIs) long before UNIX had them. Even more complex technology was developed later to support extensible GUI interfaces (linking and embedding, Visual Basic and such stuff). Microsoft did this to allow them to update small parts of the OS instead of shipping a whole OS release to their customers every time.
I remember installing device drivers on my Sun workstation in the late 1980s: you had to edit the interrupt vector tables and recompile the kernel, because at that point in time UNIX was still just a giant monolithic C program. NT was much more advanced when it came out in 1993 -- it supported threads and concurrency much better than UNIX, it had async I/O and events and light-weight coroutines (fibers), I/O completion ports, etc. These features were more or less added to Linux much later on (I've heard async I/O in Linux is pretty dodgy).
They do need to support OpenGL. There's a long weird history that Microsoft needs to forget and move on. OpenGL was not doing well in the late 1990s, because SGI was holding it to a model based on their hardware's functionality, functionality that could not scale up to what we see on PC games today. Direct3D (which was not just designed by MS, but also nVidia and ATI) was a much needed innovation with its vertex and pixel shaders and triangle meshes. Immediately, the hardware vendors added all the Direct3D functionality to OpenGL, via its extension mechanism. Probably both Microsoft and SGI were annoyed by this, but today OpenGL and Direct3D essentially do the same things. So it was never really the lock-in that Microsoft might have hoped for.
OSI was not successful, it was an academic charlie foxtrot. It's why a lot of the industry is so wary of W3C, because folks know how much damage an unfettered international standards committee can do. We evaluated parts of OSI at Bell Labs in the 1980s, the red book and blue book versions. TP4 was horrifically inefficient. One implementation took longer to negotiate a packet transmission than the connection-timeout period of TCP. The first two full implementations of X.400 in Europe couldn't even send email to each other, because the groups had chosen such different data format options, both totally in agreement with the sprawling specification. At Bell Labs, we used the phrase "mailer science" as a derogatory term for this kind of badly engineered, over-designed system. The net effect of this standards-committee circle jerk was that a cabal of contractors and universities got a gravy train of publications and grant money, while Europe and Japan delayed joining the internet for several years -- think about the opportunity cost of that.
If there is one lesson to take away from the OSI debacle, it is never, ever, let the ITU have control of the internet. America clearly has too much control now, but just keep the corrupt, inept UN committees away from it, or everyone will be sorry.
Microsoft hires Russian hackers, calling them "penetration engineers". That's gotta look good on your resume.
Facebook's product is an audience for advertising, which is also how Google makes a lot of its money. Is Google foundering? It is not at all obvious that something is fundamentally wrong, except that the stock market hasn't figured out what its share price should be.
Privacy is an important concern, but at least FB gathers information voluntarily, from the user entering information and clicking "like" on various items. This as opposed to the giant surveillance machine that is Google, monitoring web searches, what videos you watch on youtube, scraping your email.
FB faces two big technological problems. First is the technology of advertising, which is surprisingly complex. There is demographic analysis, even real-time auctions for ad space when you visit some pages. The only alternative I see is an option to pay for an ad-free service. Experience with mobile apps suggests that only about 1 in 1000 users will choose to pay. However, you can easily make 1000x as much money per user by charging someone $10 a month -- the revenue from advertising is measured in fractional cents.
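The back-of-the-envelope, with the numbers made explicit (the per-user ad revenue figure is my assumption, in the "fractional cents" spirit; all the constants below are illustrative):

```c
/* All amounts in cents per month; both figures are assumptions. */
#define SUB_PRICE_CENTS     1000  /* a hypothetical $10/month subscription */
#define AD_REV_CENTS_MONTH     1  /* assumed ad revenue per free user */

/* How many ad-supported users one paying subscriber replaces. If this
 * ratio matches the inverse of the conversion rate (about 1 in 1000
 * users paying), the two models break even. */
int subscribers_equivalent(void) {
    return SUB_PRICE_CENTS / AD_REV_CENTS_MONTH;
}
```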
The other technological problem for Facebook has been growth. People wonder why FB isn't adding fancy new features or doing all the things that G+ tried, but they are failing to comprehend the engineering challenge of scaling up data centers and software by orders of magnitude, to handle their ridiculous growth. Now that things seem to be settling down, I hope we will see FB turn their attention to making more improvements.
PCs vs iPads
It's not really valid to say that pad sales equal a loss of PC sales. They are somewhat different devices, and it's not a zero-sum game.
How many android phones does it take to equal an 8-core Ivy Bridge chip?
How did a smart company like Intel, with the best design and fabrication technology in the world, let a ho-hum processor like ARM threaten them? Their failure to anticipate the mobile device industry is an epic fail, Microsoft-like.
IE 9 too?
I've seen features (like image search) go dark for IE 9 within the last week. Have others noticed this? As their browser climbs to the top, will Google snuff out its competition by making search and YouTube only usable by Chrome?
There have been a number of reports within Microsoft on the cause of crashes. 50% are due to hardware memory errors, which is why servers use ECC RAM (error-correcting code). I like to build my own PCs, and quality definitely matters. Supermicro or Intel motherboards are a good idea too.
On a system with ECC and decent parts, like my last three machines, I have literally never seen a blue-screen crash. My Windows 7 machine runs for weeks, until some kind of update requires me to reboot it.
Re: Xbox Loss?
XBox and XBox Live have been pretty successful. And they have dashed Sony's hopes of replacing the PC with their PlayStation as the home hub device.
"911. What's your emergency?"
"Someone bought Microsoft Office instead of Open Office!"
"Sir, this line is for real emergencies only."
Jim didn't work on DEC's servers; he was a research scientist known for expertise in database systems. One of the best technical writers ever -- his book on transaction processing is one of the bibles of database design and theory. Jim worked at Tandem, IBM, DEC and Microsoft.
One of Jim's projects was TerraServer, the forerunner of Google Earth. To host complete satellite image coverage, his group had to buy military spy-satellite images from Russia. He also worked with astronomers to develop a unifying database of sky-survey images, which was only completed after his death (the WorldWide Telescope).
Jim was a great guy, and a great scientist, and his disappearance at sea caused a lot of sadness. Ironically, there was a large effort to search for his boat using online satellite images.
EC2 vs Azure
Amazon is dominating the cloud computing market. One place they cannot compete with Microsoft is in developing their own sophisticated software services, like MS Dynamics. And of course they are tying Azure services in with Win8.
Azure itself is pretty cool OS technology. Cutler's team spent a year just studying the formal requirements for geographically distributed data centers. They merged technology from NT, Hypervisor and (I'm guessing) the SQLOS layer. The scalability tests I've seen are surprising, way better than Linux and even better than NT at multicore parallel performance.
What makes me skeptical about Microsoft is their executive management. I still would not be surprised if they fail. Their lasting impact may be the diaspora of former Microsoft systems engineers, who are scattered throughout the industry today (for example, running EC2 at Amazon). Systems engineering rules. You don't learn it in college, and it's not part of the open source culture, so if you want to build something big and complex, you gotta find those industrial experts.
Re: Azure vs AWS
You don't know what you're talking about.
IBM once had 70% hardware market share
What is this new devilry? Something from Microsoft?
No. It is IBM, a demon of the ancient world.
This foe is beyond any of you...run!
Famous for being bad
BFE isn't really the worst movie or even the worst sci-fi movie. But it has become the most famous-for-being-bad movie, which is why it always wins these contests. Same thing just happened on io9. Still...it is kinda bad.
Why do we care what Greenpeace thinks?
Greenpeace: misanthropic, anti-growth, anti-technology, dogmatic, puritanical, zealous, upper-middle-class-white, romantic pastoralists.