Microsoft's R&D chief: the people problem with innovation

Rick Rashid, leader of Microsoft's R&D operation, said he could foresee cloud computing some years back. The challenge as a technologist, though, has been in anticipating the finer details of how the cloud and its related technologies - the data center, replication, and synchronization - will be adopted by people and …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    Trendy

    "Did we know it was going to pan out in a particular way. Did we know what's happening now with the shift to cloud services and the way businesses are thinking about large-scale computation? No. The exact details of how things pan out have to do with society, legal and government environment, and business climate. But were all the seeds there? Absolutely"

    In other words... did we think about it? Yes, then we dismissed it. Did we know it was going to be trendy and the mothership would dump $9bn for it? No.

    Let's be honest. It's not like "business" is some brilliant invisible hand. It's a bunch of boardroom monkeys jumping on the next thing that looks like a banana so they can keep their bonuses. In the long term? Hopefully Mr. Smith was right. God, let's hope so. Although, if true, I bet the West will be on the wrong side.

  2. R Callan
    Flame

    The Microsoft innovation problem

    As many people have claimed all over the web, Microsoft have never produced any significant innovation. Their "innovations" seem to come down to:

    A: Buy a company/product that is innovative.

    B: Re-implement some-one else's innovation.

    These abilities can be put down to a company that has too much influence and cash (arguably from predatory monopolist behaviour). Perhaps that is their "innovation".

  3. David

    Correction

    Rick Rashid is the head of the Microsoft Research organization, not the head of all Microsoft R&D, and the $9 billion figure is the budget for the whole of R&D, not just the Microsoft Research group that Rashid heads.

  4. N

    Heads in the clouds

    The problem with cloud computing, or whatever the evangelists like to call it, is trust.

    And that is the significant hurdle they have to overcome. Even if their products are perfect, which, looking at the past, they may not be, the users, whoever they are, must have absolute faith in the provider.

    And I for one certainly don't have anywhere near that kind of faith in Microsoft.

  5. Anonymous Coward
    Linux

    Bollox

    MS never innovates; it just copies someone else's R&D, then uses its monopoly to leverage the market. So the old joke goes that MS's R&D department is in Cupertino. See Alan Cooper's (the father of Visual Basic) book "The Inmates Are Running the Asylum" for a good discussion.

    MS owes its existence to Gary Kildall.

  6. Anonymous Coward
    Anonymous Coward

    tl;dr

    hindsight ftw

  7. Chris Bradshaw
    Boffin

    easy...

    ...getting an annual $9bn budget...

    ...creating $1bn businesses every few years over the last 10 years...

    IANAMBA, but gee, how hard can it be to create a $1bn business every few years given $9bn in funding every year? Especially if one can use some of the funding (say, $1bn or so) to buy products from last year's R&D spin-off???

    :-)

  8. Anonymous Coward
    Anonymous Coward

    R&D

    With billions spent on R&D by the brightest and best computer scientists in the world, Microsoft are still incapable of making an OS which doesn't overwrite an existing OS on the same disk.

  9. Ascylto
    Gates Horns

    Cloudy thinking.

    "Rick Rashid, leader of Microsoft's vast, multi-billion dollar R&D operation, said he could foresee cloud computing some years back."

    So, Microsoft's new strategy is to state: "Yeah, well, we knew about that ... so WE THOUGHT OF IT FIRST!"

    Innovation ... wow!

  10. Toastan Buttar

    Anyone can have a great idea.

    Games companies get inundated with them every day from hopeful third parties. The trouble is that in order to make a great product, everything depends upon the details of implementation - those minute-by-minute decisions which lead to a quality product. The same holds true for music, movies, books, cars, gadgets and toys.

    Great idea + crappy implementation = useless end result

    Crappy idea + great implementation = potentially great product

    Great idea + great implementation = killer app

  11. Ken Hagan Gold badge

    Timing is everything

    Sun actually tried to market cloud computing in the 90s, but it failed because it makes a telephone wire a single point of failure for your business. That's still true. If this guy "had the idea" a few years ago, then perhaps in another few years' time he will learn the lesson that Sun learned (or did they?) about ten years ago.

    Had mobes existed 30 years ago, the match between Windows 1.0 (very small, no proper protection, only reliable if you control all the apps) and that platform would have been obvious. By the time such devices were widespread, Windows was far too large and had to be stripped down to make CE.

  12. Mark

    What?

    "Rick Rashid, leader of Microsoft's vast, multi-billion dollar R&D operation, said he could foresee cloud computing some years back.

    The challenge as a technologist, though, has been in anticipating the finer details of how the cloud and its related technologies - the data center, replication, and synchronization - will be adopted by people and organizations."

    I predicted it even before that in the midst of a drunken slumber and I couldn't foresee the finer details either. Bollocks.

    I'll be sure to post in a few years' time about some of the other current tech-buzz happenings that I foresaw but wasn't sure about the fine print thereof. Tit.

  13. Tom

    Cloud computing is easy in theory

    It's near impossible when you have a major player ignoring all common standards and then patenting everything in sight in a desperate attempt to own everything rather than actually implement anything.

  14. Anonymous Coward
    Thumb Down

    Avie Tevanian?

    Rick Rashid was the PI on the Mach project. IIRC, Avie Tevanian (among many others) did a ton of work for Mach as well. Rick claiming that:

    "Did I realize in the early 1980s that the operating system I was building would someday run on a cell phone ..."

    implying that he alone built Mach reeks of academic dishonesty. A "minor" slip like that is not minor at all; it indicates a completely screwed-up perception of his role in the project.

    Perhaps Avie is getting shortchanged due to his testifying against Microsoft in United States v. Microsoft in 1998?

  15. nick perry
    Stop

    R&D

    I was under the impression that Microsoft R&D was "put together by monkeys" - James Doohan

    The great plan being to throw together bits of already existing ideas that work fine ... and see what happens, with the distinct understanding of:

    If it works - Great!!! Chalk one up for the R&D party. The downfall of this being that Microsoft loses money on all the bug fixes and phone support generated.

    If it doesn't work - Call it Windows!!!

  16. Giles Jones Gold badge

    Mobile phones did exist in the early 80s

    The Motorola DynaTAC was the first US-approved cell phone, in 1983.

  17. Anonymous Coward
    Unhappy

    Why bother

    We're still waiting for them to come out with a good OS.

  18. Ralph B
    Gates Horns

    Go Watch Star Trek Rick

    "Did I realize in the early 1980s that the operating system I was building would someday run on a cell phone. I'd have said: 'What's a cell phone?' - they didn't exist."

    S'funny, but Gene Roddenberry had a darn good idea what a cell phone was back in the 1960s. WTF _have_ MS R&D been doing for the last 20 years?

  19. Don Mitchell

    Innovation

    As far as touch screen user interfaces, I believe Dave Kurlander's work that later led to their "Surface" product had a lot of the ideas you see on the iPhone.

    It's a tired cliche that Microsoft never innovates, repeated by people who don't really know anything about technology but hate big corporations in general. Microsoft's NT operating system was light years ahead of UNIX in 1990, and Linux is just now catching up to it in terms of feature set. Windows 95 had a pretty good UI for its day, many of its ideas sneaking their way back into Apple's system later (clearly a lot of give and take between those two companies). OLE Automation was a major innovation that allows extensible interfaces on applications, something the Linux community tried and botched with the Bonobo fiasco. Windows was far more modular than UNIX, with dynamic libraries, object interfaces (COM), and device driver interfaces -- all ideas copied years later in Linux.

    If you start to look at multicore hardware, NT is still leading in terms of its Amdahl's-law characteristics (Linux levels off at 4 processors, NT at 8, and Microsoft's new Red Dog is rumoured to reach 128 processors before reaching asymptotic performance). The layman tends to look at superficial things, like the widgets in Vista and Mac and who had those first (well, Konfabulator and other third-party software did that long ago). They don't understand what goes on under the hood, and under the hood the NT kernel is still superior technology to the Mac or Linux operating systems.

  20. Anonymous Coward
    Paris Hilton

    R or R & D?

    The D is a big, huge, mega differentiator and I'd guess needs some sixth-sense crystal-ball-ology to meet a tight audit.

    How about: just let the R trickle down and into the wild with, say, a sweet UI whacked with APIs to see what is hot with a sample population?

    Top-Down or Bottom-Up trends are totally different things, with B-U runners always being things of surprise? (eg: i*art on the iPhone? What budget controller would seek to throw capital at that, then give a formal report to seniors about the progress of an i*art project (where * = f)?)

    So I suppose trends in Apple's App store at what is catching public interest might be worthy of analysis to identify core trends?

    Sounds like fun all the same.

  21. Anonymous Coward
    Paris Hilton

    Howabout?

    R = easy-peasy (every country in the world is probably researching IT or some aspect of it)

    D = easy-peasy too

    I for innovation = easily-peasily doable

    Finding a product, concept or thing people want to buy, now there is the kernel of differentiality.

    Speculation: onion skin

    In the past (say 2 to 10 years?) developers could focus on traditional workflow patterns and translate these into IT based applications with a suitable hardware surround.

    But most of that has happened to a greater or lesser extent already so now developers have to carve new ground (see Apple for examples of successful business models?) and that is where easy-peasy meets a different reality altogether.

    Couple that with a user base that prefers IE6 or older, and expect that the very intelligent people proposing a symbiotic hardware-software push to continually buy new stuff will fall flat (even the mobile market is struggling?), especially with the high costs involved.

    Problem = yes!

    Challenge = fantastic!

  22. John Smith Gold badge

    @Don Mitchell

    "It's a tired cliche that Microsoft never innovates, repeated by people who don't really know anything about technology"

    And that's a tired straw man argument put up either by Microsoft sock puppets or people who are deeply ignorant of the *history* of the technology they are talking about. On the off chance you are the latter, prepare to learn.

    Microsoft "innovate" by buying up companies, hiring staff *after* they have made innovations, or buying patents to enforce, then getting saturation PR to describe it as ground-breaking. This is not innovation as most people understand it.

    "Microsoft's NT operating system was light years ahead of UNIX in 1990,"

    Debatable. What features did you have in mind?

    What's not debatable is that the Director of Windows NT Development, David Cutler (and most if not all of his team), came from DEC, where he designed VMS. The most compatible part of Windows NT with Windows 9x was calling it Windows.

    "and Linux is just now catching up to it in terms of feature set."

    Which ones did you have in mind?

    "Windows 95 had a pretty good UI for its day, many of its ideas sneaking their way back into Apple's system later."

    Neatly airbrushing out OS/2 and the long-term work done by Xerox, which pre-dated all Apple & Microsoft work in this area. Windows keyboard shortcuts are straight out of the IBM Common User Access guidelines, BTW.

    "OLE Automation was a major innovation that allows extensible interfaces on applications."

    Setting inter-process communication standards (which OLE, ActiveX or whatever you're going to call it are) is a lot easier when you have a proprietary OS architecture (Amiga ARexx, anyone?) or a virtual monopoly on a very limited number of hardware architectures (x86-compatible and what others these days?). The IBM "Message Queue" approach is cross-platform.

    CORBA is the platform- and OS-neutral standard for this, but I don't know how widely it is used. The others were and are real cross-app scripting and control options that were around when OLE came out. Consensus in a monopoly is easy. It's tougher in a community, and tougher still to explain why having it is a *good* thing. I note the common lack of qualification: this is only doable on non-Windows platforms with significantly greater difficulty.

    "Windows was far more modular than UNIX, with dynamic libraries and object interfaces (COM), and device driver interfaces -- all ideas copied years later in Linux. "

    Funny, part of Microsoft's claimed reason for resisting the un-bundling of IE was that explorer code was so entwined with the core modules it was impossible to remove.

    I note you're now moving the goalposts from Linux to Unix in general. Unix always had the idea of high-level "block" or "character" mode I/O devices rather than a whole different API for each device. IIUC Linux made them installable without a full-scale kernel re-compile, something (I think) even the original Unix developers noted under "What we would have done differently." Linux might have gotten DLLs from Windows. Or from OS/400, an OS (and apps) originally designed to run as a large set of DLLs to reduce disk and memory footprints and allow running from the disk directly. Hard-core eXecute-In-Place for the palmtop developers reading this. I don't know enough other architectures in depth to comment on whether there could have been other sources. Burroughs B5000, for example?

    "If you start to look at multi core hardware, NT is still leading in terms of its Amdahl-law characteristics (Linux levels of at 4 processors, NT at 8 and Microsoft's new red dog is rumoured to reach 128 processors before reaching asymptotic performance). "

    NT (do you not mean an NT-derived OS like Vista?) would still be behind VMS (same architect), which IIRC topped out at a 64-processor cluster. And Amdahl's law warns that the speed-up hits the buffers when any part of the program cannot be parallelised. You'll need an app which is <0.78% serial to actually exploit that.
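    The arithmetic behind that 0.78% figure follows directly from Amdahl's law; a short sketch (the serial fractions here are purely illustrative):

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Amdahl's law: speedup = 1 / (s + (1 - s) / N) for serial fraction s."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# A program that is 0.78% serial can never exceed 1 / 0.0078 ~ 128x speedup,
# and even on 128 processors it only reaches about half that ceiling.
print(round(amdahl_speedup(0.0078, 10**9)))   # -> 128 (the asymptotic limit)
print(round(amdahl_speedup(0.0078, 128), 1))  # -> 64.3 (actual on 128 CPUs)
```

    In other words, 128-way scaling is only the ceiling; the serial fraction decides how close any real workload gets to it.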

    "They don't understand what goes on under the hood, and under the hood the NT kernel is still superior technology to the Mac or Linux operating systems"

    I'm not sure how much you understand either. "Superiority" is debatable. Expensive, insecure, proprietary, bloated and designed to force customer lock-in? Definitely.

    To anyone who grew up using Microsoft kit only, it can seem very impressive. To those of us who know of the wider IT world, it's rather less so.


    "It's a tired cliche ... repeated by people who don't really know anything about technology."

    I know a little of the history of this technology. I have described it on the off chance you are merely ignorant of it.

  23. John D Salt

    @ Don Mitchell

    So, let's be clear about this:

    1. You're saying that no Unix had dynamic libraries before Windows?

    2. You're saying that no Unix had device drivers before Windows?

    3. You're saying that no Unix had object interfaces before Windows?

    Perhaps I have misunderstood you in some strange way, but at the moment your grasp of "what goes on under the hood" (or "bonnet", as we say in English) seems to be roughly on a par with the staggering percipience of Mr Rashid, who managed to remain unaware of the existence of mobile phones years after they were invented (and quite possibly after I'd made my first call on one).

    John.

  24. rob
    Thumb Down

    Microsoft's Innovation

    Microsoft have innovated nothing. Full stop. All of their claimed "innovations" were either stolen, purchased, or are a combination of other open standards (Defrag -> Norton, IE -> Mosaic, AD -> Kerberos/LDAP, in that order).

    NT was way behind all other operating systems. NT was the most unstable and insecure OS of all at the time. Back then I maintained a server room of 15 Linux (Slackware) servers running the 2.0.x kernel, and an MCSE with his NT 4 server and a hotline to Microsoft. The NT4 server crashed more often than ALL of the Linux servers combined. I should know, I was the one called out to hit the reset button!

    The UI design of all Windows versions is god-awfully bad. There are plenty of good ways to implement a good menu system, and this has been shown time and time again in the plethora of Unix/Linux-based WMs that were available at the time.

    Linux may have had a limited feature set as it stems from the ideal "Do one thing and do it well" as opposed to the Microsoft manifesto of "Do everything half-assed and release patches often"

    Microsoft's COM is a heap of crap, rebranded into other heaps of crap that are then rebranded into other heaps of crap. COM -> COM+ -> ? -> ActiveX

    But maybe I'm wrong.. They did come up with the concept of Vista, an OS so hefty that you need more RAM than most laptops can support just in order to display the desktop and do a little light web browsing. Hooray for innovation!

  25. ChrisInBelgium
    Happy

    @Don Mitchell

    "under the hood the NT kernel is still superior technology to the Mac or Linux operating systems"

    Haha! Thanks for the good laugh ;-) You should have used the 'Joke' icon though...

  26. John Smith Gold badge
    Boffin

    @John D Salt

    "You're saying that no Unix had dynamic libraries before Windows"

    I looked this one up. I knew that OS/400 used dynamically linked libraries which could be shared between multiple applications, but I was not sure how far back the technique went, and IBM were never that keen on talking technology.

    The key development was the somewhat forgotten but seminal Multics OS, developed jointly between GE, Bell Labs & (I think) MIT. It did use dynamic linking (in the Windows sense) to share data objects and program functions between different applications.

    When Multics hit the skids, Ken Ritchie at Bell Labs knocked up Unix. He wanted pipes (also Multics) but the PDP-11 address space was 16 bits max (Multics hardware had 24, huge for the time), so statically linking a whole app once was (I guess) easier to implement. It was a specific design decision. So the answer is no.

    The POSIX standard re-introduced dynamic linking.

    Multics was a bit of a "grand challenge" project of its day, covered in the ACM literature and in at least two books I know of from the late 60s onward. I'd expect quite a few Microsoft staff would know of it or possibly have worked on it.

    Dynamic linking on a processor without an MMU (8086/8088) might be tricky, but dynamic linking is another non-Microsoft innovation.
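    To make the technique being argued over concrete (a present-day aside, not something from the thread): dynamic linking just means locating a library and resolving its symbols at run time rather than at build time. Python's ctypes can demonstrate it in a few lines, assuming a Unix-like system where the standard C math library can be found:

```python
import ctypes
import ctypes.util

# Locate the C math library at run time (no compile-time binding at all),
# load it, resolve the `sqrt` symbol, and declare its signature before calling.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # symbol resolved and called dynamically
```

    Under the hood this is the POSIX dlopen/dlsym machinery, i.e. the same run-time lookup that a Windows DLL or a Multics-style shared segment provides.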

  27. Mike

    re: Avie Tevanian?

    If there's anyone who can take credit for building Mach, it's the PI. Obviously, building an OS takes a lot of work. There's plenty of credit to go around.

    But the PI had to come up with the ideas, find the money, justify the research, prioritize things, etc. Did the architect build the house, or was it the guy who hammered the nails? (And think about the people in between.) They can all rightly say they built the house.

  28. John D Salt

    @John Smith

    "So the answer is no."

    Quite sure? Final answer?

    I am not sure what year any of the various OSsen that go under the name of "Windows" introduced dynamic libraries, nor when the relevant POSIX standard was defined, but I'm pretty dam' sure that Sun Unix at least had the beasties in, ooh, lessee, 1989.

    Now I've stirred the memory pot, I wish I could remember if you could use external .atr files in Simula on any Unix. I only played with them on VMS, unfortunately.

    Good to know some people remember Multics; I tend to doubt that many microsofties were ever Multics refugees, though. Remember that Win 3.1 still hadn't quite managed the leap to multi-programming, as Bill thought it unnecessary on a PC, hence the dreadful nonsense of TSRs.

    Good to see OS/2 remembered, too. I can remember a time when the fastest and safest way to run Windows applications was to run them under OS/2 Warp.

    But, as ever, the Microsoft fanboys try to re-write history -- possibly because they are simply ignorant of it in the first place.

    All the best,

    John.

  29. John Smith Gold badge
    Coat

    DLLs: a qualification

    "Ken Ritchie" is a Freudian slip. That should have been Ken Thompson.

    On the various lecture notes I've seen it was a design decision to go with static linking in Bell Labs original Unix.

    The flexibility and (potentially) small memory footprint of dynamic linking are outweighed by the complexity and performance hits from address translation and from reading stuff in off the disks of the time. PDP-11s did not have address-translation MMUs as standard, IIRC.

    From your experience, at some point the code base forked or was re-implemented and dynamic linking was back on the menu. I think all Motorola chipsets after the 68000 had MMU companion parts or an on-chip MMU, either of which I suspect is a *very* strong enabler for this.

    Getting decent performance out of dynamically linked software on an architecture without a hardware MMU is, I suspect, "challenging."

    As originally conceived, Bell Labs Unix: still no. But evolved versions of Unix (including versions pre-dating Windows 3.0): yes.

    And claims that Microsoft invented dynamic linking? Nonsense. See all the Multics comments. The big reference, I believe, was "The Multics System" by Elliott Organick. I always chuckled that Mr (Dr?) Organick was actually originally trained as a chemist rather than a computer scientist.

    I also remember an old Dr Dobb's article discussing software ICs and dynamic linking in DOS. The old Jensen & Partners compiler company offered a range of languages using common back-end libraries by doing dynamic linking without DLL-format files. Windows may have been out by then.

    NB: OS/2 is still knocking about. It was licensed by a US company whose name (I think) started with a T. Still attractive for specialised line-of-business apps and, I think, some US local authorities. They can't use the name, however.

    "The Microsoft fanboys try to re-write history"

    I suspect this is Microsoft's crowning achievement: a generation of developers and managers (and their budgets and influence) who think software begins and ends with Microsoft and is only as robust as Microsoft *can* make it.

    Time to get my anorak and leave.

  30. Jodo Kast
    Go

    It's not your job

    But others out there know how to do the job!

    I work with end-users, from call center reps to executives. They tell me quite plainly what they are looking for.

    What they don't tell you is that they hate learning new technology. Case in point: I developed a system identical to Twitter, but it required a web server. I don't have funds to throw away hosting Twitter-style websites for anonymous users, so the app required a web server.

    Twitter popped up as a web service, and it has taken the Internet community by storm. Since there is no 'technological hurdle' to jump over, Twitter has been successful.

    Once Microsoft learns that most users hate programming, then perhaps their operating systems and online services will get better.

  31. John D Salt

    @ John Smith

    ""Ken Ritchie" is a Freudian slip. That should have been Ken Thompson."

    Don't worry, I didn't spot it. After all, who bothers with surnames for Ken, Brian and Dennis?

    All the best,

    John.

  32. John Smith Gold badge
    Gates Horns

    @jodo kast

    "Once Microsoft learns that most users hate programming, then perhaps their operating systems and online services will get better."

    Why?
