Node.js Native breakthrough: cloudy C++ on steroids

Cancer or not, Node.js is attracting plenty of interest, and just like smoking cigarettes at school Node.js is seen as the cool thing to do. Started by Ryan Dahl in 2009, this server-side scripting environment has in less than three years attracted enough coverage to persuade Microsoft, the world's largest software company, …

COMMENTS

This topic is closed for new posts.
  1. Pete Spicer

    Downvote away, you know you're going to want to.

    Two things.

    1. Why the **** are you giving Ted Dziuba any more credence as a tech writer than he deserves (which is none)?

    (Yes, he doesn't like Node.js and he doesn't have to use it. For the rest of us, it's a tool to be used, should you wish to do so, and it carries upsides and downsides like everything else. For anything that's intended to service long-lived connections, it's not really a bad solution. At least you can actually achieve such things in Node.js, rather than trying to do it on top of Apache/PHP, if you have *any* plans about scalability)

    2. There is a very, very good reason why we put abstraction layers like PHP or JavaScript between the raw web and the general developers. Yes, it's slower, but damn is it safer. All software has bugs, all software has points where vulnerabilities can creep in. If you sit and code at the C++ level, you're more likely to trigger them.

    I'm not disputing that PHP (and JavaScript) has produced a lot of very bad code. But imagine, for a moment, that the bad PHP coders then go off and try to write C++ apps on their little servers. That, assuredly, is a disaster waiting to happen. At least there is a modicum of insulation, without them having to worry about buffer overflow exploits on top of the normal gamut of vulnerability classes.

    1. Spearchucker Jones
      FAIL

      Huh?!?

      I didn't down-vote you, and in what follows I'm using "you" in a general sense.

      You're doing what Microsoft have been doing for years - you're relying on tools to insulate your organisation or yourself from shit programming practices.

      The implications (security, robustness, performance, scalability, maintainability) of Node are anecdotal and yet to be fully understood. A 20-something pimply-faced kid will make an architectural decision (to use Node) purely because he's teh hackerz and knowz teh JavaScript.

      A better approach, and one I push for pretty strongly at work, is to educate teh hackerz. Topics range from authentication and authorisation to configuration management, DR and monitoring.

      Node works well if you're working out of your garage, writing a web support page for a fart app in iTunes. But if you're dealing with user data, financial data or anything business critical? It. Just. Doesn't. Cut. It.

      1. Pete Spicer

        Re: Huh?!?

        Here's the thing, I read your post and everywhere I see 'Node', I can substitute in 'PHP' because it's precisely the same argument, just held a few years back, and with all the same connotations.

        Bad coders make bad code no matter the language. Even skilled developers still make mistakes in any language, but at least the higher level language environments do insulate against certain classes of vulnerability - leaving a smaller potential surface of attack.

        PHP or Node may not be perfect but I think you'll find it's a touch harder to execute raw machine instructions via them than it might be if the front-side apps are in C++...

        Yes, educating the coders is important. There just aren't enough people who understand authorisation, authentication, configuration management, DR or monitoring; even basics like preventing SQL injection, or dealing with XSS and CSRF, are still not entirely understood by an awful lot of coders out there. And educating them is undoubtedly going to help.

        Unfortunately, the more you try and remove protective layers, the more coder stupidity is going to cause problems, and it's not like the current generation of coders is that great at doing what they do. Nor the users for that matter.

        Bottom line: Node isn't fully understood yet, sure. That doesn't mean it's not a suitable tool for certain cases. I would love to see a tool that has security, robustness, performance, scalability and maintainability - and still allows me to work in a reasonably safe environment (i.e. not C++, because it's far too easy to have buffer overflows) while handling the sorts of matters that PHP can't. If I don't use Node, I'm basically looking at Python/Twisted or Ruby/EventMachine for that use case... are you telling me that those meet all your criteria where Node does not?

        1. Steve Crook

          Re: Re: Huh?!?

          There are an awful lot of bad coders out there, and there's an awful lot of code that needs to be written. However, out of all of that code there's probably a relatively small amount that actually has requirements that would make an interpreted language impossible to use. We also don't have enough people with the wit and wisdom to work out what is appropriate and what is not.

          Frankly, if we end up with people going back to using C++ for this sort of stuff, I predict a flood of buffer overrun security issues in servers all over the globe. I know you can write safe C++, and the people coding in C++ should know how to write safe C++. But that's the theory; practice and experience tell us something else...

          We've got a generation of devs brought up on Java and C#, they're going to require careful supervision...

    2. Anomalous Cowherd Silver badge

      Mebbe you're right and mebbe you're wrong.

      But 5 seconds to calculate Fibonacci sequence to 40? My dad had a pocket calculator that could beat that, and that was in 1983.

      Sure, it's a contrived example, but other than the recursion the code looks OK on the surface, and I've interviewed devs who've written worse. Abstraction is good - actually great - to a point, but like any model you've got to know when to apply it, and that comes from experience and understanding how it works under the hood.

    3. Anomalous Cowherd Silver badge

      To follow myself up

      I thought it best to quantify the outrage of 5 seconds to calculate this, and I confess I've surprised myself.

      Same code in Java: 854ms (average of 100 runs after the 1st)

      Same code in C: 3751ms (average of 100 runs)

      Same code in Perl: I got bored, but the first round took about 163000s

      So Node.js with its slightly daft event model, at 5600ms - call it 5500ms to generously allow for network stack overhead - is only about 47% slower than the same implementation in C, which is itself a whopping 4.4 times slower than Java.

      So although I vaguely dislike Node.js I'm reluctantly going to qualify my previous post as follows: "shit code can be written in any language; and never underestimate the power of an optimizing JIT compiler".

      JS in itself isn't a bad language - I was writing JS OO libraries as far back as 1999 - but it stops bad coders from shooting themselves in the foot, and that's not necessarily a good thing.

      1. Anonymous Coward
        Anonymous Coward

        Both nasty

        This Javascript versus C++ debate is missing the point. They are both *horrible* languages to do asynchronous programming in. You have to write reams of boilerplate which is completely unnecessary.

        streamline.js (a language which compiles to Javascript) and C#'s forthcoming async support do it right. Both transform more natural looking code into the tangle of callbacks needed for asynchronous programming.

        Writing asynchronous code in a language that doesn't support it is like doing maths in English: pointless masochism.

      2. edge_e
        WTF?

        Re: To follow myself up

        Am I missing something? When you say Fibonacci sequence to 40, do you mean finding the 40th term? If so you're doing something seriously wrong to get such ludicrous times.

      3. Sentient

        Re: C vs java

        @Anomalous Cowherd

        If your C code is slower than Java you don't know what you're doing.

        Glad you're sticking to Java.

        1. JDX Gold badge

          Re: Re: C vs java

          Well done for failing to read his post. He said he implemented the SAME code in each; this is a discussion about how the compiler works with the SAME input, not about how to optimise code.

          Go back to thinking how clever you are now.

          1. Anomalous Cowherd Silver badge

            Re: Re: Re: C vs java

            Much obliged JDX, you saved me the effort. The code is taken from the linked-to article, and it's supposed to be bad. The point is to compare the same bad code in different languages.

      4. magnetik
        WTF?

        Re: To follow myself up

        What on earth are you running your Fibonacci code on? I just tried a PHP and Python test and both generate 40 numbers in the series in under 10ms on a tiny little VPS. You must be doing something very wrong, even an old C64 would give better results than yours.

  2. Sean Timarco Baggaley

    OS X uses C++?

    News to me. All Apple's documentation—which I'm inclined to trust as I'm learning to develop for their platform—makes it crystal clear that OS X is built on... Objective-C.

    Objective-C had a similar birth to C++ and, like the latter, evolved from the original C programming language. Nevertheless, Objective-C took a different path to C++ and remains at heart a "C with Objects" language with a very clean design considering its age. By contrast, C++ has become a nasty, brutish camel of a committee-designed language that has more ways to blow your own foot off than a minefield.

    1. Anonymous Coward
      Pint

      Re: OS X uses C++?

      It depends where in the source code you are looking. A lot of the kernel libraries (IOKit etc.) are C++, while user space (applications etc.) can use ObjC. I suspect the performance slant of the article gave rise to the author's ambiguity.

      I'm not going to get into the whole C++ is the spawn of the devil('s committee) thing as it's really tired. There's nothing wrong with a feature-rich language providing it is designed right - if you don't like (or need) features, don't use them. C++ is designed so that language features do not add overhead if they are not used.

    2. Anonymous Coward
      Anonymous Coward

      Re: OS X uses C++?

      @Sean Timarco Baggaley: "All Apple's documentation—which I'm inclined to trust as I'm learning to develop for their platform—makes it crystal clear that OS X is built on... Objective-C."

      You're confusing the application development framework with the underpinning of the OS itself. The fact that a particular environment is coded in one language doesn't preclude it from offering an API or development framework in another. There are many parts to OS X - some will be coded in C, others C++, and some in Objective-C too.

  3. Christian Berger

    I guess common grounds would be Lazarus

    That's native code, memory-safe strings, and event-based processing all in one package. The only problem is that those features currently require you to use the graphical interfaces, so you'll need an X11 server. That is, however, only a problem with the current libraries, and there's already talk about fixing it.

    C++ has the huge problem that a) it doesn't even try to prevent you from shooting yourself in the foot and b) there are only a few people who are able to write C++ code without shooting themselves in the foot.

    In modern Pascal you have native string types, the most common of which is arbitrarily long and offers copy-on-write, so in many cases you only need to shift references around. Plus you can opt in to things like array boundary checking or integer range checks. (You also have working exceptions, of course, so you can safely drop connections or produce proper error messages.)

    1. Anonymous Coward
      WTF?

      Pascal??

      "In modern Pascal"

      Oh please, give it up; Pascal is dead and buried. Get over it. C++ is a lot more powerful than that kids' language, and if some programmers shoot their feet off because of that, it's not the fault of the language. A poor workman blames his tools.

      1. Christian Berger

        Re: Pascal??

        Please, show me 10 programmers who can actually write good C++ code under real-life conditions.

        If it was about power, people would use Assembler and add a macro pre-processor for it.

        1. Anonymous Coward
          WTF?

          Re: Re: Pascal??

          "Please, show me 10 programmers who can actually write good C++ code under real-life conditions."

          I could show you hundreds. Perhaps you should try working with professionals instead of students, or BAs who reckon they can program because they once did a course 10 years ago.

        2. stanimir

          Re: Re: Pascal??

          Assembler can't realistically be used "for power" by normal (or even leet hacker) human beings, due to out-of-order execution, prefetching, etc. You need an optimizing compiler for anything non-trivial. There is a high chance a C or (even) Java compiler would generate better code than hand-written assembler.

          Hand-written assembler can be a clear win in some cases, but those cases are just too few.

      2. TonyHoyle

        Re: Pascal??

        No, it's not dead... it's still used commercially. Modern Pascal has modern features - generics, anonymous functions, namespaces, etc. - and is far, far from what you probably learned at college.

        Its big downside is that it's single-pass, which makes for some difficult choices as the program architecture gets more complex.

        C++ is just different.. not more powerful, and I regularly write code in both where the situation demands it - along with half a dozen other languages. There's no overall best language, really.

    2. Anonymous Coward
      Anonymous Coward

      Horses for courses

      @Christian Berger: "C++ has the huge problem that a) it doesn't even try to prevent you from shooting yourself in the foot and b) there are only a few people who are able to write C++ code without shooting themselves in the foot."

      Why does every single article about C++ attract a gaggle of "ooohh, C++ might blow your foot off" comments? You've mentioned 'shooting', so let's rephrase this as if we were actually talking about a gun:

      "A gun has the huge problem that a) it doesn't even try to prevent you from shooting yourself in the foot and b) there are only a few people who are able to fire a gun without shooting themselves in the foot."

      Now, how clever does that sound? C++ can be like a gun - they are both powerful and can be disastrously misused in the hands of someone untrained, inexperienced, or unbalanced. But they both have essential roles in the hands of trained, licensed professionals. And clearly, point B is false. There are literally thousands of professionals around the world who use C++ to great effect in "real-life conditions". And it's nice to occasionally see articles covering developments that concern the language. If you aren't one of those people, feel free to ignore the articles and - above all - don't even look at the comments, let alone contribute.

      1. Anonymous Coward
        Anonymous Coward

        Re: Horses for courses

        @Ralph 5.

        Everyone mentions the shoot-your-foot-off analogy because Stroustrup was widely reported as having said it, as anyone old enough to remember will tell you.

        "It's harder to shoot yourself in the foot with C++ [than C], but when you do, you blow your whole leg off" was the main claim in the "I love Ada because it was designed to be bug-reducing, even at source-code level" versus "I love C++ because it looks like what I did on my uni course" wars of the 80s.

        I wrote a library the other day which allows CRM 2011 development to be ten times as fast as without it. This doesn't mean Javascript is brilliant. It could have been done in any language. C++ isn't brilliant either. The reason it's used on all those things, is an accident of history. IMHO there is no silver bullet.

        1. Anonymous Coward
          Anonymous Coward

          Re:Horses for courses

          @AC: "Everyone mentions the shoot-your-foot-off analogy because Stroustrup was widely reported as having said it, as anyone old enough to remember will tell you."

          The problem is that the people who "quote" Stroustrup (misquote is more apt) seem to have no idea what he was actually talking about. The gist of it is that C++ is designed to be safer than C, i.e. it is more difficult to cause accidental harm, but in the hands of an incompetent or ignorant developer the damage can be much more severe. And that is the point I was trying to make - C++ is a powerful language, but that doesn't inherently make it dangerous. And the key ingredient - to my mind - is that it maximises *choice*. You can write something very much akin to assembly language if you want (or inline assembly where it really counts). Or build elegant, abstract frameworks. Or both. You can bolt garbage collection onto it if that suits the context, but the language doesn't force anything on you.

          And my title - horses for courses - should have made it plain that I don't see C++ as a panacea for all software problems. Although I primarily use C++ (many of the tasks I address could not be done in anything other than C/C++), I use a variety of other languages depending on the context. I'm just sick of the "C++ is scary/cryptic/dated/etc" mentality. C++ is constantly evolving, and is a powerful contributor to modern software development.

    3. Anonymous Coward
      Anonymous Coward

      Re: I guess common grounds would be Lazarus

      Christian,

      My last company and my current company are both completely C++ shops. Each of them had at least 10 people who could code C++ extremely well, producing extremely stable applications. One was producing shrink-wrap software that we couldn't afford to have crash or be packed full of vulnerabilities. The second produces realtime (not hard realtime) software that processes ridiculous amounts of data. Neither organisation could have produced the same thing in a garbage-collected language, for performance reasons.

      With every language it is a case of picking the correct tool for the correct job. C++ is there when you want blinding speed in an object-oriented package, and don't mind paying additional development costs to get it (C++ has longer development times than the garbage-collected languages - from compile times, from the additional checking for memory leaks or corruption that you have to do, and from the simple fact that the language is more complex).

      I'm happy to use garbage collected languages or dynamically typed scripting languages when the time is right for them.

      By the way, copy-on-write is not a panacea. In fact, in multi-threaded applications the additional overhead of reference counting (even with compare-and-swap) can be huge. For blinding speed, I like the fact that I have a choice about which string library to use. I can use a copy-on-write library; I can use a hybrid library (for example, one which has a small stack-based buffer for strings that are deep-copied around, plus a pointer to a copy-on-write string); I can use a copy-always library; or I can use an explicit deep-copy-based library. I can even come up with additional approaches as and when I need them. Of course, in 90%+ of applications built around the world the default will be fine, and that is why Java, C#, Javascript, Perl etc. are so popular.

      1. stanimir

        Re: Re: I guess common grounds would be Lazarus

        @AC, Feb 17, 13:32GMT

        If you have free cores to run a non-STW (stop-the-world) garbage collection, I can't see the overhead of the GC environment. As a plus, GC allows for a lot of concurrent algorithms that are much harder, or even impossible, with standard ref-counting.

        If you rely on CAS for reference counting, that's already quite a substantial overhead even on modern CPUs (Nehalem+), esp. if all of the CPU caches are not interconnected - even non-contended CAS costs more than local latency.

    4. Ocular Sinister
      Mushroom

      Re: I guess common grounds would be Lazarus

      Christian, I think you are kidding yourself. If you are a programmer capable of shooting yourself in the foot with C++, you will quite likely achieve the same in other, higher-level languages. Maybe not buffer over/under-runs, but there are a myriad other ways to write crap, insecure software - SQL injection, lack of input validation, XSS, ... the list is endless. A crap programmer will shoot themselves in the foot regardless.

      With all this shooting at feet, it's a miracle there isn't some kind of explosion...

  4. Eddie Edwards

    Zen Question

    Here's a Zen koran for you.

    If Ted Dziuba writes an article, and El Reg doesn't link to it several times, does anyone read it?

    1. Anonymous Coward
      Pint

      Re: Zen Question

      I think you mean "koan", unless "koran" alludes to the whole "my programming language is better than yours" religious thing.

      1. Eddie Edwards
        Pint

        Re: Re: Zen Question

        I think you're right :)

      2. Anonymous Coward
        Anonymous Coward

        Zen Koran

        If mohammed is the last prophet, and he's dead, should we still listen to him?

  5. Matthew Smith

    Ugh. Javascript.

    Javascript really isn't scalable, in the same way that RoR isn't. It has no concept of threading, and its inheritance model is a bodged job. It might allow interfaces, but only in a kinda, sorta way. It's fine for manipulating web pages, but not for heavy, serious cloudy backend processing. Which is where RoR also fell down.

    If Google stick with Dart, that'll do the same job better.

  6. JDX Gold badge

    That Cancer link is hilarious

    My favourite: "Node Punishes Developers Because it Disobeys the Unix Way". I'd never heard of Ted before, but he sounds delightful and I think I shall bookmark him alongside TheDailyWTF.

    1. Destroy All Monsters Silver badge
      Trollface

      I don't know whether I should be dismayed, because that Dziuba guy sounds like a maintainer of 4chan popping speed pills, or moderately in accord, because underneath all the Kid Rage Talk there are these points:

      1) Separation of concerns is good, moving applogic into the webserver is bad.

      2) You are not as blocking as you naively might think you are

      ... but then we knew all that, right?

      I still don't get where his REAL problem is.

  7. AndrueC Silver badge
    Trollface

    >C++ is so compelling that much of the world's most popular software still uses it - OS X, Facebook, Chrome, MapReduce, Windows 7, Firefox and MySQL to name just a few

    People now need to be /told/ this stuff? Oh gawd I'm feeling old.

    Worryingly, I actually do struggle to understand the main thrust of the article. Something to do with cloud computing and scripting, eh? Wow. What will these whippersnappers think up next?

    1. Anonymous Coward
      Anonymous Coward

      Yes, us youngsters have to be told facts to know them...

      I'm so bored of the condescending primary-school-teacher tones of the upper-middle-agers in the forums here... I'll hold my hands up and admit I'm not a C/C++ aficionado, and yes, I do rely on node (and javascript in general) to easily and succinctly perform tasks that could be performed more efficiently in other languages.

      But now I have to be clairvoyant about existing large-scale software architectures and their programming environments? Get to ****

      1. AndrueC Silver badge
        Boffin

        Lol, at least you didn't down vote me. It was sort of a joke but it also has a serious point. I've been involved in IT as a software developer for 25 years now. For most of that time I've understood everything and it's all made sense. But over the past five years I've noticed that some technologies are appearing that seem to be unusual. Things that don't quite make sense.

        It's probably something everyone goes through (my Dad gave up on computers when microchips started to appear). The serious point I'd like to make though is that there comes a time when the older generation do start to lose touch. I wonder if there's scope for an El Reg article here?

        Hmmm.

        A related concern here is that, yes, C++ is still important. CPUs are important. OSes are important. 'The Cloud' is a cool concept but the up and coming IT generation is going to have to understand and support 'old school' tech. You can't operate a cloud without understanding assembly language. Not long term anyway :)

  8. Anonymous Coward
    FAIL

    C++11

    Speaking as a C++ developer, I do wish they could just leave well alone. The language is complicated enough already - if I need any more meta-style programming there's Boost. All they're doing is creating more potential obscure questions to be asked in job technical tests.

    "C++ developers no longer have to build or pick their own libraries to achieve concurrency in C++ applications."

    What??? There is already a standard threading library - it's called pthreads! It's not hard to use for any half-decent developer. And what happens if I don't want multithreaded but want multiprocess - has it got a wrapper for fork() (which Windows doesn't even support) and shared memory, semaphores, message queues, pipes, fifos, sockets, signals? No? Why not? Either do it properly or don't bother. C++ should be a language, not an environment.

    Yet another half-baked chuck-in-everything-that's-cool proposition from a committee that really needs to find something more productive to do with its time.

    1. Anonymous Coward
      Anonymous Coward

      Re: C++11

      Speaking as a long-time C++ developer, I have to agree about the half-arsed concurrency additions to C++11. The only things that actually need to be supported by the language, in my opinion, are atomic operations and well-specified barriers, so that we can write such code in a portable way.

      But other than that, most additions to C++11 are quite welcome. I won't name them all, but things like type inference, lambda functions, constructor delegation, as well as the STL cleanups and improvements, were long overdue and really make the language much nicer.

      Still, won't beat Lisp anytime soon.

      1. Anonymous Coward
        Anonymous Coward

        Re: Re: C++11

        There is a huge amount of C++11 which I am really looking forward to. Of course, the nice thing is that everything is pretty well backward compatible, so if you want to live in your C++98 or C++03 world, you can remain there. Personally, one of the biggest things that will improve life is move constructors and all the spin-off stuff from that. Being able to do the following without a tonne of temporaries slowing everything down is huge:

        MyComplexType a, b, c, d, e, f, g, h, i;

        a = b+c+d+e+f+g+h+i;

        Small things like type inference and initialiser lists will make using boost and the standard library much nicer. I could go on, but I won't. Just suffice to say I'm looking forward to it.

    2. Anonymous Coward
      Anonymous Coward

      Re: C++11

      "What??? There is already a standard threading library - it's called pthreads!"

      Really, and I assume that pthreads is platform independent? I assume it works on Windows (pthreads-w32 is a poor substitute), and on embedded systems - both huge areas for C++. The lack of a memory model has been one of the biggest issues with C++ since its inception. It leads to inefficient threaded code, and it leads to broken threaded code, especially since C++ allows access through to the assembler so many more people are using native compare and swap in their code. Having memory barriers and atomic operations in the core language is huge.

      1. Anonymous Coward
        WTF?

        Re: Re: C++11

        "Really, and I assume that pthreads is platform independent?"

        Frankly, who cares? If you're coding in C or C++ then you're going to be using OS-specific APIs at some point anyway. That's where #ifdefs come into their own. And if you don't want those, then that's what Java was invented for.

        Utter bollocks. If your memory management causes threading issues then you're doing it wrong. Personally I'd go for multiprocess anyway: you get all the benefits of multithreading without most of the problems. (It has a couple of its own, but nothing significant.) But oh dear, Windows doesn't do multiprocess very well, does it? Too bad, eh? I won't be ditching fork() anytime soon just on the off chance someone might need to port it to a Microsoft platform.

    3. JDX Gold badge

      Re: C++11

      OpenMP offers language-level multithreading. Shame it hasn't really caught on, because for simple parallel tasks it is way less code than 'proper' threading libraries.

  9. Jeff 11

    I think it's the lack of libraries and frameworks that stops software houses from adopting new languages. No-one's interested in producing these for node.js, because no-one is using node.js apart from in specialised, high-concurrency apps which need to cram as much functionality as possible into as few server resources as possible... and which don't need bloated frameworks and libraries anyway.

    For the beginner or intermediate developer - which these days, is about 99% of the developer population - doing things raw means doing things wrong, and that's why they'll stick to Apache and PHP.

  10. NomNomNom

    thanks guys but think I will stick to VB.NET

  11. Dave Ashe
    FAIL

    99% of situations don't require node.js. I had a customer where Cisco ACE was not performing well (it was kicking users off for no apparent reason).

    I coded a load balancer using PHP, Apache and MySQL, and it has been working very well for over a year now.

    Why would I even want to go there - learning a new framework, new tools, new webservers (node.js can't work on its own; it needs some other kludge of servers for production)? It just doesn't make any sense to me.

    A correctly configured installation of Apache, MySQL and PHP works well enough for anything you can throw at it - you can even have in-memory variables shared between processes using APC or memcached if you really need high performance - and for the most part MySQL is fast enough, since identical queries are cached in memory as long as you have configured it correctly.

    I've had enough problems with the java jvm in production environments never mind using javascript!

    1. sabroni Silver badge

      >>I've had enough problems with the java jvm in production environments never mind using javascript!<<

      The two are completely different in pretty much every respect but the name. Javascript gets a load of shit because idiots think they can write in it, but for client side in browser code it's very powerful and flexible. You just need to know which bits to use and which bits to steer well clear of....

      1. Dave Ashe
        FAIL

        It's fine for client-side scripting, but server-side? Give me a break - it's already been tried. It was called server-side JavaScript, aeons ago...

    2. Destroy All Monsters Silver badge
      Facepalm

      >>I've had enough problems with the java jvm in production environments never mind using javascript!

      Yes, I suppose so. Your "production environment" wouldn't be an Acer laptop running in your cellar hanging off the fridge power strip and pumping bad early-2000 websites with blinking gifs to innocent punters?

      1. Dave Ashe

        Ohh, you're so clever..

        No, actually you are completely wrong; it's running a state-of-the-art, AJAX-intensive customer management system with datagrids, trees etc. on high-end servers.

        This is just the sort of idiotic comment I've come to expect from lame internet trolls like yourself.

        Just for the record, you were probably not even working on the internet pre-2000, and were still in your nappies.

        1. Destroy All Monsters Silver badge
          Pint

          Re: Ohh, you're so clever..

          >>Just for the record, you were probably not even working on the internet pre-2000, and were still in your nappies.

          Don't know why you would want that for the "record", nor for whose record exactly, but I think you mistake me for someone from a younger generation. Connecting with a Tektronix 4014 to the Unicos machine downstreet? Yup, been there, done that.

  12. Paul Johnston

    No news is good news!

    An alternative take on not hearing much about RoR is that it has got past the hype and is at the point where people decide whether to use it based on its merits, rather than because all the cool kids do!

    Funnily enough, an article bringing RoR and JavaScript together doesn't mention that the former now uses the latter:

    http://www.rubyinside.com/rails-3-1-adopts-coffeescript-jquery-sass-and-controversy-4669.html

    1. peyton?

      Re: No news is good news!

      I concur. I actually thought it was an ironic statement in an article about C++, a language not exactly 'in the news' but still immensely popular and useful.

      Certainly, if one were to go by job postings here in the US, RoR is still alive and well, while Node.js is pretty much non-existent.

  13. David Dawson
    Trollface

    Grails wins!

  14. Anonymous Coward
    Anonymous Coward

    Hardly "blindingly fast"

    C++ is fast, but it's still an order of magnitude slower than assembler. Which is not to say that we should be programming everything in assembler, but we should bear in mind the reality of the tradeoffs.

    1. James 47
      Unhappy

      Re: Hardly "blindingly fast"

      Oh Robert, you just *had* to go there, didn't you?

    2. Anonymous Coward
      Happy

      Re: Hardly "blindingly fast"

      "C++ is fast, but it's still an order of magnitude slower than assembler. Which is not to say that we should be programming everything in assembler, but we should bear in mind the reality of the tradeoffs."

      Actually, for most things more complicated than Hello World, a good C/C++ compiler will probably produce a binary as good as, and in some cases a lot faster than, hand-coded assembler, simply because compiler writers are generally top-flight gurus who have forgotten more about optimisation and getting the most out of the hardware than your average self-taught assembler coder ever knew.

      1. Anonymous Coward
        Anonymous Coward

        Re: Re: Hardly "blindingly fast"

        No. This is a modern myth put about by people who have done a module of compiler design at Uni.

        Indeed, if you are using C++ as an OOPL rather than C with a few extra bits then there's no chance of coming anywhere *close* to assembler. The tradeoff is in the development speed and that's where the value is; there's no need to invent other reasons.

        1. Anonymous Coward
          Anonymous Coward

          Re: Re: Re: Hardly "blindingly fast"

          "No. This is a modern myth put about by people who have done a module of compiler design at Uni."

          Umm, no, it's a generally accepted fact. Most modern CPUs are so complicated and require so much nursing to optimise the flow of the instruction and memory caches that it's very unlikely one person can always make the correct judgement everywhere in any assembler program they write. Whereas with a compiler, it only has to be programmed in once.

          "Indeed, if you are using C++ as an OOPL rather than C with a few extra bits then there's no chance of coming anywhere *close* to assembler. "

          Depends how you use it. Sure, virtual functions are a two-step call instead of one, but that's hardly going to slow it down by any significant amount. And there's little else I can think of that produces a serious run-time hit on speed, apart from temporaries, but they're easily avoided. If you use templates and inline functions then all the hard work is done at compile time.

          1. stanimir

            Re: Re: Re: Re: Hardly "blindingly fast"

            Virtual functions prevent inlining and optimizations; that's their first major failing.
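            The dispatch difference being argued over can be shown in a minimal sketch (all names invented for illustration): the call through the base-class pointer goes via the vtable, while the template version is resolved at compile time and is a straightforward inlining candidate.

            ```cpp
            #include <cassert>

            struct Shape {
                virtual ~Shape() {}
                virtual int area() const = 0;
            };

            struct Square : Shape {
                int side;
                explicit Square(int s) : side(s) {}
                int area() const override { return side * side; }
            };

            // Resolved at compile time per concrete type: trivially inlinable.
            template <typename T>
            int area_of(const T& t) { return t.area(); }

            int main() {
                Square sq(4);
                Shape* s = &sq;            // two-step call through the vtable
                assert(s->area() == 16);
                assert(area_of(sq) == 16); // direct call, candidate for inlining
                return 0;
            }
            ```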

        2. JDX Gold badge

          Re: Re: Re: Hardly "blindingly fast"

          I'm not going to suggest my compiler can create code as fast as hand-crafted ASM. But it's certainly not an order of magnitude slower. That level of performance difference would mean ASM would remain much more widely used, because there are still areas where performance is worth the extra work, such as gaming (both code and GPU programming) and software trading.

          Intrinsics and so on mean you can use SIMD without having to mess about in ASM anyway.
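          For illustration, a minimal sketch of the intrinsics point, assuming an x86 target with SSE: four floats are added in one instruction, with no hand-written assembler involved.

          ```cpp
          #include <cassert>
          #include <immintrin.h>  // SSE intrinsics

          // Add four packed floats at once via SIMD; the compiler emits the
          // vector instructions, so no ASM is needed.
          void add4(const float* a, const float* b, float* out) {
              __m128 va = _mm_loadu_ps(a);        // load 4 unaligned floats
              __m128 vb = _mm_loadu_ps(b);
              _mm_storeu_ps(out, _mm_add_ps(va, vb));
          }

          int main() {
              float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
              add4(a, b, out);
              assert(out[0] == 11 && out[3] == 44);
              return 0;
          }
          ```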

      2. stanimir

        Re: Re: Hardly "blindingly fast"

        Agreed! C(++) is bound to be faster than assembler for any non-trivial program.

  15. Destroy All Monsters Silver badge
    Terminator

    Kang Grade Mark Eleven?

    "Like many attracted to the original Node.js, Kang likes the non-blocking architecture .. Node.js combines all user requests as a single thread but offloads I/O operations that can slow things down for things such as disk or database operations from that main thread."

    But doesn't everybody do it this way?

    In this world of multithreaded, multicore hardware, you really do not want a "main" or "single" thread which requests I/O [including the ugly, ugly RPCalls that are everywhere these days] and then waits until the I/O is done.

    You want a pool of worker threads checking new work as fast as it comes in.

    You want SEDA : http://www.eecs.harvard.edu/~mdw/proj/seda/

    1. TonyHoyle

      Re: Kang Grade Mark Eleven?

      "But doesn't everybody do it this way?"

      Yes. It's so bleeding obvious that I'm surprised the article needed to mention it.

    2. David Dawson

      Re: Kang Grade Mark Eleven?

      Well, no, not really.

      Most applications of this kind (web-based) operate on a thread-per-request model.

      While there is a nice thread pool to work within, each request essentially operates serially, blocking whenever it calls out to an external resource (e.g. DB I/O, a web service call, etc.).

      So the description of the change Node.js makes is accurate; most web-based server applications don't do this currently.

      It's not completely groundbreaking, even in web development, though; for example, enterprise Java has had async servlet support in some form or other for a while (e.g. Jetty had it back in 2007).

      This lets you do a similar thing.
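      The offloading idea described above can be sketched in a few lines of C++ (the slow lookup is invented for illustration): the blocking call is handed to a worker so the request thread keeps going, which is what Node.js arranges for you via its event loop and callbacks.

      ```cpp
      #include <cassert>
      #include <chrono>
      #include <future>
      #include <thread>

      // Stand-in for a slow external resource (DB read, web service call).
      int slow_db_lookup(int key) {
          std::this_thread::sleep_for(std::chrono::milliseconds(50));
          return key * 2;
      }

      int main() {
          // Offload the blocking call to a worker instead of blocking here.
          std::future<int> pending =
              std::async(std::launch::async, slow_db_lookup, 21);

          int other_work = 1 + 1;       // the request thread stays free
          assert(other_work == 2);

          assert(pending.get() == 42);  // collect the I/O result later
          return 0;
      }
      ```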

  16. Stephen Channell
    Thumb Up

    Good work... now let performance/cost decide if we use it

    None of the comments seem to have got to the real issue, which is not the “my fav’ language is better/faster/cheaper/tougher than your fav’ language”.. but whether the Node.js event-model is better for your application than a session-model.

    If your site does not need strong session support for permissions or transactions, then Node.js is a scalable candidate, because stripping out session support and reusing thread pools offsets the cost of JavaScript, and session costs increase with every new node.

    Is JavaScript (V8 or other) an issue? No: the scenarios where Node.js works also see JavaScript in the client and Ajax over the wire, and the weakness in the language is compensated for by better integration/testing between client & server. Complexity is not such an issue either, because the need for sessions in complex apps will hit before the code lands in the tar-pit of spaghetti code.

    Would you start with Node.js Native? No: you'd start with JavaScript and switch if performance became an issue. At that point you'd consider re-implementation in C++, and at that point we'd see a nice performance/cost case study.

  17. Boris the Cockroach Silver badge
    Devil

    I'll give it 2 months

    before recruiting agencies start adding "Must have 2 yrs + experience in writing node.js applications"

  18. DoctorB
    WTF?

    I don't trust javascript to run anything

    ...just saying:

    https://www.destroyallsoftware.com/talks/wat

  19. Jean-Luc
    Boffin

    Am I missing something?

    From a strictly timing viewpoint, isn't the whole point of node.js to avoid waiting on blocking I/O calls?

    i.e. I am guessing network calls to other sites, perhaps database reads?

    In that case, what gain is there from making the initiate-call/handle-results code faster (by using C++), when the actual service call is likely to be very much the determining factor in the overall response time?

    The speed up of an overall db read handling for example will not help much if the db call over the network is slow. And if it isn't, why use node.js?

    When you think of it this way, that's precisely why BitTorrent was first coded in Python, because local machine execution speed WASN'T the issue.

    Of course, you can reduce server _load_ with a more efficient architecture, if that is the aim. Even then, wouldn't Java (not exactly my fave language) suffice? After all, how much C++ is there in the web server/application server space?

    Not dissing C++, but we ain't talking about video graphics drivers, network stacks or I/O subsystems here. Smart move to raise one's profile for job hunting though.

    1. Anonymous Coward
      Megaphone

      @Jean-Luc: Your knowledge of C++ Applications is limited

      For quite a few systematic reasons, C, C++, Pascal and Fortran are much more efficient than Java. Most of that is related to the fine-grained control of memory allocation and layout you have with these languages, and with Java you don't. Think stack allocation, object content aggregation, destructors and so on.

      C++ is used in many more ways than you point out. Think of real-time stock exchange servers and the corresponding "quant" trading applications, which need to respond in the order of 1ms, and quite reliably. Think of real-time financial data distribution servers, which are very different from the finance apps that process traditional banking transactions or ERP stuff. Think of huge in-memory databases. Then of course RDBMSs, web browsers, large Office packages (Google Docs is a silly joke and you can figure that out by simply trying to work with a 50-page document), CAD/CAE systems. For example, to design/simulate a new chip you often need to handle chip models which run to multiple gigabytes of RAM usage. Then think about all sorts of statistical analysis (OLAP and much more), which have to crunch hundreds of megabytes in a dozen seconds or so and often maintain huge tables which will go into the gigabytes with C++ or Pascal and into the dozens of gigabytes with Java.

      When you have to wait ten minutes for an analysis run to complete, you will consider re-implementing it in C++, if that reduces processing time to two minutes and increases data volume by a factor of three.

      I am not trying to blast you, but be assured that the programming world is much, much bigger than the Java world, and that is definitely not for historical reasons.

      1. Kebabbert

        Re: @Jean-Luc: Your knowledge of C++ Applications is limited

        "...C++ is used in many more ways than you point out. Think of real-time stock exchange servers and the corresponding "quant" trading applications, which need to respond in the order of 1ms...."

        Well, several of the fastest stock exchange systems in the world are developed in Java. For instance, the NASDAQ stock exchange is pure Java. It is among the fastest in the world, with latency of 0.1 ms and extreme throughput. Java is fine if you need extreme performance.

        1. Anonymous Coward
          Stop

          @Kebabbert: Deterministic Runtime ??

          And how does that NASDAQ system ensure they don't have a 0.5s to 3s delay when the GC runs? Everything pre-allocated / no new operator used?

          1. David Dawson

            Re: @Kebabbert: Deterministic Runtime ??

            They use a tuned Azul VM as far as I'm aware.

            This can address enormous amounts of memory, which reduces the need to GC, and then the fancy Azul tech also removes the impact of the remaining GC runs.

        2. JDX Gold badge

          Re: Re: @Jean-Luc: Your knowledge of C++ Applications is limited

          Sorry, but you're simply incorrect. Java and .NET are great platforms and perfectly suited for 90%+ of software, but optimised C++ code is faster than optimised Java code simply because Java has more stuff to do. You can't implement the same code on both to compare; you have to tweak your algorithms based on understanding how each works.

          Also, your soundbite about NASDAQ is not relevant. When we talk about financial institutions needing ultrafast algorithms, we don't mean the stock exchanges. We mean the automated trading algorithms which banks develop. These are entirely different things.

  20. rho
    WTF?

    Javascript simplicity

    Wow, the simplicity of Javascript.

    It's Lisp, with an extra set of braces.

  21. Anonymous Coward
    Go

    How To Program Robustly In C++

    I would also like to add that insecure programming is not a given if you use C++. It is just an unfortunate historical development that even the Standard Template Library (hashtables, vectors, ordered maps and so on) is unsafe regarding index over- or underruns.

    But that can be fixed with quite moderate effort by a skilled engineer implementing his or her own container classes, or by deriving from vector and overloading its "operator[]()" and the iterator classes of the containers.

    Then, disciplined C++ developers can implement a policy of "always smart pointers; no plain pointers", which will guarantee that all pointers are either NULL or valid. Then of course there is the RAII pattern (see http://en.wikipedia.org/wiki/RAII), which strongly contributes to security and proper resource usage. Actually, RAII is much more robust than the Java try/catch/finally mechanism, which demands a lot of attention from an already stressed developer.
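    A minimal sketch of the RAII point (the Resource class is invented for illustration): cleanup runs in the destructor, so it happens even when an exception unwinds the stack, with no finally block to forget.

    ```cpp
    #include <cassert>
    #include <memory>
    #include <stdexcept>

    // RAII: the destructor releases the resource (lock, file, socket, ...)
    // no matter how the scope is exited.
    struct Resource {
        bool* released;
        explicit Resource(bool* r) : released(r) {}
        ~Resource() { *released = true; }  // guaranteed cleanup
    };

    int main() {
        bool released = false;
        try {
            Resource r(&released);
            throw std::runtime_error("boom");  // stack unwinds...
        } catch (const std::runtime_error&) {}
        assert(released);                      // ...and the destructor still ran

        // "Always smart pointers": either null or owning a valid object.
        std::unique_ptr<int> p = std::make_unique<int>(7);
        assert(p && *p == 7);
        return 0;
    }
    ```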

    So, the prospects of C++ are excellent, because efficiency has always been and always will be a major factor in engineering, software or otherwise. If it weren't, we would surely fly 747s made out of uranium and lead. After all, a Saturn V rocket would get that lead 747 into the air, right?

    1. JDX Gold badge

      Re: How To Program Robustly In C++

      One problem, though: if you implement super-safe pointers and memory allocation and all that stuff in C++ to protect the developer, you surely end up reinventing lots of stuff Java does out of the box, and simply narrow the gap between the two in performance as well as in functionality/safety.

      1. Anonymous Coward
        Go

        Re: Re: How To Program Robustly In C++

        The runtime hit is typically on the order of 10-20% if you use bounds checking and smart pointers. Smart compilers could eliminate most of that (think of a "standard" for loop with inlined vector element accesses).

        The memory overhead exists just for pointers (because you typically need refcounters for smart pointers), so it is much, much less than the overhead resulting from dead Java objects lingering until the next GC.

        The security benefits (less chance of successful cyber attack) clearly justify the described runtime and memory penalties.
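        The bounds-checking tradeoff being discussed is already visible in the standard library itself: std::vector ships a checked accessor, at(), alongside the unchecked operator[].

        ```cpp
        #include <cassert>
        #include <stdexcept>
        #include <vector>

        int main() {
            std::vector<int> v = {1, 2, 3};

            assert(v[1] == 2);     // unchecked: out-of-range is undefined behaviour
            assert(v.at(1) == 2);  // checked: costs a comparison per access

            // With the checked accessor, an overrun becomes a catchable
            // exception instead of silent memory corruption.
            bool caught = false;
            try { v.at(99); } catch (const std::out_of_range&) { caught = true; }
            assert(caught);
            return 0;
        }
        ```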

  22. t_lark

    65 comments missing the point

    My profiler and I build performance systems at the world class level.

    I use Java for high-speed advanced data structures, as you 1. can't build persistent data structures in a non-garbage-collected language, and 2. it can be blindingly fast if you know what you are doing.

    I use C++ sparingly, for signal processing (images, sound), because of the hacks you can pull off, but I minimize its usage because of the increased development and debugging time. 90% of the runtime cost is in one part of the system, so I write that bit in C++ *once I have identified it*.

    I use Python for the test harness and overall glue, because you can rearrange an application very easily with it and there is no annoying compile step.

    I use javascript for the pretty interface because running this via a web browser is the coolest way of making an appliance application platform independent.

    I use Matlab for intelligence and visualization.

    I glue these all together using a middle ware solution (ROS).

    My development time is my employer's main cost. Premature optimization occurs at the language-selection level. There is a silver bullet: it's called mixed-language development, but it requires forcing yourself to learn new paradigms all the time (working on better functional stuff at the moment, looks cool).

    1. Anonymous Coward
      WTF?

      Re: 65 comments missing the point

      "you 1. can't build persistent data structures in a non-garbage collected language,"

      Sorry , but what the fsck are you talking about? If you have the experience you suggest then how did you come up with this nonsense?

  23. Anonymous Coward
    Go

    @t_lark: So what ?

    You merely acknowledge what most other posters have been trying to say: every language class (e.g. garbage-collected vs. not) has its strengths. I think nobody claimed C++ development would be efficient in terms of R&D time to deliver a certain functionality (as opposed to a performance target). But if your (sub-)problem is processing-intensive, you will certainly go C, C++, Pascal, Fortran or Ada. Fortran still leads in vector supercomputing applications, because it is not a clusterfsck like C++ (e.g. pointer aliasing prohibiting optimizations), and there exist compilers which will even change the order of nested for loops to get better cache-line access patterns. Think matrix multiplications.

    So yes, your hybrid approach makes a lot of sense. I do not understand your statement "can't build persistent data structures in a non-garbage-collected language". Are you talking about object databases? If so, there are lots of those written in C++. I assume they are a little out of fashion now that the shine has come off the hype, though.

    If you want to do transaction processing or a less-than-demanding GUI, you will probably be done faster with Java, C# or JS, that's true. But even here, Lazarus and Delphi are tough competitors, because their compilers are extremely fast and the result does not J-suck in terms of memory consumption and regular GC freezes. Java is incompatible with product excellence, but yes, it might be good enough for many commercial settings.

  24. Anonymous Coward
    Mushroom

    Example Of Fortran Optimization

    To all those who foolishly call Fortran "outdated", look at this document

    http://software.intel.com/file/6397

    and search for "Loop Interchange". Actually, it could probably also be done in Java, but I guess only Fortran compilers do this (mainly ones from a little firm called "Kuck and Associates"; Intel, DEC and many others bought their technology). So who is modern??

    Picture of a nuke simulated in Fortran.
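    The "Loop Interchange" optimization mentioned above can be sketched in C++ (small invented matrices for illustration): the i-k-j order walks both B and C row by row, which is friendlier to cache lines than the textbook i-j-k order, while the result is identical.

    ```cpp
    #include <cassert>
    #include <vector>

    const int N = 32;
    using Mat = std::vector<std::vector<int>>;

    // Matrix multiply with the j and k loops interchanged: the innermost
    // loop now sweeps B and C along rows, i.e. contiguous memory.
    Mat multiply_ikj(const Mat& A, const Mat& B) {
        Mat C(N, std::vector<int>(N, 0));
        for (int i = 0; i < N; ++i)
            for (int k = 0; k < N; ++k)          // interchanged with j
                for (int j = 0; j < N; ++j)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    int main() {
        Mat A(N, std::vector<int>(N, 1)), B(N, std::vector<int>(N, 2));
        Mat C = multiply_ikj(A, B);
        assert(C[0][0] == 2 * N);  // each entry sums N products of 1*2
        return 0;
    }
    ```

    Fortran compilers of the kind described perform this reordering automatically; in C++ you generally do it by hand.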

    1. t_lark

      Re: Example Of Fortran Optimization

      Nomenclature clash. I meant persistent data structures as in path copying (e.g. http://hackerboss.com/copy-on-write-101-part-1-what-is-it/), *not* persistent as in databases. Pointer ownership is difficult to work out in non-GC environments. The difference between a persistent algorithm and a non-persistent one can be an order of complexity, so C++ can be stuck with the O(n) implementation while Java might achieve O(log(n)) (with the occasional freeze, though :/ ). Nor can you get rid of that freeze by the normal trick of caching objects, for the same reason you can't implement the algorithm in C++: lack of clear ownership of objects.

      1. Anonymous Coward
        Stop

        @t_lark: Copy-On-Write

        I really cannot see why you need a GC language to perform copy-on-write and the associated object database, or the undo system. As long as the data structures are directed acyclic graphs, one would use smart pointers to do that.

        I know some people will claim smart pointers are less efficient than GC, but I think the more limiting thing is cyclic data structures. These can be detected by source-code analysis tools quite easily, and a workaround can surely be found in most cases.

        Also, there will be a performance hit with smart pointers, related to the reference counters requiring inter-core communication (aka barriers). That could be a major issue, but on the other hand hardware designers have lots of options to speed it up. Think of a hardware directory of shared cache lines, which some hw architectures already implement. That way only the sharing cores are affected by a performance hit, and overall scalability will be quite good.
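        A minimal sketch of the smart-pointer copy-on-write idea being discussed (the class is invented for illustration): shared_ptr's reference count tells us whether the payload is shared, so we copy only when someone writes to a shared instance.

        ```cpp
        #include <cassert>
        #include <cstddef>
        #include <memory>
        #include <vector>

        class CowVec {
            std::shared_ptr<std::vector<int>> data;
        public:
            CowVec() : data(std::make_shared<std::vector<int>>()) {}
            void push(int x) { detach(); data->push_back(x); }
            int at(std::size_t i) const { return (*data)[i]; }
            long owners() const { return data.use_count(); }
        private:
            // Copy the payload only if another instance still shares it.
            void detach() {
                if (data.use_count() > 1)
                    data = std::make_shared<std::vector<int>>(*data);
            }
        };

        int main() {
            CowVec a;
            a.push(1);
            CowVec b = a;             // cheap copy: both share one buffer
            assert(a.owners() == 2);
            b.push(2);                // write triggers the actual copy
            assert(a.owners() == 1 && b.at(1) == 2);
            return 0;
        }
        ```

        Note this sketch sidesteps the cycle problem mentioned above precisely because the structure is acyclic; cyclic graphs would leak under plain reference counting.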
