Cancer or not, Node.js is attracting plenty of interest, and, just like smoking cigarettes at school, Node.js is seen as the cool thing to do. Started by Ryan Dahl in 2009, this server-side scripting environment has in less than three years attracted enough coverage to persuade Microsoft, the world's largest software company, that …
Re: Ohh, you're so clever..
>>Just for the record, you were probably not even working on the internet pre-2000, and were still in your nappies.
Don't know why you would want that for the "record", nor for whose record exactly, but I think you mistake me for someone from a younger generation. Connecting with a Tektronix 4014 to the Unicos machine down the street? Yup, been there, done that.
No news is good news!
An alternative take on not hearing much about RoR is that it has got past the hype and is at the point where people decide to use it or not based on its merits, rather than because all the cool kids do!
Re: No news is good news!
I concur. I actually thought it was an ironic statement in an article about C++, a language not exactly 'in the news' but still immensely popular and useful.
Certainly, if one were to go by job postings here in the US, RoR is still alive and well, while Node.js is pretty much non-existent.
Hardly "blindingly fast"
C++ is fast, but it's still an order of magnitude slower than assembler. Which is not to say that we should be programming everything in assembler, but we should bear in mind the reality of the tradeoffs.
Re: Hardly "blindingly fast"
Oh Robert, you just *had* to go there, didn't you?
Re: Hardly "blindingly fast"
"C++ is fast, but it's still an order of magnitude slower than assembler. Which is not to say that we should be programming everything in assembler, but we should bear in mind the reality of the tradeoffs."
Actually, for most things more complicated than Hello World, a good C/C++ compiler will probably produce a binary as good as, and in some cases a lot faster than, hand-coded assembler, simply because compiler writers are generally top-flight gurus who have forgotten more about optimisation and getting the most out of the hardware than your average self-taught assembler coder ever knew.
Re: Re: Hardly "blindingly fast"
No. This is a modern myth put about by people who have done a module of compiler design at Uni.
Indeed, if you are using C++ as an OOPL rather than C with a few extra bits then there's no chance of coming anywhere *close* to assembler. The tradeoff is in the development speed and that's where the value is; there's no need to invent other reasons.
Re: Re: Re: Hardly "blindingly fast"
"No. This is a modern myth put about by people who have done a module of compiler design at Uni."
Umm, no, it's a generally accepted fact. Most modern CPUs are so complicated, and require so much nursing to optimise the flow of the instruction and memory caches, that it's very unlikely one person can always make the correct judgement everywhere in any assembler program they write, whereas with a compiler it only has to be programmed in once.
"Indeed, if you are using C++ as an OOPL rather than C with a few extra bits then there's no chance of coming anywhere *close* to assembler. "
Depends how you use it. Sure, virtual functions are a two-step call instead of one, but that's hardly going to slow it down by any significant amount. And there's little else I can think of that produces a serious run-time hit on speed apart from temporaries, but they're easily avoided. If you use templates and inline functions, then all the hard work is done at compile time.
Re: Re: Hardly "blindingly fast"
Agreed! C(++) is bound to be faster than assembler for any non-trivial program.
Re: Re: Re: Re: Hardly "blindingly fast"
Virtual functions prevent inlining and optimisations; that's their first major failing.
Re: Re: Re: Hardly "blindingly fast"
I'm not going to suggest my compiler can create code as fast as hand-crafted ASM. But it's certainly not an order of magnitude slower. That level of performance would mean ASM would remain much more widely used because there are still areas where performance is worth the extra work... such as gaming (both code and GPU programming) and software trading.
Intrinsics and so on mean you can use SIMD without having to mess about in ASM anyway.
Kang Grade Mark Eleven?
"Like many attracted to the original Node.js, Kang likes the non-blocking architecture .. Node.js combines all user requests as a single thread but offloads I/O operations that can slow things down for things such as disk or database operations from that main thread."
But doesn't everybody do it this way?
In this world of multithreading, multicore hardware, you really do not want a "main" or "single" thread which requests I/O [including the ugly, ugly RPC calls that are everywhere these days] and then waits until the I/O is done.
You want a pool of worker threads checking new work as fast as it comes in.
You want SEDA : http://www.eecs.harvard.edu/~mdw/proj/seda/
Re: Kang Grade Mark Eleven?
"But doesn't everybody do it this way?"
Yes. It's so bleeding obvious that I'm surprised the article needed to mention it.
Re: Kang Grade Mark Eleven?
Well, no, not really.
Most applications of this kind (web-based) operate on a thread-per-request model.
While there is a nice thread pool to work within, each request essentially operates serially, blocking whenever it calls out to an external resource (e.g. DB I/O, web service calls, etc).
So the description of the change Node.js makes is accurate: most web-based server applications don't currently do this.
It's not completely groundbreaking, even in web development, though; for example, Java enterprise has had async servlet support in some form or other for a while (e.g. Jetty had it back in 2007).
This lets you do a similar thing.
Good work... now let performance/cost decide if we use it
None of the comments seem to have got to the real issue, which is not the “my fav’ language is better/faster/cheaper/tougher than your fav’ language”.. but whether the Node.js event-model is better for your application than a session-model.
I'll give it 2 months
before recruiting agencies start adding "Must have 2 yrs + experience in writing node.js applications"
Am I missing something?
From a strictly timing viewpoint, isn't the whole point of node.js to avoid waiting on blocking I/O calls?
i.e. I am guessing network calls to other sites, perhaps database reads?
In that case, what gain is there from making the handling of the initiate call/respond to call results faster (by using C++), when the actual service call is likely to be very much the determining factor in the overall response time?
The speed up of an overall db read handling for example will not help much if the db call over the network is slow. And if it isn't, why use node.js?
When you think of it this way, that's precisely why BitTorrent was first coded in Python, because local machine execution speed WASN'T the issue.
Of course, you can reduce server _load_ with a more efficient architecture, if that is the aim. Even then, wouldn't Java (not exactly my fave language) suffice? After all, how much C++ is there in the web server/application server space?
Not dissing C++, but we ain't talking about video graphics drivers, network stacks or I/O subsystems here. Smart move to raise one's profile for job hunting though.
@Jean-Luc: Your knowledge of C++ Applications is limited
For quite a few systematic reasons, C, C++, Pascal and Fortran are much more efficient than Java. Most of that is related to the fine-grained control of memory allocations and layout you have with these languages, and with Java you don't. Think Stack Allocation, Object Content Aggregation, Destructors and so on.
C++ is used in many more ways than you point out. Think of real-time stock exchange servers and the corresponding "quant" trading applications, which need to respond on the order of 1ms, and quite reliably at that. Think of real-time financial data distribution servers, which are very different from the finance apps that process traditional banking transactions or ERP stuff. Think of huge in-memory databases. Then of course RDBMSs, web browsers, large Office packages (Google Docs is a silly joke, and you can figure that out by simply trying to work with a 50-page document), CAD/CAE systems. For example, to design/simulate a new chip you often need to handle chip models which are sized in the multi-gigabytes of RAM usage. Then think about all sorts of statistical analysis (OLAP and much more), which have to crunch hundreds of megabytes in a dozen seconds or so, and often maintain huge tables which will go into the gigabytes with C++ or Pascal and into the dozens of gigabytes with Java.
When you have to wait ten minutes for an analysis run to complete, you will consider re-implementing it in C++, if that reduces processing time to two minutes and increases data volume by a factor of three.
I am not trying to blast you, but be assured that the programming world is much, much bigger than the Java world, and that is definitely not for historical reasons.
Re: @Jean-Luc: Your knowledge of C++ Applications is limited
"...C++ is used in many more ways than you point out. Think of real-time stock exchange servers and the corresponding "quant" trading applications, which need to respond in the order of 1ms...."
Well, several of the fastest stock exchange systems in the world are developed in Java. For instance, the NASDAQ stock exchange is pure Java. It is among the fastest in the world, with latency of 0.1 ms and extreme throughput. Java is fine if you need extreme performance.
@Kebabbert: Deterministic Runtime??
And how does that NASDAQ system ensure it doesn't see a 0.5s to 3s delay when the GC runs? Everything pre-allocated / no new operator used?
Re: @Kebabbert: Deterministic Runtime??
They use a tuned Azul VM as far as I'm aware.
This can address enormous amounts of memory, which reduces the need to GC, and then the fancy Azul tech removes the impact of the remaining GC runs.
Re: Re: @Jean-Luc: Your knowledge of C++ Applications is limited
Sorry but you're simply incorrect. Java and .NET are great platforms and are perfectly suited for 90%+ of software, but optimised C++ code is faster than optimised Java code simply because Java has more stuff to do. You can't implement the same code on both to compare, you have to tweak your algorithms based on understanding how it works.
Also, your soundbite about NASDAQ is not relevant. When we talk about financial institutions needing ultrafast algorithms, we don't mean the stock exchanges. We mean the automated trading algorithms which banks develop. These are entirely different things.
It's Lisp, with an extra set of braces.
How To Program Robustly In C++
I would also like to add that insecure programming is not a foregone conclusion if you use C++. It is just an unfortunate historical development that even the Standard Template Library (hash tables, vectors, ordered maps and so on) is unsafe with regard to index over- or underruns.
But that can be fixed with quite moderate effort by a skilled engineer implementing his or her own container classes, or by deriving from vector, overloading "operator ()", and overloading the containers' iterator classes.
Then, disciplined C++ developers can implement a policy of "always smart pointers; no plain pointers", which will guarantee that all pointers are either NULL or valid. Then of course, there is the RAII (see http://en.wikipedia.org/wiki/RAII) pattern, which strongly contributes to security and proper resource usage. Actually, RAII is much more robust than the Java try/catch/finally mechanism, which demands a lot of attention from an already stressed developer.
So the prospects of C++ are excellent, because efficiency has always been, and always will be, a major factor in engineering, software or otherwise. If it weren't, we would surely fly 747s made out of uranium and lead. After all, a Saturn V rocket would get that lead 747 into the air, right?
Re: How To Program Robustly In C++
One problem though is if you implement super safe pointers and memory allocation and all that stuff in C++ to protect the developer, you surely end up reinventing lots of stuff Java does out of the box, and simply narrow the gap between the two in performance as well as functionality/safety.
Re: Re: How To Program Robustly In C++
The runtime hit is typically in the order of 10-20% if you use bounds checking and smart pointers. Smart Compilers could eliminate most of that (think of a "standard" for loop with inlined vector element accesses).
The memory overhead exists only for pointers (because smart pointers typically need reference counters), so it is much, much less than the overhead from dead Java objects lingering until the next GC.
The security benefits (less chance of successful cyber attack) clearly justify the described runtime and memory penalties.
65 comments missing the point
My profiler and I build performance systems at the world class level.
I use Java for high-speed advanced data structures because you 1. can't build persistent data structures in a non-garbage-collected language, and 2. it can be blindingly fast if you know what you are doing.
I use C++ sparingly for signal processing (images, sound) because of the hacks you can pull off, but I minimize its usage because of the increased development time and debugging. 90% of runtime cost is on one part of the system, so I write that bit in C++ *once I have identified it*.
I use Python for test harnesses and overall glue, because you can rearrange an application very easily with it and there is no annoying compile step.
I use Matlab for intelligence and visualization.
I glue these all together using a middleware solution (ROS).
My development time is my employer's main cost. Premature optimization occurs at the language-selection level. There is a silver bullet: it's called mixed-language development, but it requires forcing yourself to learn new paradigms all the time (working on better functional stuff at the moment; looks cool).
Re: 65 comments missing the point
"you 1. can't build persistent data structures in a non-garbage collected language,"
Sorry , but what the fsck are you talking about? If you have the experience you suggest then how did you come up with this nonsense?
@t_lark: So what ?
You merely acknowledge what most other posters have been trying to say: every language class (e.g. garbage-collected vs. not) has its strengths. I think nobody claimed C++ development would be efficient in terms of R&D time to deliver a certain functionality (as opposed to a performance target). But if your (sub-)problem is processing-intensive, you will certainly go C, C++, Pascal, Fortran or Ada. Fortran still leads in vector supercomputing applications, because it is not a clusterfsck like C++ (e.g. pointer aliasing prohibiting optimizations) and there exist compilers which will even change the order of nested for loops to get better cache-line access patterns. Think matrix multiplications.
So yes, your hybrid approach makes a lot of sense. I do not understand your statement "can't build persistent data structures in a non-garbage collected language". Are you talking object databases? If so, there are lots of those which use C++. I assume they are a little out of fashion now that the shine has come off the hype, though.
If you want to do transaction processing or some less-than demanding GUI, you will probably be done faster with Java, C# or JS, that's true. But even here, Lazarus and Delphi are tough competitors, because their compilers are extremely fast and the result does not J-suck in terms of memory consumption and regular GC-freezes. Java is incompatible with product excellence, but yes, it might be good enough for many commercial settings.
Example Of Fortran Optimization
To all those who foolishly call Fortran "outdated": look at this document and search for "Loop Interchange". Actually, it could probably also be done in Java, but I guess only Fortran compilers do this (mainly those from a little firm called "Kuck and Associates"; Intel, DEC and many others bought their technology). So who is modern??
Picture of a nuke simulated in Fortran.
Re: Example Of Fortran Optimization
Nomenclature clash. I meant persistent data structures in the sense of path copying, e.g. http://hackerboss.com/copy-on-write-101-part-1-what-is-it/, *not* persistent as in databases. Pointer ownership is difficult to work out in non-GC environments. The difference between a persistent algorithm and a non-persistent one can be an order of complexity, so C++ may be stuck with the O(n) implementation while Java might achieve O(log(n)) (with the occasional freeze, though :/ ). Nor can you get rid of that freeze by the normal trick of caching objects, for the same reason you can't implement the algorithm in C++: lack of clear ownership of objects.
I really cannot see why you need a GC language to implement copy-on-write, or the associated object database or undo system, as long as the data structures are directed acyclic graphs. One would use smart pointers to do that.
I know some people will claim smart pointers are less efficient than GC, but I think the more limiting thing is cyclic data structures. These can be detected by source-code analysis tools quite easily, though, and a workaround can surely be found in most cases.
Also, there will be a performance hit with smart pointers, because updating reference counters requires inter-core communication (memory barriers). That could be a major issue, but on the other hand hardware designers have lots of options to speed that up. Think of a hardware directory of shared cache lines, which some hardware architectures already implement. That way only the sharing cores take the performance hit, and overall scalability will be quite good.