Re: 1980s_coder
Yes - now made it clear in the story.
PS: Don't forget to email corrections@theregister.co.uk if you spot anything odd in a story - we can fix it immediately.
C.
Intel has given its Xeon E7 processor family its annual refresh, this time emphasising analytics at scale. The new E7-8800/4800 v3 chips use the Haswell micro-architecture, meaning all Chipzilla's Xeons have made the jump. Intel's been a bit cagey, and did not share the list of E7 v3 models as we were going to press, but we' …
Not at all; TSX is transactional memory support in hardware (think optimistic locking and shadow copies of cache lines), and the only protection it helps achieve is against data races, if used correctly, which can be tricky. It will mostly be used in user-mode applications (although the kernel would benefit as well), by developers brave enough to try a different paradigm of parallel programming. Or it can be used implicitly, with the hardware internally eliding existing locking primitives, which is TSX's other mode of operation (Hardware Lock Elision).
there's something that doesn't quite add up there
That wouldn't be the first time...
The one with the Casio FX 602P, thanks.
What I want to know is just where my 602 has got to. I bought one (and the tape interface and the printer) when they came out, as a replacement for my 502. But somehow the 602 has gone missing over the years. My 502 came back to me a few years ago, as my father no longer feels the need for one.
If your data processing needs didn't increase much then yes, for years now you've been able to "downsize" your hardware and coalesce more and more processing onto less hardware (except where redundancy demands otherwise), or go cloudy. It's also one of the reasons the PC market slowed down: far less need for a new one on performance grounds alone.
That was inevitable. Intel will have to find new revenue streams, and they know it, although it's not that easy.
because x86 does not scale to many sockets. The big difficulty is making scalable servers. That is why x86 is low end, and only POWER/SPARC are high end.
Scale-out servers (clusters with 100s of sockets, such as SGI UV2000, ScaleMP, etc.) cannot run monolithic business software such as databases, SAP, etc. The reason is that such code branches too much, so there is too much communication between all the nodes. Guaranteeing data integrity in transaction-heavy environments (locking, rollback, etc.) is very communication-intensive, which punishes nodes that are far away. Scale-out servers can only run HPC number-crunching workloads where each node runs a tight for loop, with little communication.
Scale-up servers (large Unix/Mainframe servers with up to 32 sockets) can run business software (databases, SAP, etc.). The largest x86 scale-up server has 8 sockets.
This is the reason x86 is no match for high-end Unix/Mainframes: scale-up x86 servers that can run business software have at most 8 sockets, which gives low performance. You need 16 or 32 sockets for really good performance. Scale-out servers cannot run business software, so you will never see scale-out server records in SAP benchmarks, because latency suffers.
This is the reason it is impossible to get high SAP scores with x86: scale-up x86 servers stop at 8 sockets, which is bad, and scale-out servers with 100s of sockets cannot run business software. If you look at the top of the SAP benchmark list, all are Unix servers: SPARC or POWER. There are no x86 server records at the top, and SGI UV2000 servers are nowhere to be seen: they are unsuitable for SAP benchmarks.
I invite anyone to post a high SAP benchmark score using an x86 server. You will not find any good x86 SAP scores. Impossible. So x86 is no match for high-end Unix servers running SAP, databases, etc.