* Posts by Kebabbert

808 publicly visible posts • joined 22 Jul 2009

Clash of the Titans: Which of you has the greatest home lab

Kebabbert
Thumb Up

Re: Computational number theory lab

Cool! Why? Hobby? Are you using Number Field Sieve? Where is the source code? :o)

Hard drive sales to see double-digit dive this year

Kebabbert

Great!

Ever since the last big independent manufacturer was bought out 2-3 years ago, there has been an oligopoly. Prices are too high, much higher than before. Prices used to fall quickly, but years later they still have not recovered from the flooding in Thailand; they are actually higher than before the flooding.

So I think it is good that sales are diving. Then maybe the oligopoly will understand that it should lower prices so we can start buying disks again. I planned to buy 3TB disks years ago, but prices skyrocketed and are still not low. When prices drop back to where they were before the flooding, people will start buying disks again. I will. Not until then.

Flash pioneer STEC invests in ZFS zealot Nexenta

Kebabbert

What ZFS flash limitations?

I am not aware of any. You can build a zpool out of flash drives or SAS disks, or you can add a layer of flash as a read cache (L2ARC) or separate log device (ZIL).

What do they mean, talking about "limitations"?

'SHUT THE F**K UP!' The moment Linus Torvalds ruined a dev's year

Kebabbert

Re: After all this, what's the general standard of Linux architecture and kernel code?

You should provide some links when you make controversial claims. Here are links to developers complaining about Linux code quality:

Linux kernel maintainer Andrew Morton says the code is bad:

http://lwn.net/Articles/285088/

"I used to think [code quality] was in decline, and I think that I might think that it still is. I see so many regressions which we never fix....it would help if people's patches were less buggy."

OpenBSD developer Theo de Raadt says the code is bad:

http://www.forbes.com/2005/06/16/linux-bsd-unix-cz_dl_0616theo.html

"It's terrible," De Raadt says. "Everyone is using it, and they don't realize how bad it is. And the Linux people will just stick with it and add to it rather than stepping back and saying, 'This is garbage and we should fix it.'"

Linus Torvalds says the code is bloated:

http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloated_huge/

"Citing an internal INTEL corp study that tracked kernel releases, Bottomley said Linux performance had dropped about two per centage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked. "We're getting bloated and huge. Yes, it's a problem," said Torvalds."

Ted Ts'o, creator of ext4, says Linux developers cheat to get higher performance, at the expense of safe code:

http://phoronix.com/forums/showthread.php?36507-Large-HDD-SSD-Linux-2.6.38-File-System-Comparison&p=181904#post181904

"In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his user's data. (It's a sad fact that sometimes the desire to win benchmark competition will cause developers to cheat, sometimes at the expense of their users.)...We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often (!!!!!) --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug "

http://kerneltrap.org/Linux/Active_Merge_Windows

"The [linux source code] tree breaks every day, and it's becomming an extremely non-fun environment to work in....We need to slow down the merging, we need to review things more, we need people to test their f--king changes!"

There are many more links if you google a bit. Unix creators have said the Linux code is bad. I can supply many more links from Linux kernel hackers complaining about the code quality, e.g. "the kernel is going to pieces".

.

Also, Linux has huge scaling problems. It scales fine on a cluster, but scales very badly on a single fat server. Clusters are for HPC work (number crunching, embarrassingly parallel problems, etc); they can have thousands of CPUs and many TB of RAM. For instance, the SGI Altix server has 2048 cores and 64TB RAM, but it is a cluster. So is the ScaleMP server, which runs a single-image Linux kernel on 8192 cores, just like the SGI Altix:

http://www.theregister.co.uk/2011/09/20/scalemp_supports_amd_opterons/

"The vSMP hypervisor that glues systems together is not for every workload, but on workloads where there is a lot of message passing between server nodes – financial modeling, supercomputing, data analytics, and similar parallel workloads. Shai Fultheim, the company's founder and chief executive officer, says ScaleMP has over 300 customers now. "We focused on HPC as the low-hanging fruit,"

A programmer writes about this server:

"I tried running a nicely parallel shared memory workload (75% efficiency on 24 cores in a 4 socket opteron box) on a 64 core ScaleMP box with 8 2-socket boards linked by infiniband. Result: horrible. It might look like a shared memory, but access to off-board bits has huge latency."

All these cc-NUMA servers belong to the HPC category: clusters of many compute nodes on a fast network switch.

.

On the other hand, an SMP server is a single fat server. They can have 16 CPUs, 32 CPUs, some as many as 64 CPUs. IBM mainframes belong to the SMP category. The IBM P795 is also an SMP server; it has as many as 32 CPUs. Oracle has SMP servers with 64 CPUs, the M9000, and HP has 32-CPU SMP servers. They typically top out at 64 CPUs and 2-4TB RAM. The IBM P595 used for the TPC-C record cost 35 million USD, list price, for one single fat SMP server. HPC servers are very cheap in comparison, because they are clusters of cheap compute nodes on a fast network.

The biggest Linux SMP server has 8 CPUs (from Oracle, IBM and HP; it is just a standard 8-socket x86 server). There are no big Linux SMP servers on the market. No one sells them, for a reason: Linux has problems handling even 8 CPUs in SMP fashion. There are no 16-CPU or 32-CPU Linux servers. Not a single one.

Thus, Linux scales well horizontally (in an HPC cluster), but scales extremely badly in a big fat server (an SMP server). Linux cannot handle 16 CPUs, or someone would build and sell such servers for a fraction of the price of Oracle/IBM/HP - because big SMP servers cost a lot.

So, can anyone show me a Linux server with as many as 16 or 32 CPUs? No, there are none for sale, and there never have been. Only old, mature enterprise Unix like IBM AIX, Solaris and HP-UX can scale to 32 or 64 CPUs.

WD to crash down five terabyte desktop job, mutterings suggest

Kebabbert

Oligopoly

Now there are only three big hard disk manufacturers left. Shortly after the fourth company was bought, prices stopped falling (they even increased 300% after the flooding) and development ground to a halt.

This is clearly an oligopoly. Compare how fast prices decreased, and the pace of development, before the acquisition and after it. Graph it. There is a huge difference, which suggests the big three vendors have got together and split up the market. Is such behaviour even legal?

Brr, feeling cold? Galaxy is home to plenty of WARMER Earth twins

Kebabbert

First discovery of planets in other solar systems

The scientist who discovered the first planet in another solar system should get the Nobel Prize in physics, I think. What do you think? This is a great discovery. Throughout history mankind has wondered whether there are other planets, and now we know.

The next question to answer is "is there life on other planets?"

Scientists build largest ever computerized brain

Kebabbert

Re: Java?

Do you mean that Java is slow? Maybe you should read about adaptive optimizing compilers? In theory they can be faster than an ordinary, say, C/C++ compiler. The thing is, every time the JVM runs the program, it can gradually optimize it. If you use gcc, you cannot rely on vector instructions, because not all x86 CPUs have them, so you ship a lowest-common-denominator binary. The JVM, on the other hand, can use vector instructions if your CPU has them, or otherwise optimize your code for your exact CPU. Gcc cannot do that, as it only optimizes once, at compile time. It is not hard to see why the JVM can be faster than C/C++.
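
To make that concrete, here is a minimal sketch of my own (nothing to do with NASDAQ or any particular benchmark): a hot loop of this shape is the kind of code HotSpot profiles at run time and may recompile, possibly with vector instructions, for the exact CPU it happens to be running on, whereas a binary compiled once for generic x86 stays generic.

```java
public class JitWarmupSketch {
    // Sum an array. After enough calls HotSpot treats this as a hot method,
    // recompiles it with the optimizing JIT and may auto-vectorize the loop
    // using whatever SIMD the local CPU supports.
    static long sum(int[] data) {
        long total = 0;
        for (int value : data) {
            total += value;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i % 100;
        }
        long total = 0;
        // Repeated calls are what give the JIT its profile and its reason to optimize.
        for (int run = 0; run < 1_000; run++) {
            total += sum(data);
        }
        System.out.println("total = " + total);
    }
}
```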

NASDAQ's exchange system, called INET, is developed in Java, and it is among the fastest in the world, with sub-100ns latency and extreme throughput. If Java suffices for one of the world's largest and fastest stock exchanges, it suffices for most needs.

But we must remember that Java is at its best on the server side, not the client side. Sure, you can use it for games and such (Minecraft), but Java is built mainly for servers.

Ten four-bay NAS boxes

Kebabbert

Re: @Matt

I don't understand your excitement over me confirming that ZFS does not cluster. Everybody knows it: Sun explained that ZFS does not cluster, Oracle confirms it, and everybody says so, including me. You know that I always try to back up my claims with credible links to research papers, benchmarks, etc., and there are no links saying ZFS clusters - because it does not. Therefore I cannot claim that it does.

Are you trying to imply that I cannot admit that ZFS is not perfect, that it has flaws? Why? I have never had any problem looking at benchmarks superior to Sun/Oracle's and confirming, for instance, that POWER7 is the fastest CPU today on some benches. I have written repeatedly that POWER7 is a very good CPU, one of the best. You know I have said so, several times. I have acknowledged superior IBM benchmarks without any problem.

Of course ZFS has its flaws; it is not perfect, nor 100% bulletproof. It has bugs - all complex software has bugs - and you can still corrupt data with ZFS in some weird circumstances. But the thing is, ZFS is built for safety and data integrity; everything else is secondary. ZFS does checksum calculations on everything, which drags down performance, meaning performance is secondary to data integrity. Linux filesystems tend to sacrifice safety for performance. As ext4 creator Ted Ts'o explained, Linux hackers sacrifice safety for performance:

http://phoronix.com/forums/showthread.php?36507-Large-HDD-SSD-Linux-2.6.38-File-System-Comparison&p=181904#post181904

"In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his user's data. (It's a sad fact that sometimes the desire to win benchmark competition will cause developers to cheat, sometimes at the expense of their users.)...We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often (!!!!!) --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug "

I rely on research, official benchmarks and other credible links when I say something. Scholars and researchers do the same. You, OTOH, do not. I have shown you several research papers - and you reject them all. To me, an academic, that is a very strange mindset. How can you reject all the research on the subject? If you do, you might as well rely on religion and other non-verifiable, arbitrary stuff such as healing, homeopathy, etc. That is a truly weird charlatan mindset: "No, I believe that data corruption does not occur in big data, I choose to believe so, and I reject all research on the matter." Come on, are you serious? Do you really reject research and rely on religion instead? I am really curious. O_o

So yes, ZFS does not cluster. If you google a bit, you will find old ZFS posts where I explain that one of the drawbacks of ZFS is that it doesn't cluster. It is no secret. I have never seen you admit that Sun/Oracle has some superior tech, or that HP tech has flaws. At my last job, people said that HP OpenVMS was superior to Solaris, and some Unix sysadmins said that HP-UX was the most stable Unix, more stable than Solaris. I have no problem citing others when HP/IBM/etc. are better than Sun/Oracle. Have you ever admitted that Sun/Oracle did something better than HP? No? So why are you trying to make it look like I cannot admit that ZFS has its flaws? Really strange...

.

.

"...If I stick FreeNAS on an old desktop and hawk it on eBay am I "competing with EMC"?..." No, I dont understand this. What are you trying to say? That Nexenta is on par with FreeNAS DIY stuff? In that case, it is understandable that you believe so. But if you study the matter a bit, Nexenta beats EMC and NetApp in many cases, and Nexenta has grown triple digit since its start. It is the fastest growing startup. Ever.

http://www.theregister.co.uk/2011/09/20/nexenta_vmworld_2011/

http://www.theregister.co.uk/2012/06/05/nexenta_exabytes/

http://www.theregister.co.uk/2011/03/04/nexenta_fastest_growing_storage_start/

Thus, a FreeNAS PC cannot compete with EMC, but Nexenta can. And does. Just read the articles - or will you reject the facts again?

.

.

"...Both hp and IBM are a good case in point. Both pay license fees to Symantec to use their proprietary LVM for their filesystems. If ZFS was so goshdarnwonderful as you say, and "free" to boot, surely hp or IBM would be falling over themselves to use ZFS? They aren't. ..."

Well, DTrace is another Solaris tech that is also good. IBM has not licensed DTrace, nor has HP. What does that prove? That DTrace sucks? No. Thus your conclusion - "if HP and IBM do not license ZFS, it must mean that ZFS is not good" - is wrong, because HP and IBM have not licensed DTrace either.

IBM AIX has cloned DTrace and calls it Probevue

Linux has cloned DTrace and calls it Systemtap

FreeBSD has ported DTrace

Mac OS X has ported DTrace

QNX has ported DTrace

VMware has cloned DTrace and calls it vProbes (gives credit to DTrace)

NetApp has talked about porting DTrace on several blogs

Look at this list. Neither HP nor IBM has licensed DTrace; does that mean DTrace sucks? No - that would be the wrong conclusion. DTrace is the best tool for instrumenting a system, and everybody wants it. The same goes for ZFS.

.

.

"...There is a reason - ZFS is not as good as you think and there are other options, especially on Linux, that are far superior..." Fine, care to tell us more about those options that are far superior to ZFS? What would that be? BTRFS, that does not even allow raid-6 yet? Or was it raid-5? Have you read the mail lists on BTRFS? Horrible stories of data corruption all the time. Some Linux hackers even called it "broken by design". Havent you read this link? Want to see? Just ask me, and I will post it.

So, care to tell us about the many Linux options superior to ZFS? A storage expert explains that Linux does not scale I/O-wise, and that you need to use real Unix: "My advice is that Linux file systems are probably okay in the tens of terabytes, but don't try to do hundreds of terabytes or more."

http://www.enterprisestorageforum.com/technology/features/article.php/3745996/Linux-File-Systems-You-Get-What-You-Pay-For.htm

http://www.enterprisestorageforum.com/technology/features/article.php/3749926/Linux-File-Systems-Ready-for-the-Future.htm

.

.

"...There is a demonstratable case for ECC RAM. There is not for ZFS, despite what you claim...."

Fine, but have you ever noticed ECC firing? Have you ever seen it happen? No? Have you ever seen SILENT corruption? Hint: it is not detectable. Have you seen it?

Have you read the experts on big data? I posted several links from NetApp, Amazon, CERN, researchers, etc. Do you reject all those links confirming that data corruption is a big problem as you scale up? Of course, when you toy with your 12TB hardware RAID setups you will never notice it, especially as hw-raid is not designed to catch data corruption, and SMART does not help either. Just read the research papers. Or do you reject Amazon, CERN, NetApp and all the researchers? What is it you know that they don't? Why don't you tell NetApp that their big study of 1.5 million hard disks did not really see any data corruption - they just imagined it?

http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf

"A real life study of 1.5 million HDDs in the NetApp database found that on average 1 in 90 SATA drives will have silent corruption which is not caught by hardware RAID verification process; for a RAID-5 system that works out to one undetected error for every 67 TB of data read"

Are you serious when you reject all this evidence from NetApp, CERN and Amazon, or are you just trolling?

Kebabbert

Re: @Matt

Matt,

No, ZFS can't cluster. This is actually a claim of yours that happens to be correct, for once. Not clustering is a disadvantage, and if you need clustering, ZFS cannot help you. But you can put a distributed filesystem on top of ZFS, such as Lustre or OpenAFS.

.

.

"...More evasion. I pointed out that not one vendor has dropped expensive Veritas for "free" ZFS and all you do is go off on a tangent. Just admit it and then go stick your head back up McNealy's rectum..."

Well, I admit I don't know anything about your claim. But you are sure about it, I suppose, otherwise you would not be rude. Or maybe you would be rude even without knowing whether your claim is true?

But there are other examples of companies and organizations switching to ZFS. For instance, CERN. Another heavy user of large data is IBM. I know that IBM's new supercomputer Sequoia will use Lustre on top of ZFS instead of ext3, because of ext3's shortcomings:

http://www.youtube.com/watch?v=c5ASf53v4lI (2min30sec)

http://zfsonlinux.org/docs/LUG11_ZFS_on_Linux_for_Lustre.pdf

At 2:50 he says that fsck only checks metadata, never the actual data, but ZFS checks both. And he says that "everything is built around data integrity in ZFS".

If you google a bit, there are many reports of companies migrating from Veritas to ZFS. Here is one company that migrated to ZFS without any problems:

http://my-online-log.com/tech/archives/361

.

.

"...Oooh, [Nexenta] a tier 3 storage maker! Impressive - not!..."

Why is this not impressive? Nexenta competes with NetApp and EMC with similar servers that are faster but cheaper. Why do you consider NetApp and EMC "not impressive"?

.

.

"...More evasion. I asked you for one server vendor that has dropped Veritas for ZFS and the answer is NONE..."

What is your point? ZFS is proprietary and Oracle owns it. Do you mean that IBM or HP or some other vendor must switch from Veritas to ZFS to make you happy? What are you trying to say? I don't know of any such vendor, but I have not checked. Have you?

.

.

"...You failed AGAIN to answer the point and pretend that naming cheapo, tier 3 storage players is an answer. It's not. Usual fail. Maybe before you do your next (pointless) degree you should do a GCSE in basic English..."

I agree that my English could be better, but as I tried to explain to you, English is not my first language. BTW, how many languages do you speak, and at which level?

Speaking of evading questions, can you answer mine? Have you ever noticed random bit flips in RAM triggering the ECC error-correction mechanism? No? So, just because you have never seen it (because you have not checked for it), ECC RAM is not necessary? I mean, users of big data such as the Amazon cloud say that there are random bit flips all the time - in RAM, on disks, everywhere. But you have never seen any, I understand. And I understand you don't trust me when I say that my old VHS cassettes deteriorate because the data begins to rot after a few years. This also happens to disks, of course.

So, I have answered your question about which vendors have adopted Oracle's proprietary tech: I haven't checked. Probably they don't want to get sued by Oracle.

Can you answer my question? Do you understand the need for ECC RAM in servers?

Kebabbert

Re: @Matt

"....But here is the most telling point about ZFS - no vendor wants it, even for free. ..."

Well, there are many who want ZFS. Oracle sells ZFS storage servers; typically they are much faster for a fraction of the price of a NetApp server. Here are some benchmarks where ZFS crushes NetApp:

http://www.theregister.co.uk/2012/04/20/oracle_zfs7420_specsfs2008/

There are many more here:

http://www.unitask.com/oracledaily/2012/04/19/sun-zfs-storage-7420-appliance-delivers-2-node-world-record-specsfs2008-nfs-benchmark-2/

Nexenta sells ZFS servers and is growing fast - the fastest-growing storage startup ever - rivalling NetApp and EMC.

http://www.theregister.co.uk/2011/03/04/nexenta_fastest_growing_storage_start/

Dell is selling ZFS servers:

http://www.compellent.com/Products/Hardware/zNAS.aspx

There are more hardware vendors selling ZFS; I don't have time to google them all for you now, I have work to do. FreeBSD has ZFS. Linux has ZFS (zfsonlinux). Mac OS X has ZFS (Z-410).

It seems your wild claims have no bearing on reality. Again. It would be nice if, just for once, you could provide some links that support your claims, but you never do. Why? Are you constantly making things up? How do you expect to be taken seriously when you talk about things you don't know and never support anything you say with credible links? Do you exhibit such behaviour at work too? O_o

Kebabbert

Re: @Matt

"..Really? And ZFS has no limitations and never crashes and corrupts data? Not according to the online forums!.."

Of course ZFS is not bulletproof; no storage system is 100% safe. The difference is that ZFS is built from the ground up to combat data corruption, whereas the other solutions do not target data corruption at all - they have never thought about it. ZFS is not bug-free; no complex software is. But ZFS is safer than other systems. CERN did a study and concluded this, and there is other research saying the same. It is no hoax.

Earlier, disks were small and slow and you very rarely saw data corruption. Disks have become larger and faster, but not _safer_; they still exhibit the same error rates. Earlier you rarely read 10^16 bits; today it is easy with large, fast RAIDs, so today you start to see bit rot. The reason you have never seen bit rot is that you have only dabbled with small data. Go up to petabytes and you will see bit rot all the time. There is a reason CERN did research on this: they store large amounts of data, many petabytes, and bit rot is a real problem for them.

Have you seen the spec sheets on a modern SAS / Fibre Channel Enterprise disk?

https://origin-www.seagate.com/files/docs/pdf/datasheet/disc/cheetah-15k.7-ds1677.3-1007us.pdf

On page 2, it says:

"Nonrecoverable Read Errors per Bits Read: 1 sector per 10E16"

What does this mean? It means that some errors are uncorrectable. In fact ALL serious disk vendors say the same thing; they all quote a figure for unrecoverable errors. The disk cannot repair every error. Just as the NetApp research says: one undetected error for every 67TB of data read. Read the paper I linked to.
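
To put those numbers in perspective, here is my own back-of-the-envelope arithmetic (an illustration, not a figure from the spec sheet or the NetApp paper):

```java
public class UnrecoverableErrorRate {
    public static void main(String[] args) {
        // Enterprise SAS figure quoted above: one nonrecoverable sector per 10^16 bits read.
        double bitsPerError = 1e16;
        double terabytesPerError = bitsPerError / 8 / 1e12;
        System.out.printf("enterprise disk: ~%.0f TB read per expected unrecoverable sector%n",
                terabytesPerError);

        // Consumer SATA disks are commonly rated at one per 10^14 bits, i.e.
        // roughly one expected error per ~12.5 TB read - which is why rebuilding
        // a large RAID-5 built from such disks is risky.
        System.out.printf("consumer disk:   ~%.1f TB read per expected unrecoverable sector%n",
                1e14 / 8 / 1e12);
    }
}
```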

.

"..Your claims about bit rot - which I have NEVER seen in forty years of computing.."

Let me ask you, Matt: how often have you seen ECC errors in RAM? Never? So your conclusion is that ECC RAM is not needed? Well, that conclusion is wrong. Do you agree? This is question A. What is your answer to question A: have you ever encountered ECC RAM errors in your servers?

.

"... [you] are the salesman trying to sell a streetcleaner to the average housewife on the mythical chance she might need to clean a street some day. Pointless..."

Well, it is not I who did the research. Large, credible institutions and researchers did it - CERN, NetApp, Amazon, etc. I just repeat what they say. Amazon explains why you have never seen these problems: because you dabble with small data. When you scale up, you see these problems all the time. The more data, the more problems.

Amazon explains this to you:

http://perspectives.mvdirona.com/2012/02/26/ObservationsOnErrorsCorrectionsTrustOfDependentSystems.aspx

"...AT SCALE, error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted.

...Over the years, each time I have had an opportunity to see the impact of adding a new layer of error detection, the result has been the same. It fires fast and it fires frequently. In each of these cases, I predicted we would find issues at scale. But, even starting from that perspective, each time I was amazed at the frequency the error correction code fired...

...

Another example. In this case, a fleet of tens of thousands of servers was instrumented to monitor how frequently the DRAM ECC was correcting. Over the course of several months, the result was somewhere between amazing and frightening. ECC is firing constantly. ...The immediate lesson is you absolutely do need ECC in server application and it is just about crazy to even contemplate running valuable applications without it.

...

This incident reminds us of the importance of never trusting anything from any component in a multi-component system. Checksum every data block and have well-designed, and well-tested failure modes for even unlikely events. Rather than have complex recovery logic for the near infinite number of faults possible, have simple, brute-force recovery paths that you can use broadly and test frequently. Remember that all hardware, all firmware, and all software have faults and introduce errors. Don’t trust anyone or anything. Have test systems that bit flips and corrupts and ensure the production system can operate through these faults – at scale, rare events are amazingly common."

.

OK? You have small data, but when you go to large data you will see these kinds of problems all the time. You will see that ECC is absolutely necessary, as is ZFS. That is the reason CERN is switching to ZFS now.

CERN did a study on hardware RAID and saw lots of silent corruption. CERN repeatedly wrote the same bit pattern to 3,000 nodes, and after 5 weeks they saw that the bit pattern differed in some cases:

http://www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191

"...Disk errors. [CERN] wrote a special 2 GB file to more than 3,000 nodes every 2 hours and read it back checking for errors after 5 weeks. They found 500 errors on 100 nodes."

Matt, how about you catch up with the latest research instead of relying on your own experiences? I mean, Windows 7 has never crashed for me; does this mean that Windows is fit for large stock exchanges? No. Your experience cannot be extrapolated to large scale. Just read the experts and researchers, instead of trying to make up your own reality.

Kebabbert

Re: not enough bays

"... ZFS can't cluster and offers SFA resilience as it can't even work properly with hardware RAID..."

Matt, Matt. As I have tried to explain to you, hardware RAID is not safe. I have shown you links on this, and the NetApp research says so too - read my post here to see what NetApp says about hardware RAID. There is much research on this. Why don't you go and read what computer science researchers say on the matter, instead of just taking my word for it?

OTOH, researchers report that ZFS protected against all the errors they tried to provoke, and concluded that ZFS is safe. When they injected artificial errors into NTFS, ext, XFS, JFS, etc., they all failed to detect them, but ZFS succeeded. There are research papers on this too; they are here (references 13-18):

https://en.wikipedia.org/wiki/ZFS#Data_Integrity

.

And you talk about the cloud. Well, cloud storage typically uses hw-raid, which, as we have seen, is unsafe. And the internet connection is not safe either; you need to do an MD5 checksum to verify that your copy was transferred correctly. You need to do checksum calculations all the time - which is just what ZFS does, but hw-raid does not. Therefore you should trust a home server with ECC and ZFS more than a cloud. Here is what the cloud people say:

http://perspectives.mvdirona.com/2012/02/26/ObservationsOnErrorsCorrectionsTrustOfDependentSystems.aspx

"...Every couple of weeks I get questions along the lines of “should I checksum application files, given that the disk already has error correction?” or “given that TCP/IP has error correction on every communications packet, why do I need to have application level network error detection?” Another frequent question is “non-ECC mother boards are much cheaper -- do we really need ECC on memory?” The answer is always yes. At scale, error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted...."

.

Matt, read and learn?

Kebabbert

Re: @Matt

Anonymous Coward:

>"Why use Windows? It is not safe and susceptible to data corruption"

>What a load of FUD.... dont forget to wrap some foil around your hard drives

You are right to request links; otherwise it would be pure FUD: making lots of strange negative claims without ever backing them up with credible links. Here you have a PhD thesis where the research concludes that NTFS is not safe with respect to data corruption. I suggest you catch up with the latest research if you want to learn more:

http://pages.cs.wisc.edu/~vijayan/vijayan-thesis.pdf

http://www.zdnet.com/blog/storage/how-microsoft-puts-your-data-at-risk/169

Dr. Prabhakaran found that ALL the file systems (NTFS, ext, JFS, XFS, ReiserFS, etc) shared:

"... ad hoc failure handling and a great deal of illogical inconsistency in failure policy ... such inconsistency leads to substantially different detection and recovery strategies under similar fault scenarios, resulting in unpredictable and often undesirable fault-handling strategies. ... We observe little tolerance to transient failures; .... none of the file systems can recover from partial disk failures, due to a lack of in-disk redundancy."

.

.

.

Matt Bryant:

>"...I can get built-in RAID on most PC mobos, it is reliable and cost-effective, does not impact on CPU >performance, why bother with the hassle of ZFS which steals cycles from the CPU?..."

First of all, hardware RAID is not safe with respect to data corruption. Here is some information if you want to learn about the limitations of hardware RAID:

https://en.wikipedia.org/wiki/RAID#Problems_with_RAID

Second, it is true that ZFS uses CPU cycles - a few percent of one core. There is a reason: ZFS does checksum calculations on every block. If you have ever done an MD5 checksum of a file to check data integrity, you know it uses CPU cycles. ZFS does the same (a SHA256 checksum, actually). Hardware RAID does not do any checksum calculations for data integrity; instead, hw-raid does PARITY calculations, which are not the same thing. Parity is just simple XOR arithmetic, and hw-raid is not designed to catch bit rot.
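
To illustrate the difference, here is a minimal sketch of my own in Java (not ZFS code - ZFS does this inside the filesystem for every block): a checksum stored with the block is verified on every read, whereas RAID parity is normally only consulted to rebuild a failed disk, not to validate what a healthy disk returns.

```java
import java.security.MessageDigest;
import java.util.Arrays;

public class BlockChecksumSketch {
    static byte[] sha256(byte[] block) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(block);
    }

    public static void main(String[] args) throws Exception {
        byte[] block = new byte[4096];
        Arrays.fill(block, (byte) 0x5a);
        byte[] storedChecksum = sha256(block);   // written alongside the block

        block[1234] ^= 0x01;                     // simulate one silently flipped bit

        // Verify on read: a mismatch means silent corruption, and a system with
        // redundancy (mirror/raidz) can then fetch a good copy and rewrite the block.
        boolean ok = MessageDigest.isEqual(storedChecksum, sha256(block));
        System.out.println(ok ? "block verified" : "silent corruption detected");
    }
}
```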

Have you ever experienced bit rot on old 5.25" or 3.5" floppy disks? Old disks don't work anymore. A similar problem applies to RAM: over time, a powered-on server accumulates more and more random bit flips in RAM, to the point where it crashes. That is the reason ECC RAM is needed. Do you dispute the need for ECC RAM? Do you think servers don't need ECC?

.

>"...I run fsck via cron which is all scrub is, and so far I've not found any hobbyhorse sh*t on my drives..."

Matt, Matt. ZFS scrub is not an fsck. First of all, fsck only checks the metadata, such as the journal; fsck never checks the actual data, which means the data might still be corrupted after a successful fsck. One guy ran fsck on an XFS RAID in about one minute - think about it for a while and you will see it is fishy. How can you check 6TB worth of data in one minute? That would mean fsck read the data at 100 GB/sec, which is not possible with just a few SATA disks. The only conclusion is that fsck does not check everything; it cheats. Second, you need to take the RAID offline and wait while you run fsck.

ZFS scrub checks everything, data and metadata, and that takes hours. ZFS scrub is also designed to be used on a live, mounted, active RAID; there is no need to take it offline.

The thing is, to be really sure you don't have silent corruption, you need to do a checksum calculation every time you read or write a block - in effect, an MD5-style checksum. Otherwise you cannot know. For instance, here is a research paper by NetApp, whom you presumably trust:

https://en.wikipedia.org/wiki/ZFS#cite_note-21

"A real life study of 1.5 million HDDs in the NetApp database found that on average 1 in 90 SATA drives will have silent corruption which is not caught by hardware RAID verification process; for a RAID-5 system that works out to one undetected error for every 67 TB of data read"

If you look at the spec sheet of a new enterprise SAS disk, it says one unrecoverable error per 10^16 bits read. Thus, SAS disks get uncorrectable errors. Fibre Channel disks are even more high-end, and they also get uncorrectable errors:

http://origin-www.seagate.com/www/en-us/staticfiles/support/disc/manuals/enterprise/cheetah/10K.7/FC/100260916d.pdf

Matt, again you have very strong opinions about things you have no clue about. It would be good for you to catch up on the research; otherwise you just seem lost when people discuss things over your head. And as usual, you never back up any of your strong negative claims, even though we have asked you to. Try to deal with all that bitterness inside you? It is difficult to explain things to you, as you discard even research papers and continue to regurgitate things you have no clue about.

.

.

BTW, ZFS comes free in a gratis, easy distro called FreeNAS. Just set it up and forget it; it is built for home servers. Or maybe it is called NAS4Free now...?

Kebabbert

Re: I don't get NAS boxes...

"...Why not build a Win 7 / Win 8 PC with shared libraries,..."

Why use Windows? It is not safe and is susceptible to data corruption. Every hard disk gets lots of read/write errors during normal usage, which are corrected on the fly by the error-correcting codes on the disk. However, some of these errors are not correctable by the codes. Even worse, some of them are not even DETECTABLE by the codes. The codes are not foolproof, you know: they might be able to correct 1-bit errors and detect 2-bit errors, but sometimes 3-bit errors happen. Very rarely, but they happen, and those are not correctable. Sometimes 4-bit and even 5-bit errors happen, and such errors might not even be detectable. This is called bit rot - google it. Old VHS tapes don't work today; the data has begun to rot.

It is exactly the same problem in RAM. Why do you think servers need ECC RAM? ECC RAM can correct 1-bit errors and detect 2-bit errors. Microsoft concluded (after collecting information about Windows crashes) that 30% of all Windows crashes were caused by random bit flips in RAM, something that ECC RAM would have protected against. Cosmic radiation is a big source of random bit flips, along with flaky power, solar flares, etc. There is much research on data corruption.
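
To illustrate the correct-one/detect-two idea, here is a toy Hamming(7,4) sketch of my own (real ECC DIMMs use wider SECDED codes over 64-bit words, but the principle is the same): the parity bits pinpoint a single flipped bit so it can be corrected.

```java
public class HammingSketch {
    // Encode 4 data bits into a 7-bit codeword (positions 1..7, index 0 unused).
    static int[] encode(int[] d) {
        int[] c = new int[8];
        c[3] = d[0]; c[5] = d[1]; c[6] = d[2]; c[7] = d[3];
        c[1] = c[3] ^ c[5] ^ c[7];   // parity over positions 1,3,5,7
        c[2] = c[3] ^ c[6] ^ c[7];   // parity over positions 2,3,6,7
        c[4] = c[5] ^ c[6] ^ c[7];   // parity over positions 4,5,6,7
        return c;
    }

    // Recompute the parities; a non-zero syndrome is the position of the flipped bit.
    static int syndrome(int[] c) {
        int s1 = c[1] ^ c[3] ^ c[5] ^ c[7];
        int s2 = c[2] ^ c[3] ^ c[6] ^ c[7];
        int s4 = c[4] ^ c[5] ^ c[6] ^ c[7];
        return 4 * s4 + 2 * s2 + s1;
    }

    public static void main(String[] args) {
        int[] code = encode(new int[]{1, 0, 1, 1});
        code[5] ^= 1;                                     // a cosmic-ray bit flip
        int pos = syndrome(code);
        System.out.println("error at position " + pos);   // prints 5
        if (pos != 0) code[pos] ^= 1;                      // correct the single-bit error
        System.out.println("syndrome after correction: " + syndrome(code));  // prints 0
    }
}
```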

You need a solution that calculates a checksum of every block read from disk - in effect doing an MD5 checksum (or SHA1, or any other checksum) on every read. That is the only way to protect against bit rot on disks. Incidentally, ZFS is designed to protect against bit rot and does exactly that: for every block you read, it verifies a SHA256 checksum, and if the checksum is not correct, it automatically repairs the block from the RAID redundancy. And ZFS is free, as in gratis. If you try ZFS, you should not put it on top of hardware RAID, because that gets in the way of its checksumming and self-healing; sell the RAID card and use free ZFS instead. There is much research on data corruption on disks. And of course, if you are serious about your data, you should use ECC RAM too.

NTFS, ext, HFS, XFS, JFS, etc. are not safe and might corrupt your data (see the research papers here). And ZFS is safe, according to researchers:

http://en.wikipedia.org/wiki/ZFS#Data_Integrity

Author of '80s classic The Hobbit didn't know game was a hit

Kebabbert

Play online:

Or you could try here:

http://www.c64s.com/game/418/hobbit,_the/

Does anyone know how to solve the game? Any walkthroughs available?

Oracle nudges Sparc T5s back out to 2013

Kebabbert

Re: Take no bets

"...Sun sailed that course for many years with this idea of "sum(more cores) > sum(fast cores)..."

Yes, as opposed to IBM, which mocked that course, saying "1-2 strong cores are better than many weaker cores, because databases run best on strong cores". Where are those 1-2-core IBM CPUs running at 7-8GHz today? Maybe IBM should mock the POWER7+, with its many cores running at an even lower clock speed.

When IBM's competitors do something, it is the worst idea in history. When IBM does the same thing years later, it is "innovative".

Kebabbert

Re: Take no bets

Intel x86? And POWER? Well, neither of them can show a 100% performance increase every other year. Larry Ellison is a hard CEO: "just double it", no matter how technically difficult it is to increase performance that much.

For instance, the newest Haswell has graphics and power consumption as its main targets; CPU performance is not the priority. Haswell will only be 10-20% faster than Ivy Bridge, which is not much compared to 100% every other year. And the POWER7+ is only 20% faster than the POWER7, despite all the extra resources on the chip.

.

.

"It wouldn't be surprising to see KSplice hot splicing come to Solaris, too."

Actually, KSplice is coming to Solaris. But Solaris had a similar technique long ago, way back in Solaris 8, where you could hot-patch the kernel. Regarding DTrace in Linux: the inventor of DTrace, Bryan Cantrill, says the Linux port is not good. Yet. Totally useless as of today.

Unisys pumps up ClearPath mainframes with Xeon E5s

Kebabbert

Emulation

"...The Libra 4280, which has perpetual pricing, spans from 50 to 2,400 MIPS, while the Libra 4290 offers from 30 to 1,680 MIPS with a ceiling of 2,400 MIPS peak if you have a sudden spike...."

Emulating an IBM mainframe gives 3,200 MIPS on an 8-socket Nehalem-EX server:

http://en.wikipedia.org/wiki/Hercules_%28emulator%29#Performance

Oracle fudges touts Sparc SuperCluster prowess

Kebabbert

Re: It's hilarious

@Jesper Frimann,

While I respect your knowledge, I do not really appreciate your very strong bias. You complain about something, and in the next post you do exactly the same thing - how nice is that? You answer "Phil 4" like this:

"...So you now refute to name calling.. That is nice and mature, but kind of shows your true colours..."

Let me ask you, how many times have you done the same to me? Some would call it hypocrisy: you complain about "Phil 4" calling you "Jester", but when you call me "kebab brain" or any of the other insults you have spewed at me, everything is fair and square, right?

I remember a discussion we had. I showed a benchmark where the SPARC Niagara T2 had lower latency than the POWER6, and you said something like "I reject your benchmark because POWER6 has better throughput". Later I showed a benchmark where the SPARC Niagara T2 had better throughput, and guess what? You said something like "I reject your benchmark because POWER6 has lower latency"!!! No matter what benchmark I show, I can never get you to accept it. Damned if you do, damned if you don't.

Again and again you take the liberty of complaining loudly when others do something, but when you do exactly the same thing, everything is fine. How mature, right? This is a side of you I don't like.

I can show you a benchmark, and you reject it because of A. Then I show another benchmark, and you reject it because of B. It continues for eternity: C, D, etc. Wriggling and wriggling. Worse than a lawyer.

What's worse, one time you said something like "but Kebabbert, what has happened? You show me a benchmark! You are backing up your claims with a benchmark???" What ugly behaviour. People have complained about me posting too many links to benchmarks, white papers and research papers - just look above, I posted several benchmarks. Knowing that full well, you pretend to be surprised: "why, are you posting a benchmark??? I have never seen you do that!!!" Really, really ugly.

I, on the other hand, often accept your benchmarks, and had no problem saying that POWER7 was the fastest CPU a while back - because it was true, just look at the benchmarks! But getting you to say something similar about SPARC would never work, no matter which benchmark I showed you. "No, that benchmark was in blue ink, it must be in black ink."

Kebabbert

Re: IBM do it too...

@Jesper Frimann

"...I was replying to Kebbart's link:

https://blogs.oracle.com/si/entry/zfssa_smashes_ibm_xiv_while..."

It was never clear to me that Oracle compared against a POWER5 system. There is no merit in being faster than an old-generation system. I will never post that benchmark again, because it is unfair.

Thanx for informing me.

Kebabbert

Re: XIV ?

"...System z isn't very bad in terms of CPU. True, it is not a supercomputer or a Power 795..."

Well, the z12 is a very slow CPU, no doubt about it. Where are all the IBM benchmarks? Nowhere. Why? For a reason. Have you ever wondered why IBM has never released benchmarks of mainframe CPUs? There are none. Zip. Zero. Zilch. Nada. Why? You tell me.

On the other hand, we have loads of POWER7 CPU benchmarks of all different kinds. Well, the POWER7 happens to be a really good CPU, so we understand why IBM wants to tell the world how fast it is. But the mainframe CPU? Well, it is "the world's fastest CPU", but with no evidence or proof of that - just some marketing material:

http://www-03.ibm.com/press/us/en/pressrelease/32414.wss

IBM could at least have offered ONE benchmark against the POWER7? But no. The mainframe CPUs all get obliterated by the POWER7.

Some would call extravagant claims without any proof FUD - which, coincidentally, happens to be IBM's signature move:

http://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt#Definition

"FUD was first defined with its specific current meaning by Gene Amdahl the same year, 1975, after he left IBM to found his own company, Amdahl Corp.: "FUD is the fear, uncertainty, and doubt that IBM sales people instill in the minds of potential customers who might be considering Amdahl products."

Kebabbert

Re: IBM do it too...

Another one. IBM XIV vs Oracle T4 ERP. "Without any optimizations Oracle was 2.5x faster".

https://blogs.oracle.com/si/entry/zfssa_smashes_ibm_xiv_while

Kebabbert

Re: IBM do it too...

Here is an Oracle ZFS benchmark vs NetApp:

http://www.theregister.co.uk/2012/04/20/oracle_zfs7420_specsfs2008/

Here are some more

https://blogs.oracle.com/si/entry/7420_spec_sfs_torches_netapp

Kebabbert

Re: XIV ?

Regarding CPU performance, the IBM System z CPUs are very bad. Any decent high-end x86 CPU is roughly twice as fast. So performance-wise, the new z12 is very bad.

The POWER7, on the other hand, is a very good CPU, and it is much, much faster than the IBM z12 or z196. So I would not call System z the high end. An IBM P795 packs much more CPU punch than a dinosaur z12 or z196.

Kebabbert

Re: Comparable systems

It is not obvious that more RAM helps a given benchmark. In this case, is the benchmark RAM-starved? If it is, then Oracle should not publish benchmarks where the RAM differs. But if the amount of RAM does not matter above a certain threshold, then it simply does not matter.

I remember a SAP benchmark where the Sun x86 machine had 256GB of slow RAM and the Linux x86 server had 128GB of faster RAM - and still the Solaris server was slightly faster. It turned out the Linux HP server used 128GB because the faster RAM sticks came in smaller sizes. So HP could have equipped its Linux server with 256GB of slower RAM or 128GB of faster RAM, and HP chose the faster sticks. In that SAP benchmark it did not matter whether you had 128GB or 256GB - because had it mattered, HP would have gone with 256GB.

Thus, it is not obvious that this benchmark would have been helped by more RAM.

.

.

But I agree that this benchmark from Oracle seems a bit fishy. It needs to be investigated further. For instance, is RAM crucial? Is it better to have 2TB of RAM, or does it not matter? If it does not matter, then the criticism is only IBM fanboyism.

On the other hand, when IBM does fishy benchmarks, the author Timothy Prickett Morgan does not write an article about it. Clearly IBM-biased.

Fujitsu, Oracle pair up on future 'Athena' Sparc64 chips

Kebabbert

Re: faster than anything

I think he meant "faster than any other CPU on the planet".

Now where are all those IBM supporters claiming that SPARC64 is dead? :o)

Oracle Linux honcho 'personally hurt' by Red Hat clone claims

Kebabbert

Re: Car analogy

"...DTrace is CDDL, just like ZFS, and it's already in the Oracle Unbreakable Linux kernel...."

Yes, DTrace is in Oracle Linux now, but have you read more about the port? It is utterly useless right now. Bryan Cantrill, the inventor of DTrace, has tested it on Linux and found it useless. Read his blog for more information on DTrace on Linux.

Power7+ chips debut in fat IBM midrange systems

Kebabbert

Re: Smoke and Mirrors anyone?

5) Everybody knows there are scalability difficulties. We'll see later when IBM releases more info.

6) Those benches will come in due time.

8) Yes, the author Timothy PM is very clearly IBM-biased, and that is the reason he criticizes Oracle while IBM gets away with anything. But everybody knows this.

.

.

I think it was interesting to see that the POWER7+ is only 20% faster than the POWER7. I wonder how much faster the POWER8 will be. If it is only 20% faster too, then Intel might have caught up with POWER8 by the time it is released, in terms of performance.

Oracle hurls Sparc T5 gladiators into big-iron arena

Kebabbert

Re: @Bruyant - POWER & SPARC Comparison

"... IBM is number one in global UNIX server sales, please explain how they need to "catch up"?..."

He is not talking about Unix server sales; everybody agrees IBM is no. 1 there. Just look at the sales figures, they speak for themselves. If you deny that IBM is no. 1, then you are a FUDer.

He is talking about technical innovation, where IBM lags behind. Just because you are no. 1 does not mean you have good tech or are innovative. Just look at Windows: hardly anyone here would say Windows is superior to Unix, even though Windows' market share is larger than all the Unixes combined.

There is no innovation from IBM, and hasn't been for a long time. IBM copies Solaris and SPARC, just like everyone else. Of course Solaris has copied virtualization tech from IBM, just as IBM has copied Solaris Zones and named them AIX WPARs. But there is no OS innovation from IBM; everybody is copying from Solaris today. I never hear Linux, FreeBSD or Mac people talking about AIX features. Where is the AIX list?

AIX WPAR is a Zones clone

AIX ProbeVue is a DTrace clone.

Linux BTRFS is a ZFS clone.

Linux Systemtap is a DTrace clone

VMware vProbes is a DTrace clone

FreeBSD has ported ZFS

FreeBSD has ported DTrace

Mac OS X has ported DTrace

Mac OS X has ported ZFS

QNX has ported DTrace

etc etc

Everybody drools over Solaris tech. I don't see Linux wanting AIX tech badly. Nor FreeBSD. Nor Mac OS X.

Regarding CPUs, IBM has those arcane and slow mainframe CPUs, which IBM calls the "world's fastest CPU":

http://www.engadget.com/2010/09/06/ibm-claims-worlds-fastest-processor-with-5-2ghz-z196/

Come on, how can an IBM mainframe CPU running at 5.2GHz, with close to 300MB of CPU cache, be half as fast as a 2.4GHz x86 CPU with 10-20MB of cache? Where is the innovation in that?? I mean, hundreds of megabytes of CPU cache, and still it is slow! O_o IBM has spent its transistor budget miserably.

Also, IBM always mocked Niagara for having many lower-clocked cores, because "databases are best run on 1-2 strong cores". So where are the 7-8GHz single/dual-core POWER CPUs? They don't exist; POWER now has many lower-clocked cores, just like Niagara CMT. Sun realized that the future was not single/dual cores running at 10GHz (Intel was in the GHz race with Prescott(?)); the future is many cores. At last IBM has realized this too. Better late than never.

And Matt Bryant, again:

How can a "cache starved" Niagara cpu be close to 10x faster than IBM and HP cpus who have huge caches? Can you answer me this question, which you have ducked all the time? And still you continue to say the Niagara is cache starved, even though it is 10x faster than IBM cpus with huge caches. Illogical your claim seems?

Kebabbert

AIX dead

"...You know that article is from 2003, right?..."

Yes, so what? IBM talks about a "multi-decade time frame" before AIX is killed off. This is the first decade, and already we see signs of AIX decreasing in importance.

POWER CPUs are not really that fast anymore. POWER7+ is a hodgepodge; nothing new or innovative there. The Intel Westmere-EX is only 14% slower than the POWER7 on some workloads, but is far cheaper:

http://www.anandtech.com/print/4285

Sandy Bridge is 10-15% faster than Westmere, and Ivy Bridge is faster than Sandy Bridge. Very soon we will see Haswell, which will be faster still. Intel x86 rivals POWER7 today; very soon x86 will be faster than POWER, because x86 is improving much faster than POWER is.

POWER6 cost 5-10x more than x86 and was several times faster.

POWER7 costs 3x more than x86 and is 14% faster on some workloads.

POWER8... will cost 1-2x more than x86 and be as fast as x86?

POWER7+ is late, and we have heard nothing about POWER8 yet. It is evident that POWER development is slowing down. POWER is not superior any longer. IBM only does high-margin business. Soon POWER will have to be as cheap as x86, and then IBM will kill POWER.

AIX runs on POWER. If POWER is dead, what will AIX run on? Coincidentally, IBM has said that AIX will be killed off. That will happen sometime around when x86 matches POWER at a cheaper price. It has taken some 10 years for x86 to catch up with POWER; in another 10 years, x86 will be superior.

POWER dead => AIX dies.

Better learn Linux, boys. AIX is not innovative and hasn't been for a long time.

Kebabbert

Re: RISC Chips

"...Must be from all that paid-for blogging [Kebabbert] used to do for Sun...."

How about you check things sometimes, instead of just writing whatever comes to mind? I used to have a blog, yes - it was about food and health. There was nothing IT-related on it. Why would Sun pay me to blog about food? Maybe you will find my old blog here if you look a bit. I googled "kebabbert blogg" and couldn't find anything today, but a year ago I could still find my health blog. Maybe if you look around a bit you will find it. Have fun reading my health and food writings. I did have a splendid Texas chili recipe that many people liked.

http://archive.org/web/web.php

Do you have more fantasies about me that you want to add?

.

.

"...where you scream and shriek rabidly that Niagara doesn't need more cache, and yet every generation since has had more cache added..."

I never said that. I suggest you read it again. I said something like: "The Niagara T2 is not dependent on a large cache to be 10x or so faster than the competition on some workloads." That is what I said. I never said that the T2 does not need a cache, nor that it wouldn't benefit from more cache. More cache is always good, but the T2 does not need it to beat IBM or HP.

The Niagara T2 had maybe 1-2MB of cache in total and still it was something like 10x faster than IBM and HP CPUs with far larger caches. How do you explain that a "cache-starved" CPU can be 10x faster than CPUs with huge caches? Something in your claims does not add up, if you think logically. I have asked you this many times, but you have never answered. Can you answer now? If I say "please"? How can a "cache-starved" CPU be 10x faster than the competition? Maybe 1-2MB of cache suffices for the T2 because of its radically novel design?

Kebabbert

Re: RISC Chips

Anonymous,

"...As nice as these new SPARCs are, when compared to IBM's new offering, they just don't seem to be in the same league..."

Can you elaborate on this? IBM has no offering that matches these T5 CPUs. In what way is POWER7 "better"? It takes longer to get to market? Is that better?

In fact, IBM is planning to kill AIX - did you know that? AIX will be deprecated, according to IBM executives. Didn't you know?

http://news.cnet.com/2100-1001-982512.html

"...The day is approaching when Linux will likely replace IBM's version of Unix, the IBM's top software executive said..."

.

"...How much better are these really than x86_64 chips in a real world situation, and is it worth the extra dosh?..."

Did you know that Oracle is projecting 2x the performance every other year? How much is Intel x86 projecting? 10% better performance every other year? How much is IBM projecting? Did you know that the previous-generation SPARC T4 is more than 2x faster than POWER7 on some workloads, and that the T4 holds several world records? Did you know?

Kebabbert

Re: Right problem, right application, right system, right CPU

"... but still look to have too little cache..."

If the CMT Niagara CPUs have too little cache, how do you explain their superior performance compared to IBM CPUs with huge caches? You do know that Niagara CPUs hold several world records? That the T4 is more than 2x faster than POWER7 on some workloads?

I prefer a Niagara CMT CPU with "too little cache" that utterly crushes the competition to a CPU with a huge cache that gets left behind.

Kebabbert

Re: RISC Chips

"...Any Sunshiners out there still trying to pretend that single-threaded performance and cache don't matter?..."

Sun/Oracle have always been very clear that the Niagara T1, T2 and T3 are best suited to heavily threaded workloads of many light threads. Sun/Oracle have never said they are general-purpose CPUs. The T[1-3] CPUs are much, much faster than ordinary CPUs on such workloads, and in that area they do reach (in the optimal case) >50GHz of aggregate work, just as claimed (see below*).

That is the reason there are two SPARC families: one for heavily threaded work (Niagara, CMT) and one for single-threaded work (SPARC64 from Fujitsu).

Thus, can you finally stop saying that Sun/Oracle claim the Niagara CMT CPUs are good for single-threaded work? They have never said that. I have explained this to you umpteen times. Why do you keep saying it? And why do you continue to claim the cache is too small, when the Niagara T2 CPUs are 10x faster than IBM's offerings on appropriate workloads? IBM CPUs have huge caches and are still crushed by Niagara. So how can Niagara be suffering from a small cache?

Can you present any evidence for your false claims? Any links? As usual: no. Why do you FUD so much?

.

(*)

For instance, in Siebel v8 one Sun T5440 matches six POWER6 servers (including a P570). Those POWER6 servers had something like 60-70GHz of aggregate clock speed in total, which one T5440 totalling 7GHz of aggregate clock matched. Hence, the Niagara CPUs effectively delivered far more than their nominal GHz, just as claimed. Just look at the benchmarks.

Or, for instance, one T2 CPU matched thirteen (13) IBM Cell CPUs running at 3.2GHz in string pattern matching. How do you explain this? One 1.6GHz CPU matching 40GHz of aggregate IBM Cell power?

Again we see that a single 1.6GHz Niagara matched >50GHz worth of IBM GHz. How is this possible with "too little cache", as you claim below? How can the Niagara be cache-starved when it utterly crushes IBM CPUs? It is 10x or more faster - not 10%, but 10x.

IBM's z12 mainframe engine makes each clock count

Kebabbert

Difficult

"...The top-end EC12 has z12 engines that are 25 per cent more powerful, at around 1,600 MIPS..."

With all those IBM engineers struggling to build a new high-performing mainframe CPU, it is a heavy burden to maintain backward compatibility going back to 1960 or so. Maybe that is the reason the new z12 mainframe CPU gives only 1,600 MIPS? The previous IBM mainframe CPU, the z196 from 2011, was dubbed the fastest CPU in the world by IBM. Thus the z12, which is faster, should cement the "world's fastest CPU" claim even more. Probably IBM will soon claim the z12 is the fastest CPU in the world.

http://www.engadget.com/2010/09/06/ibm-claims-worlds-fastest-processor-with-5-2ghz-z196/

An old Intel Nehalem-EX CPU delivers 400 mainframe MIPS - under software emulation using "TurboHercules". Software emulation is 5-10x slower than running native code, so if the Nehalem-EX could run IBM mainframe software natively, it would deliver 2,000-4,000 MIPS. That is faster than the "world's fastest CPU", the z12. Here is a source:

http://en.wikipedia.org/wiki/Hercules_emulator#Performance

Thus, if you had 20 of the old Nehalem-EX chips, they would deliver 40,000-80,000 MIPS, matching the biggest z12 mainframe - for a fraction of the price. Of course, the IBM mainframe has better uptime and better I/O, no doubt. But its CPU performance lags behind, even though it is "the world's fastest CPU".
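
Spelling out the arithmetic behind those figures as a small sketch (my own calculation; the 400 MIPS emulated figure and the 5-10x emulation penalty are the numbers cited above, not my own measurements):

    # Estimated native MIPS of one Nehalem-EX, and of 20 of them.
    emulated_mips = 400                    # measured under Hercules emulation (cited above)
    penalty_low, penalty_high = 5, 10      # typical emulation slowdown (cited above)
    native_low  = emulated_mips * penalty_low    # 2000 MIPS
    native_high = emulated_mips * penalty_high   # 4000 MIPS
    chips = 20
    print(native_low, native_high)                   # 2000 4000
    print(chips * native_low, chips * native_high)   # 40000 80000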

Intel teaches Xeon Phi x86 coprocessor snappy new tricks

Kebabbert

Niagara SPARC?

"...the four threads will look like HyperThreading to Linux software, but Chrysos says that the threads are really there to mask misses in the pipeline...."

It seems the Niagara CMT CPUs' idea of masking misses in the pipeline was a novel approach and worthwhile to copy. How many other CPUs will copy it? The idea of having many lower-clocked cores was shunned by IBM ("databases work best on single strong cores"), but now IBM has CPUs with many lower-clocked cores. Will IBM also copy this masking of cache misses, instead of trying to cram in larger and larger caches? Large caches are of limited use for server workloads serving thousands of clients, because all that data will never fit into a CPU cache.
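
A crude way to see why extra hardware threads can stand in for a bigger cache is a toy utilization model (entirely my own illustration with invented cycle counts, nothing from Intel or Sun documentation): while one thread stalls on a miss, the other threads keep the pipeline busy.

    # Toy model: a thread does C cycles of useful work, then stalls M cycles
    # on a cache miss. With T hardware threads per core, the others can run
    # during the stall, so core utilization is roughly min(1, T*C/(C+M)).
    def utilization(threads, compute_cycles=25.0, miss_cycles=200.0):
        return min(1.0, threads * compute_cycles / (compute_cycles + miss_cycles))

    for t in (1, 2, 4, 8):
        print(t, round(utilization(t), 2))
    # 1 0.11
    # 2 0.22
    # 4 0.44
    # 8 0.89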

Fujitsu to embiggen iron bigtime with Sparc64-X

Kebabbert
Happy

Re: How ?

All high-clocked CPUs must have deep pipelines. I don't know why, maybe someone can explain, but this is what I have read in several places.

Basically, each stage in the pipeline does part of the work of decoding and executing an instruction. A higher clock means a shorter clock period, so each stage gets less time and therefore has to do less work, which forces the pipeline to be split into more, simpler stages. Maybe each stage also becomes more specialized and can act faster, the higher the GHz? I don't know.
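
If it helps, here is the usual textbook argument as a toy calculation (my own illustration with invented delay figures, not numbers for any real chip): the clock period only has to cover one pipeline stage of logic plus the latch overhead, so cutting the same total logic into more, simpler stages lets the clock run faster, until the per-stage overhead starts to dominate.

    # f_max ~= 1 / (t_logic/N + t_latch), with made-up delays.
    t_logic = 10.0    # ns of total combinational logic per instruction (invented)
    t_latch = 0.05    # ns of latch overhead per pipeline stage (invented)

    for stages in (5, 10, 20, 30):
        f_max_ghz = 1.0 / (t_logic / stages + t_latch)
        print(stages, round(f_max_ghz, 2))
    # 5  0.49
    # 10 0.95
    # 20 1.82
    # 30 2.61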

.

Where is Matt Bryant, who all the time claimed there would be no more SPARC64 and that SPARC64 is dead? And the rest of the IBM supporters? :o)

Microsoft claims Windows Server 2012 is 'first cloud OS'

Kebabbert

Re: Late

RICHTO

"...But Microsoft actually delivered...."

You are too funny. MS have not delivered anything yet. Oracle 11 has been out there for a while; how long has the MS offering been out?

And there are cloud OSes out there already, for instance at Joyent, which uses an OpenSolaris derivative called SmartOS:

http://dtrace.org/blogs/bmc/2011/09/15/standing-up-smartdatacenter/

All the sauce on Big Blue's hot chip: More on Power7+

Kebabbert

Re: AIX for databases...

"...It is funny that Oracle is claiming that Sparc is the highest performing chip for everything but "integer math"..."

Do you have a link where Oracle is claiming that SPARC is fastest on everything, except integer math? I have never seen such a claim, and I follow SPARC and Oracle closely.

On the other hand, IBM claimed that having many slower cores, as Niagara does, was a bad thing and that the future lay in having 1-2 very strong cores, because "databases are best run on strong cores rather than many weak cores". So where are those POWER CPUs with 1-2 cores running at 7-8 GHz or beyond? Why do the newer POWER CPUs have many lower-clocked cores? POWER6 had 2 cores running at 5 GHz; by that logic POWER7 should have had 2 cores running at 6-7 GHz, and POWER7+ 2 cores running at 7-8 GHz. Has IBM changed its mind? Are many lower-clocked cores not a dumb thing to do anymore?

IBM embiggens iron with System zEnterprise EC12 mainframe

Kebabbert

Re: Pretty awesome

"... It is nice to see a company is still taking some pride in system engineering instead of figuring out how to push x thousand more half-baked x86 white boxes off the truck...."

Yes, but a company truly taking pride in system engineering would ship CPUs much faster than these slow mainframe CPUs. Any high-end 8-socket x86 server rivals the biggest IBM mainframe - when we talk about number crunching.

Sure, mainframes have far better uptime and much more I/O, but CPU-wise they are far behind x86 when we talk about raw computing power. There is a reason you don't buy mainframes to crunch numbers - they lack the ability. A cluster of x86 servers is much cheaper and delivers much higher performance.

Microsoft promises Metro developers 'fame and fortune'

Kebabbert

Re: Really !!!

Developers, developers, developers, developers, developers, ....

;o)

Intel gobbles Lustre file system expert Whamcloud

Kebabbert

Re: ZFS?

I don't know, but the IBM supercomputer Sequoia is using ZFS + Lustre, and so is Lawrence Livermore National Laboratory. Look up their ZFS + Lustre solutions.

IBM's new Power7+ hotness - we peek through the veil

Kebabbert

Re: Benchmarks ??

Here are some benchmarks that you asked for.

POWER7 is 14% faster than the old Intel Westmere-EX:

http://www.anandtech.com/print/4285

SPARC T4 is more than 2x faster than POWER7 on TPC-H:

https://blogs.oracle.com/BestPerf/entry/20110926_sparc_t4_4_tpc

But sure, IBM has some record benchmarks, particularly in integer arithmetic. As Larry Ellison said: "IBM is faster for integer arithmetic. If IBM think companies do a lot of arithmetic, cool. We think they access a lot of data and run a lot of Java."

"We're better than IBM in Java, and we're going to beat them in integer arithmetic, and then there will be nothing left," he added.

.

I think it is funny that "IBM does not need any roadmaps". Maybe IBM had problems of keeping x86 at bay, and that is the reason? x86 is fast catching up on POWER, and the old Westmere-EX is almost as fast as POWER7 on some benchmarks. We soon have Ivy Bridge 10 core Xeons, I bet they will be faster than POWER7. And there is Haswell next year. "No need for roadmap"? That is so fanboyish blind it is funny. Normally, when IBM has something good, they bragg about it all the time. Silence is a sign of problem, and IBM tries to to not talk about their problems (for instance manufacturing problems) - as everybody does. IBM doubts the future of the POWER chips, as IBM are going to kill AIX and then have to compete with faster and cheaper x86 cpus.

.

SPARC T5 will also be presented at the same conference. It will be an improved T4, with double the number of cores (16) and scaling to twice as many sockets (8), which means roughly 4x the performance of today's top T4 servers. The T5 will be really brutal on database workloads, which is what really counts in the enterprise. Oracle has officially said they will double SPARC performance every second year, which is much more aggressive than any other CPU vendor. Not even the x86 camp is talking about such improvements.

WD: HDD prices won't fall to pre-flood levels until 2013

Kebabbert

Re: Just because they can doesn't mean it's right, I agree with Joerg

Read this - the crisis is fake; the manufacturers delivered more disks after the flooding than before:

http://news.softpedia.com/news/HDD-Crisis-Was-Fake-Seagate-and-Western-Digital-Post-Big-Profits-266676.shtml

Red Hat Storage Server NAS takes on Lustre, NetApp

Kebabbert

Nice step, but

there are still some things to work on:

http://www.enterprisestorageforum.com/technology/features/article.php/3749926/Linux-File-Systems-Ready-for-the-Future.htm

All-flash IBM V7000 smashes Oracle/Sun ZFS box

Kebabbert

Re: hold on - apples and oranges?

"...And yes there are stories out there with more or less all types of systems and technology that have failed, and you are always sure to point this out, as long as this does not involve any Oracle stuff..."

And have you EVER pointed out IBM stuff failing? No? Then why do you accuse me of bias?

Kebabbert

Re: Be careful with comments

@Jesper Frimann,

"...and people like Kebabbart will twist and turn any results ..."

When have I twisted and turned any results? Can you post any links?

You, on the other hand, have certainly twisted and turned results. I remember our debate on the Niagara T2+ CPU: I showed a benchmark where Niagara had greater throughput than the IBM POWER6, upon which you replied something like "throughput does not matter, POWER6 has lower latency". Later I showed another benchmark where the Niagara T2 had lower latency, upon which you replied something like "latency does not matter, POWER6 has greater throughput". No matter which benchmarks I show, you always find something to nag about - sometimes even contradicting yourself.

And you say that the benchmarks and white papers I show are "twisting and turning facts"? Great. People have even complained that I always post links to benchmarks, white papers, etc.

Linus Torvalds drops F-bomb on NVIDIA

Kebabbert

Immature

And Linus Torvalds is supposed to represent Linux? I would be surprised to see an adult raise the middle finger and shout "Fuck you" in a professional setting.

IBM US nuke-lab beast 'Sequoia' is top of the flops (petaflops, that is)

Kebabbert

Uses ZFS

This IBM supercomputer uses Lustre + ZFS. ZFS was chosen because it scales better, provides data integrity, etc.

http://www.youtube.com/watch?v=c5ASf53v4lI

http://zfsonlinux.org/docs/LUG11_ZFS_on_Linux_for_Lustre.pdf

I wonder why they did not choose the IBM GPFS filesystem.