Empathy more feature-rich than Pidgin?
Red Hat has released Red Hat Enterprise Linux 6, the first major update for RHEL in over three years. RHEL 5 debuted in March 2007 and used the Linux 2.6.18 kernel. Although incremental updates have added a number of new kernel features, RHEL 5 is starting to look aged. Of course, much of the appeal of an enterprise …
Ignoring other features, the ongoing lack of proxy support in Empathy and Gwibber means they aren't "enterprise-ready" — not if your enterprise likes its proxy servers.
The data sheet claims 4096 cores. Yeah, I laughed too when I read that.
Currently, they support only 128 logical cores in a single system image (i.e., an 8-socket machine with 8-core CPUs and Hyper-Threading on). That's because they say they only support configurations they have physically tested themselves.
...a comparative workload analysis between 5.5 and 6 - and for grins W2003 and W2008 - on a performance/watt basis. We recently spent a whole piss pot full of money upgrading our data center lighting to save a few watts (ROI was measured in years or tens of years from what I heard). This sounds much more interesting than motion sensing lights - reduced power draw at the servers also means reduced cooling loads.
More on this - pretty please with sugar on top!
I'd like to see this too. Savings could come from:
-keeping multi-voltage CPUs in their lower-voltage range: a big saving for not much performance cost.
-spinning down HDDs. Danger: leads to more disk failures. But with fewer apps doing indexing, and with buffered logging, it could work.
-scheduling work across CPUs to keep some cores/CPUs completely idle when load is light.
This is all so workload-dependent, though.
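The third idea — packing work onto as few cores as possible so the rest can stay in a deep idle state — can be sketched as a toy first-fit packer. All names here are made up for illustration; this is not any real scheduler's API:

```python
# Toy sketch of consolidation scheduling: assign tasks (fractional CPU
# loads) to as few cores as possible, waking an idle core only when the
# busy ones are full. First-fit decreasing bin packing, for illustration.

def consolidate(task_loads, n_cores, capacity=1.0):
    """Return the per-core load after packing; cores not listed stay idle."""
    cores = []  # each entry is the summed load on one busy core
    for load in sorted(task_loads, reverse=True):
        for i, used in enumerate(cores):
            if used + load <= capacity:
                cores[i] = used + load  # fits on an already-busy core
                break
        else:
            cores.append(load)  # have to wake another core
    if len(cores) > n_cores:
        raise ValueError("not enough cores for this load")
    return cores

busy = consolidate([0.3, 0.2, 0.4, 0.1], n_cores=8)
# a light load ends up on a single core; the other seven can sleep
```

Real schedulers have to weigh this against cache locality and wake-up latency, which is exactly why it is so workload-dependent.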
I am not sure why you keep saying that 128 hardware threads is a lot. Just consider 8 Nehalem processors with 8 cores each and Hyper-Threading: that already reaches 128. 128 is actually surprisingly small. Of course, they provide a different kernel for larger configurations. The fact that they can't make the same kernel work well for both small and large machines isn't a good sign, imho.
The same kernel can work on everything from laptops to clusters, but it will not deliver top performance everywhere. That is true for every OS. Microsoft HPC Server and Windows 7 don't run the exact same kernel; at the very least the compile flags are different.
The different RHEL kernels usually are not that different — mostly just compiled with different optimisations enabled — and Red Hat ships different kernels for different uses to get top performance. Otherwise, customers would download the source and compile their own kernels to achieve maximal speed, and if that happened, updates might break the system since Red Hat used different flags.
A 128-thread configuration is currently the biggest one sold in any volume. Red Hat says it will raise these limits as hardware with more threads appears. Boutique hardware not included; they mean volume hardware.
But I think you can strike a deal with them to support your bigger machine. The fact that they don't officially support it doesn't mean it isn't possible: their kernel is supposed to scale up to 4096 logical cores. They are merely conservative; they don't want to say they "support" something unless they have proven it functional in practice.
That's a dual-processor workstation in 2015. So it's got four years of life in it; six before it will only stretch to high-end single-processor machines.
Those four thousand cores will be utilised by a single-processor machine in 2027, assuming two years for each process transition and a doubling of cores per transition.
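For what it's worth, the arithmetic behind that projection, assuming roughly 8 cores per socket in 2009 as the starting point (my assumption, not a figure from the thread):

```python
# Core-count projection: double the cores per socket every two-year
# process transition. The 2009 / 8-core starting point is assumed
# for illustration only.
cores, year = 8, 2009
while cores < 4096:
    cores *= 2  # one process transition
    year += 2
print(year)  # → 2027 under these assumptions
```

Nine doublings take eighteen years, so the 2027 figure only holds if both assumptions (the starting point and the steady two-year cadence) hold.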
Red Hat doesn't just release RHEL 6 and then do nothing. There are point releases roughly every six months, so by the time 2015 comes around we'll be on roughly RHEL 6.8, and if >128-core systems are common by then, you can be sure it will support far more cores.
What's this then: ftp.redhat.com ?
Looks like that has a beta of 6.0 from earlier in the year. Besides, getting hold of the ISOs (or whatever) is one thing, but it's when you pay that you gain access to all the updates, etc. So RHEL by itself is a bit useless without its subscription. If you want free and updated, then use CentOS.
Generally I only use RHEL when we have to (as specified by third-party software vendors) or for the more important database boxes. For everything else, CentOS is fine.
Supporting only 128 threads is really bad. _One_ SPARC Niagara T3 has 128 threads; with four T3s you get 512 threads within a single box.
In 2015 there will be a Solaris box with 16,384 threads and 64 TB of RAM.
This is chicken shit.
Yes, I guess the T3 then finally puts Linux on SPARC in the grave as a serious platform.
And 128 threads is more than enough for the sweet spot of Linux. I mean, that is an 8-socket Nehalem-EX box with Hyper-Threading enabled, a 16-socket Itanium, or a 4-socket POWER7 box. Sure, with today's virtualization, what this actually means is that you have a maximum virtual-machine size of 64 cores on x86, 64 cores on Itanium, and 32 cores on POWER7. Which is kind of enough.
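The socket arithmetic checks out three ways; I'm assuming quad-core, 2-thread-per-core Itanium (Tukwila) parts and 4-way SMT on POWER7, which the comment doesn't spell out:

```python
# 128 hardware threads, three ways: sockets × cores × threads-per-core
nehalem_ex = 8 * 8 * 2   # 8 sockets, 8 cores, 2-way Hyper-Threading
itanium    = 16 * 4 * 2  # 16 sockets, 4 cores, 2 threads per core (assumed Tukwila)
power7     = 4 * 8 * 4   # 4 sockets, 8 cores, 4-way SMT
print(nehalem_ex, itanium, power7)  # → 128 128 128
```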
And in 2015 we'll all have flying cars and live forever.... yeah... right..
I think it is funny that you doubt there will be a 16,384-thread Solaris box in 2015. You know, when Sun did the 8-core Niagara, that was shocking. But Sun has always been a leader, and others have followed. Now everybody (in particular IBM) has stopped the GHz race and turned to many lower-clocked cores — just like Sun did ages ago.
Now it is shocking to hear about a 16,384-thread machine, but years later, everyone will have that. Mark my words.
Today the T3 is 128 threads. If the T4 is 256 threads, and you succeed in putting 64 of those CPUs into a box, then you get those 16,384 threads. And Solaris scales well, which AIX does not; they had to rework AIX to handle even 256-thread machines. Solaris is the correct tool for a 16,384-thread monster server. The performance will be shocking.
"I think it is funny that you doubt there will be a 16,384-thread Solaris box in 2015."
Keb, I have no doubt that there will be a gazillion-thread Solaris box in 2015. I guess you can get one right now: an Exadata with 32 four-socket nodes, that is 32 × 4 × 128 = 16,384 threads.
"You know, when Sun did the 8-core Niagara that was shocking."
Yup, it had shockingly bad single-threaded throughput...
"But Sun has always been a leader, and others have followed. "
Oh, they have? In what way? Please enlighten me.
"Now everybody (in particular IBM) has stopped the GHz race and turned to many lower clocked cores - just like Sun did ages ago."
Nahh.. that is not the whole story. POWER7 still clocks in between 3 GHz and 4.25 GHz, and that is with increased per-core throughput. A POWER 595 with 5.0 GHz cores does 33.75 SPECint_rate2006 per core, whereas a POWER 795 with 4.25 GHz cores does 48.05 SPECint_rate2006 per core. Now that is a 15% drop in frequency for a 42% increase in throughput.
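Plugging in the numbers quoted above confirms the percentages:

```python
# Per-core SPECint_rate2006 comparison, using the figures cited above
p595_ghz, p595_rate = 5.00, 33.75  # POWER 595 (POWER6 cores)
p795_ghz, p795_rate = 4.25, 48.05  # POWER 795 (POWER7 cores)

freq_drop = (p595_ghz - p795_ghz) / p595_ghz  # fractional clock drop
rate_gain = p795_rate / p595_rate - 1         # fractional throughput gain

print(f"{freq_drop:.0%} lower clock, {rate_gain:.0%} more per-core throughput")
# → 15% lower clock, 42% more per-core throughput
```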
With regard to Oracle's Tx line, it's just more threads and more threads... so get real. Sure, the increased work done per socket is good for throughput, but it still requires workloads that can scale horizontally. And there, real-life problems like lock contention become serious on many workloads.
"And Solaris scales well, which AIX does not; they had to rework AIX to handle even 256-thread machines."
First, you are wrong about the thread count — is counting a problem?
I think you have misunderstood the concept of scalability. It's not an advantage to have many threads if they don't do much work.
And that's with almost 3× better response time.
It's the work done that matters, not the number of lightweight threads.
"Solaris is the correct tool for a 16,384-thread monster server. The performance will be shocking."
Well, you are talking about something that is, what, 3 to 6 generations into the future. Again with the future...