
Server tech is BORING these days. Where's all the shiny new goodies?

Once upon a time, a mere 10 years or so ago, servers had direct-attached disks or network-attached disk arrays. The flash in the arrays – SSDs primarily, but also in the controllers – made data access faster. However, multi-core, multi-threaded CPUs have an almost insatiable appetite for data. Any access latency is bad when the …

COMMENTS

This topic is closed for new posts.

Badly configured

Surely the best use for all this extra RAM is a cache in the VM itself? If you configure the storage application (your database) correctly, the only disk accesses will be the unavoidable ones. If the VM's OS is set up right, it will do the staged/lazy writes.

What am I missing?

Silver badge

Re: Badly configured

However multi-core, multi-threaded CPUs have an almost insatiable appetite for data. Any access latency is bad when the cores have to wait.

What the cores are waiting for is an effective software model. E.g. my Firefox instance runs about 40 threads across 8 CPUs, but it is still almost as slow as it was in 2000. Tools like gzip will hog 100% of one CPU to compress a huge media file while the other seven sit idle.

Deduplication and similar technologies have proved great in principle but hard to engineer in the real world, so deployment is limited.


Re: Badly configured

That technological advances eventually compensate for poor design? Just look at the Porsche 911.

Bronze badge
Happy

Re: Badly configured

Not sure I can help with the browser, but 'pigz' is a parallel implementation of gzip. There's also pbzip2; check your repos (Red Hat flavours might need EPEL enabled).
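For anyone who hasn't tried it, a rough sketch of the drop-in usage (assumes pigz is installed from your repos; falls back to plain single-threaded gzip if it isn't):

```shell
# Make a throwaway file to compress
head -c 1000000 /dev/urandom > big.bin

# pigz is a drop-in parallel gzip: -p sets the thread count,
# -k keeps the original file (gzip supports -k too)
if command -v pigz >/dev/null 2>&1; then
    pigz -k -p "$(nproc)" big.bin
else
    gzip -k big.bin        # single-threaded fallback
fi

ls -l big.bin big.bin.gz
```

Same .gz output either way, so existing gunzip-based scripts don't need to change.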

Bronze badge

Coomidities

They haven't invested enough because they think technology, as it is, is "good enough".

And it is true... the problem is, the moment you stop investing your product starts becoming a commodity. And that means low margins.

Disk cabinets may seem great for admins, but from my point of view they are very, VERY expensive in terms of time lost to latency.

The very same people who defend this kind of storage are, from my point of view, the same people who said flash storage was wrong.

Storage belongs INSIDE the server, as close as possible to the processor for most loads.

That, of course, would mean treating the servers as "compute units", and you would need different tools to manage servers, loads, etc. Easier said than done, of course...

Bronze badge

Damn.. sorry for "Coomidities", I guess the typo is clear...



The band-aid problem is exactly why I'm planning on going with InfiniBand for my next storage project. For a bunch of Windows servers, SMB Direct over InfiniBand makes a lot of sense when you start looking at all the other shared storage options out there. Near-native access speed to storage is very tempting. Let's hope Storage Spaces is mature enough at this point to provide the reliability that's required. On paper everything looks good...

Silver badge
Joke

Selling points

These startups position their products as turbo-charging your servers so you need fewer of them, in order to justify the premium price. I have no idea why HP and Dell would not want to innovate on that level.

Bronze badge

building blocks

Servers are little more than building blocks. Last I heard you can go out and get Pernix software and put it on most any server, so what is the issue here? Servers are about hardware. There's not a lot of software with them outside of management functions, and there should not be.

The same goes for Fusion-io's acceleration software: go out and buy it and slap it on an HP or Dell or whatever server, or have your VAR do it for you if you don't want to do it yourself.

Next thing you'll see this Reg author wondering why the server vendors don't make their own hypervisors too - even though the hypervisor has had an order of magnitude more impact on servers than any fancy storage caching scheme.

Now what I'd like to see is better integration between the various guest operating systems (Linux is the one I care about most) and the hypervisor. For example: automatically shutting off vCPUs when they are not required (making it impossible to schedule anything on a vCPU from within the guest until the other vCPUs are heavily loaded, instead of trying to load-balance); freeing up buffer cache automatically when it is not being actively used; perhaps even some sort of control-plane communication between guests, coordinated by the hypervisor, so they can tell each other what they are doing and make more intelligent decisions on resource utilization. I'm talking kernel-level stuff here - I don't think this sort of thing can be done with what VMware Tools has, for example.
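The guest-side knobs for some of that already exist in stock Linux; what's missing is anything telling the guest when to use them. A sketch using standard sysfs/procfs interfaces (nothing hypervisor-specific; the writes need root, so they're shown as comments):

```shell
# See which vCPUs the guest currently has online, e.g. "0-7"
cat /sys/devices/system/cpu/online

# A guest-side policy daemon could offline an idle vCPU...
#   echo 0 > /sys/devices/system/cpu/cpu1/online      # needs root
# ...and release clean page cache so the hypervisor/balloon
# driver can reclaim the memory sooner:
#   sync && echo 1 > /proc/sys/vm/drop_caches         # needs root

# The missing piece is coordination: something has to tell the
# guest *when* to do this, which is the hypervisor's job.
echo "online vCPUs: $(cat /sys/devices/system/cpu/online)"
```

CPU0 typically can't be offlined, and drop_caches only frees clean pages - hence the wish for kernel-level cooperation rather than a cron job.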


Maybe it's just the cloud marketing?

Even if it's not the reality for many companies, I think that a lot of the traditional server vendors are spooked by the rise of the whitebox cloud data centers. When Joe Random CIO of a 100-person company reads a magazine article or Gartner report about the Open Compute Project, and about Facebook running on thousands of no-name servers, maybe HP and IBM are afraid that he'll ditch their gear, for better or worse. So they declare that the cloud wins, and stop innovating in the standalone server market. After that, it's a race to the bottom to see how cheaply they can put out the latest System x or ProLiant box... we're already seeing this at my company, which buys low- to mid-grade servers for projects.

Just like the PC world, however, there is still some innovation going on at the high end of the market. IBM and HP have some interesting things in their 4+ socket monster boxes. But IBM just sold that business to Lenovo, the ultimate low-margin box shifter. Cloud economics aside, you always get what you pay for. Whitebox stuff is fine as long as you pay a fleet of people to keep them running and invest in your own management tools. Vendor backed stuff gives you the luxury of a warranty and the research they pour into new hardware designs.

Anonymous Coward

Re: Maybe it's just the cloud marketing?

Declaration - I work for Oracle

Putting aside most Reg readers' dislike of Oracle for the moment, Oracle is still innovating with its SPARC chips and the engineered-system concept. You can do a lot of work with 32 sockets of 12 cores each. You can also fit a lot of databases into 32TB of RAM, and it has some very fast IO running RDMA to InfiniBand-attached storage, with much of the SQL workload pushed down to the storage. There's also more and more database code being pushed into silicon.

The innovation started by Sun hasn't died inside Oracle; if anything it's now invigorated, as it's being funded by a company that knows how to make a profit and so has the cash to invest.

http://www.oracle.com/us/products/servers-storage/servers/sparc/supercluster/supercluster-m6-32/overview/index.html

Anonymous Coward

Re: Maybe it's just the cloud marketing?

Oracle hardware has been the biggest disappointment of the decade. Just ask the departed VP who is now working for Isilon. Oracle apparently can't even recruit his replacement (or chooses not to). It's not the engineering, mind you - it's Oracle internal politics holding back the vision.


Moonshot

Clearly the author of this article has never researched Moonshot. Some modern servers are plenty interesting.

Silver badge

Servers are supposed to be boring

The adjectives you want for servers are very much like those you want for accounting:

boring, reliable, conservative...

You don't want all those shiny-pants, bleeding-edge features that break all the time, etc.

Do your TIFKAM-like experiments on consumer devices, not on servers.
