Well, it’s been fun, but we’re starting to draw this virtualisation lab series to a close. Over the next couple of weeks we’ll be wrapping things up, tying things down and otherwise leaving things neatly parcelled. Of course, it’s a free society, so you don’t have to pay any attention to the stream of editorial on this …
How about programming for scalability (aka "Can we stop using Apache yet")?
Virtualisation is all well and good, but I can't quite shake the feeling that sometimes the only reason you *can* consolidate multiple "servers" onto the same piece of tin is that those pieces of tin weren't being pushed to their limits to start with. If that's because you just aren't asking much of those boxes, that's fine. But if it's because the software on those boxes can't take advantage of the hardware available to it, you need better software.
Why, for example, are most of us still running webserver software in which the overhead for an open-but-idle connection isn't completely, utterly trivial? From the Apache 2.2 docs for the "event" MPM: "However, Apache traditionally keeps an entire child process/thread waiting for data from the client, which brings its own disadvantages. To solve this problem, this MPM uses a dedicated thread to handle both the Listening sockets, and all sockets that are in a Keep Alive state." Sounds fine, but the event MPM is still considered "experimental".
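To make the contrast concrete, here's a minimal toy sketch of the event-driven approach the commenter is describing (my own illustration, nothing to do with Apache's actual internals; the port number and canned response are arbitrary). One thread multiplexes every connection through the OS readiness API, so an open-but-idle keep-alive connection costs a registered socket rather than a blocked child process or thread:

```python
# Sketch: single-threaded event loop using Python's selectors module.
# An idle keep-alive connection just sits registered in the selector;
# no process or thread is parked waiting on it.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    # Register the new connection; handle() runs only when data arrives.
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        # Canned minimal response; the connection stays open (keep-alive).
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                     b"Connection: keep-alive\r\n\r\nok")
    else:
        # Client closed the connection; drop it from the selector.
        sel.unregister(conn)
        conn.close()

def serve(port=8991):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", port))
    server.listen(128)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:
        # Block until *any* socket is ready, then dispatch its callback.
        for key, _mask in sel.select():
            key.data(key.fileobj)
```

A thread-per-connection server would need one stack (megabytes) per idle client; here each idle client is just an entry in the kernel's readiness set, which is the disadvantage the event MPM is trying to remove.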
my required title...
I couldn't agree more. We should be pushing the limits of the processing we have available.
If you want to use virtualization to manage testing or host legacy systems, that's one thing, but if anyone thinks it is a "solution" for anything new, they aren't looking at the problem the right way. Sure, I can dig a subway system with an army of people with shovels, but it is MUCH faster/better/cheaper to use a tunnel boring machine. Get it? Good. Apply that thinking to software.
next big things
Here are three, since you ask:
- Running a Power station/Botnet/HomePC from my netbook
- Using the handphone to Baby-sit/Switch on TV/Turn off tap/Read the electricity meter
- in-ear MP3 player/radio
Bad Management (I know, no secret!)
" But if it's because the software on those boxes can't take advantage of the hardware available to it, you need better software."
I've found that many people, or organizations, would love to look in that direction, but can't because they use proprietary programs that they don't program in house, or may take "too much time and effort" to re-program, according to "management".
I know this because I am in a situation right now where I would love to get one specific set of servers all onto one box, but can't because of the way the proprietary program we use is configured. It's not a program I run, but one an outside vendor licenses to us under contract.
Many problems like this arise, in general, from poor management decisions. In one case, about a year ago, these people wanted access to an old DOS-based system that ran on BTrieve (not Pervasive; BTrieve). So I simply asked why we needed it, and since I didn't get a straight answer, I informed the CEO that it was lunacy to go through the effort of rebuilding such a system.
network virtualization too
Network virtualization is starting to take off as well. Companies like Juniper and Extreme have had layer-3 virtual routers in their gear for many years, and I noticed Force10 added similar functionality last year. Brocade added virtual functionality to their new ADX load balancers last year, and I'm told F5 has something similar (yet undocumented?) in their latest bleeding-edge code.
Then there is technology like HP VirtualConnect, which turns a 10GbE port into four flexible virtual NICs. (I was talking with a friend from Broadcom today; he believes it is based on their technology, which provides four layer-2 functions per 10GbE port, and I did confirm that the Flex10 NICs are Broadcom.)
Still have a ways to go before network virtualization is as dynamic or flexible as storage virtualization or server virtualization but it'll get there eventually.