28 posts • joined 6 Jun 2007
Re: waste of time
Microsoft should be giving away embedded Windows licences and VDI usage permissions for free, as a mechanism to keep corporate users on a Windows desktop platform regardless of how they access it - so they can still sell the OS licences, AD infrastructure and Office tools that bring in all the real money.
With the ongoing shift to web-based enterprise SaaS applications (including tools being built for internal usage by major corporates), MS is increasing the risk of some big corporate CIOs (who aren't under Redmond's thumb in other areas) suddenly taking a look at the cost of deploying and maintaining Windows desktop environments in general, deciding "f**k it" and using the massive opex savings of binning MS at the desktop level as the justification to work through the pain of rejigging their remaining critical apps into web-based tools.
By the same token - developers building applications for remote delivery are only too aware that non-Windows OSes are now the norm (via iOS, Android), and that web apps are the only realistic way to deliver a platform-agnostic product, because who wants the hassle of designing and coding every version umpteen times? Chuck the UI in HTML5, do all the heavy lifting on a server backend, and be done with it.
If MS's goal was to smother VDI in the crib to protect its stranglehold on the desktop market, then they've largely succeeded - but the world doesn't care, as the need for end-to-end VDI is diminishing right in line with the demand for Windows desktops in general - with more and more users adopting non-MS devices (e.g. fondleslabs) as a primary device at home and work, developers are now building their tools to be inherently agnostic of the platform they end up running on.
MS are winning the battle with VDI, but losing the platform war. Once they lose that fight, they then lose the user familiarity vector that currently allows them to push the 'money' products (Office, Windows etc) into the stack.
Those expensive CSAs are presumably targeted at enterprises who (for some compliance or political reason) aren't ready to adopt a distributed VSAN architecture? I'm going to go with the good old "financial institutions" piñata here, who are still in many cases using code and processes older than most of their customers.
I'm guessing that somewhere, carved on a stone tablet in the (presumably very heavy) dungeon master's acceptable risk handbook, is a line that says "thou shalt buy only physical SANs with multiple controllers, and no less than six power supplies, and it shall sayeth EMC or NetApp on the front, for all else is witchcraft and heresy" or something.
I'm also guessing that CSAs are a nice workaround for virtualisation admins - they might be stuck with the stipulated backend storage (at least until the peasants rise up and stick a pitchfork through the dusty old storage manager who has complete control over that side of things) - but they can damn well fling a CSA in front of it, as that's in their domain god dammit, and one day they'll throw off the shackles and deploy a VSAN and cut the dark lords of storage out of the picture entirely. One day...
It makes sense for the big cloud players to start working on real 64 bit ARM options. The vast majority of AWS Linux micro instances could be capably serviced by a modern ARM CPU at greater densities than Intel can deliver.
HP have already shown off the density possibilities via Moonshot (albeit as "enterprise" grade ARM kit, with a lot of unnecessary guff wrapped around it, and only in 32-bit). Give that approach to someone who builds their own hardware and platform by cutting unnecessary components (i.e. Amazon, Google, Facebook) and I bet they could cram a LOT of very usable low-end compute into a very DC-cost-effective footprint.
Red Hat have shown off RHEL running on ARM64 now, so ARM servers are certainly coming. I don't expect them to replace x86 at AWS or anywhere else any time soon for many users, but it'd be a great start to diversifying the platform. Good luck to them - working on diversifying one of the few areas (x86-based servers) in which there is currently no real alternative to the industry standard can only benefit us all in the long term.
I suspect it's a reference to average lines of code committed in changes or something equally skewed.
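The skew in that sort of metric is easy to see with a toy example (my own illustrative numbers, nothing from the article): a couple of bulk commits, say a vendored library import or a pile of auto-generated files, drag the average lines-per-commit far above what a typical commit actually looks like.

```python
# Toy illustration (hypothetical numbers): two bulk commits inflate the
# mean lines-per-commit well past the median, which stays representative.
from statistics import mean, median

# 18 ordinary commits plus 2 bulk imports of generated/vendored code
commit_sizes = [12, 30, 25, 8, 40, 15, 22, 18, 9, 33,
                27, 14, 11, 45, 19, 24, 16, 21, 5000, 12000]

print(f"mean:   {mean(commit_sizes):.1f} lines")  # skewed by the two outliers
print(f"median: {median(commit_sizes):.1f} lines")  # what a typical commit looks like
```

Any headline "average lines committed" figure built this way says more about a few outliers than about how anyone actually works.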
AWS isn't a magic bullet - the costs of hosting infrastructure on the platform versus leveraging private cloud on an in-house hardware setup are significant; Amazon aren't giving it away for free.
Also, there is a rising sense of concern that AWS is a closed-source platform - if you're not careful, it's very easy to paint yourself into a corner and make moving elsewhere a real challenge.
I doubt Amazon would be waxing so lyrical on the topic if they didn't regard OpenStack, CloudStack etc as genuine fuel for competitors to their de facto dominance in the public and private cloud industries.
Glad to hear I'm actually getting more employable!
Re: Sounds good but...
The clever bit about the Atlantis software (when I looked at it in anger a year or so ago after seeing it at VMworld, anyway) was that it did two things to take the load off the backend disk - firstly, it did fully deduped RAM caching of disk blocks in the VM host's RAM (for both read and write), the dedupe making it very efficient as most of the OS image hot blocks are common.
Secondly, it did dedupe, serialisation and compression of any writes back to disk, using an "inline" virtual NFS store - basically it shows your VM host an NFS share (which lives in memory), where you provision your VMDKs as normal, but that NFS share actually lives on an Atlantis VM on each host, which is then synced back to the underlying disk.
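The core dedupe trick can be sketched in a few lines of content-addressed caching (my own toy code, emphatically not Atlantis's implementation): blocks with identical contents hash to the same key, so fifty VDI clones of the same golden image cost one copy of each hot block in RAM rather than fifty.

```python
import hashlib

class DedupeBlockCache:
    """Toy content-addressed block cache: identical blocks are stored once."""

    def __init__(self):
        self._store = {}   # content digest -> block bytes (one copy per unique block)
        self._index = {}   # (image_id, block_no) -> content digest

    def write(self, image_id, block_no, block):
        digest = hashlib.sha256(block).hexdigest()
        self._store.setdefault(digest, block)        # dedupe: keep one copy only
        self._index[(image_id, block_no)] = digest

    def read(self, image_id, block_no):
        return self._store[self._index[(image_id, block_no)]]

    @property
    def unique_blocks(self):
        return len(self._store)

# 50 VDI clones of the same 4-block "golden image" cost 4 cached blocks, not 200.
cache = DedupeBlockCache()
golden = [b"boot", b"kernel", b"libs", b"apps"]
for vm in range(50):
    for i, blk in enumerate(golden):
        cache.write(vm, i, blk)
print(cache.unique_blocks)  # -> 4
```

The real product obviously layers compression, write serialisation and persistence on top, but this is why common OS hot blocks make the cache so efficient.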
The idea is you should use the Atlantis provisioned storage for your VDI boot images and working drives, and stick your "proper" data on more traditional storage (file server or NAS or whatever), giving you very fast desktops with reliable file storage.
There's a good writeup of the new release (with input from the Atlantis guys) on Brianmadden.com if you want more detail. They've solved the persistent VDI image restriction, and it now supports multiple hypervisors etc. Nice that it's licensed per desktop, and I reckon a per-host version for server VM acceleration is forthcoming soon...
Re: RE: comparison with a PC and PC performance
My numbers comparison is just a casual abstraction to illustrate the problem Mark, as is the example of hand coding in CPU assembler. I don't think anyone expects to dive in and hand code Uncharted 5 in assembler from start to finish (other than some specific subroutines that genuinely require maximum performance tweaking - compiler-generated code is generally adequate and quicker) ;)
Perhaps a more accurate comparison would be to say that the compilers available for a stable hardware platform can be far more focussed and better optimised on a console than is achievable on a general-purpose PC. Additionally, game engines can be written to take full advantage of the high points and avoid the pitfalls of the hardware.
Right now, PC based programming is based on lowest common denominator optimisation routines (as software has to work with a wide variety of present and future hardware, and the easiest way to achieve that is to use computationally expensive abstraction layers) - fixed hardware platforms don't have this constraint, so code can be made to be far more efficient in less time than would be required to get it working half as well on an acceptable range of PC hardware to cover the market.
As the machine remains in the market longer, developers (of both game software and the APIs, compilers and engines used to build them) can focus their time on optimisation rather than rebuilding every time a new GPU generation is released, and without wasting time on ensuring backward compatibility and scalable performance options to remain inclusive of users with older hardware. The old argument about how good games released at the end of a console's lifecycle look and perform in comparison to release-day titles illustrates this nicely.
My point is that anyone who assumes a developer is going to take the time to squeeze half the practical performance out of an equivalently specced Windows (or Linux) based white-box PC isn't taking into consideration the commercial challenges this would entail. Consoles have a 6-7 year shelf life nowadays, and it is a testament to the unique benefits of closed-platform optimisation that the Xbox 360 and PS3 can come reasonably close to delivering the gaming experience achievable on a modern PC costing 10 times the price, 7 years after they launched!
RE: comparison with a PC and PC performance
Cross porting may not be as easy as all that.
At present, every PC game relies on high-level APIs to interface with the underlying hardware - DirectX or OpenGL for 3D rendering. These calls are pretty damn inefficient at exposing the true power of the hardware. This is by necessity - the same APIs have to abstract a huge range of physical GPUs from a variety of manufacturers, so this loss of optimisation is to be expected.
You've then got the operating system layer, which operates as a go-between from the code to the hardware, again abstracting to cope with a wide range of hardware variants.
If everyone wrote their PC games in x86 assembler, and their graphics code in the AMD or Nvidia equivalent, we'd see performance an order of magnitude better than we do now. Of course, that's not realistic, as that code wouldn't be portable to the near-infinite number of hardware configuration variants found in the PC world, and I doubt x86 assembly is much fun to work in these days...
Even if the PS4 (and next Xbox) use x86-based CPUs and a variant of the Radeon GPU, they're going to be a single fixed part for the lifetime of the console - meaning Sony can provide bespoke APIs that are much more closely coupled to the hardware, or even provide direct access to the hardware for particularly performance-focussed developers to tinker with and squeeze out the maximum performance - thus the "real world" performance of a game on the PS4 is going to be extremely good compared to its PC version running on basically the same hardware, which has much of its performance sapped by unavoidable inefficiencies at the API, driver and OS layers.
Speaking to the hardware in the PS4 is like two native English speakers having a chat - quick, simple and efficient.
Speaking to the hardware in a PC is like a guy who only speaks English wanting to speak to a guy who only speaks German - the problem being he has to use an English to French translator and then a French to Spanish translator, and then a Spanish to German translator to communicate every sentence.
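The translator analogy maps straight onto call indirection: each abstraction layer is another hop between the game code and the metal. A toy sketch of that idea (illustrative only - a real API/driver/OS stack is vastly more complex, and the layer counts here are made up):

```python
# Toy model of call indirection: each abstraction layer adds a hop (per-call
# work) between the caller and the hardware. Hop counts are illustrative.
hops = 0

def hardware(cmd):
    # the actual GPU: "speaks German"
    return f"executed:{cmd}"

def layer(inner):
    # wrap a callee in one translation layer; every call through it costs a hop
    def wrapped(cmd):
        global hops
        hops += 1
        return inner(cmd)
    return wrapped

# Console-style path: talk (nearly) directly to fixed hardware.
console_call = layer(hardware)                  # 1 hop

# PC-style path: API -> OS -> driver -> hardware.
pc_call = layer(layer(layer(layer(hardware))))  # 4 hops

hops = 0; console_call("draw"); console_hops = hops
hops = 0; pc_call("draw"); pc_hops = hops
print(console_hops, pc_hops)  # -> 1 4
```

Both paths produce the same result; the fixed platform just pays far less toll per call to get there.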
Of course - early PS4 games will probably use some familiar, common API layers (OpenGL etc) until devs get time to get to grips with calling the hardware natively, so don't expect miracles from the first generation of third-party software!
Expensive... here's something to try instead.
Yikes - it's frightening that decent server RAM is currently so much cheaper per GB than an all-flash storage array...
Why not try this out - stick your VMware View golden images on an NFS share mounted on a free ZFS ramdrive, with block-level dedupe enabled (for space saving) and synced writeback to persistent storage (for resilience). Witness how many golden-image VMs you can boot from THAT badboy!
Re: A step forward
Hmmm... Correcting myself before anyone else does - it may be possible to pull out the HP storage blade without losing connectivity - I'm seeing conflicting info... need to look into it more...
A step forward
Getting a fully functional EQL into a blade chassis is a real boon for Dell's blade lineup. Dual controllers and having the full lineup of disk options (SAS, SSD, NL, hybrid SSD+SAS) makes it very flexible.
Regarding the accuracy of the comparison, a P4500 LeftHand pair would be the closest comparison in terms of performance positioning, but the LeftHand stuff lags way behind the EQL lineup in terms of performance, density and simplicity, plus it's rack-mounted - and this Dell-sponsored comparison is about blade-integrated storage options.
The HP storage blades do however offer you a way to put a dozen SSDs on the PCI bus of any adjacent blade for maximum IOPS and bus-speed latency - something Dell can't do at this point in time - this is great if you're running high-IO SQL or similar, and want ns (bus) rather than ms (iSCSI) disk access latency to your SSDs.
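To put that ns-vs-ms gap in perspective, a quick back-of-envelope (round illustrative figures of my own, not measurements of either product): even a tidy 1ms iSCSI round trip is worth thousands of local bus-speed accesses.

```python
# Back-of-envelope on the latency gap. Both figures are assumed round
# numbers for illustration, not measured values for either vendor's kit.
bus_access_ns   = 100        # assumed local PCI-attached flash access
iscsi_access_ns = 1_000_000  # assumed 1ms network round trip to the array

accesses_per_trip = iscsi_access_ns // bus_access_ns
print(accesses_per_trip)  # -> 10000 local accesses per iSCSI round trip
```

Even if you quibble over the exact figures, the orders-of-magnitude difference is why bus-attached SSDs matter for latency-sensitive SQL workloads.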
Practically - I expect that the EqualLogic blade will be a great fit for building out dense DC deployments in a physically resilient manner - but HP have the edge when it comes to building high-performance storage right into the chassis - it's a shame the VSA solution is so horrid. I'm hoping for a baby 3PAR blade for the c7000 at some point down the line.
PS - One VERY important point not highlighted above - the EqualLogic can be slid out of the rack while live to hot-swap dud RAID disks or controllers, as it has a cantilever arm and ribbon cable attaching it to the backplane - the HP storage blade VSA appliance needs to be taken offline to get at the disks in the event of a failure, meaning downtime for your storage just to replace a spinner... worth noting if uptime is important to you.
Re: No direct connect?
It's not surprising that they only bundle "demo" licences really - vSphere Enterprise/Plus licences for a fully loaded example of the (rather lovely) quarter-height dual-socket E5 + EqualLogic blade chassis at the bottom of the article would cost roughly twice as much at point of sale as the hardware itself - and the OEM discounts EMC do on vSphere are crap; there's no escaping the cost.
Now - given that release is slated for August, I would expect Dell to pull a stroke and focus on making the Server 2012 Hyper-V 3 integration nice and slick (they've always done a good job of supporting previous Hyper-V variants on their blades and EqualLogics compared to the competition).
Same capacity - equivalent capabilities (in HV3 anyway) - a third of the all-up cost (provided you have the skills and management tools to support HV3 etc etc). That's a pretty compelling alternative.
This is very different from the virtualised hardware GPU offered under RemoteFX or the software 3D GPU offered in VMware View 5.
Essentially - VGX is a low-level instruction path and API that allows a vertical slice of the physical graphics card's resources to be routed through to a VM - by a method similar to VMware's DirectPath I/O, for those who want a read. Basically, the VM has direct, non-abstracted access to the physical GPU, together with all that GPU's native abilities and driver calls - i.e. DirectX 11, OpenGL, OpenCL, CUDA... the lot.
The virtualised GPU in RemoteFX is an abstraction layer that presents a virtual GPU to the VM, with a very limited set of capabilities (DirectX 9 level calls, no hardware OpenGL, no general-purpose compute) - not only does this not fully leverage the capabilities of the GPU, but it is less efficient due to having to translate all virtual-to-physical GPU calls at the hypervisor level.
Contrary to some comments above - VGX is a real game changer for MANY industries - my only hope is that Nvidia don't strangle the market by a) vastly overcharging for a card that is essentially a £200 consumer GPU, or b) restricting competition by tying virtualisation vendors into a proprietary API to interface with the GPU, thus locking AMD out of the market, to the longer-term detriment of end users (e.g. CUDA vs OpenCL).
Highly scalable in-house app for data crunching (can't be more specific than that I'm afraid...) - the important thing is I selected that hardware platform as it was the best fit for the task at hand. Bulldozer might be a lemon on the desktop atm (I don't think anyone could rationally argue otherwise), but I can assure you it was a real fight to get an initial stock allocation of the 6276 2.3-2.6GHz 16-core CPUs (the sweet spot for power draw/price/performance, it would seem), so they must be selling for AMD!
Big Bulldozer boxes
I've just deployed a fully populated blade chassis: 8 quad-socket blades of 16-core Opteron 6276s.
512 cores and 2TB of RAM in about 8U of rackspace (up to 30A under load, admittedly).
Under the particular workload they're doing (heavily parallel, integer-based, memory-intensive) they absolutely scream when configured correctly. Each 2.6GHz (boosted) core is doing about 75% of the real-world work that a 3.4GHz (boosted) workstation Intel Sandy Bridge core was doing.
The key here is that for that level of density, an Intel solution was totally unfeasible - the cost to load up a blade with four 8- or 10-core Xeons was about 2.0-2.5x the price per blade, and would have delivered the same overall performance at best.
I see a lot of bashing of Bulldozer by people who aren't leveraging them at a decent scale - or are comparing them thread for thread against Intel's desktop SKUs. The server/DC market is a very different beast, however. Intel Xeon prices (and the associated platform) scale much more steeply than AMD's current offering as you increase core density, so pound for pound the AMD kit is a very realistic option right now.
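Working those per-blade figures through (the core counts and 75% ratio are the ones quoted above; the 2.25x price multiplier is simply my assumed midpoint of the quoted 2.0-2.5x range):

```python
# Rough per-blade sums using the figures quoted above. The 2.25x price
# multiplier is an assumption: the midpoint of the quoted 2.0-2.5x range.
opteron_cores_per_blade = 4 * 16   # quad-socket 16-core 6276s
relative_core_perf      = 0.75     # each core ~75% of a Sandy Bridge core
xeon_cores_per_blade    = 4 * 10   # quad-socket 10-core Xeon alternative
xeon_price_multiplier   = 2.25     # assumed midpoint of 2.0-2.5x

amd_equiv   = opteron_cores_per_blade * relative_core_perf  # "Xeon-core equivalents"
perf_ratio  = amd_equiv / xeon_cores_per_blade              # throughput vs Xeon blade
value_ratio = perf_ratio * xeon_price_multiplier            # perf per pound vs Xeon

print(f"{amd_equiv:.0f} Xeon-equivalent cores per blade, "
      f"{perf_ratio:.2f}x throughput, {value_ratio:.2f}x perf per pound")
```

On those assumptions the AMD blade lands around comparable-or-better raw throughput at well over twice the performance per pound, which is the whole pound-for-pound argument in one line of arithmetic.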
Judge it in 9 months...
It's nice to see Microsoft finally taking steps to unify the functionality and deployment of the System Center toolset. All those RCs and betas are no doubt there in anticipation of Server 8 being finalised.
As it stands, my VMware Enterprise Plus licencing (without any Ops Director type bolt-on) costs me more over a 2-year cycle than the hefty hardware it runs on. I'm paying a fraction of this every month for my Windows DC per-socket licencing via SPLA anyway, and SPLA costs for the System Center suite are similarly minimal.
I'm certainly keen to see how Hyper-V 3 and the rest of the Server 8 ecosystem performs - key additions like a proper virtual switch, port aggregation, thin provisioning etc mean Hyper-V now meets or exceeds the requirements of most ESX deployments.
The vSphere management interface is very good, but don't forget that System Center is now little more than a GUI wrapper for a whole new batch of PowerShell cmdlets - I don't expect it to be long until third parties start producing superior GUIs built on that fact.
Pound for Pound
I've just priced up some Supermicro AMD blades (Supermicro being very on the ball with getting new tech to market in their boxes).
Basically, I can buy a blade with four 16-core 2.3GHz Bulldozer 6276s for the price of a single (non E-series!) 10-core Xeon.
For workloads that benefit from lots of cores/threads, and which don't incur licensing that makes it worth spending top dollar to max out per-socket performance (some VMware or SQL situations, I expect), these chips will definitely be worth investigating carefully.
They're going to be awesome for VDI in general and cloud VSP reseller scenarios in particular - lots of 'fast-enough' cores you can allocate, and strong memory bandwidth.
It seems Bulldozer was always going to be a server chip. Makes you wonder why they bothered with a retail/consumer version at all!
I bought a new car the other day... it runs on diesel. Sadly, when I went to my filling station, they had run out of diesel, so I filled it up with good ol' unleaded - which my old (though admittedly now rusty, and comparatively unsafe) car ran GREAT on for YEARS.
Wouldn't you know it, my new car runs like a dog, and it's all the manufacturer's fault - why didn't they organise with my local filling station to have plenty of diesel available for me?
Moral: if your hardware is so old and unsupported that you can't even get drivers for it, then either upgrade the kit, complain to the hardware vendor, or just stick with the old OS. How is any of that Microsoft's fault?
We all loved XP, but like Old Shep, it's time to let it go and stop living in the past.
End of the Century?
I'd like to hope it'll be out by the end of the decade, never mind the end of the century... unless MS are planning on a Vista-style release schedule!
Paris, because she's known to get the goods out fast.
"Shields up, set red alert"?
Perhaps the precursors of a non-iPhone-based editing app in the works? If the underlying hardware and interface on the much-rumoured Apple tablet device is based on the iPhone UI, then who knows...
I can't see current iPhone hardware being much use even for casual video editing... (if Apple intended video creation functionality then surely video recording would have been an option from launch, never mind 3.0?) The built-in camera is crap anyway, the processing hardware won't cope, and there is no external interface to import footage from the majority of other devices (Bluetooth and WiFi compatible HD cameras being a novelty).
Now, a multicore tablet doohickey capable of doing draft editing of footage in a director's hands in the field - now that would be cool.
Just speculation though. The artwork is probably totally unrelated to video editing.
Control Methods still a big block.
The three areas where the PC really excels over consoles (for the most part) are FPS, RTS and MMOs.
Back in the bad ol' days, the PC was the only machine on the block powerful enough to do more than sprite shunting, plus it had lots of "added features" (internet access, network gaming etc). High-resolution and powerful 3D gaming were solely the remit of the beige box. Nowadays, consoles have caught up with the power of the PC (full HD graphics, network gaming, internet access) in these fields.
However, there are two areas that the PC still beats them in easily:-
1) Customisability - most competitively played PC games run custom rulesets, and are easily and widely modded by players (COD4, CSS etc) - given the closed-loop nature of console development, this is difficult if not impossible to implement at present on console versions (with an honourable mention to Bungie for the massive number of Halo gametype options).
2) Control method - why is there no WoW client for Xbox or PS3? Why are FPS games such a pain in the ass to play? Why are RTS games on consoles universally pie? Mouse and keyboard. There is simply no comparison between the most advanced joypad and the cheapest 5-quid Tesco Value keyboard/mouse combo when it comes to FPS, MMO or RTS games.
Really, as soon as Microsoft and Sony do the decent thing and enforce standardised keyboard/mouse support in all relevant titles, I'll be buying all my software in console format - thus getting the best of both worlds. For the time being, the way I choose to play complex modern titles is still too heavily restricted on console platforms.
System Center Virtual Machine Manager 2008 Free? Errr...
I'm probably missing something - but I can't see anything in the linked announcement that suggests Virtual Machine Manager will be free, only Hyper-V.
Previous announcements would suggest that Microsoft are planning to offer System Center Virtual Machine Manager 2008 separately from the SMS suite, but would be charging just about as much for it...
Don't get me wrong, if they put it out gratis I would SERIOUSLY consider investigating it as an alternative to VMware for smaller, non-clustered virtualisation environments consisting primarily of 2003 or, better yet, 2008 VMs.
Virtualisation in the server room is practically worthless without a centralised administration and deployment tool; VMware's free server products are there to familiarise admins with the basics before taking the plunge with proper ESX / VirtualCenter managed Infrastructure 3.5 products for multi-server production environments. If MS were to give away their management tool for nowt, I could see it having a serious impact on sales of VMware's Infrastructure "Foundation" level package, which doesn't come bundled with any of the stuff that puts VMware ahead of Hyper-V anyway (VMotion and the like).
Or buy it for 800 quid less from Novatech...
I still don't understand how Rock stay in business... Novatech (www.novetech.co.uk) have the same machine (simply a rebadged generic Clevo chassis) for £1400 with the same 512MB 8800M GTX, 2.4GHz C2D, 4GB of RAM and a 320GB hard disk (but no HD-DVD player, I suppose... big whoop!).
I reckon Rock are trying to play on shortages of 8800M parts to gouge on the price, but 2200 quid is ridiculous!
Pick up a Buffalo LinkStation Live. I liked the one at work so much I bought my own... Faster than a greyhound on speed, no restrictions (unless you enable them), and the latest update for its built-in Linux distro makes it web-accessible as well.
Well, my XP Pro install couldn't have gone more smoothly... nLited out all the nonsense (legacy drivers, Media Center components, Movie Maker) and disabled a load of useless services. I should highlight that this was VERY easy with nLite (happy to provide the nLite config for anyone who wants it!).
Installed size is a hair over 1GB.
First thing to do once it boots up for the first time is to ensure that the swapfile is disabled (saving space and protecting the SSD drive). Also, consider disabling the indexing service, System Restore, and background defrag. All of these will otherwise:-
A) Use memory
B) Use Hard disk space
C) Use CPU Time
D) Eat idle battery
Now... my Eee is running (with all drivers) at 0% CPU idle and ~90MB RAM usage, and is EXTREMELY responsive.
I feel that despite the solid percentage, the El Reg reviewer is beating down on the unit a bit hard. Contrary to most users, I found the built-in "appliance" frontend a bit crap - totally locked down and generally unresponsive in opening apps. It also runs at 256MB idle RAM usage!
The 900MHz CPU (actually running at 630MHz due to a lowered FSB speed!) copes beautifully with a neat XP build, and the 512MB of RAM is also perfectly adequate (without any swapfile, no less!). No doubt I'll look at sticking a 1GB or 2GB chip in there at a later date - but not as soon as I expected I'd want to.
Finally, I want to highlight that for a mere £220 (inc VAT!), I have a responsive, unbeatably versatile, eye-catching, highly mobile device that delivers a solid 4 hours' usage (with wireless disabled and the screen at a readable 60% brightness), and which I can slip in my bag for plugging into remote networks, Cisco switches, light web usage (as previously mentioned, a hell of a lot better than a 320x240 phone screen!), light picture manipulation and office app usage, and really whatever else I find I need it to do while on my travels. I feel that Asus have missed a trick on this one - it's being pitched as a mobile web access machine, but its real calling is in the hands of IT users who'll stick on a real OS and will be using it to run HyperTerminal, RDP onto servers, and as a troubleshooting terminal while out in the field.
PS - Did I mention that the mains adaptor isn't a block - it's like a Nokia phone wall charger! Hardly a nightmare to carry around!
The journos will go nuts!
I know only too well the importance of ISDN - live voice traffic for critical events (like, say... football commentary?) is all dealt with using ISDN codec boxes.
It's a bloody brave man who'll try and tell a BBC journo that they will have to cope with a contended IP service or a flaky analogue line rather than the ideal-for-task, plug-and-go, always-reliable ISDN box.
I'll have 2 please!
Nice bit of kit... but surely rip-off Britain will get her talons in there, and that oh-so-attractive £100 theoretical price point will balloon to somewhere closer to the £200 mark...
Still, it's a nice wee machine - perfect for reliable on-the-road PowerPointing and slipping in the glovebox, though I'm surprised they can't tease a bit more out of the battery on such a technologically streamlined device.
20 quid says the AC power block weighs as much as the laptop...