* Posts by CheesyTheClown

476 posts • joined 3 Jul 2009


Is it the beginning of the end for Visual Basic? Microsoft to focus on 'core scenarios'


Re: Fickle Microsoft

haha I remember being the C programming king of high school. Then Windows came out and, even with all the help I could get from the Charles Petzold book (which I spent two weeks of grocery store wages on), I couldn't for the life of me figure out the API.

Of course, X11, Windows and Mac all have horrible low-level APIs... but now I just code; language and environment don't really matter anymore. It's more about simply sitting down and typing.

I nearly died laughing at the guy who said that simply changing the language made him go from senior developer to junior developer. I never met a senior-level developer who was senior because of how well he could use one particular tool or paradigm. I always considered the most versatile person to be senior, and regard people who speak like he did as ready to be promoted to janitorial staff.


SAP Anywhere goes nowhere, reaches commercial cul-de-sac


Probably a weak pound issue

If they pen agreements with U.K. companies while the pound is weak, they will have to take what they can get, since most U.K. companies don't want to pay prices that make sense on a sales sheet in dollars. US companies are forced to charge quite a bit more because of the very high cost of VAT in Europe, yet they'd probably have to charge U.K. companies less than their US counterparts.

In addition, I suspect that U.K. regulation is about to make a lot of rapid changes requiring a lot of coding to support it. The cost will be high. It's probably more cost effective to wait until the U.K. market stabilizes following Brexit to bother investing in that.

Of course, it could simply be that Trump-induced stupidity has all their programmers and lawyers so busy that trying to keep up with American issues leaves no time to waste on U.K. stupidity as well.


Hypervisor kid Jeff Ready: Converged to the core, and NO VMware


Re: After what these guys did to their storage customers...

Dedupe on HCI is easy if you're not using VMware. VMware doesn't properly support pNFS; it does a bastardized form of it called multipathing.

The solution to this is to sell your soul to VMware, get access to the NDDK and implement a native storage driver which can implement pNFS on its own. There's absolutely no value in doing this and no one should ever bother trying.

There's the alternative, which is to attempt to get iSCSI up and running in a scale-out environment. Due to limitations in vSwitch, this isn't an option either, since multicast iSCSI isn't supported in VMware's initiator and anycast isn't viable in this case.

FC is out, if for no other reason than that FC is storage for people who still need mommy to wipe their bottoms for them. FC is so simple-stupid a monkey can run an FC SAN (until they can't... but consultants are cheap, right?), and what makes it so simple-stupid is that FC doesn't support scale-out AT ALL, though MPIO could scale all the way to two controllers.

So, then there's the question of value. Where's the value in dedup on a VMware HCI platform? That's a tricky one since, due to the nature of VMware's broken connectivity options for storage, you can't scale out the system connectivity to begin with. You also can't extend the VMware kernel to support it, because even if you have access to the NDDK, no one actually knows it well enough to program with it; and if you look at VMware's open source code for their NVMe driver on GitHub, you'll see that you probably don't want to use their code as a starting point. It's pretty good... kinda... but I'm tempted to write a few exploits for the rewards now.

Oh, then there's the insane cost and license problem behind the VAAI NAS SDK from VMware. I almost choked when they sent me a mail saying "$5000 and we basically can tell you what to do with your code"... for a 13 page document (guessing the size). So, you can't even properly support NFS to begin with. And no, I would never ever ever agree to the terms of that contract they sent me and there's less chance I would consider paying $5000 for a document that should not even be required.

So, back to dedup... you can dedup... in HCI... no problem! The problem is, how can you possibly get VMware to actually use the dedup and replicated data?

Then there's Windows Server 2016 which ships with scale-out storage, networking and compute all on one disc and all designed from the ground up for.... scale-out.

There's OpenStack which works absolutely awesome on Gluster with scaleout and networking.

So, when you say "dedup on HCI is hard and slow", that is absolutely not true. Dedup and scale-out on VMware is damn near impossible, but it's a stock component of all other systems; and see a post I made earlier about slow. Slow is not a requirement. It just takes companies with real storage programmers, not just hacks that slap shit together using config files.


Re: Seriously? Did he really said that? With a straight face?

Ok... because there are bad implementations of dedupe out there (lots of them... NetApp being among the worst I've seen), there will always be comments like this.

Let's talk a little about block storage. There are many different levels of lookup for blocks in a storage subsystem. If you look at a traditional VMware case, there are at least 6 translations, possibly up to 20, for each block access across a network. Adding FibreChannel in between aggravates the issue quite badly. It adds a lot of latency based on its 1978-era design (this is not an exaggeration; the SCSI protocol is from 1978). There are many more problems which come into play as well.

Every block-oriented storage system which supports any form of resiliency through replication (which is no longer optional) has to perform hashing on every single block received. Those hashes must be stored in a database for data protection. For 512-4096 byte blocks, chances are a CRC-32 is suitable for data protection, and for deduplication with a "lazy write cache" it is also suitable. However, in the case of NetApp for example, which is severely broken by design, everything is immediate and there's no special storage for lazy or scheduled dedup.

In a proper dedup system, a block which has two or more references at the time of a write operation (even if the hash matches) will have its reference count decreased, and a new block will be written to high-performance storage (NVMe, for example) with a single reference. If there was only one reference, then the block is altered in place and the hash is updated.

Then dedup runs "off-peak", meaning (for example) that when the CPU is under 70% load, the new blocks stored on disk are compared 1:1 with other blocks with matching hashes, references are updated, and only a single copy of the data itself is maintained. In addition, during this phase, it is possible to lazily compress blocks which are going stale and migrate them to cold storage (even off-site) or, heaven forbid, FC SAN storage.
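For what it's worth, the reference-counting plus off-peak scheme above can be sketched in a few lines. This is a toy illustration under my own assumptions (CRC-32 as the candidate hash, Python dicts standing in for the block store and hash database); none of the names come from any shipping product:

```python
import zlib  # crc32: the cheap per-block hash used for dedup candidacy

class DedupStore:
    """Toy refcounted block store with a lazy ("off-peak") dedup pass."""

    def __init__(self):
        self.blocks = {}      # block id -> data
        self.refs = {}        # block id -> reference count
        self.by_hash = {}     # crc32 -> set of candidate block ids
        self.pending = set()  # blocks awaiting the off-peak pass
        self.next_id = 0

    def alloc(self, data):
        """Store a brand-new block with a single reference."""
        bid, self.next_id = self.next_id, self.next_id + 1
        self.blocks[bid], self.refs[bid] = data, 1
        self.by_hash.setdefault(zlib.crc32(data), set()).add(bid)
        self.pending.add(bid)
        return bid

    def write(self, bid, data):
        """Copy-on-write: a shared block gets a fresh copy and drops one
        reference; a sole owner is altered in place and its hash updated."""
        if self.refs[bid] > 1:
            self.refs[bid] -= 1
            return self.alloc(data)
        self.by_hash[zlib.crc32(self.blocks[bid])].discard(bid)
        self.blocks[bid] = data
        self.by_hash.setdefault(zlib.crc32(data), set()).add(bid)
        self.pending.add(bid)
        return bid

    def offpeak_dedup(self):
        """Lazy pass: byte-compare hash-matched blocks 1:1 and merge
        references so only a single copy of the data survives.
        Returns a remap {merged_id: surviving_id}."""
        remap = {}
        for bid in list(self.pending):
            if bid not in self.blocks:
                continue  # already merged earlier in this pass
            h = zlib.crc32(self.blocks[bid])
            for other in list(self.by_hash.get(h, ())):
                if other != bid and self.blocks[other] == self.blocks[bid]:
                    self.refs[other] += self.refs.pop(bid)
                    del self.blocks[bid]
                    self.by_hash[h].discard(bid)
                    remap[bid] = other
                    break
        self.pending.clear()
        return remap
```

The foreground write path never does a byte comparison, which is the whole point: the only immediate cost is one CRC-32 per block, and the expensive 1:1 verification is deferred to the idle pass.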

Dedup should have absolutely ZERO impact on performance when implemented by engineers who actually have half a brain.

The disadvantage to the system described above is that dedup won't be sexy at trade shows since it might take minutes, hours or more to see the return from the dedup operation.

As for databases, if you're running mainstream SAN (EMC, Hitachi, 3Par, NetApp), you're absolutely right. You should avoid dedup as much as possible. None of those companies currently employ the "real brains behind their storage" anymore, and they haven't had decent algorithm designers on staff in years. They take a system which works and layer shit upon shit upon shit to sell it. There will be problems using any GOOD storage technologies on those systems.

For databases and most modern workloads, you should move away from block-storage-oriented systems and focus instead on file servers with proper networking involved. In this case, I would recommend a Gluster cluster (even if you have to run it as VMs) with pNFS, or Hyper-V with Windows Storage Spaces Direct. These days, most of the problems with latency and performance are related to forcing too many translations between the guest VM and the physical disk. There's also the disgusting SCSI command queuing illness, which orders file read and write operations impressively stupidly, since NCQ at each point it's processed has no idea what the block structure of the actual disk is. pNFS and SMBv3 are far better suited for modern VM storage than FC and iSCSI can ever be.

That said, there are some scale-out iSCSI solutions which aren't absolutely awful. But scale-out is technically impossible to achieve over FC or NVMe.

P.S. - Dedup in my experience (I write file systems and design hard drive controllers for personal entertainment) shows consistently higher performance and lower latency than the alternative because of the simplicity involved in caching.

P.P.S. - I've been experimenting with technology which is better than dedup: it instruments guest VMs with a block cache that eliminates all zero-block reads and writes at the guest. It improves storage performance more than most other methods... sadly, VMware closes their APIs for storage development, so I have to depend on VMware thin volumes or FC in-between to implement that technology.
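The zero-block idea is roughly the following. This is a hypothetical sketch; the class name, the backend interface and the set-of-LBAs layout are my own invention for illustration, not VMware's or anyone else's API:

```python
ZERO = bytes(4096)  # an all-zero 4KiB block, the most common block on most disks

class ZeroElidingCache:
    """Guest-side block cache that never sends all-zero blocks to the
    backing store: zero writes just mark the LBA, zero reads are
    answered locally with no I/O at all."""

    def __init__(self, backend):
        self.backend = backend   # anything with read(lba) / write(lba, data)
        self.zero_lbas = set()   # logical blocks known to be all zeros

    def write(self, lba, data):
        if not any(data):             # all-zero write: elide it entirely
            self.zero_lbas.add(lba)
            return
        self.zero_lbas.discard(lba)   # real data: fall through to storage
        self.backend.write(lba, data)

    def read(self, lba):
        if lba in self.zero_lbas:     # all-zero read: answered in the guest
            return ZERO
        return self.backend.read(lba)
```

Freshly provisioned and sparsely used disks are mostly zeros, which is why eliding just this one block pattern pays off so disproportionately.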

P.P.P.S. - I simply don't see this company doing anything special other than trying to define a new buzz term which is nothing new. Implementing code into the KVM kernel is the same as Microsoft implementing SMB3 into Hyper-V, it's just old hat.


Cisco goes 32 gigging with Fibre Channel and NVMe



Let's all say this together

Fibrechannel doesn't scale!

MDS is an amazing product and I have used them many times in the past. But let's be honest, it doesn't scale. All-flash systems from NetApp, for example, have a maximum of 64 FC ports per HA pair (which is so antiquated it's not worth picking on here), and that means the total system bandwidth is about 8Tb/sec. Of course, HA pairs mean you have to design for the total failure of a single controller, which cuts that in half. Then consider that half that bandwidth is upstream and the other half down; meaning, half is for connecting drives to the system, the other half is for delivering bandwidth to the servers. So we're down to 16 reliable links per cluster. There also has to be synchronization between the two controllers in an HA pair, so let's cut that in half again if we don't want contention related to array coherency.

An NVMe drive consumes about 20Gb/sec of bandwidth. So that's a maximum capacity of 25 online drives in the array. Of course there can be many more drives, but you will never reach the bandwidth of more than 25 drives. Using scale-out, it is possible to scale wider, but FC doesn't do scale-out and MPIO will crash and burn if you try. iSCSI can, though.
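Under the assumptions in the two paragraphs above (64 ports of 32G FC per HA pair, roughly 20Gb/sec sustained per NVMe drive), the halving chain works out like this back-of-envelope; these are the post's figures, not vendor math:

```python
# Back-of-envelope derating of an FC all-flash HA pair.
PORTS_PER_HA_PAIR = 64   # assumed maximum FC ports per HA pair
PORT_GBPS = 32           # 32G Fibre Channel line rate
NVME_DRIVE_GBPS = 20     # assumed sustained bandwidth of one NVMe drive

survivable_ports = PORTS_PER_HA_PAIR // 2  # design for one controller down
server_links = survivable_ports // 2       # half upstream (drives),
                                           # half downstream (servers)
usable_gbps = server_links * PORT_GBPS
useful_drives = usable_gbps // NVME_DRIVE_GBPS

print(server_links, usable_gbps, useful_drives)
# 16 server-facing links, 512 Gb/s usable, ~25 drives' worth of bandwidth
```

That matches the 16 reliable links and the roughly 25 online drives cited above, before the extra coherency halving is applied.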

Now consider general performance. FC controllers are REALLY expensive. Dual-ported SAS drives are ridiculously expensive. Scaling out performance in a cluster of HA pairs would require millions in controllers and drives. And then, because of how limited you are for controllers (whether by cost or hard limitations), the processing required for SAN operations would be insane. See, the best controllers from the best companies are still limited by processing for operations like hashing, deduplication, compression, etc. Let's assume you're using a single state-of-the-art FPGA from Intel or Xilinx. The internal memory performance and/or crossbar performance will bottleneck the system further, and using multiple chips will actually slow it down, since it would consume all the SerDes controllers just for chip interconnect at a speed 1/50th (or worse) of the internal macro ring bus interconnects. If you do this in software instead, even the fastest CPUs couldn't hold a candle to the performance needed for processing a terabit of block data per second. The block lookup database alone would kill Intel's best modern CPUs.

FC is wonderful and it's easy. Using tools like the Cisco MDS even makes it a true pleasure to work with. But as soon as you need performance, FC is a dog with fleas.

Does it really matter? Yes. When you can buy a 44-real-core, 88-vCPU blade with 1TB of RAM on weekly deals from server vendors, a rack with 16 blades will devastate any SAN and SAN fabric, making the blades completely wasted investments. Blades need local storage with 48-128 internal PCIe lanes dedicated to storage to be cost effective today. That means the average blade should have a minimum of 6x M.2 PCIe NVMe internally (NVMe IS NOT A NETWORK!!!!!!), and additional SATA SSDs internally make sense for mass storage. A blade should have AT LEAST 320Gb/sec of storage and RDMA bandwidth, and 960Gb/sec is more reasonable. As for bulk data, using an old crappy SAN is perfectly OK for cold storage.

Almost all poor data center performance today is because of SAN. 32Gb FC will drag these problems out for 5 more years. Even with vHBAs offloading VM storage, the cost of FC computationally is absolutely stupid expensive.

Let's add one final point which is that FC and SAN are the definition of stupid regarding container storage.

FC had its day and I loved it. Hell I made a fortune off of it. I dumped it because it is just a really really bad idea in 2017.

If you absolutely must have SAN consider using iSCSI instead. It is theoretically far more scalable than FC because iSCSI uses TCP with sequence counters instead of "reliable paths" to deliver packets. By doing iSCSI over Multicast (which works shockingly well) real scale out can be achieved. Add storage replication over RDMA and you'll really rock it!


Microsoft's new hardware: eight x86 cores, 40 GPU cores


Re: 4K? Meh

I had the orange... I was told it was called Amber. And it was supposed to be better than green but Eddie Murphy told me that his grandmother suckered him worse with burgers that were better than McDonalds.

What sucks is that simcga almost never worked for me. But to be fair, Sierra was generally good about supporting HGC.


Re: Project Scorpio?

$700 is excluding VAT. With VAT at 17%, that should be £819. Then consider the "you're in Europe tax" which Apple is the worst about but Microsoft tries to suck at too. I'd guess £850-900.
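The arithmetic, taking the post's assumed 17% VAT rate and treating the "you're in Europe tax" as a rough 1:1 dollar-to-pound conversion (both are this comment's assumptions, not official rates), is simply:

```python
price_usd_ex_vat = 700   # the $700 figure, excluding VAT
vat_rate = 0.17          # assumed VAT rate from the comment above
usd_to_gbp = 1.0         # "you're in Europe tax": treat $1 as roughly £1

price_gbp = price_usd_ex_vat * (1 + vat_rate) * usd_to_gbp
print(round(price_gbp))  # 819
```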


Elastifile delivers stretchy file software


Built into Windows Server and Linux?

Why would you pay money for something already built into the operating system?


Google Cloud to offer support as a service: Is accidental IT provider the new Microsoft?


Don't use google for the same reason you don't use AWS or IBM

If you choose to go cloud, you want a single solution that works in the cloud or out. Google, Amazon and others don't make the platform available to take back home. Sure, you can go IaaS, but do you really want IaaS anymore?

Never use a platform which has PaaS or SaaS lock-in; Google and AWS are permanent commitments. Once you're in, you can't get out again.


After 20 years of Visual Studio, Microsoft unfurls its 2017 edition


Re: Getting better all the time

Maintaining projects other than your own is always a problem. But updating to a new IDE and tool chain is just a matter of course and is rarely a challenge. I've moved millions of lines of code from Turbo C++ to Microsoft C++ 7.0 to Visual C++ 1.2 through Visual Studio 2017. Code may require modifications, but with proper build management, it is quite easy to write code to run on 30 platforms without a single #ifdef.

I've been programming for Linux and Mac using Visual C++ since 1998. I used to write my own build tools, then I used qmake from Qt. Never really liked cmake since it was always hackish.

Now I code mostly in C#, since I've learned to write code which can generate better machine code after JIT than C++ generally can, because it targets the local processor instead of a general class or generation of CPUs. Since MS open sourced C# and .NET, it's truly amazing how tight you can write C#. It's not as optimized as JavaScript, but garbage-collected languages are typically substantially more optimal for handling complex data structures than C or C++, unless you spend all your time coding deferred memory cleanup yourself.


Why did Nimble sell and why did HPE buy: We drill into $1.2bn biz deal


Re: Cisco: Be Bold!

Cisco is dumping SAN, so why would they buy another one? Cisco is the only company that seems to be taking hyperconverged seriously... now if only they'd figure out that hyperconverged isn't a software SAN.


And there goes Nimble

To be fair, over the past several software releases, Nimble has been dropping rapidly in quality. But still, they were probably the best option for SAN storage available.

No one shed a tear when Dell bought EMC, since EMC was already yesterday's crap and VMware was already falling apart.

But HPE buying Nimble is a disaster. They're probably already trying to decide how many people to lay off to "reduce redundancies", since there's a storage nerd here and a storage nerd there. They'll outsource support to India to a bunch of guys with a support script. As for marketing knowledge, no one at HPE will sell Nimble, since they barely understand 3Par and they only just figured that out.

I predict that Nimble will perform about as well under HPE as Aruba did... and frankly Aruba is pretty much dead now.


Sir Tim Berners-Lee refuses to be King Canute, approves DRM as Web standard


Standard DRM = crack once use forever

This is a good thing. Imagine you buy a phone or a tablet and it reaches end of support. A device sold and marketed as capable of playing standard DRM content might end up blacklisted because someone else found a method of cracking DRM using that device. Since updates are no longer available, whoever blacklisted that device can be held liable and sued for their actions.

Consider that browser based DRM is simply not possible.

A pluggable module is code which requires standardization of an API. The API will be well understood and will not be restricted. So you write a small loader app and then, based on the entry points, issue your own keys, decode some of your own streams and find out where the keys are held.

The DRM must be extremely lightweight, otherwise batteries will drain too quickly. One could write the DRM in JavaScript, which would be smartest, and with instruction-level vectorization as part of WebAssembly, it could be quick. But it would consume far more power than a hardware solution. So DRM in code would have to be limited to rights management and providing decryption keys for AES or EC. And if the keys can be transferred at all, they can be cracked.

The media player pipelines in Mozilla and Chrome are well understood. The media player pipeline in Windows is designed to be hooked and debugged. There is absolutely no possible way to DRM video on Windows, Linux or Mac that can't be intercepted after decryption. As for Android, unless the DRM blacklists pretty much every Android device ever made, it can't work.

So... good luck trying... I actually buy all my films, but I decrypt them so I can still watch them even if the DRM is killed off. I lost tons of money buying audiobooks on iTunes which could only be downloaded once. I won't ever buy media I can't decrypt again. I'll join the race to see who can permanently crack the DRM fastest.


The day after 'S3izure', does anyone feel like moving to the cloud?


Azure Stack

At least Azure Stack will make it possible to move things out of the cloud and back home.

With Amazon and Google, you're screwed


Nimble gets, well, nimble about hyperconverged infrastructure


Where would it fit?

Microsoft and OpenStack currently implement hyperconverged storage in their systems with full API support and integration between management, compute, storage and networking technologies. VMware does not support hyperconverged storage at all since they haven't built an application container (think vApp) that can describe location independent storage without reference to SCSI LUNs (local, iSCSI or FC). As such, at this time, VMware doesn't support either hyperconverged storage or networking.

So, except for making half-assed attempts at running traditional storage on compute nodes (definitely a good start but very definitely not hyperconverged), where would this fit?

Just remember that hyperconverged requires more than just running traditional storage on the same box as compute. It has to actually be converged, meaning that storage and networking are part of the application itself.

As I said, both Windows and OpenStack clearly define how to achieve this and both support Docker style apps (container or otherwise) through a standard API which actually supports hyperconverged. Adding high speed storage makes it faster, but replacing Storage Spaces or Swift actually hurts the system by introducing unnecessary levels of management and abstraction.

So, if VMware ever learns how to make a current-generation solution, the market for hyperconverged storage won't exist any longer. It would be like buying a new car and then adding a second engine that actually makes the car slower because of the extra weight.


Linux on Windows 10: Will penguin treats in Creators Update be enough to lure you?


Re: Java is so easily messed up... just put spaces in a path or a password...

I believe he's referring to the code within the Java standard class libraries that handles cross-platform support, which is the real reason Java failed as a "cross platform tool". Between file processing and AWT, then SWT and Swing, Java has been a frigging nightmare for developers. It may have improved since I ran screaming from it, but I grew awfully tired of rewriting half of Java every time I tried anything, because the Java implementation was broken and, due to sealed classes, it was nearly impossible to make anything work without starting over.

Remember, the worst thing ever to happen to Java was naming the language, the intermediate language, the runtime, the class libraries and the platform all "Java". Because of this, even Oracle management doesn't understand what something like DalvikVM or SWT is and how it fits. Clojure flat-out baffles them, since it's a non-Java language running on Java and that's confusing.


Re: Is it better than Cygwin?

Better support for porting Linux apps to Windows, for sure. An example: Handbrake, the video compression tool, can leave all its shared libraries in native Linux format, compiled with GCC or LLVM with GNU assembler optimizations, while building a UI using XAML and C#. This will save thousands of hours working out the platform incompatibility issues often associated with porting complex applications to Windows.

Cross compiled tool chains are another advantage. For example, one could develop code using Apple Swift for IOS development directly on Windows and thoroughly troubleshoot the code using tools like Visual Studio and then compile natively for Android.

Android is another one. It's possible to build the native Android emulator for Ubuntu on Windows, allowing native access to Dalvik, GCC and LLVM directly from within Visual Studio, and allowing faster and more accurate memory debugging than has been possible using SSH or Cygwin/MinGW implementations.

There are many reasons this is better for developers.

As for users, that's different. A user probably won't care much about the differences, since it's basically the same code. It should get a bit better with the recent emergence of alternatives to X11, which generally don't "remote" as well for screen mirroring.


HPE's started firing people at Simplivity, say former employees


HP + Cisco + Dell != Software

If there's anything that Cisco, Dell and HP have proven over and over again, it's that they are hardware-only companies. They simply don't understand that software is actually more important than hardware, because you can take the software with you when you leave.

VMware still has at least a little bit of trust in the industry because Dell hasn't been stupid enough to try and roll it into Dell as a Dell offering.

Simplivity is absolutely useless now because it's an HPE product: you're expected to run it on HPE hardware, and if you leave HPE, you leave Simplivity too.

Cisco's HyperFlex technology is an absolute joke. Because VMware is a disaster when it comes to software updates, HyperFlex is dangerously scary: for safety reasons, you should probably never upgrade any hosts running HyperFlex technology... or NSX; you should simply delete the node and start off with a new replication. This means that where every other vendor's hyperconverged technology only needs three servers in a cluster, HyperFlex needs a minimum of four.

Companies need to use storage from the hypervisor vendor exclusively. Using third-party hyperconverged storage with VMware, Azure, OpenStack or Citrix is sheer stupidity. It's also excessively wasteful. Currently, VMware's solution is the weakest and the worst to manage. I'm inclined to believe this is related to having been owned by a storage company peddling legacy kit for so long that they didn't want people to depend on better solutions.


So you want to roll your own cloud


Why not buy your own cloud?

Honestly, just buy an Azure Stack from Cisco, Dell (or if truly desperate HP) and be done with it. Then you have a finished platform with all the cloud services including PaaS and SaaS without the headache of either rolling your own or selling your soul to Amazon, Google or Microsoft.

Yes, I know you'd be running Microsoft software... we already sold our souls to them for that.


Cancel your cloud panic: At $122bn it's just 5% of all IT spend


Re: Biannual

I honestly have no idea how to respond to this. I am certainly not a speaker of the Queen's English, as I find it a disorderly mess. I honestly lost absolutely all respect for the Queen's English when I heard her, in an interview, refer to the game of football as "footie". People should prefer Oxford English over the Queen's English; the Queen is a gutter-slang speaker as well.

I recently learned, while paying close attention on a visit to central England, that the reason Americans spell it color and the English spell it colour is that Americans pronounce it color and the English pronounce it colour, where the ou in the English pronunciation is not the digraph ou but the letter O and the letter U being rammed into each other softly.

There are many horrible words in the many different dialects of English. I believe that the OED's persistence in documenting every single word without properly listed etymology as part of its new definitions practically disqualifies the OED as an official dictionary and makes it a competitor to "The Urban Dictionary" instead. The last five times I've visited the OED, I received poor-quality definitions with no further qualification and had to refer to Wiktionary instead, which supplied a slightly better experience.

As for your use of "Wheelie".

I believe that if you are an Englishman, you should be forced to use the term "wheel stand" instead, because the British tongue has been grossly infected with a plague of "eeeeeeeee"s. Every possible noun in the British tongue has been reduced to a ridiculous single syllable followed by "ies". Honestly: butties, footie. The "cutesy shit plague" which has afflicted your nation is unforgivable. Call them sausages instead of bangers. Don't abbreviate mashed potatoes; there is simply no profit in that.

American English sucks like this as well. But unlike the British, who seem to feel that they still have some semblance of authority over the English language and more specifically "The Queen's English", one should strive to set an example of culture and dignity as opposed to allowing your language to degrade into a failed Hello Kitty cartoon.

My blogging/commenting grammar reflects my speech pattern rather than the grammatically correct writing I would do elsewhere. I believe that if we are to take it upon ourselves to be grammar nazis in public, we should also strive to set a better example.

I'll forgive your wheelie comment today; I do believe that, EEEEEEs affliction or not, it is likely the proper word in that place. However, at some point, I'd like to have a nice discussion with you about the British compression of the word "the". For example, I prefer to visit "The Hospital" when I'm ill as opposed to visiting someone named "Hospital". I feel one should be educated at "A University", "The University" or maybe at "Oxford University" or "The University of Cambridge" as opposed to simply "at university".

The almost random but accepted disappearance of the word "the" in The Queen's English would be considered guttural, unrefined or "straight-out damn near toothless redneck" in other dialects. For example, I would expect Kanye West to selectively omit the word "the", as he may not be able to spell it.


More cloud spending when aaS is removed

This year marks the beginning of fully supported private clouds being shipped. You'll get the full public cloud experience with SaaS, PaaS and IaaS as a package you can buy in a box and have delivered. As such, most of the money currently earmarked for spending on "servers and storage and stuff" will be earmarked for "private cloud" instead.

We're about to see a massive move out of the public cloud as the cost of uncertainty increases throughout the world. With Theresa May as the first new leader of hate-related politics, quickly followed by The Donald, and with Germany, France, Poland, etc. coming up soon, the public cloud is VERY SCARY right now. Possibly the worst choice any company can make is to place their business files on servers controlled by American or European countries that are led by populist politicians. Consider that hosting data in the public cloud within the UK makes it susceptible to the snoopers' charter and the new follow-up bills. The US government is suing Microsoft, Google, Amazon and others, claiming it should have access to data held in data centers outside of America simply because American companies manage the data.

Populist propaganda removes human and civil rights from people generally under the heading of national security. While the cloud technology is perfectly sound, the problem is politics.

I was in an "Azure Security in the Cloud" session last week held by Microsoft and asked: "If I use one of the non-Microsoft Azure data centers located in Germany, does Microsoft U.S. have access to my data?" The guy really avoided answering, but eventually admitted that, in theory, a subpoena issued in the U.S. would be all that's required to gain access to data in non-Microsoft data centers in Germany, because it's still part of the Azure platform. Due to additional laws in America, Microsoft would be required to gag themselves and not tell anyone that the US government is snooping.

While I don't have anything to hide from the Americans, and certainly don't care if they are checking out the naked pictures I keep of myself (I'm not an attractive person) on my cloud accounts... I think there are many companies out there that have to avoid that. There are no American companies currently delivering cloud services in any data center anywhere that can actually meet the requirements of EU Safe Harbor. UK companies are REALLY REALLY REALLY out on that one, thanks to Miss May.

So... in the end, cloud will grow like crazy, but not in the public cloud. Instead, turn-key private cloud will be where we are in 5 years.


UnBrex-pected move: Amazon raises UK workforce to 24,000


Cheap labor?

Hiring cheap labor is always good practice. Best part is, the new employees won't be able to afford international travel, so they'll be close during vacations.


The stunted physical SAN market – Dell man gives Wikibon forecasts his blessing


Hyperconverged will die shortly after as well

HyperConverged simply means that software stores virtual disks reliably and efficiently on the virtualization hosts themselves. Windows Storage Spaces/ReFS and systems like GlusterFS/ZFS have been mature for some time. VMware is about 5 years behind but may eventually mature to a similar level as Windows and Linux.

Once people eventually figure out that scale-out file servers running natively on hypervisor hosts are more efficient and reliable, the entire aftermarket hyperconverged market will simply die.


Connected car in the second-hand lot? Don't buy it if you're not hack-savvy


Pretty sure it's brand dependent

BMW makes it nearly impossible to connect to your own car. In many cases you can't even connect to a car you rightfully own. I'm pretty sure that their system, which is paranoid-strict about device connectivity, won't let the new owner connect unless the old owner first releases it.


Hyperconverged market gets hyper-competitive as new riders enter field


Re: HPE/Simplivity not a competitor

Like how Aruba, SGI, DEC, Compaq, 3com, Tandem, etc... all benefited from HPE sales and engineering? There are plenty more, but HPE buys companies in that top right quadrant, rides them a few months and as the customers start looking elsewhere, they buy someone else. HPE has been a chop shop since the dot com era.

I'm not saying Cisco is better, with a track record like they have with Cloupia and now CliQr, but HPE is where IT innovation goes to die.

Even HPE born-and-raised hardware is so ignored by engineering that iLO is damn near unusable at this point. Its API barely works, its command line fails more often than it works, and its SNMP is actually insecure and practically an industry joke. Oh, and if you want it to work "right", you have to keep an unpatched Windows XP box with IE 7 or 8 around just to get KVM to operate semi-OK. As for installing client certs... just don't bother.


Windows 2016, Gluster & Docker/OpenStack?

Is it a competition to see who will pay the most money to keep using VMware? Honestly, storage is part of the base OS now... networking too... unless you want to pay more and use VMware, which doesn't really solve anything anymore. Don't get me wrong, I'm all for retro things. But it seems like hyperconverged products from EMC, Cisco, HP/Simplivity or NetApp are more about spending money for absolutely no apparent reason.

In addition, I can't really understand why server vendors are still screwing around with enterprise SSD when Microsoft, Gluster and others have obsoleted the need for it. Dual-ported SAS or NVMe seems like the dumbest idea I've heard of in a while.

People: reliability, redundancy and performance come from sharding and scale-out. When you depend on things like dual-ported storage, you actually limit your reliability, performance and redundancy.
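To make the sharding point concrete, here's a toy Python sketch (all names hypothetical, not any vendor's actual placement algorithm): blocks are hashed across nodes with a replication factor, so any single node can fail without losing data, and reads can be spread across the cluster.

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2

def placement(block_id, nodes=NODES, replicas=REPLICAS):
    # Hash the block ID to a starting node, then take `replicas`
    # consecutive nodes so the copies land on distinct machines.
    start = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

def survivors(block_id, failed):
    # Nodes still holding the block after `failed` goes down.
    return [n for n in placement(block_id) if n != failed]

blocks = [f"blk-{i}" for i in range(1000)]
# With two replicas on distinct nodes, no single node failure loses data.
assert all(survivors(b, "node-b") for b in blocks)
```

Dual-ported hardware protects one path to one box; placement like this protects the data itself, and adding nodes adds capacity and bandwidth at the same time.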

And no... Fibre Channel is no longer a viable option for storage connectivity. Why do you think the FC ASIC vendors are experimenting with alternative protocols over their fabrics?


UK Snoopers' Charter gagging order drafted for London Internet Exchange directors


Didn't this behavior collapse the Empire?

I am not completely familiar with British history, but somehow I recall hearing that a blind overly-nationalistic belief was the primary flaw in the later empire which eventually led to its collapse.

It seems to me that, as with the Americans, Britain seems to believe that simply having been squeezed from a particular vagina in a particular place justifies an otherwise unjustified belief in one's superiority.

Patriotism is a disgusting illness. It leads to some sort of lethargic behavior that allows a person to blindly believe they have no need to try to succeed since simply claiming membership in a birthright is a satisfactory alternative.


Global IPv4 address drought: Seriously, we're done now. We're done



I'm using my phone right now to post this. It has a private IP over LTE and works just fine. When I tether my laptop, it works just fine. I regularly visit sites behind load balancers that multiplex at layer 5; in fact, there are often tens of thousands of major websites sharing a single IP.

Our current IPv4 problem is entirely greed-based and artificial. There is absolutely no reason we can't solve the problem. With fewer than 100,000 registered active autonomous systems on the internet, we certainly should be able to make do with a few hundred thousand /24 networks.
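The arithmetic behind that claim is easy to check with Python's standard `ipaddress` module (the 300,000-network figure is just an illustration):

```python
import ipaddress

# 2^24 = 16,777,216 possible /24 networks exist in the IPv4 space.
total_24s = 2 ** 24

# Each /24 carries 256 addresses (ipaddress counts network/broadcast too).
per_24 = ipaddress.ip_network("198.51.100.0/24").num_addresses

# A few hundred thousand /24s is tens of millions of addresses --
# plenty of interconnect space for ~100,000 autonomous systems.
assigned = 300_000 * per_24
print(f"300,000 /24s = {assigned:,} addresses, of {total_24s:,} possible /24s")
```

That's roughly 77 million public addresses from under 2% of the available /24s, with everything else behind NAT or layer-5 multiplexing.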


Microsoft ups Surface slab prices for Brits. Darn weak pound, eh?


Supply and demand?

I'm pretty sure that the people at Microsoft report their quarterly results in dollars. When they sell to customers in other countries, they account for value added tax where applicable, shipping if necessary, cost of support (employing locals), regionalization (spelling checkers with colour and favourite), etc...

If the value of a local currency drops too drastically relative to the dollar, Microsoft must increase prices to cover the exchange-rate-related losses.
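A quick worked example with made-up numbers (the rates and price here are illustrative, not Microsoft's actual figures):

```python
# Illustrative only: a product meant to return $999 to the vendor,
# priced before and after a hypothetical drop in the pound.
usd_target = 999.00
rate_before = 1.48   # USD per GBP before the drop (made-up)
rate_after = 1.22    # USD per GBP after the drop (made-up)

gbp_before = usd_target / rate_before   # about 675 GBP
gbp_after = usd_target / rate_after     # about 819 GBP
increase = (gbp_after - gbp_before) / gbp_before

print(f"GBP {gbp_before:.0f} -> GBP {gbp_after:.0f} ({increase:.0%} rise)")
```

A roughly 18% fall in the pound forces a roughly 21% rise in the local price just to keep the dollar figure flat, before any buffer for further movement.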

If the market can't or won't bear the adjustment, they will incur a different set of losses and choose to stay and fight or give up and leave.

Microsoft probably waited for the pound to reach a level they expect to be stable and made a big, painful adjustment that should compensate for possible further minor shifts, allowing the U.K. market to adjust to the change and go on as normal. I also assume they are not sitting and celebrating this change, or even taking pride in it.

Consider that, as someone living in Norway: our currency devalued by 50% during the oil crisis and hasn't recovered even though oil more or less has. We feel your pain, but also understand that $1 is $1, and it takes more kroner to make a dollar today than three years ago.


HPE, Samsung take clouds to carriers


What the?

Network function virtualization is a standard component of Windows Server and OpenStack. I think Nutanix even has something that could be considered NFV if you ignore what NFV actually is. By using it with Docker and/or Azure apps, it's entirely transparent. Why the hell would anyone pay for this? More to the point, why the hell would anyone ever use any platform that doesn't make this a minimum standard feature?


Dell's XtremIO has a new reseller in Japan – its all-flash rival Fujitsu


Bluray vs. HD-DVD?

Remember when Blu-ray won the format wars? It was hilarious. Sony won when HD DVD simply died: the HD DVD camp stopped pressing the discs and stopped making the players. Sony was sure it would get rich because the whole world would flock to its format. What really happened is that Sony should have learned it probably ought to stop making Blu-ray too, because the world had already ditched discs and moved to download services. Instead Sony went all in, and now has almost no presence in the consumer video market to speak of. The moral: neither Blu-ray nor HD DVD won, but the HD DVD camp lost less because they knew when to pull out.

Dell/EMC, NetApp, Hitachi, HP, etc. are all going all-in on storage and all-flash, believing they can win and take the cake using things like NVMe, but in reality they're all hanging on to something which is already being forgotten.

SANs made a lot of sense at a time when file systems and operating systems lacked the ability to provide the storage needed for server farms and, later, virtualization. Now, with the exception of VMware, which seems to think storage is a product rather than a component, the world is moving away from these technologies. We'll instead use scale-out file servers running on our compute nodes, which provide performance and redundancy with none of the bandwidth problems SAN has. We'll use clouds and version-controlled file systems to provide backups as well. It gives us substantially lower TCO, better support, better integration and a clear long-term path for growth in capacity and performance, without the massive lost investments SANs are doomed to.

So, while the dozens or hundreds of storage companies battle it out, the hypervisor vendors will simply localize the storage and provide something better, eliminating the need or desire to use these dinosaurs.

I wonder which companies will be the smart ones who realize the ship has sailed and that they weren't on it. I think Dell's merger with EMC will be interesting, because the only thing of value they appear to have gotten from the deal is VMware, and that company is so plagued with legacy customers demanding support that Dell will probably miss the boat on too many other opportunities by trying to force VMware to become something else.


Stallman's Free Software Foundation says we need a free phone OS


Isn't he cute?

Stallman managed to make it into the news again. And here we thought he was finally gone.

1) You can make the best free phone OS but no one will use it

2) Every vendor will give it a try because ... well why not

3) Every vendor will stop supporting it within days of it being released

The consumer who defines the success of a platform or not doesn't give a shit about free. They want music, videos and games.


Ooops! One in three tech IPOs now trading below their starting price


Re: Why?

VMware went to shit when it became board-controlled. All their competitors are miles ahead of them in every area, and VMware, possibly the most innovative company of the first five years of the millennium, has become a "me too... kinda" company. Hardware support for virtualization has eliminated competitive edges in hypervisors. It's become about integration and management, of which VMware is thoroughly lacking. Even now, they actually sell their system APIs, blocking developers from establishing a community and ensuring their vendors will get innovative features first.

Facebook and others actually produce a surprising amount. In the case of Facebook, they provide massive amounts of innovative technology to the community. Oh... and they have managed to monetize the shit out of their platform.


UK, you Cray. Boffins flex ARM in 'first-of-its-kind' bonkers HPC rig


Re: Interesting opportunity for comparison.

Not really.

1) Supercomputing code generally is written by scientists and runs horribly. I've done multiple tests and found that I often can rewrite their code and perform better on 40 processor cores and 4 GPUs than they do on 3 million pound computers.

2) We're not comparing ARM to x86 here; that comparison can be accomplished far better with a few desktop systems. Performance-wise, you're assuming performance is related to instruction set. It's generally about instruction execution performance and memory performance. Intel spends more transistors on its multipliers than ARM uses in an entire chip. That may sound inefficient, but those are the things that give Intel an edge. Consider also that memory performance is almost entirely about management of DDR bursts and block alignment, where ARM has much tighter restrictions. And, more often than not, scientific code makes profiting from cache utterly meaningless. Ask a scientist working on this code whether they can describe the DDR burst architecture, cache coherency within the CPU ring bus, or the process of mutexing within a NUMA environment.

This is about whether shitty code costs less to run on one computer 100 times larger than it should be vs another.

For 3 million pounds, I would imagine they could have bought a gaming PC and a programmer.


Tintri, thrown on the El Reg grill: We'll support NVMe! We promise!


NVMe fail

NVMe is a protocol for block storage across the PCIe bus. Like SCSI, it is intended as a method of storing blocks in a direct-connected system and assumes lossless packet delivery. When Fibre Channel came around, SCSI drives could be placed in a central system, allowing the physical drives of many servers to be located in a single box. When this happened, FC was designed to deliver the SCSI QoS requirements across fiber.

A few brilliant engineers got together and found out they could provide virtual drives instead of physical drives over FC and iSCSI while still placing the same demands on the fabric to support SCSI QoS.

This is where things begin to go wrong... people wanted fabric level redundancy as well. This meant designing an active/standby solution for referencing the same block devices. The problem is, SCSI and now NVMe are simply not a good fit for this.

1) The volumes (LUNs) being accessed as block storage ARE NOT physical devices. They are files stored on file systems.

2) The client devices accessing the LUNs ARE NOT physical computers with physical storage adapters. They are virtual machines with virtual storage devices.

3) The computational overhead of simulating a SCSI controller in software, translating block numbers from the virtual machine to a reference in a VMFS or NTFS file system, looking up the virtual block in the virtual file system, converting that reference to a virtual file position, looking up that block within the virtual file, translating it to a physical block, and then performing everything in reverse is wasteful; it consumes power and slows everything down. In addition, it severely limits scalability.

4) Dual ported storage exists to compensate for limitations in block based storage. It would be far more intelligent and cost effective to plug a large number of single ported drives into a PCIe switch and then multi-master the PCIe bus. This technology dates back 20 years and is solid and proven. The problem is, PCIe is too slow for this. When facing NVMe and new storage technologies, the bus would max out at about 32 NVMe devices.

5) Scale-out file servers simply scale out better than controllers. SCSI and now NVMe really can't properly scale past two controllers, and since NVMe and FC lack multicast, performance is simply doomed.
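A toy model of the translation chain from point 3 (the layer names are illustrative, not any vendor's actual stack) shows how mapping lookups multiply per guest I/O:

```python
class MappingLayer:
    """One table lookup in the virtual-to-physical chain (toy model)."""
    def __init__(self, name):
        self.name = name
        self.lookups = 0

    def translate(self, block):
        self.lookups += 1   # stand-in for a real mapping-table walk
        return block

chain = [
    MappingLayer("virtual SCSI controller"),
    MappingLayer("guest FS block -> virtual disk offset"),
    MappingLayer("virtual disk offset -> datastore file offset"),
    MappingLayer("datastore file offset -> physical block"),
]

def guest_read(block):
    # Every guest I/O walks the whole chain before touching real storage,
    # and the result walks it again in reverse on the way back.
    for layer in chain:
        block = layer.translate(block)
    return block

for b in range(1000):
    guest_read(b)

total = sum(layer.lookups for layer in chain)
print(f"{total} mapping lookups for 1000 guest reads")
```

Even this four-layer toy does four lookups per read; a real stack does them under locks, across caches, and then repeats the work for the completion path.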

The solution is simple... build out either :

1) GlusterFS

2) Windows Storage Spaces Direct

3) Lustre

Build up each storage node with hottest (NVMe) / hot (SATA SSD) / cold (spinning disk)

Build 3 or more nodes





FC (if needed)

Use proper time markers (not snapshots) for backup.

Be happy and save yourself millions.

PS - Hyper-V, OpenStack, Nutanix and more have this built in as part of their base licenses.


Well, FC-NVMe. Did this lightning-fast protocol just get faster?



Ok... this is 2016 almost 2017... WE DON'T SEND RAW BLOCK REQUESTS TO STORAGE!!!!

Let's make this very clear: SCSI and NVMe are the dumbest things you could ever put in a data center as an interconnect. Back when we connected physical disks in an array to the fabric, they didn't suck so badly. But now we have things like:

1) Snapshots

2) Deduplication

3) Compression

4) Replication

5) Mirroring

6) Differencing disks

There are tons of nifty things we have. SCSI and NVMe are protocols designed to talk to physical storage devices not logical ones. There are two needs when talking to a storage array :

1) a VM is stored on the array

2) a physical host is stored on the array

When you install 5-500,000 physical hosts with VMware, Linux or Windows, you will use the exact same boot image with a fork in the array. This is REALLY REALLY easy and with some systems (like VMware) which can do stateless boot disks, you can use the exact same boot image without forking at all.

When you install 5 or 50 million virtual machines you do roughly the same thing. Clone an image and run sysprep for example.

What does this mean? The hosts or virtual machines DO NOT talk directly to the disks, and therefore don't need to use a disk access protocol. Instead, a network adapter BIOS or system BIOS able to speak file access protocols would be far more intelligent.
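The forked-image idea above can be sketched as a copy-on-write overlay (a simplification; real differencing disks work at a block/extent level with on-disk metadata, but the principle is the same):

```python
class BaseImage:
    """Shared, read-only golden image."""
    def __init__(self, blocks):
        self.blocks = blocks

class Clone:
    """A forked machine: reads fall through to the base, writes stay local."""
    def __init__(self, base):
        self.base = base
        self.delta = {}   # copy-on-write overlay

    def write(self, n, data):
        self.delta[n] = data        # the base image is never modified

    def read(self, n):
        return self.delta.get(n, self.base.blocks[n])

base = BaseImage({n: b"golden" for n in range(4)})
vm1, vm2 = Clone(base), Clone(base)
vm1.write(2, b"vm1-data")

assert vm1.read(2) == b"vm1-data"   # vm1 sees its own write
assert vm2.read(2) == b"golden"     # vm2 still sees the shared base
```

Fifty thousand hosts forked from one image store one copy of the OS plus each machine's delta, which is why the "clone and sysprep" model works at that scale.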

There is simply no reason why block storage protocols should EVER be on a modern data center's network. Besides being shit to begin with (things like major SCSI SNAFUs), block storage protocols generally don't provide good security, they don't scale, and you end up building impressively shitty networks... sorry, fabrics... in pairs, because FC routing never really happened.

iSCSI almost doesn't suck... but it's just an almost.

People are saying "NVMe is about latency..." blah blah blah... no it isn't. It's about connecting Flash memory to motherboards. It's basically PCIe. It's a system board interconnect. It is not a networking protocol and should never be used as one.

If QLogic is actually bent on making something that doesn't suck... why not make an Ethernet adapter which supports booting from SMBv3 and NFS without performance issues? I should be able to saturate a 100Gb/s network adapter on two channels when talking to GlusterFS or Windows Storage Spaces without using any CPU.


Re: I remember...

FCoE was not really that great. From a protocol perspective, it had tons of overhead. Reliable Ethernet was absolute shit because it depended on a slightly altered version of the incredibly broken 802.3 flow control protocol. Add to that that FCoE is still SCSI, which actually needs reliable networking, and it's a disaster compounded on top of another disaster.

iSCSI was about 10,000 times better than FCoE since the overhead was roughly the same and it implements reliability at layer 4 which is highly tunable and not network hardware dependent. Add good old fashioned QoS on top and it's better.

Better yet, why not stop using broken ass block storage protocols altogether and support a real protocol like SMBv3 or NFS? They are actually far more intelligent for this purpose.


Trump's 140 characters on F-35 wipes $2bn off Lockheed Martin


F-35 is about jobs

It's been said by others, but the US government has been quite successful at not only providing a lot of jobs by siphoning funds into defense contractors that gets spread out far and wide, but they did it under the heading of national security which always goes unchallenged and more importantly, they forced every NATO country to buy some as well feeding more money into the US economy. In the end, the F-35 program has been probably the most successful economy builder in the US for decades. And the best thing is, the cost of owning an F-35 is so ridiculously high that it will draw money into the US for decades.

That said, for aerial combat, drones will probably take over. There's really just no point in spending that much money on a jet which while being quite cool, puts the pilot's life in danger. You can build 2000 armed drones for the cost of a single F-35. While an F-35 may be more effective in battle than a drone, a fighter against 2000 drones probably won't do so well.


HPE 3PAR storage SNAFU takes Australian Tax Office offline


Problem with SAN in general

I was recently told by a colleague of mine that his company was about to upgrade firmware on their SAN controllers due to performance problems on a nearly exabyte-scale SAN. I asked, "Do you have a mirror?" And he said they have backup but not a mirror. I asked how long it would take to restore the backup, and the number was nearly a month. I asked whether they have fully verified the contents of their backup, and he said not recently, because it would take a month just to stream the data from the backup.
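Back-of-envelope arithmetic shows why a restore like that takes about a month; the link rate here is an assumption for illustration, not a figure from the conversation:

```python
# Streaming roughly one exabyte of backup over a fixed-rate pipe.
# The 400 GB/s aggregate restore rate is an assumption.
exabyte = 10 ** 18            # bytes
rate = 400 * 10 ** 9          # bytes per second
days = exabyte / rate / 86400
print(f"~{days:.0f} days to stream one exabyte at 400 GB/s")
```

Even at a very generous 400 GB/s of sustained aggregate restore bandwidth, one exabyte takes about 29 days, and verifying the backup costs the same again.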

The problem with SAN is that it centralizes all problems. It's a single point of failure. The performance of even the fastest NVMe SANs is very, very slow compared to distributed file systems.

They managed to do the upgrade; it will now take about six weeks to run the rebuild on the array. The rebuild is destructive, and they will have no idea whether the problem is fixed until it is done. They also don't know what caveats the upgraded firmware will introduce.

I don't experience these problems because I run two distributed file systems: one for performance and one for transaction-oriented journaling. I have about 1Tb/s of bandwidth between the two systems, which can easily be saturated during transfer operations. What's best is that my system cost less than a tenth of what his system cost per byte, and instead of adding new disk shelves, I add disk, bandwidth and performance with each expansion. Instead of replacing SANs, I simply remove obsolete nodes and add newer, more efficient ones.

Trick one: Don't use VMware. Linux-based GlusterFS systems only work with iSCSI or Fibre Channel, which is slow and doesn't scale. VAAI NAS isn't available in Linux because of VMware's stupid policy of locking out open source developers.

Trick two: If you absolutely must run VMware, use Oracle Solaris for storage. Unlike EMC, NetApp, 3Par, etc... it can actually do proper scalability for performance and capacity. Consider Oracle Infiniband for the storage interconnect. Take classes on ZFS. Use Oracle servers. If you can afford $15,000 per blade for VMware, you can afford Oracle servers for storage. Oh... and don't use Infiniband for networking VMware or NSX. The CPU cost is too high.


'Toyota dealer stole my wife's saucy snaps from phone, emailed them to a swingers website'


Re: Unless you're the FBI...

I regularly have conversations with my children regarding this exact problem. I explain that they should never have any photographs on their phones that they wouldn't want out in the wild. This has nothing to do with right and wrong. As an example, a conversation at breakfast this morning: we were discussing with our 13-year-old daughter and 14-year-old son their friends' use of snus, drinking and vaping. I explained that while I don't condone these activities, under no circumstances are they ever to walk home alone or use a normal taxi while drunk. They are to pick up the phone and have me come get them, or send an Uber, since it's safer than a random taxi driven by the owner's brother-in-law. Also, they are never, ever allowed to take a sip of a drink they haven't seen poured or have had out of their eyesight for even a second.

It is not right I should have to have these conversations with two children. But it's right that I do. Just because people shouldn't do bad things doesn't mean they won't.

So, while I agree with you, your point is overly altruistic and not meaningful, because these things will happen, and the best advice is... don't store pictures like these on any electronic device.

Oh... and damn... lucky pastor.


Ford slams brakes on sales spreadsheets after fire menaces data center


Re: DR done right

Did we read the same article? This was a piss poor example of disaster recovery. All I could think while reading the article was "Sounds like Ford".

Any company managing their own servers should have a minimum of 3 data centers spread out geographically. Their systems should have 100% (not 99.999) uptime and they should be thoroughly embarrassed by any announcement of this type. If I were in PCI enforcement or banking regulation enforcement, I would open a case to investigate gross negligence.

Ford should really outsource their data center to someone competent with technology. They have proven for nearly 100 years that anything with electronics designed by them is going to constantly suffer failures.


Good God, we've found a Google thing we like – the Pixel iPhone killer


Uh... what?

I tend to hear this walled garden thing only from Android users who have locked themselves into Google's infrastructure for life.

Android is just as much of a lock-in as Apple.

That said, I can easily take all my Apple media and strip the DRM and play it on any phone or PC.

As for apps, Apps only work on the OS you bought them for.


Solidfire is 'for people who **CKING HATE storage' says NetApp Founder Dave Hitz


Re: Scale up vs. scale out

I'll grant you many good points. I work with quite a few different workloads. Agreed that NVMe is simply a method of remote procedure calling over the PCIe bus, as well as a great method of command queuing to solid state controllers. It is designed to be optimal for single-device access and has severe limitations in the queue structure itself for use in a RAID-like environment. In fact, like SCSI, it has mutated from a single-device interconnection protocol into something it really sucks at. If creating virtualized devices in ASIC, there are extreme issues regarding upgradability. If implemented in FPGA, there are major issues with performance, as even extremely large scale FPGAs have finite bandwidth resources. In addition, even using the latest processes for die fabrication, power consumption and heat issues are considerable. A hybrid design combining a high-performance/low-power crossbar with FPGA for the localized storage logic could be an option, though even with the best PCIe ASICs currently available, there will be severe bandwidth bottlenecks as expandability is considered. PCIe simply does not scale well in these conditions. Ask HPC veterans why technologies like Infiniband still do well in high-density environments for RDMA when PCIe interconnects have been around for years. SGI and Cray have consistently been strong performers by licensing technologies like QPI and custom-designing better interconnects, because PCIe simply isn't good enough for scalability.

So NVMe is great for some things. For centralized storage... nope.

As for storage clustering, I'm not aware of any vendors that cluster past 8 controllers currently. That's a major problem. Let's assume that somehow a vendor has implemented all their storage and communication logic in FPGA or, dreadfully, in ASIC. They could in theory build a semi-efficient crossbar fabric to support a few dozen hosts with decent performance. More likely, they have implemented their... shall we say business logic... in software, which means that even with the biggest, baddest CPUs from Intel, their overall bandwidth at scale will be dismal. There are only so many bits per second you can transfer over a PCIe connection, and only so many PCIe lanes in a CPU/chipset. Because of this limitation, high-performance centralized storage with only 8 nodes will never be a reality. Consider as well that, due to fabric constraints in PCIe, there will be considerable host limits on scalability without implementing something like NAT. This can be alleviated a bit by running PCIe lanes independently and performing mostly software switching, thereby mostly eliminating the benefits of such a bus.

Centralized storage has some benefits, such as easier maintenance, but to be fair, if this is an issue, you have much bigger problems. A scale-out file server environment configured with tiers for DR, backup, snapshots, etc. makes use of centralized clusters of servers. You may choose to use a SAN for this, but that just strikes me as inefficient and very hard to manage. When local storage is configured properly, there is never a single copy of data; it is accessible from all nodes at all times, with background sharding that copes well with scaling and outages. If there is an SSD failure, the blade failed and should be offlined for maintenance. This is no different than a failed DIMM, CPU or NIC. These aren't spinning disks; we generally know when something is going to die.

You're absolutely right about blades and PCIe lanes. Currently, so far as I know, no vendor is shipping blades like this which is why I have been forced to use rack servers. Thankfully, my current project is small and shouldn't need more than 100 per data center.

I am actually doing a lot of VDI right now. But that's just 35% of the project. The rest is big storage with a database containing a little over 12 billion high resolution images with about 50,000 queries an hour requiring indexing of unstructured data (image recognition) with the intent of scaling to 200,000 queries an hour. I am designing the algorithms for the queries from request through processing with every single bit over every single wire in mind.

I have worked with things as simple as server virtualization in the past on small and gigantic scale. With almost no exception, I have never achieved better ROI with centralized storage than with localized, tiered and sharded storage.

The only thing that centralized storage ever really accomplished is simplicity. It makes it easier for people to just plug it in and play. This is of great value to many. But I see centralized NVMe being an even bigger disaster than centralized SCSI over time.


Scale up vs. scale out

Scale-out exists not because you want more storage; it's because storage array controllers and SANs are too slow to meet the needs of high-density servers. Storage has become such a major bottleneck that it's no longer possible to populate modern blades and actually expect even mediocre VM performance; it's like running a spinning disk in a laptop. It's just horrible.

Local storage scaled out is far better, so internal tiered storage works pretty well. You get capacity and performance in a single package. It doesn't scale up as well as a storage array... unless you buy more blades. Instead, it's pretty damn good at making sure your brand new 96-core blade isn't sitting at 25% CPU usage because all the machines are waiting on external storage to catch up.

Scale-out in a SAN environment is just plain stupid. Even with the fancy attempts by some companies to centralize NVMe using PCIe bridge ASICs, the problem is that you'd need dedicated centralized storage for each blade to make use of that bandwidth. Additionally, NVMe is quite slow: it generally only uses 4 PCIe lanes. Using local storage, I can use 32 PCIe lanes, which is a small but noticeable improvement.
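The lane arithmetic, using PCIe 3.0's roughly 0.985 GB/s of payload per lane (a commonly quoted figure after 128b/130b encoding):

```python
# PCIe 3.0 moves roughly 0.985 GB/s of payload per lane
# (8 GT/s minus 128b/130b encoding overhead).
per_lane = 0.985

nvme_x4 = 4 * per_lane    # a typical x4 NVMe device: ~3.9 GB/s
local_32 = 32 * per_lane  # 32 lanes of local storage: ~31.5 GB/s

print(f"x4 = {nvme_x4:.1f} GB/s, x32 = {local_32:.1f} GB/s "
      f"({local_32 / nvme_x4:.0f}x)")
```

That "small but noticeable improvement" is 8x the bandwidth, without a fabric hop in the middle.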

Scale-up is still quite useful. Slow and steady wins the race. Cabinets that specialize in storing a few petabytes are always welcome. You really wouldn't want to use them for anything you might need to read often, but an array that can provide data security would be nice. So maybe NetApp should be focusing on scaling up instead of out. Cluster mode was kind of a bust; it's just too slow. Eight controllers with a few gigs of bandwidth each don't even scratch the surface of what's needed in a modern DC.


'Geek gene' denied: If you find computer science hard, it's your fault (or your teacher's)


Great idea but where's the actual research

I know this is the reg, the headlines are always basically click bait. So far as I can tell there's nothing in the research which can in any way be considered conclusive regarding whether genetics can impact this. That would require identifying a specific sequence to be tested and even then the results would simply say "We can't clearly identify whether this genetic sequence does or does not impact aptitude."

I'm quite sure there is a strong tie between genetics and aptitude. The gene involved is related to some form of obsessive compulsive disorder. Nearly all the "Nerds" (not geeks) I know and I know a lot are all people who :

1) Possess the ability to grind obsessively until they understand something

2) Possess incredible ability to recall information generally having cataloged it through associations

3) Have a weaker sense of community than others. This means they are perfectly willing to forego interpersonal interaction in favor of grinding on a problem.

4) A very high percentage show varying levels of Asperger's, ranging from appearing somewhat absent-minded to having absolutely no interest in other people's perception of their behavior.

A nerd is generally someone who shows great aptitude (meaning willingness to work his/her ass off to learn something) towards one or more topics and therefore establishing a strong "genius like" ability in the topic. A nerd is generally quite confident in themselves for having achieved mastery in a field as such, they eventually are known to pursue other "hobbies"... very commonly an art (like guitar) or a sport (maybe soccer/football). This is where they'll establish their community and often attempt to mate.

A geek on the other hand generally has no particular aptitude for anything. They favor learning the "lingo" of something generally considered intellectual without actually achieving much more than a rudimentary knowledge of the topic. They present themselves as nerds and even take pride in being permitted into social gatherings among nerds. The reason for this is to establish a sense of community, and it happens from an early age. A person without the obsessive need to study and learn, who doesn't have a community of their own because they are not athletic, or maybe don't see themselves as pretty enough or important enough "to hang with the popular kids", latches onto the "brainy kids", who at that age are generally less interested in personal presentation than academia.

They see the "brainy kids" as having some sort of innate talent for being brainy and believe their skills come from "being born smart", and as such see the gift as similar to beauty or athleticism. The geeks then develop an interest in whatever their friends are involved in and become something of an "accessory to the crime", for lack of a better phrase. The nerds, often quite happy to have a friend without the need to work hard to earn or keep one, then accept the geek into their "circle".

Generally when puberty occurs, nerds will, for the purpose of "satisfying their needs", attempt to groom themselves better, show interest in other topics (surprisingly, music and marijuana are incredibly popular, as neither generally requires physical prowess) and start blending into other cultures. The geeks are generally what's left behind, appearing to the masses as the intellectuals. In reality, the geek is simply someone who by this time shows a real interest in a given topic and, seeing that the nerds have "dropped out of the game", takes over. This could mean pushing the projector carts around the hallways or working in the library: generally just things that make them look the way the nerds should look while in reality being just awkward people with a strong (though often misused) vocabulary learned by osmosis from being in proximity to the nerds for so long.

A geek in modern times (not the old Greek) was a person who joined the circus looking for safety in numbers during more dangerous times (like the old west), when a generally awkward person would be in danger from predators (like the more dominant males). Since these people had no talents of their own, talent requiring hard work learned over time, they would join as a "freak" alongside the other outcasts. And while the person in question wasn't particularly freakish, they would perform freakish acts, namely biting the heads off of live chickens. As such, they established themselves within a community for safety.

I'd like to believe I'm a nerd... though who knows... maybe I'm a geek.


M.2 SSD drive format is under-rated. So why no enterprise arrays?



Maybe missed an order of magnitude somewhere?


Google tries to lure .NET devs with PowerShell cloud bait


Jury is out?

I was pretty sure that Azure has kinda proven itself already.

The real question is whether public cloud will survive now that you can build an entire Azure Stack in a few rack units capable of running ten thousand users. It's now officially cheaper to run Azure Stack than Azure, AWS or the Google public cloud.

I have a 26U rack with eight 16-core blades with 192GB each, 80Gb/sec networking to each blade, and 8 terabytes of scale-out storage pumping over a million IOPS. I also bought a NetApp FAS2020 for near-line backup storage.

The total cost of deployment for the entire system was about $10,000 on eBay. I tend to keep only 3 blades running at a time since I only have 100 VDI users at a time. It spins up new VDI systems in about 13 seconds each. It has IIS, load balancing, SDN, SDS, etc. I tend to be at about 8% capacity across the three blades for normal office loads with 100 users.

Currently, it's a development pod and classifies as being able to run under the MSDN terms as lab equipment.

Getting Azure Stack up initially was a pain. Now I've scripted the whole thing. A laptop with a fresh Windows 10 installation can download all the ISOs and deploy the entire Azure Stack in about an hour. I'm not using any fancy tools, just PowerShell. Since prepping ISOs as VHDs needs the WAIK anyway, there was no point using anything except PowerShell. I wrote it all object-oriented and implemented a simple command-queue pattern, building the entire system with test-driven development.

Now, Microsoft update does the rest.
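The command-queue pattern described above can be sketched in a few lines of PowerShell. This is a minimal illustration under my own assumptions, not the actual deployment script: the `DeployStep` class and the step names are hypothetical placeholders, and each step's actions are stubbed out with `Write-Host`.

```powershell
# Minimal command-queue pattern: each step knows how to run itself and how to undo itself.
class DeployStep {
    [string]$Name
    [scriptblock]$Run
    [scriptblock]$Undo
    DeployStep([string]$name, [scriptblock]$run, [scriptblock]$undo) {
        $this.Name = $name
        $this.Run  = $run
        $this.Undo = $undo
    }
}

$queue = [System.Collections.Generic.Queue[DeployStep]]::new()
$done  = [System.Collections.Generic.Stack[DeployStep]]::new()

# Placeholder steps; a real script would download ISOs, convert them to VHDs
# with the WAIK tooling, create VMs, and so on.
$queue.Enqueue([DeployStep]::new('DownloadIsos', { Write-Host 'downloading ISOs' },  { Write-Host 'deleting ISOs' }))
$queue.Enqueue([DeployStep]::new('ConvertToVhd', { Write-Host 'converting to VHDs' }, { Write-Host 'deleting VHDs' }))

try {
    while ($queue.Count -gt 0) {
        $step = $queue.Dequeue()
        & $step.Run
        $done.Push($step)          # remember completed steps for rollback
    }
} catch {
    while ($done.Count -gt 0) {    # undo completed steps in reverse order
        & $done.Pop().Undo
    }
    throw
}
```

The point of queueing script blocks rather than running cmdlets inline is that each step becomes individually testable, which is what makes the test-driven approach practical.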


'We already do that, we’re just OG* enough to not call it DevOps'


DevOps works... But only if you know how

Step 1) Avoid CVs/Resumes of people with DevOps on it

Step 2) Avoid technologies and products claiming to do DevOps

Step 3) Stop trying to teach IT guys how to code. They have more than enough to do just figuring out what should be coded.

Step 4) There is no such thing as a DevOps degree. You're looking for computer science grads.

Step 5) Stop letting vendors try and tell you how to do DevOps

Step 6) Plan a project, build a high level design. Perform a PoC and document in detail step by step how to verify the system works.

Step 7) Write code to roll back the system when it fails

Step 8) On a whiteboard, make a REALLY clear plan of what changes are to be made and in what order

Step 9) Make the plan reflect zero downtime

Step 10) Write a script which can make the changes.

Step 11) Prepare for rollback, run the changes, verify the changes worked, verify the rest of the system didn't die, roll back when it screws up. Repeat until the change works without screwing everything up.

This is not complex. Any university comp-sci grad can do this all day and night. We call it test-driven development. Use PowerShell to avoid stupid shit like 500-language syndrome. No, don't use Python, Puppet, Chef, etc. You'll spend 99% of your time trying to figure out how to make PowerShell work from inside them.
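Steps 7 through 11 above boil down to an apply/verify/rollback loop. Here's a minimal PowerShell sketch of that loop, assuming a made-up change plan and a hypothetical `Test-SystemHealth` function; none of these names are real product APIs, and the apply/rollback actions are stubbed with `Write-Host`.

```powershell
# Hypothetical placeholder: in reality this would ping services, check load
# balancer pools, run smoke tests, etc. (step 6's documented verification).
function Test-SystemHealth {
    return $true
}

# Each planned change pairs an apply action with a rollback action (steps 7-10).
$changes = @(
    @{ Name = 'UpdateLoadBalancer'; Apply = { Write-Host 'changing LB' };  RollBack = { Write-Host 'reverting LB' } },
    @{ Name = 'SwapAppVersion';     Apply = { Write-Host 'swapping app' }; RollBack = { Write-Host 'reverting app' } }
)

$applied = @()
foreach ($change in $changes) {
    try {
        & $change.Apply
        # Step 11: verify the change worked AND the rest of the system didn't die
        if (-not (Test-SystemHealth)) { throw "health check failed after $($change.Name)" }
        $applied = ,$change + $applied     # newest first, ready for rollback
    } catch {
        foreach ($a in $applied) { & $a.RollBack }   # roll back everything applied so far
        throw
    }
}
```

Because every change carries its own rollback, a failure partway through leaves the system exactly where it started, which is what makes zero-downtime changes (step 9) repeatable rather than heroic.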


Is VMware starting to mean 'legacy'? Down and out in Las Vegas


VMware can have and eat well off of legacy

I am about to deploy a 120,000-user VDI PoC on Hyper-V/Azure Stack. I never even considered VMware for the project since it's just not well suited to VDI. I work with about 40 customers in 15 countries; for new deployments three years ago, they were 100% VMware. Now, 75% of them deploy about 80% Hyper-V and 20% VMware. The remaining 25% are still 100% VMware.

The first reason is simple: price. If you have to pay $12,000 per blade for Windows licenses and $7,500 per socket ($15,000 per blade) for VMware, you might as well use Hyper-V and skip paying for another VMM.

Memory consumption. Linux containers and Hyper-V integrate tightly with the guest virtual machine's memory manager and allow substantially denser guest deployments than ESXi. VMware still insists on simulating an MMU as the API for interfacing with the SLAT. Hyper-V and LXC instead integrate via "system calls" between the guest virtual memory managers and the host. This tends to cut the memory footprint of VMs by at least 60% on average compared with ESXi.

Management. vRealize as a suite looks like an absolute joke written by a retro software freak next to Azure Stack and Ubuntu's OpenStack management systems. If VMware would quit competing against themselves and focus on doing it once and doing it right, they could get somewhere.

vCenter... Let's be honest: vCenter is the best tool on the planet if you plan on automating absolutely nothing. No other product gives you that "I'm an NT 4 sysadmin" feel better than vCenter. But if you actually want to manage more than 50 VMs, you don't manage them from there. That's what vRealize, UCS Director, Nutanix, etc. are for.

Storage. Am I the only person who looks at VMware's storage solutions and wonders "Did EMC tell them they can't make anything that might compete with their stuff?" and "Did someone tell VMware that storage is something you can charge for?" Cisco released HyperFlex with a third-party storage solution, which I think is just GlusterFS and pNFS configured for scale-out plus a VAAI NAS plugin. It blows the frigging doors off anything VMware makes, and most of it is open source and free. Are you seriously telling me that VMware couldn't have made that a stock component with a few months of work?

Networking. NSX was SOOoOooo cool eight years ago. It was revolutionary. Then VMware bought it and kept it hidden for years, and by the time it shipped, the entire world had moved on to far better solutions. It's not even integrated with VMware's other products. It's like running a completely third-party tool, and what's worse is that it's REALLY slow. They ended up implementing microsegmentation because the other VMware management tools were so broken and unusable that you couldn't have more than a few dozen port groups before things just fell apart. So, instead of fixing their other stuff, they basically just hacked the shit out of NSX and broke the whole SDN paradigm. Oh, did I mention that NSX costs an absolute fortune when SDN is free with every other solution?

Graphics. Nvidia GRID is abso-friggin-lutely spectacular on Hyper-V. It's like a ray of sunshine blown from the bottom side of angels every time you start a VM. RemoteFX is insane. I'm not kidding: adding a GRID card to a Hyper-V host nearly tripled my VDI density. When I tested the same card on VMware, it was agony. I got it working... kinda. It wasn't too bad: once the drivers on the guest finally recognized the card, it was nice, but they were generations behind and the improvement was about half that of Hyper-V. I speculate it's because on Hyper-V communication with the card is bare metal, while on VMware a software arbiter running on a single core is required, which is killing the CPU ring bus or QPI. The behavior even suggests it might be maintaining cache coherency through abuse of critical sections, which across sockets can be so slow it's almost silly.

So... should VMware be scared? Are they obsolete? Hell no. They are legacy. I work with hundreds of people who like installing Windows by hand over and over. I regularly work with a team of 150 people who are paid to work 8 hours a day manually provisioning virtual machines as change requests come in. They are kind of like people being paid by a company to lick stamps and put them on envelopes because peel-and-stick is too "fancy and fangled" and they don't want to figure out this new stuff.

VMware will be needed and loved and sold as long as people are 100% focused on "it's always worked for us doing it this way." Think of VMware as the COBOL of PC virtualization. Micro Focus is still banking big bucks on COBOL. I think the worst thing VMware could do is to be better. There are still tens of thousands of small-organization mindsets out there, and a VMM that can be fully configured to a "good enough" state in 30 minutes should always be around.


How many zero-day vulns is Uncle Sam sitting on? Not as many as you think, apparently


The department discloses... What about the hackers?

Seems to me that hackers are asked to hack. As such, they may or may not be asked to make the hacks they use part of the official catalog. So a simple workaround is to tell the hackers to only report the zero-days that were low-hanging fruit.



