* Posts by CheesyTheClown

468 posts • joined 3 Jul 2009

Page:

Google Cloud to offer support as a service: Is accidental IT provider the new Microsoft?

CheesyTheClown

Don't use google for the same reason you don't use AWS or IBM

If you choose to go cloud, you want a single solution that works in the cloud or out. Google, Amazon and others don't make the platform available to take back home. Sure, you can go IaaS, but do you really want IaaS anymore?

Never use a platform with PaaS or SaaS lock-in; Google and AWS are permanent commitments. Once you're in, you can't get out again.

3
2

After 20 years of Visual Studio, Microsoft unfurls its 2017 edition

CheesyTheClown

Re: Getting better all the time

Maintaining projects other than your own is always a problem. But updating to a new IDE and tool chain is just a matter of course and is rarely a challenge. I've moved millions of lines of code from Turbo C++ to Microsoft C++ 7.0 to Visual C++ 1.2 through Visual Studio 2017. Code may require modifications, but with proper build management, it is quite easy to write code to run on 30 platforms without a single #ifdef.

I've been programming for Linux and Mac using Visual C++ since 1998. I used to write my own build tools, then I used qmake from Qt. Never really liked cmake since it was always hackish.

Now I code mostly in C#, since I've learned to write code which can generate better machine code after JIT than C++ generally can: the JIT targets the local processor instead of a general class or generation of CPUs. Since MS open sourced C# and .NET, it's truly amazing how tightly you can write C#. It's not as optimized as JavaScript, but garbage collected languages are typically substantially better at handling complex data structures than C or C++, unless you spend all your time coding deferred memory cleanup yourself.

3
0

Why did Nimble sell and why did HPE buy: We drill into $1.2bn biz deal

CheesyTheClown

Re: Cisco: Be Bold!

Cisco is dumping SAN, so why would they buy another one? Cisco is the only company that seems to be taking hyperconverged seriously... now if only they figured out that hyperconverged isn't a software SAN.

0
3
CheesyTheClown

And there goes Nimble

To be fair, over the past several software releases, Nimble has been dropping rapidly in quality. But still, they were probably the best option for SAN storage available.

No one shed a tear when Dell bought EMC, since EMC was already yesterday's crap and VMware was already falling apart.

But HPE buying Nimble is a disaster: they're probably already trying to decide how many people to lay off to "reduce redundancies", since there's a storage nerd here and a storage nerd there. They'll outsource support to India to a bunch of guys with a support script. As for marketing knowledge, no one at HPE will sell Nimble since they barely understand 3Par, and they only just figured that out.

I predict that Nimble will perform about as well under HPE as Aruba did... and frankly Aruba is pretty much dead now.

5
8

Sir Tim Berners-Lee refuses to be King Canute, approves DRM as Web standard

CheesyTheClown

Standard DRM = crack once use forever

This is a good thing. Imagine you buy a phone or a tablet and it reaches end of support. That device, sold and marketed as capable of playing standard DRM content, might end up blacklisted because someone else found a method of cracking DRM using that device. Since updates are no longer available, whoever blacklisted that device can be held liable and sued for their actions.

Consider that browser based DRM is simply not possible.

A pluggable module is code which requires a standardized API. The API will be well understood and cannot be restricted. So you write a small loader app and then, based on the entry points, issue your own keys, decode some of your own streams, and find out where the keys are held.

The DRM must be extremely lightweight, otherwise batteries will drain too quickly. One could write the DRM in JavaScript, which would be smartest, and with instruction-level vectorization part of WebAssembly, it could be quick. But it would consume far more power than a hardware solution. So DRM in code would have to be limited to managing rights and providing decryption keys for AES or EC. And if the keys can be transferred at all, they can be cracked.

The media player pipelines in Mozilla and Chrome are well understood. The media player pipeline in Windows is designed to be hooked and debugged. There is absolutely no possible way to DRM video on Windows, Linux or Mac that can't be intercepted after decryption. As for Android, unless the DRM blacklists pretty much every Android device ever made, it can't work.
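A toy sketch of that interception point (all names here are hypothetical, not any real CDM API): once decryption sits behind a well-understood entry point, anything downstream can wrap it and keep the plaintext.

```python
# Toy illustration of hooking a standardized decrypt entry point.
# Everything here is hypothetical; this is not any real DRM module's API.

def drm_decrypt(key, ciphertext):
    """Stand-in for a module's decrypt entry point (toy XOR 'cipher')."""
    return bytes(b ^ key for b in ciphertext)

captured = []

def hooked_decrypt(key, ciphertext):
    """A wrapper installed at the same entry point: the player still
    works, but every decrypted frame is also captured."""
    plaintext = drm_decrypt(key, ciphertext)
    captured.append(plaintext)
    return plaintext

# The media pipeline only sees the entry point, not who implements it.
decrypt = hooked_decrypt
frame = decrypt(0x42, bytes(b ^ 0x42 for b in b"movie frame"))
print(frame, captured[0])
```

The point is structural, not cryptographic: no matter how strong the cipher, the plaintext has to pass through a hookable layer on a general-purpose OS.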

So... good luck trying... I actually buy all my films, but I decrypt them so I can still watch them even if the DRM service dies. I lost tons of money buying audiobooks on iTunes which could only be downloaded once. I won't ever again buy media I can't decrypt. I'll join the race to see who can permanently crack the DRM fastest.

6
0

The day after 'S3izure', does anyone feel like moving to the cloud?

CheesyTheClown

Azure Stack

At least Azure Stack will make it possible to move things out of the cloud and back home.

With Amazon and Google, you're screwed

0
2

Nimble gets, well, nimble about hyperconverged infrastructure

CheesyTheClown

Where would it fit?

Microsoft and OpenStack currently implement hyperconverged storage in their systems with full API support and integration between management, compute, storage and networking technologies. VMware does not support hyperconverged storage at all since they haven't built an application container (think vApp) that can describe location independent storage without reference to SCSI LUNs (local, iSCSI or FC). As such, at this time, VMware doesn't support either hyperconverged storage or networking.

So, except for making half-assed attempts at running traditional storage on compute nodes (definitely a good start but very definitely not hyperconverged), where would this fit?

Just remember that hyperconverged requires more than just running traditional storage on the same box as compute. It has to actually be converged, meaning that storage and networking are part of the application itself.

As I said, both Windows and OpenStack clearly define how to achieve this and both support Docker style apps (container or otherwise) through a standard API which actually supports hyperconverged. Adding high speed storage makes it faster, but replacing Storage Spaces or Swift actually hurts the system by introducing unnecessary levels of management and abstraction.

So, if VMware ever learns how to make a current-generation solution, the market for hyperconverged storage won't exist any longer. It would be like buying a new car and then trying to add a second engine to it that actually made the car slower because of the extra weight.

0
2

Linux on Windows 10: Will penguin treats in Creators Update be enough to lure you?

CheesyTheClown

Re: Java is so easily messed up... just put spaces in a path or a password...

I believe he's referring to the cross-platform code within the Java standard class libraries, which is the real reason Java failed as a "cross platform tool". Between file processing and AWT, then SWT and Swing, Java has been a frigging nightmare for developers. It may have improved since I ran screaming from it, but I grew awfully tired of screwing around rewriting half of Java every time I tried anything, because the Java implementation was broken and, due to sealed classes, it was nearly impossible to make anything work without starting over.

Remember, the worst thing to ever happen to Java was naming the language, the intermediate language, the runtime, the class libraries and the platform all "Java". Because of this, even Oracle management doesn't understand what something like the Dalvik VM or SWT is and how it fits. Clojure outright baffles them, since it's a non-Java language running on Java, and that's confusing.

7
1
CheesyTheClown

Re: Is it better than Cygwin?

Better support for porting Linux apps to Windows, for sure. An example: HandBrake, the video compression tool, can leave all its shared libraries in native Linux format, compiled with GCC or LLVM with GNU assembler optimizations, while building a UI using XAML and C#. This will save thousands of hours working out the platform incompatibility issues often associated with porting complex applications to Windows.

Cross-compiled tool chains are another advantage. For example, one could develop code using Apple Swift for iOS directly on Windows, thoroughly troubleshoot the code using tools like Visual Studio, and then compile natively for Android.

Android is another one. It's possible to build the native Android emulator for Ubuntu on Windows, allowing native access to Dalvik, GCC and LLVM directly from within Visual Studio, and enabling faster and more accurate memory debugging than has been possible with SSH or Cygwin/MinGW setups.

There are many reasons this is better for developers.

As for users, that's different. A user probably won't care much about the differences, since it's basically the same code. It should be a bit better with the recent emergence of alternatives to X11, which generally don't "remote" as well for screen mirroring.

5
0

HPE's started firing people at Simplivity, say former employees

CheesyTheClown

HP + Cisco + Dell != Software

If there's anything Cisco, Dell and HP have proven over and over again, it's that they are hardware-only companies. They simply don't understand that software is actually more important than hardware, because you can take the software with you when you leave.

VMware still has at least a little bit of trust in the industry because Dell hasn't been stupid enough to try and roll it into Dell as a Dell offering.

Simplivity is absolutely useless now because it's an HPE product: you'll be expected to run it on HPE hardware, and if you leave HPE, you leave Simplivity too.

Cisco's HyperFlex technology is an absolute joke. Because VMware is a disaster when it comes to software updates, HyperFlex is dangerously scary: for safety reasons, you probably should never, ever upgrade any host running HyperFlex (or NSX); you should simply delete the node and start over with a fresh replication. This means that where every other vendor's hyperconverged technology needs only three servers in a cluster, HyperFlex needs a minimum of four.

Companies need to use storage from the hypervisor vendor exclusively. Using third-party hyperconverged storage with VMware, Azure, OpenStack or Citrix is sheer stupidity. It's also excessively wasteful. Currently, VMware's solution is the weakest and the worst to manage. I'm inclined to believe this is related to having been owned by a storage company that peddled legacy kit for so long they didn't want people to depend on better solutions.

1
3

So you want to roll your own cloud

CheesyTheClown

Why not buy your own cloud?

Honestly, just buy an Azure Stack from Cisco or Dell (or, if truly desperate, HP) and be done with it. Then you have a finished platform with all the cloud services, including PaaS and SaaS, without the headache of either rolling your own or selling your soul to Amazon, Google or Microsoft.

Yes, I know you'd be running Microsoft software... we already sold our souls to them for that.

0
0

Cancel your cloud panic: At $122bn it's just 5% of all IT spend

CheesyTheClown

Re: Biannual

I honestly have no idea how to respond to this. I am certainly not a speaker of the Queen's English, as I find it a disorderly mess. I honestly lost absolutely all respect for the Queen's English when I heard her, in an interview, refer to the game of football as "footie". People should prefer Oxford English over the Queen's English; the Queen is a gutter-slang speaker as well.

I recently learned, while paying close attention on a visit to central England, that the reason Americans spell it color and the English spell it colour is that Americans pronounce it color and the English pronounce it colour, where the ou in the English pronunciation is not the digraph ou but instead the letter O and the letter U being rammed into each other softly.

There are many horrible words in the many different dialects of English. I believe that the OED's insistence on documenting every single word ever, without properly listed etymology as part of its new definitions any longer, practically disqualifies the OED as an official dictionary as opposed to a competitor to "The Urban Dictionary". The last five times I've visited the OED, I received poor-quality definitions with no further qualifications, and have had to refer to Wiktionary instead, which supplied a slightly better experience.

As for your use of "Wheelie".

I believe that if you are an Englishman, you should be forced to use the term "wheel stand" instead, because the British tongue has been grossly infected with a plague of "eeeeeeeee"s. Every single possible noun in the British tongue has been reduced to a ridiculous single syllable followed by "ie". Honestly: butties, footie. The "cutesy shit plague" which has afflicted your nation is unforgivable. Call them sausages instead of bangers. Don't abbreviate mashed potatoes; there is simply no profit in that.

American English sucks like this as well. But unlike the British, who seem to feel that they still have some semblance of authority over the English language, and more specifically "The Queen's English", one should strive to set an example of culture and dignity as opposed to allowing one's language to degrade into a failed Hello Kitty cartoon.

My blogging/commenting grammar is reflective of my speech pattern as opposed to representative of grammatically correct writing as I would do elsewhere. I believe that if we are to take it upon ourselves to be grammar nazis in public, we should also strive to set a better example.

I'll forgive your wheelie comment today; I do believe that, EEEEEEs affliction or not, it is likely the proper word in that place. However, at some point, I'd like to have a nice discussion with you about the British compression of the word "the". For example, I prefer to visit "the hospital" when I'm ill, as opposed to visiting someone named "Hospital". I feel one should be educated at "a university", "the university", or maybe at "Oxford University" or "the University of Cambridge", as opposed to simply "at university".

The almost random but accepted disappearance of the word "The" in The Queen's English would be considered guttural, unrefined or "Straight out damn near toothless redneck" in other dialects. For example, I would expect Kanye West to selectively omit the word "The" as he may not be able to spell it.

3
0
CheesyTheClown

More cloud spending when aaS is removed

This year marks the beginning of fully supported private clouds being shipped. You'll get the full public cloud experience with SaaS, PaaS and IaaS as a package you can buy in a box and have delivered. As such, most of the money currently earmarked for "servers and storage and stuff" will go to "private cloud" instead.

We're about to see a massive move out of the public cloud as the cost of uncertainty increases throughout the world. With Theresa May being the first new leader of hate-related politics, quickly followed by The Donald, and with Germany, France, Poland, etc... coming up soon, the public cloud is VERY SCARY right now. Possibly the worst choice any company can make is to place its business files on servers controlled by American or European countries that are led by populist politicians. Consider that hosting data in the public cloud within the UK makes it susceptible to the Snoopers' Charter and the new follow-up bills. The US government is suing Microsoft, Google, Amazon and others, claiming it should have access to data held in data centers outside of America simply because American companies manage the data.

Populist propaganda removes human and civil rights from people generally under the heading of national security. While the cloud technology is perfectly sound, the problem is politics.

I was in a Microsoft Azure security-in-the-cloud session last week, held by Microsoft, and asked: "If I use one of the non-Microsoft Azure data centers located in Germany, does Microsoft U.S. have access to my data?" The guy really avoided answering, but eventually admitted that, in theory, a subpoena issued in the U.S. would be all that was required to give access to data in non-Microsoft data centers in Germany, because they're still part of the Azure platform. Due to additional laws in America, Microsoft would be required to gag themselves and not tell anyone that the US government is snooping.

While I don't have anything to hide from the Americans, and certainly don't care if they are checking out the naked pictures I keep of myself (I'm not an attractive person) on my cloud accounts... I think there are many companies out there that have to avoid that. There are no American companies currently delivering cloud services in any data center anywhere that can actually meet the requirements of EU Safe Harbor. UK companies are REALLY REALLY REALLY out on that one, thanks to Miss May.

So... in the end, cloud will grow like crazy, but not in the public cloud. Instead, turn-key private cloud will be where we are in 5 years.

3
0

UnBrex-pected move: Amazon raises UK workforce to 24,000

CheesyTheClown

Cheap labor?

Hiring cheap labor is always good practice. Best part is, the new employees won't be able to afford international travel, so they'll be close during vacations.

0
0

The stunted physical SAN market – Dell man gives Wikibon forecasts his blessing

CheesyTheClown

Hyperconverged will die shortly after as well

HyperConverged simply means that software stores virtual disks reliably and efficiently on the virtualization hosts themselves. Windows Storage Spaces/ReFS and systems like GlusterFS/ZFS have been mature for some time. VMware is about five years behind, but may eventually mature to a similar level as Windows and Linux.

Once people eventually figure out that scale-out file servers running natively on hypervisor hosts are more efficient and reliable, the entire aftermarket hyperconverged market will simply die.

0
2

Connected car in the second-hand lot? Don't buy it if you're not hack-savvy

CheesyTheClown

Pretty sure it's brand dependent

BMW makes it nearly impossible to connect to your own car. In many cases you can't even connect to a car you legitimately own. I'm pretty sure their system, which is paranoid-strict about device connectivity, won't let the new owner connect unless the old owner releases it first.

0
0

Hyperconverged market gets hyper-competitive as new riders enter field

CheesyTheClown

Re: HPE/Simplivity not a competitor

Like how Aruba, SGI, DEC, Compaq, 3Com, Tandem, etc... all benefited from HPE sales and engineering? There are plenty more, but HPE buys companies in that top-right quadrant, rides them for a few months, and as the customers start looking elsewhere, buys someone else. HPE has been a chop shop since the dot-com era.

I'm not saying Cisco is better, with a track record like theirs with Cloupia and now CliQr, but HPE is where IT innovation goes to die.

Even HPE's born-and-raised hardware is so ignored by engineering that iLO is damn near unusable at this point. Its API barely works; its command line fails more often than it works. Its SNMP is actually insecure and practically an industry joke. Oh, and if you want it to work "right", it requires you to keep an unpatched Windows XP box with IE 7 or 8 just to get the KVM to operate semi-OK. As for installing client certs... just don't bother.

1
2
CheesyTheClown

Windows 2016, Gluster & Docker/OpenStack?

Is it a competition to see who will pay the most money to keep using VMware? Honestly, storage is part of the base OS now... networking too... unless you want to pay more and use VMware... which doesn't really solve anything anymore. Don't get me wrong, I'm all for retro things. But it seems like hyperconverged products from EMC, Cisco, HP/Simplivity or NetApp are more about spending money for absolutely no apparent reason.

In addition, I can't really understand why server vendors are still screwing around with enterprise SSD when Microsoft, Gluster and others have obsoleted the need for it. Dual-ported SAS or NVMe seems like the dumbest idea I've heard of in a while.

People: reliability, redundancy and performance come from sharding and scale-out. When you depend on things like dual-ported storage, you actually limit your reliability, performance and redundancy.
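A minimal sketch of that sharding argument (the node names and replication factor are made-up): replicate each shard across several nodes, and no single node failure costs you data.

```python
# Toy sketch of shard placement with 3-way replication across nodes.
# Node names and replication factor are illustrative assumptions.

NODES = ["node1", "node2", "node3", "node4"]
REPLICAS = 3

def place(shard_id):
    """Place a shard's replicas on REPLICAS consecutive nodes."""
    return [NODES[(shard_id + i) % len(NODES)] for i in range(REPLICAS)]

def readable_after_failure(shard_id, dead):
    """A shard survives a node failure if any replica lives elsewhere."""
    return any(n != dead for n in place(shard_id))

# Kill any one node: every shard still has at least two live replicas.
assert all(readable_after_failure(s, "node2") for s in range(100))
print(place(0))
```

No dual-ported hardware involved: the redundancy lives in the placement, which is the whole point of scale-out.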

And no... Fibre Channel is no longer a viable option for storage connectivity. Why do you think the FC ASIC vendors are experimenting with alternative protocols over their fabrics?

0
3

UK Snoopers' Charter gagging order drafted for London Internet Exchange directors

CheesyTheClown

Didn't this behavior collapse the Empire?

I am not completely familiar with British history, but somehow I recall hearing that a blind overly-nationalistic belief was the primary flaw in the later empire which eventually led to its collapse.

It seems to me that, as with the Americans, Britain seems to believe that simply having been squeezed from a particular vagina in a particular place justifies an unjustified belief in one's superiority.

Patriotism is a disgusting illness. It leads to some sort of lethargic behavior that allows a person to blindly believe they have no need to try to succeed since simply claiming membership in a birthright is a satisfactory alternative.

39
6

Global IPv4 address drought: Seriously, we're done now. We're done

CheesyTheClown

CGNAT?

I'm using my phone right now to post this. It has a private IP over LTE and works just fine. When I tether my laptop, it works just fine. I regularly visit sites behind load balancers that multiplex at layer 5; in fact, there are often tens of thousands of major websites sharing a single IP.

Our current IPv4 problem is entirely greed-based and artificial. There is absolutely no reason we can't solve it. With fewer than 100,000 registered active autonomous systems on the internet, we certainly should be able to make do with a few hundred thousand /24 networks.
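A quick back-of-the-envelope on those numbers:

```python
# Back-of-the-envelope on the /24 arithmetic above.
hosts_per_24 = 2 ** (32 - 24)   # 256 addresses per /24
total_24s = 2 ** 24             # ~16.7 million /24s in all of IPv4
as_count = 100_000              # roughly the active AS count cited above

print(hosts_per_24)             # 256
print(total_24s)                # 16777216
# A few hundred thousand /24s is a tiny fraction of the space:
print(300_000 / total_24s)      # under 2% of all /24s
```

Even handing every active AS three /24s would consume under two percent of the total /24 space, which is the sense in which the shortage is artificial.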

0
1

Microsoft ups Surface slab prices for Brits. Darn weak pound, eh?

CheesyTheClown

Supply and demand?

I'm pretty sure that the people at Microsoft report their quarterly results in dollars. When they sell to customers in other countries, they account for value added tax where applicable, shipping if necessary, cost of support (employing locals), regionalization (spelling checkers with colour and favourite), etc...

If the value of a local currency drops too drastically relative to the value of the dollar, Microsoft must increase prices to cover the exchange rate related losses.

If the market can't or won't bear the adjustment, they will incur a different set of losses and choose either to stay and fight or to give up and leave.

Microsoft probably waited for the pound to reach a level they expect to be stable, then made a big, painful adjustment that should compensate for possible further minor shifts, allowing the U.K. market to adjust to the change and go on as normal. I also assume they are not sitting and celebrating this change, or even taking pride in it.

Consider that, as someone living in Norway, our currency devalued by 50% during the oil crisis and hasn't recovered even though oil more or less has. We feel your pain, but also understand that $1 is $1, and it takes more crowns to make a dollar today than three years ago.
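The exchange-rate arithmetic works out roughly like this (the prices and rates below are purely illustrative, not Microsoft's actual figures):

```python
# Illustrative numbers only: how a weaker pound forces a sterling
# price rise just to keep the dollar revenue flat.
usd_target = 1000.0              # revenue wanted per unit, in USD

gbp_per_usd_before = 1 / 1.50    # pound worth $1.50
gbp_per_usd_after = 1 / 1.25     # pound falls to $1.25

price_before = usd_target * gbp_per_usd_before   # about 667 pounds
price_after = usd_target * gbp_per_usd_after     # 800 pounds

rise = (price_after - price_before) / price_before
print(round(price_before), round(price_after), f"{rise:.0%}")  # 667 800 20%
```

A 17% drop in the pound's dollar value translates into a 20% sterling price rise just to stand still, which is roughly the shape of the Surface adjustment.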

2
1

HPE, Samsung take clouds to carriers

CheesyTheClown

What the?

Network function virtualization is a standard component of Windows Server and OpenStack. I think Nutanix even has something that could be considered NFV if you ignore what NFV actually is. By using it with Docker and/or Azure apps, it's entirely transparent. Why the hell would anyone pay for this? More to the point, why the hell would anyone ever use any platform that doesn't make this a minimum standard feature?

0
0

Dell's XtremIO has a new reseller in Japan – its all-flash rival Fujitsu

CheesyTheClown

Bluray vs. HD-DVD?

Remember when Blu-ray won the format wars? It was hilarious: Sony won the war when HD-DVD just died, because its backers stopped pressing the discs and stopped making the players. Sony was sure it would get rich because the whole world would flock to its format; what really happened is that Sony should probably have stopped making Blu-ray too, because the world had already ditched discs and moved to download services. Instead, Sony went all in and now has almost no presence in the consumer video market to speak of. The moral is, neither Blu-ray nor HD-DVD won, but the HD-DVD guys lost less because they knew when to pull out.

Dell/EMC, NetApp, Hitachi, HP, etc... are all going all in on storage and all-flash, believing that they can win and take the cake using things like NVMe, but in reality they're all hanging on to something which is already being forgotten.

SANs made a lot of sense in a time when file systems and operating systems lacked the ability to provide the storage needed for server farms and, later, virtualization. Now, with the exception of VMware, which seems to think that storage is a product as opposed to a component, the world is moving away from these technologies; we'll instead use scale-out file servers running on our compute nodes, which provide performance and redundancy with none of the bandwidth problems SAN has. We'll use clouds and version-controlled file systems to provide backups as well. This gives us substantially lower TCO, better support, better integration and a clear long-term path for growth in capacity and performance, without the massive lost investments SANs are doomed to.

So, while the dozens or hundreds of storage companies battle it out, the hypervisor vendors will simply localize the storage and provide something better eliminating the need or desire to use these dinosaurs.

I wonder which companies will be the smart ones who realize first that the ship has sailed and they weren't on it. I think Dell's merger with EMC will be interesting, because the only thing of value they appear to have gotten from the deal is VMware, and that company is so plagued with legacy customers demanding support that Dell will probably miss too many other opportunities by trying to force VMware to become something else.

0
6

Stallman's Free Software Foundation says we need a free phone OS

CheesyTheClown

Isn't he cute?

Stallman managed to make it into the news again. And here we thought he was finally gone.

1) You can make the best free phone OS but no one will use it

2) Every vendor will give it a try because ... well why not

3) Every vendor will stop supporting it within days of it being released

The consumers who define the success or failure of a platform don't give a shit about free. They want music, videos and games.

19
7

Ooops! One in three tech IPOs now trading below their starting price

CheesyTheClown

Re: Why?

VMware went to shit when it became board-controlled. All their competitors are miles ahead of them in every area, and VMware, possibly the most innovative company of the first five years of the millennium, has become a "me too... kinda" company. Hardware support for virtualization has eliminated competitive edges in hypervisors. It's become about integration and management, in which VMware is thoroughly lacking. Even now, they actually sell access to their system APIs, blocking developers from establishing a community and ensuring their vendors get innovative features first.

Facebook and others actually produce a surprising amount. In the case of Facebook, they provide massive amounts of innovative technology to the community. Oh... and they have managed to monetize the shit out of their platform.

1
0

UK, you Cray. Boffins flex ARM in 'first-of-its-kind' bonkers HPC rig

CheesyTheClown

Re: Interesting opportunity for comparison.

Not really.

1) Supercomputing code is generally written by scientists and runs horribly. I've done multiple tests and found that I can often rewrite their code and perform better on 40 processor cores and 4 GPUs than they do on 3-million-pound computers.

2) We're not comparing ARM to x86 here. That comparison can be accomplished far better with a few desktop systems. Performance-wise, you're making the assumption that performance is related to instruction set. It's generally about instruction execution performance and memory performance. Intel uses more transistors on their multipliers than ARM uses in their entire chip. This may sound inefficient, but it is those things which give Intel an edge. Let's also consider that memory performance is almost entirely about management of DDR bursts and block alignment, and ARM has much tighter restrictions on those things. Also, more often than not, the scientific code renders any profit from cache utterly meaningless. Ask a scientist working on this code whether they can describe the DDR burst architecture, or cache coherency within the CPU ring bus, or the process of mutexing within a NUMA environment.

This is about whether shitty code costs less to run on one computer 100 times larger than it should be vs another.

For 3 million pounds, I would imagine they could have bought a gaming PC and a programmer.

1
0

Tintri, thrown on the El Reg grill: We'll support NVMe! We promise!

CheesyTheClown

NVMe fail

NVMe is a protocol for block storage across the PCIe bus. Like SCSI, it is intended as a method of accessing blocks in a directly connected system and assumes lossless packet delivery. When Fibre Channel came around, SCSI drives could be placed in a central system, allowing the physical drives of many servers to be located in a single box. When this happened, FC was designed to deliver the SCSI QoS requirements across fiber.

A few brilliant engineers got together and found out they could provide virtual drives instead of physical drives over FC and iSCSI while still placing the same demands on the fabric to support SCSI QoS.

This is where things begin to go wrong... people wanted fabric level redundancy as well. This meant designing an active/standby solution for referencing the same block devices. The problem is, SCSI and now NVMe are simply not a good fit for this.

1) The volumes (LUNs) being accessed as block storage ARE NOT physical devices. They are files stored on file systems.

2) The client devices accessing the LUNs ARE NOT physical computers with physical storage adapters. They are virtual machines with virtual storage devices.

3) The computational overhead of simulating a SCSI controller in software, translating the block numbers from the virtual machine to a reference in a VMFS or NTFS file system, looking up the virtual block in the virtual file system, converting that reference to a position within a virtual disk file, translating that to a physical block, and then performing everything in reverse, is wasteful: it consumes power and slows everything down. In addition, it severely limits scalability.

4) Dual-ported storage exists to compensate for limitations in block-based storage. It would be far more intelligent and cost-effective to plug a large number of single-ported drives into a PCIe switch and then multi-master the PCIe bus. This technology dates back 20 years and is solid and proven. The problem is, PCIe is too slow for this: facing NVMe and new storage technologies, the bus would max out at about 32 NVMe devices.

5) Scale-out file servers simply scale out better than controllers. SCSI and now NVMe really can't properly scale past two controllers, and since NVMe and FC lack multicast, performance is simply doomed.
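The lookup chain described in point 3 can be made concrete with a toy model (the tables and offsets below are illustrative, not real VMFS or NTFS structures); every layer is one more table walk on the I/O path.

```python
# Toy model of the layered lookup chain a virtual SCSI read goes through.
# All tables and offsets are illustrative, not real on-disk structures.

vm_to_vmdk_offset = {10: 4096}       # guest block -> offset in the disk file
vmdk_extent_to_vmfs = {4096: 77824}  # file offset -> datastore volume block
vmfs_to_physical = {77824: 901120}   # volume block -> physical LBA

def read_guest_block(block):
    """Walk every layer to find the physical LBA for one guest block."""
    file_off = vm_to_vmdk_offset[block]        # 1. virtual SCSI -> file offset
    vol_block = vmdk_extent_to_vmfs[file_off]  # 2. file offset -> volume block
    return vmfs_to_physical[vol_block]         # 3. volume block -> physical LBA

print(read_guest_block(10))  # three lookups for a single I/O
```

Each dictionary here stands in for a metadata structure that real systems must lock, cache and keep consistent, and the same walk happens in reverse on the write path, which is where the overhead argument comes from.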

The solution is simple... build out one of:

1) GlusterFS

2) Windows Storage Spaces Direct

3) Lustre

Build up each storage node with hottest (NVMe) / hot (SATA SSD) / cold (spinning disk) tiers

Build 3 or more nodes

Run:

NFS

iSCSI

SMBv3

FC (if needed)

Use proper time markers (not snapshots) for backup.

Be happy and save yourself millions.

PS - Hyper-V, OpenStack, Nutanix and more have this built in as part of their base licenses.

0
4

Well, FC-NVMe. Did this lightning-fast protocol just get faster?

CheesyTheClown

STUPID STUPID STUPID!!!!

Ok... this is 2016 almost 2017... WE DON'T SEND RAW BLOCK REQUESTS TO STORAGE!!!!

Let's make this very clear: SCSI and NVMe are the dumbest things you could ever put in the data center as an interconnect. Back when we connected physical disks in an array to the fabric, they didn't suck so badly. But now, we have things like:

1) Snapshots

2) Deduplication

3) Compression

4) Replication

5) Mirroring

6) Differencing disks

There are tons of nifty things we have. SCSI and NVMe are protocols designed to talk to physical storage devices not logical ones. There are two needs when talking to a storage array :

1) a VM is stored on the array

2) a physical host is stored on the array

When you install 5-500,000 physical hosts with VMware, Linux or Windows, you will use the exact same boot image with a fork in the array. This is REALLY REALLY easy and with some systems (like VMware) which can do stateless boot disks, you can use the exact same boot image without forking at all.

When you install 5 or 50 million virtual machines you do roughly the same thing. Clone an image and run sysprep for example.

What does this mean? The hosts or virtual machines DO NOT talk directly to the disks and therefore don't need to use a disk access protocol. Instead, a network adapter BIOS or system BIOS able to speak file access protocols will be far more intelligent.

There is simply no reason why block storage protocols should EVER be on a modern data center's network. Besides being shit to begin with (things like major SCSI SNAFUs), block storage protocols generally don't provide good security, they don't scale, and you end up building impressively shitty networks... sorry, fabrics... in pairs, because FC routing never really happened.

iSCSI almost doesn't suck... but it's just an almost.

People are saying "NVMe is about latency..." blah blah blah... no it isn't. It's about connecting Flash memory to motherboards. It's basically PCIe. It's a system board interconnect. It is not a networking protocol and should never be used as one.

If QLogic is actually bent on making something that doesn't suck... why not make an Ethernet adapter which supports booting from SMBv3 and NFS without performance issues? I should be able to saturate a 100Gb/s network adapter on two channels when talking to GlusterFS or Windows Storage Spaces without using any CPU.

0
0
CheesyTheClown

Re: I remember...

FCoE was not really that great. From a protocol perspective, it had tons of overhead. Reliable Ethernet was absolute shit because it depended on a slightly altered version of the incredibly broken 802.3 flow control protocol. Add to that that FCoE is still SCSI, which actually needs reliable networking, and it's a disaster compounded on top of another disaster.

iSCSI was about 10,000 times better than FCoE since the overhead was roughly the same and it implements reliability at layer 4 which is highly tunable and not network hardware dependent. Add good old fashioned QoS on top and it's better.

Better yet, why not stop using broken ass block storage protocols altogether and support a real protocol like SMBv3 or NFS? They are actually far more intelligent for this purpose.

0
0

Trump's 140 characters on F-35 wipes $2bn off Lockheed Martin

CheesyTheClown

F-35 is about jobs

It's been said by others, but the US government has been quite successful at not only providing a lot of jobs by siphoning funds into defense contractors that gets spread out far and wide, but they did it under the heading of national security which always goes unchallenged and more importantly, they forced every NATO country to buy some as well feeding more money into the US economy. In the end, the F-35 program has been probably the most successful economy builder in the US for decades. And the best thing is, the cost of owning an F-35 is so ridiculously high that it will draw money into the US for decades.

That said, for aerial combat, drones will probably take over. There's really just no point in spending that much money on a jet which while being quite cool, puts the pilot's life in danger. You can build 2000 armed drones for the cost of a single F-35. While an F-35 may be more effective in battle than a drone, a fighter against 2000 drones probably won't do so well.

1
0

HPE 3PAR storage SNAFU takes Australian Tax Office offline

CheesyTheClown

Problem with SAN in general

I was recently told by a colleague of mine his company was about to upgrade firmware on their SAN controllers due to performance problems on a nearly exabyte SAN. I asked "Do you have a mirror?" And he said they have backup but not a mirror. I asked how long it would take to restore the backup and the number was nearly a month. I asked whether they have fully verified the contents of their backup and he said not recently because it would take a month just to stream the data from the backup.

The problem with SAN is that it centralizes all problems. It's a single point of failure. Even the fastest NVMe SANs are very, very slow compared to distributed file systems.

They managed to do the upgrade; it will now take about 6 weeks to run the rebuild on the array. The rebuild is destructive, and they will have no idea whether the problem is fixed until it is done. They also don't know what caveats the upgraded firmware will introduce.

I don't experience these problems because I run two distributed file systems: one for performance and one for transaction-oriented journaling. I have about 1Tb/s of bandwidth between the two systems, which can easily be saturated during transfer operations. What's best is that my system cost less than a tenth of what his system cost per byte, and instead of adding new disk shelves, I add disk, bandwidth and performance with each expansion. Instead of replacing SANs, I simply remove obsolete nodes and add newer, more efficient ones.

Trick one: Don't use VMware. Linux-based GlusterFS systems only work with iSCSI or Fibre Channel, which is slow and doesn't scale. VAAI NAS isn't available on Linux because of VMware's stupid policy of locking out open source developers.

Trick two: If you absolutely must run VMware, use Oracle Solaris for storage. Unlike EMC, NetApp, 3Par, etc... it can actually do proper scalability for performance and capacity. Consider Oracle Infiniband for the storage interconnect. Take classes on ZFS. Use Oracle servers. If you can afford $15,000 per blade for VMware, you can afford Oracle servers for storage. Oh... and don't use Infiniband for networking VMware or NSX. The CPU cost is too high.

1
8

'Toyota dealer stole my wife's saucy snaps from phone, emailed them to a swingers website'

CheesyTheClown

Re: Unless you're the FBI...

I regularly have conversations with my children regarding this exact problem. I explain that they should never want any photographs on their phones they don't want out in the wild. This has nothing to do with right and wrong. But an example of a conversation at breakfast this morning. We were discussing with our 13 year old daughter and 14 year old son about their friends using snus, drinking and vaping. I explained that while I don't condone these activities, under no circumstance are they to ever walk home alone or use a normal taxi while drunk. They are to pick up the phone and have me come get them or send an Über to them since it's safer than a random taxi being driven by the owner's brother-in-law. Also, they are never ever ever allowed to take a sip of a drink they haven't seen poured or have had out of their eyesight for even a second.

It is not right I should have to have these conversations with two children. But it's right that I do. Just because people shouldn't do bad things doesn't mean they won't.

So, while I agree with you, your point is overly altruistic and not meaningful, because these things will happen and the best advice is... don't store pictures like these on any electronic device.

Oh... and damn... lucky pastor.

32
1

Ford slams brakes on sales spreadsheets after fire menaces data center

CheesyTheClown

Re: DR done right

Did we read the same article? This was a piss poor example of disaster recovery. All I could think while reading the article was "Sounds like Ford".

Any company managing their own servers should have a minimum of 3 data centers spread out geographically. Their systems should have 100% (not 99.999) uptime and they should be thoroughly embarrassed by any announcement of this type. If I were in PCI enforcement or banking regulation enforcement, I would open a case to investigate gross negligence.

Ford should really outsource their data center to someone competent with technology. They have proven for nearly 100 years that anything with electronics designed by them is going to constantly suffer failures.

0
4

Good God, we've found a Google thing we like – the Pixel iPhone killer

CheesyTheClown

Uh... what?

I tend to hear this walled garden thing only from Android users who have locked themselves into Google's infrastructure for life.

Android is just as much of a lock-in as Apple.

That said, I can easily take all my Apple media and strip the DRM and play it on any phone or PC.

As for apps, Apps only work on the OS you bought them for.

7
5

Solidfire is 'for people who **CKING HATE storage' says NetApp Founder Dave Hitz

CheesyTheClown

Re: Scale up vs. scale out

I'll grant you many good points. I work with quite a few different workloads. Agreed that NVMe is simply a method of remote procedure calling over the PCIe bus as well as a great method of command queuing to solid state controllers. It is designed to be optimal for single-device access and has severe limitations in the queue structure itself for use in a RAID-like environment. In fact, like SCSI, it has mutated from a single-device interconnection protocol into something it really sucks at. If creating virtualized devices in ASIC, there are extreme issues regarding upgradability. If implemented in FPGA, there are major issues with performance, as even extremely large-scale FPGAs have finite bandwidth resources. In addition, even using the latest processes for die fabrication, power consumption and heat issues are considerable. A hybrid design combining a high-performance/low-power crossbar with FPGA for the localized storage logic could be an option, though even with the best PCIe ASICs currently available, there will be severe bandwidth bottlenecks as expandability is considered. PCIe simply does not scale well in these conditions. Ask HPC veterans why technologies like Infiniband still do well in high-density environments for RDMA when PCIe interconnects have been around for years. SGI and Cray have consistently been strong performers by licensing technologies like QPI and custom-designing better interconnects, because PCIe simply isn't good enough for scalability.

So NVMe is great for some things. For centralized storage... nope.

As for storage clustering, I'm not aware of any vendor that currently clusters past 8 controllers. That's a major problem. Let's assume that somehow a vendor has implemented all their storage and communication logic in FPGA or, dreadfully, within ASIC. They could in theory build a semi-efficient crossbar fabric to support a few dozen hosts with decent performance. It is more likely they have implemented their... shall we say business logic in software, which means that even with the biggest, baddest CPUs from Intel, their bandwidth at scale will be dismal. There are only so many bits per second you can transfer over a PCIe connection, and only so many PCIe lanes in a CPU/chipset. Because of this limitation, high-performance centralized storage with only 8 nodes will never be a reality. Consider as well that due to fabric constraints in PCIe, there will be considerable host limits on scalability without implementing something like NAT. This can be alleviated a bit by running PCIe lanes independently and performing mostly software switching, which mostly eliminates the benefits of such a bus.

Centralized storage has some benefits, such as easier maintenance, but to be fair, if this is an issue, you have much bigger problems. A scale-out file server environment configured with tiers still uses centralized clusters of servers for DR, backup, snapshots, etc. You may choose to use a SAN for this, but that just strikes me as inefficient and very hard to manage. When local storage is configured properly, there is never a single copy of data: it is accessible from all nodes at all times, with background sharding that copes well with scaling and outages. If there is an SSD failure, the blade failed and should be taken offline for maintenance. This is no different than a failed DIMM, CPU or NIC. These aren't spinning disks; we generally know when something is going to die.
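The background sharding mentioned here can be illustrated with a toy consistent-hash placement. This is a sketch only (node names, virtual-node count and replica count are all made up, and real systems like GlusterFS use their own placement algorithms), but it shows why losing one node only remaps the shards that lived on it:

```python
import hashlib
from bisect import bisect

# Toy consistent-hash ring: each storage node gets several virtual
# points on the ring; a block's replicas go to the next N distinct
# nodes clockwise from its hash.

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical hosts
VNODES = 64       # virtual points per node, smooths out distribution
REPLICAS = 2      # copies of every block

def _h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

ring = sorted((_h(f"{n}#{i}"), n) for n in NODES for i in range(VNODES))
points = [p for p, _ in ring]

def place(block_id, replicas=REPLICAS):
    """Return the distinct nodes holding copies of this block."""
    idx = bisect(points, _h(block_id)) % len(ring)
    chosen = []
    while len(chosen) < replicas:
        node = ring[idx % len(ring)][1]
        if node not in chosen:
            chosen.append(node)
        idx += 1
    return chosen

print(place("block-42"))  # two distinct nodes out of the four
```

Because placement is a pure function of the block's hash and the ring, any node can locate any block without consulting a central controller, which is what lets this scale where a SAN head cannot.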

You're absolutely right about blades and PCIe lanes. Currently, so far as I know, no vendor is shipping blades like this which is why I have been forced to use rack servers. Thankfully, my current project is small and shouldn't need more than 100 per data center.

I am actually doing a lot of VDI right now. But that's just 35% of the project. The rest is big storage with a database containing a little over 12 billion high resolution images with about 50,000 queries an hour requiring indexing of unstructured data (image recognition) with the intent of scaling to 200,000 queries an hour. I am designing the algorithms for the queries from request through processing with every single bit over every single wire in mind.

I have worked with things as simple as server virtualization in the past on small and gigantic scale. With almost no exception, I have never achieved better ROI with centralized storage than with localized, tiered and sharded storage.

The only thing centralized storage ever really accomplished is simplicity. It makes it easier for people to just plug it in and play. This is of great value to many. But I see centralized NVMe being an even bigger disaster than centralized SCSI over time.

0
0
CheesyTheClown

Scale up vs. scale out

Scale-out exists not because you want more storage. It's because storage array controllers and SANs are too slow to meet the needs of high-density servers. Storage has become such a major bottleneck that it's no longer possible to populate modern blades and expect even mediocre VM performance; it's like running a spinning disk in a laptop. It's just horrible.

Local storage scaled out is far better, so internal tiered storage works pretty well. You get capacity and performance in a single package. It doesn't scale up as well as a storage array... unless you buy more blades. Instead, it's pretty damn good at making sure your brand new 96-core blade isn't sitting at 25% CPU usage because all the machines are waiting on external storage to catch up.

Scale-out in a SAN environment is just plain stupid. Even with the fancy attempts by some companies to centralize NVMe, done using PCIe bridge ASICs, the problem is that you'd need dedicated centralized storage for each blade to make use of that bandwidth. Additionally, NVMe is quite slow: it generally only uses 4 PCIe lanes. Using local storage, I can use 32 PCIe lanes, which is a small but noticeable improvement.
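The lane arithmetic behind that comparison is easy to check. A rough back-of-the-envelope calculation (assuming PCIe 3.0 at roughly 985 MB/s of usable throughput per lane after 128b/130b encoding; exact figures vary by generation and platform):

```python
# Approximate usable bandwidth per PCIe 3.0 lane: 8 GT/s with
# 128b/130b encoding comes out to roughly 985 MB/s.
MB_PER_LANE = 985

nvme_x4 = 4 * MB_PER_LANE      # a single x4 NVMe drive's ceiling
local_x32 = 32 * MB_PER_LANE   # e.g. eight x4 drives on local lanes

print(f"x4 NVMe  : {nvme_x4 / 1000:.1f} GB/s")   # ~3.9 GB/s
print(f"x32 local: {local_x32 / 1000:.1f} GB/s") # ~31.5 GB/s
```

Eight times the ceiling is the "small but noticeable improvement" being joked about, and it is all bandwidth a centralized array has to funnel through its own controllers instead.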

Scale up is still quite useful. Slow and steady wins the race. Cabinets that specialize in storing a few petabytes are always welcome. You really wouldn't want to use them for anything you might need to read, but an array that provides data security would be nice. So maybe NetApp should be focusing on scaling up instead of out. Cluster mode was kind of a bust; it's just too slow. Eight controllers with a few gigs of bandwidth each don't even scratch the surface of what's needed in a modern DC.

0
1

'Geek gene' denied: If you find computer science hard, it's your fault (or your teacher's)

CheesyTheClown

Great idea but where's the actual research

I know this is the reg, the headlines are always basically click bait. So far as I can tell there's nothing in the research which can in any way be considered conclusive regarding whether genetics can impact this. That would require identifying a specific sequence to be tested and even then the results would simply say "We can't clearly identify whether this genetic sequence does or does not impact aptitude."

I'm quite sure there is a strong tie between genetics and aptitude. The gene involved is related to some form of obsessive-compulsive disorder. Nearly all the "nerds" (not geeks) I know, and I know a lot, are people who:

1) Possess the ability to grind obsessively until they understand something

2) Possess incredible ability to recall information generally having cataloged it through associations

3) Have a weaker sense of community than others. This means they are perfectly willing to forego interpersonal interaction in favor of grinding on a problem.

4) A very high percentage show varying levels of Asperger's, ranging from appearing somewhat absent-minded to having absolutely no interest in other people's perception of their behavior.

A nerd is generally someone who shows great aptitude (meaning willingness to work his/her ass off to learn something) towards one or more topics and therefore establishing a strong "genius like" ability in the topic. A nerd is generally quite confident in themselves for having achieved mastery in a field as such, they eventually are known to pursue other "hobbies"... very commonly an art (like guitar) or a sport (maybe soccer/football). This is where they'll establish their community and often attempt to mate.

A geek on the other hand generally has no particular aptitude for anything. They favor learning the "lingo" of something generally considered intellectual without actually achieving much more than a rudimentary knowledge of the topic. They present themselves as nerds and even take pride in being permitted into social gatherings among nerds. The reason for this is to establish a sense of community, and it happens from an early age. A person without the obsessive need to study and learn, who doesn't have their own community because they are not athletic, or don't see themselves as pretty enough or important enough "to hang with the popular kids", latches onto the "brainy kids", who at that age are generally less interested in personal presentation than academia. They see the "brainy kids" as having some sort of innate talent for being brainy, believing their skills come from "being born smart", and so see the gift as similar to beauty or athleticism. The geeks then take an interest in whatever their friends are involved in and become something of an "accessory to the crime", for lack of a better phrase. The nerds, often quite happy to have a friend without the need to work hard to earn or keep one, then accept the geek into their "circle".

Generally when puberty hits, nerds will, for the purpose of "satisfying their needs", groom themselves better, take an interest in other topics (surprisingly, music and marijuana are incredibly popular, as neither generally requires physical prowess) and start blending into other cultures. Geeks, on the other hand, are generally what's left behind: what appears to the masses as the intellectuals. In reality, the geek is simply someone who by this time shows a real interest in a given topic and, seeing that the nerds have "dropped out of the game", takes over. This could mean pushing the projector carts around the hallways or working in the library; generally just things that let them look the way nerds are supposed to look while in reality being just an awkward person with a strong (though often misused) vocabulary absorbed by osmosis from being in proximity to the nerds for so long.

A geek in modern times (not the ancient Greek sense) was a person who joined the circus looking for safety in numbers during more dangerous times (like the old west), when a generally awkward person would be in danger from predators (like the more dominant males). Though these people had no act of their own, since talent required hard work learned over time, they would join as a "freak" among the other outcasts. And while the person in question wasn't particularly freakish, they would perform freakish acts, namely biting the heads off of live chickens. As such, they established themselves within a community for safety.

I'd like to believe I'm a nerd... though who knows... maybe I'm a geek.

0
0

M.2 SSD drive format is under-rated. So why no enterprise arrays?

CheesyTheClown

Gbit/sec?

Maybe missed an order of magnitude somewhere?

4
0

Google tries to lure .NET devs with PowerShell cloud bait

CheesyTheClown

Jury is out?

I was pretty sure that Azure has kinda proven itself already.

The real question is whether public cloud will survive now that you can build an entire Azure Stack in a few rack units capable of running ten thousand users. It's now officially cheaper to run Azure Stack than Azure, AWS or Google public clouds.

I have a 26U rack with eight 16 Core blades w/192GB each, 80Gb/sec networking to each blade, 8 terabytes of scale-out storage pumping over a million IOPs. I also bought a NetApp FAS2020 for near line backup storage.

The total cost of deployment for the entire system was about $10000 on eBay. I tend to only keep 3 blades running at a time since I only have 100 VDI users at a time. It spins up new VDI systems in about 13 seconds each. It has IIS, Load Balancing, SDN, SDS, etc... I tend to be at about 8% capacity for the three blades for normal office loads with 100 users.

Currently, it's a development pod and classifies as being able to run under the MSDN terms as lab equipment.

Getting Azure Stack up initially was a pain. Now, I've scripted the whole thing: a laptop with a fresh Windows 10 installation can download all the ISOs and deploy the entire Azure Stack in about an hour. I'm not using any fancy tools, just PowerShell. Since prepping ISOs as VHDs needs WAIK anyway, there was no point using anything except PowerShell. I wrote it all object-oriented and implemented a simple command-queue pattern to build the entire system with test-driven development.
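The command-queue pattern being described works like this (the real scripts are PowerShell; this is a Python rendering of the idea, and the step names are hypothetical): each deployment step carries both a run and an undo action, the queue executes in order, and any failure unwinds the steps already completed.

```python
# Minimal command-queue: each step is an object with run() and
# undo(); the queue executes in order and unwinds the completed
# steps in reverse if any later step fails.

class Step:
    def __init__(self, name, run, undo):
        self.name, self.run, self.undo = name, run, undo

def deploy(steps):
    done = []
    try:
        for step in steps:
            step.run()
            done.append(step)
    except Exception:
        for step in reversed(done):   # roll back what completed
            step.undo()
        raise
    return [s.name for s in done]

log = []
steps = [
    Step("download-isos", lambda: log.append("isos"), lambda: log.append("rm-isos")),
    Step("build-vhds",    lambda: log.append("vhds"), lambda: log.append("rm-vhds")),
]
print(deploy(steps))  # -> ['download-isos', 'build-vhds']
```

The payoff is that a half-finished deployment never leaves the environment in an unknown state, which is what makes a fully scripted rebuild repeatable.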

Now, Microsoft update does the rest.

6
3

'We already do that, we’re just OG* enough to not call it DevOps'

CheesyTheClown

DevOps works... But only if you know how

Step 1) Avoid CVs/Resumes of people with DevOps on it

Step 2) Avoid technologies and products claiming to do DevOps

Step 3) Stop trying to teach IT guys how to code. They have more than enough to do just figuring out what should be coded

Step 4) There is no such thing as a DevOps degree. You're looking for computer science grads.

Step 5) Stop letting vendors try and tell you how to do DevOps

Step 6) Plan a project, build a high level design. Perform a PoC and document in detail step by step how to verify the system works.

Step 7) Write code to roll back the system when it fails

Step 8) On a whiteboard, make a REALLY clear plan of what changes are to be made and in what order

Step 9) Make the plan reflect zero downtime

Step 10) Write a script which can make the changes.

Step 11) Prepare for rollback, run the changes, verify the changes worked, verify the rest of the system didn't die, roll back when it screws up. Repeat until the change works without screwing everything up.

This is not complex. Any university comp-sci grad can do this all day and night. We call it test-driven development. Use PowerShell to avoid stupid shit like 500-language syndrome. No, don't use Python, Puppet, Chef, etc. You'll spend 99% of your time trying to figure out how to make PowerShell work from inside them.
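Steps 7 through 11 above boil down to one loop: apply a change, verify it, and roll back automatically when verification fails. A minimal sketch of that loop (apply/verify/rollback are placeholders for whatever the change script actually does; this is the shape of the idea, not the real PowerShell):

```python
# Sketch of steps 7-11: apply a change, verify it worked and that
# the rest of the system didn't die, and roll back on failure.

def change_with_rollback(apply, verify, rollback):
    apply()
    if verify():
        return "committed"
    rollback()
    return "rolled-back"

state = {"version": 1}  # stand-in for the system being changed

result = change_with_rollback(
    apply=lambda: state.update(version=2),
    verify=lambda: state["version"] == 2,   # the post-change check
    rollback=lambda: state.update(version=1),
)
print(result, state)  # -> committed {'version': 2}
```

Wrapping every change this way is what makes "repeat until the change works without screwing everything up" safe to do repeatedly.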

12
2

Is VMware starting to mean 'legacy'? Down and out in Las Vegas

CheesyTheClown

VMware can have and eat well off of legacy

I am about to deploy a 120,000 user VDI POC on Hyper-V/Azure Stack. I never even considered VMware for the project since it's just not well suited for VDI. I work with about 40 customers in 15 countries and for new deployments 3 years ago, they were 100% VMware. Now, 75% deploy about 80% Hyper-V and 20% VMware. The last 25% are 100% VMware.

The first reason is simple. Price. If you have to pay $12000 per blade for Windows licenses and $7500 per socket ($15000 per blade) for VMware, you might as well use Hyper-V and skip paying for another VMM

Memory consumption. Linux containers and Hyper-V integrate tightly with the guest virtual machine's memory manager and allow substantially denser guest deployments than ESXi. VMware still insists on simulating an MMU as the API for interfacing with the SLAT. Hyper-V and LXC instead integrate via "system calls" between the guest virtual memory managers and the host. This tends to cut the memory footprint of VMs on average by at least 60% compared to ESXi.

Management. vRealize as a suite looks like an absolute joke written by a retro software freak next to Azure Stack and Ubuntu's OpenStack management systems. If VMware would quit competing against themselves and focus on doing it once and doing it right, they could get somewhere.

vCenter... Let's be honest... vCenter is the best tool on the planet if you plan on automating absolutely nothing. No other product gives you that "I'm an NT 4 sys admin" feel better than vCenter. But if you actually want to manage more than 50 VMs, you don't manage it from there. That's what vRealize, UCS Director, Nutanix, etc... are for.

Storage. Am I the only person who looks at VMware's storage solutions and wonders "Did EMC tell them they can't make anything that might compete with their stuff?" and "Did someone tell VMware that storage is something you can charge for?" Cisco released HyperFlex with a 3rd-party storage solution which I think is just GlusterFS and pNFS configured for scale-out plus a VAAI NAS plugin. It blows the frigging doors off anything VMware makes, and most of it is open source and freebies. Are you seriously telling me VMware couldn't have made that a stock component within a few months of work?

Networking. NSX was SOOoOooo cool 8 years ago. It was revolutionary. Then VMware bought it and kept it hidden for years and by the time it shipped, the entire world had moved on to far better solutions. It's not even integrated into VMware's other stuff. It's like running a completely 3rd party tool and what's worse is that it's REALLY slow and they ended up implementing Microsegmentation because the other VMware management tools were so broken and unusable that you couldn't have more than a few dozen port groups before things just fell apart. So, instead of fixing their other stuff, they basically just hacked the shit out of NSX to break the whole SDN paradigm. Oh did I mention that NSX costs an absolute fortune when SDN is free with every other solution?

Graphics. NVidia Grid is absolute-friggin-lutly spectacular on Hyper-V. It's like a ray of sunshine blown from the bottom side of angels every time you start a VM. RemoteFX is insane. I'm not kidding that adding a Grid to a host with Hyper-V nearly tripled my VDI density. When I tested the same card on VMware, it was agony. I got it working... Kinda. It wasn't too bad. Once you got the drivers on the guest to finally recognize the card, it was nice but they were generations behind and the improvement was about half that of Hyper-V. I speculate it's because on Hyper-V communication with the card is bare metal but on VMware, a software arbiter running on a single core is required which is killing the CPU ring bus or QPI. The behavior even suggests it might be maintaining cache coherency through abuse of critical sections which across sockets can be so slow it's almost silly.

So... should VMware be scared? Are they obsolete? Hell no. They are legacy. I work with hundreds of people who like installing Windows by hand over and over. I work regularly with a team of 150 people who are paid to work 8 hours a day manually provisioning virtual machines as change requests come in. They are kind of like people being paid by a company to lick stamps and put them on envelopes because peel and stick is too "fancy and fangled" and they don't want to figure out this new stuff.

VMware will be needed and loved and sold so long as people are 100% focused on "it's always worked for us doing it this way." Think of VMware as the COBOL of PC virtualization. Microfocus is still banking big bucks on COBOL. I think the worst thing VMware could do is to be better. There are still tens of thousands of small-organization mind sets and a VMM that can be fully configured to a "good enough" state in 30 minutes should always be around.

9
0

How many zero-day vulns is Uncle Sam sitting on? Not as many as you think, apparently

CheesyTheClown

The department discloses... What about the hackers?

Seems to me that hackers are asked to hack. As such, they may or may not be asked to make the hacks they use part of the official catalog. So a simple workaround is to tell the hackers to only report the zero-days that were low-hanging fruit.

0
0

NASA peers through its SpeX: Aha! Jupiter's globe-warming hotspot

CheesyTheClown

Can't get internet?

If NASA can communicate with the probe and NASA has Internet, shouldn't it be possible to route it there?

1
0

Ex-Citibank IT bloke wiped bank's core routers, will now spend 21 months in the clink

CheesyTheClown

Caused congestion?

Unless he configured new links/routes, then the routers and switches in the network should have been run in pairs and the links should have been somewhat redundant in their design.

It appears that someone on the network team was doing a REALLY poor job if the loss of a link causes congestion. Don't get me wrong, this guy should be shot, but... I would be seriously embarrassed to publicly pronounce that the loss of a single link would make my network "unusable" due to congestion in the banking industry. I know he shut down 9 routers... but unless there was a total rat's nest in the infrastructure, the congestion would reoccur any time a single link went down.

Sue the guy for intentionally threatening the stability of the network, don't air your dirty underwear like this.

1
1

Third time unlucky? HPE in redundancy talks with UK services staff

CheesyTheClown

What's left of HP?

HP - Sells laptops that don't fill needs; kind of a Packard Bell. Sells printers which don't really work: their consumer line has endless problems burning through ink, and their large-format printers (until the Latex series) scored amazingly low on cost and quality.

HP (then Agilent, now something new) sells the stuff which made HP awesome to begin with.

HPE sells classic servers like NonStop and HP-UX Superdomes, etc... They sell substandard blade servers which don't function for shit in the data center (Java 6 required to manage the blades). They sell two dozen different and mostly incompatible network equipment lines. They sell storage so hellbent on Fibre Channel that it performs at about 1/10th the speed of a similar NAS solution from respectable companies. They sell management software suites that universally increase CapEx and OpEx by so much that ROI is not achievable.

CSC - Sells services that are provided by an organization that is so silo driven that the network guys can't even spell hard drive.

Isn't it time HP dumps someone who has a nice wardrobe in favor of someone who has some actual knowledge?

2
0

Starbucks bans XXX Wi-Fi

CheesyTheClown

What?

Honestly... Who sits in a Starbucks surfing porn?

11
0

Blighty will have a whopping 24 F-35B jets by 2023 – MoD minister

CheesyTheClown

Re: Why?

It could go into westernizing immigrants so that they assimilate to Western European norms, and into financing the growth of a financial empire that profits England by letting immigrants establish British businesses and move product through the UK (at least on paper), strengthening ties with Middle Eastern and Eastern companies as well as the economies of Africa (which will have to become emerging at some point) and South America.

Alternatively, they could use the money to cover the massive financial cost of trying to alienate themselves from the rest of the world via Brexit. The U.K. clearly does not understand that the rest of the world sees Brexit as an elitist movement that denounces the rest of us as "less than a Brit". As a result, it drives us to avoid doing business with the English for fear that they will consider defaulting on our agreements justified because they "want to screw us before we screw them".

Norway bought 50 of these planes as a membership fee to NATO. I don't know for sure, but I believe we have historically never owned so many aerial war machines, as we would prefer not to be paranoid assholes. Of course, now we'll have them, and we'll probably put them on display, as they are so damn expensive to operate that it's just not worth keeping more than a handful in the air. So 10 to play with and 40 for spare parts.

On the other hand, the jobs created by dumping trillions of dollars into the world economy are probably worthwhile.

1
0
CheesyTheClown

Finally an F-35 article that represents it properly

The F-35 program has been wildly successful to date. They have basically made the most useless plane ever. By the time it actually reaches proper full production, remote-controlled unmanned drones will have far surpassed its capabilities, and based on recent testing, autonomous drones consistently outperform human-flown jets in dogfights. The problem with drones is that they're too easy and don't require so many people to produce and maintain. It won't be long before drones can be strategically positioned in the high atmosphere on Zeppelin drones or solar-powered propeller drones. When this happens, it should be possible to launch an attack and reach targets, eliminating F-35-hosting carriers, before even one jet has a pilot in a cockpit.

So why is the program wildly successful? Because, as the article says, the U.K. government can employ 1,000 people for each F-35, which translates to a realistic number of about 8,000 jobs per jet if you include the guy working at the local 7-Eleven who is in business because the workers need coffee. The US and the UK, instead of embracing socialism openly, create jobs through programs oriented on fear. They are so scared of building and supporting private sector companies that, in order to feed their citizens, they need to make up bullshit excuses rooted in fear and hate. So long as the US and UK can continue to negotiate favorable terms with other nations regarding the expansion of their national deficits, and force a devaluation of their currencies (hence why you can sell your house for more than you paid), they can spend trillions on new prisons, new militaries and the like. The people just have to be scared enough into thinking they need these services, or at least unable to do anything about the government forcing them on them.

The F-35 program has nothing to do with defense. More or less every first world country can easily build drones to obsolete the F-35 completely. The program is about job creation and government sponsored economic stimulus.

Good job author ;)

4
2

Wannabe Prime Minister Andrea Leadsom thinks all websites should be rated – just like movies

CheesyTheClown

Nice idea and good spirit but impossible to implement

There are website rating systems already in place from companies like Check Point and Cisco as part of their firewall services. On the modern web, with the advent of HTTP/2 and largely randomized URLs, implementing such a system would require application-layer inspection and filtering.

Even with data center scale computing, deploying clusters of tens of thousands of firewall instances, it would be computationally impossible to filter all web traffic effectively enough to make such a thing matter.

Add "dark web" resources (by which I think she means Tor), which simply require downloading a free and public web browser to use, and inline filtering becomes absolutely impossible.

This sort of solution would instead have to depend on DNS filtering, which doesn't work since most users don't actually use British DNS servers.
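As a sketch of why DNS-level filtering is so trivially sidestepped — the blocklist, domain name, resolver behaviour and address below are all hypothetical, purely for illustration:

```python
# Hypothetical DNS-level filter: the ISP's resolver refuses to answer
# for blocked domains, returning None in place of an address.
BLOCKLIST = {"adult-example.test"}

def filtering_resolver(hostname, upstream):
    """Resolve via the upstream resolver unless the name is blocked."""
    if hostname in BLOCKLIST:
        return None  # blocked: the client gets no address back
    return upstream(hostname)

def foreign_resolver(hostname):
    """Stand-in for any non-British resolver the user can point at."""
    return "203.0.113.7"  # documentation-range address, illustrative only

# A client behind the filter is blocked...
print(filtering_resolver("adult-example.test", foreign_resolver))  # None
# ...but a client that simply configures a foreign resolver in its OS
# settings never consults the filter at all:
print(foreign_resolver("adult-example.test"))  # 203.0.113.7
```

The filter only exists on the path through the British resolver; changing one OS setting routes around it completely, which is the point being made above.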

In the end, while she has a good heart and spirit and is trying to recommend something she believes could have a healthy and positive impact on her country, it would be simply wasted breath and resources to try and push such legislation into effect.

1
0

SPC says up yours to DataCore

CheesyTheClown

Why use an array of any type anyway?

I definitely want near-line storage, and for that I'm trying to design a new controller with power control so that a full 42U array shouldn't consume much more than about 400W at any given time.

What I don't understand is why anyone would want to run centralized storage anymore. It was a terribly failed experiment on so many levels. With a theoretical average of 90,000 IOPS per SATA SSD and an average of 24 drives per server configured as mirrors, let's assume a little under 2 million IOPS per server. Then distribute that out using RDMA over Ethernet with SMBv3 and Scale-Out File Server. Then consider that the servers themselves will have 4 or 8 40GbE Ethernet ports.
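The back-of-the-envelope version of that arithmetic, using the theoretical per-drive figure quoted above and assuming mirroring roughly halves write IOPS:

```python
# Rough aggregate-IOPS estimate for one storage server, from the
# figures in the paragraph above (theoretical averages, not benchmarks).
IOPS_PER_SSD = 90_000   # theoretical average per SATA SSD
DRIVES = 24             # drives per server

raw_read_iops = DRIVES * IOPS_PER_SSD      # reads can hit every drive
mirrored_write_iops = raw_read_iops // 2   # each write lands on two drives

print(f"read:  {raw_read_iops:,} IOPS")    # read:  2,160,000 IOPS
print(f"write: {mirrored_write_iops:,} IOPS")  # write: 1,080,000 IOPS
```

"A little under 2 million IOPS per server" falls between those two figures for a mixed read/write workload, which is why centralized arrays struggle to match even a small cluster of such nodes.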

Centralized storage doesn't have the performance to cope with modern data center servers. The goal of data centers is higher density and a lower power footprint. I fit into 12U these days what used to take six racks even three years ago.

DataCore, NetApp, Hitachi... they're all selling crap you just don't need anymore. Get 3 good servers for each data center, configure some proper networking equipment (stuff that accelerates DC tasks; I recommend Cisco ACI) and then just use the storage built into the hypervisors. And don't waste money on enterprise SSDs. Just buy a few boxes of Samsung 850s.

As for centralized storage, build a big-ass server with lots of spinning 8-10TB disks and run Veeam on it.

I've been testing Windows Storage Spaces Direct in a small data center on relatively old hardware with relatively wimpy networking. It's still hitting insane IOPS. And that's 4 Windows Server 2016 Hyper-V blades with 6 SSDs per blade on an archaic Dell C6100 cloud system from eBay for $1,500 (all blades included). Each machine has 8 cores, 96GB RAM, an Intel X520-DA2 and 6x Crucial 250GB SATA SSDs (deal of the day).

I will NEVER EVER EVER go back to centralized storage. Unless someone tells me a way I can guarantee network and IOPS scaling for every blade in the data center using centralized storage, it's just a crap solution.

P.S. On VMware, I use centralized... Virtual SAN is a little too 1990s, or 1970s, depending on how you see it. Their underlying storage architecture is simply not something I would be willing to depend on. When they learn what sharding is, I'll consider otherwise.

0
0


Biting the hand that feeds IT © 1998–2017