* Posts by CheesyTheClown

458 posts • joined 3 Jul 2009

So you want to roll your own cloud

CheesyTheClown

Why not buy your own cloud?

Honestly, just buy an Azure Stack from Cisco, Dell (or, if truly desperate, HP) and be done with it. Then you have a finished platform with all the cloud services, including PaaS and SaaS, without the headache of either rolling your own or selling your soul to Amazon, Google or Microsoft.

Yes, I know you'd be running Microsoft software... we already sold our souls to them for that.

0
0

Cancel your cloud panic: At $122bn it's just 5% of all IT spend

CheesyTheClown

Re: Biannual

I honestly have no idea how to respond to this. I am certainly not a speaker of the Queen's English, as I find it a disorderly mess. I honestly lost absolutely all respect for the Queen's English when I heard her, in an interview, refer to the game of football as "footie". People should prefer Oxford English over the Queen's English; the Queen is a gutter-slang speaker as well.

I recently learned, while paying close attention on a visit to central England, that the reason Americans spell it color and the English spell it colour is that Americans pronounce it color and the English pronounce it colour, where the ou in the English pronunciation is not a single ou sound but the letter O and the letter U rammed into each other softly.

There are many horrible words in the many different dialects of English. I believe the OED's insistence on documenting every single word ever used, while no longer providing properly listed etymology with its new definitions, practically disqualifies the OED as an official dictionary and makes it a competitor to "The Urban Dictionary" instead. The last five times I've visited the OED, I received poor-quality definitions with no further qualification and had to refer to Wiktionary instead, which supplied a slightly better experience.

As for your use of "Wheelie".

I believe that if you are an Englishman, you should be forced to use the term "wheel stand" instead, because the British tongue has been grossly infected with a plague of "eeeeeeeee"s. Every possible noun in the British tongue has been reduced to a ridiculous single syllable followed by -ies. Honestly: butties, footie. The "cutesy shit plague" which has afflicted your nation is unforgivable. Call them sausages instead of bangers. Don't abbreviate mashed potatoes; there is simply no profit in that.

American English sucks like this as well. But the British seem to feel that they still hold some semblance of authority over the English language, and more specifically "the Queen's English", and anyone making that claim should strive to set an example of culture and dignity rather than allow their language to degrade into a failed Hello Kitty cartoon.

My blogging/commenting grammar reflects my speech pattern rather than the grammatically correct writing I would do elsewhere. I believe that if we are to take it upon ourselves to be grammar nazis in public, we should also strive to set a better example.

I'll forgive your wheelie comment today; I do believe that, EEEEEE affliction or not, it is likely the proper word in that place. However, at some point I'd like to have a nice discussion with you about the British compression of the word "the". For example, I prefer to visit "the hospital" when I'm ill, as opposed to visiting someone named "Hospital". I feel one should be educated at "a university", "the university", or maybe at "Oxford University" or "the University of Cambridge", as opposed to simply "at university".

The almost random but accepted disappearance of the word "the" in the Queen's English would be considered guttural, unrefined or "straight-out, damn near toothless redneck" in other dialects. For example, I would expect Kanye West to selectively omit the word "the", as he may not be able to spell it.

2
0
CheesyTheClown

More cloud spending when aaS is removed

This year marks the beginning of fully supported private clouds being shipped. You'll get the full public cloud experience with SaaS, PaaS and IaaS as a package you can buy in a box and have delivered. As such, most of the money currently earmarked for spending on "servers and storage and stuff" will be earmarked for "private cloud" instead.

We're about to see a massive move out of the public cloud as the cost of uncertainty increases throughout the world. With Theresa May the first new leader of hate-related politics, quickly followed by The Donald, and with Germany, France, Poland, etc... coming up soon, public cloud looks VERY SCARY right now. Possibly the worst choice any company can make is to place its business files on servers controlled by American or European countries that are led by populist politicians. Consider that hosting data in the public cloud within the UK makes it susceptible to the Snoopers' Charter and the new follow-up bills. The US government is suing Microsoft, Google, Amazon and others, claiming it should have access to data held in data centers outside of America simply because American companies manage the data.

Populist propaganda removes human and civil rights from people, generally under the heading of national security. While the cloud technology is perfectly sound, the problem is politics.

I was in a Microsoft Azure Security in the Cloud session last week, held by Microsoft, and asked: "If I use one of the non-Microsoft Azure data centers located in Germany, does Microsoft U.S. have access to my data?" The guy really avoided answering, but eventually admitted that in theory a subpoena issued in the U.S. would be all that was required to give access to data in non-Microsoft data centers in Germany, because it's still part of the Azure platform. Due to additional laws in America, Microsoft would be required to gag themselves and not tell anyone that the US government is snooping.

While I don't have anything to hide from the Americans, and certainly don't care if they are checking out the naked pictures I keep of myself (I'm not an attractive person) on my cloud accounts... I think there are many companies out there that have to avoid that. There are no American companies currently delivering cloud services, in any data center anywhere, that can actually meet the requirements of EU Safe Harbor. UK companies are REALLY REALLY REALLY out on that one, thanks to Miss May.

So... in the end, cloud will grow like crazy, but not in the public cloud. Instead, turn-key private cloud will be where we are in 5 years.

2
0

UnBrex-pected move: Amazon raises UK workforce to 24,000

CheesyTheClown

Cheap labor?

Hiring cheap labor is always good practice. Best part is, the new employees won't be able to afford international travel, so they'll be close during vacations.

0
0

The stunted physical SAN market – Dell man gives Wikibon forecasts his blessing

CheesyTheClown

Hyperconverged will die shortly after as well

HyperConverged simply means that software stores virtual disks reliably and efficiently on the virtualization hosts themselves. Windows Storage Spaces/ReFS and systems like GlusterFS/ZFS have been mature for some time. VMware is about 5 years behind but may eventually mature to a level similar to Windows and Linux.

Once people eventually figure out that scale-out file servers running natively on hypervisor hosts are more efficient and reliable, the entire aftermarket hyperconverged market will simply die.

0
2

Connected car in the second-hand lot? Don't buy it if you're not hack-savvy

CheesyTheClown

Pretty sure it's brand dependent

BMW makes it nearly impossible to connect to your own car. In many cases you can't even connect to a car you legitimately own. I'm pretty sure that their system, which is paranoid-strict about device connectivity, won't let the new owner connect unless the old owner first releases it.

0
0

Hyperconverged market gets hyper-competitive as new riders enter field

CheesyTheClown

Re: HPE/Simplivity not a competitor

Like how Aruba, SGI, DEC, Compaq, 3Com, Tandem, etc... all benefited from HPE sales and engineering? There are plenty more, but HPE buys companies in that top-right quadrant, rides them a few months, and as the customers start looking elsewhere, they buy someone else. HPE has been a chop shop since the dot-com era.

I'm not saying Cisco is better, with a track record like they have with Cloupia and now CliQr, but HPE is where IT innovation goes to die.

Even HPE born-and-raised hardware is so ignored by engineering that iLO is damn near unusable at this time. Its API barely works, its command line fails more often than it works, and its SNMP is actually insecure and practically an industry joke. Oh, and if you want it to work "right", it requires you to keep an unpatched Windows XP with IE 7 or 8 around just to get KVM to operate semi-OK. As for installing client certs... just don't bother.

1
2
CheesyTheClown

Windows 2016, Gluster & Docker/OpenStack?

Is it a competition to see who will pay the most money to keep using VMware? Honestly, storage is part of the base OS now... networking too... unless you want to pay more and use VMware... which, well, doesn't really solve anything anymore. Don't get me wrong, I'm all for retro things. But it seems like hyperconverged products from EMC, Cisco, HP/SimpliVity or NetApp are more about spending money for absolutely no apparent reason.

In addition, I can't really understand why server vendors are still screwing around with enterprise SSD when Microsoft, Gluster and others have obsoleted the need for it. Dual-ported SAS or NVMe seems like the dumbest idea I've heard of in a while.

People: reliability, redundancy and performance come from sharding and scale-out. When you depend on things like dual-ported storage, you actually limit your reliability, performance and redundancy.

And no... Fibre Channel is no longer a viable option for storage connectivity. Why do you think the FC ASIC vendors are experimenting with alternative protocols over their fabrics?

0
3

UK Snoopers' Charter gagging order drafted for London Internet Exchange directors

CheesyTheClown

Didn't this behavior collapse the Empire?

I am not completely familiar with British history, but somehow I recall hearing that a blind, overly nationalistic self-belief was the primary flaw of the later empire, which eventually led to its collapse.

It seems to me that, as with the Americans, Britain believes that simply having been squeezed from a particular vagina in a particular place justifies an unjustified belief in one's superiority.

Patriotism is a disgusting illness. It leads to some sort of lethargic behavior that allows a person to blindly believe they have no need to try to succeed since simply claiming membership in a birthright is a satisfactory alternative.

39
6

Global IPv4 address drought: Seriously, we're done now. We're done

CheesyTheClown

CGNAT?

I'm using my phone right now to post this. It has a private IP over LTE and works just fine. When I tether my laptop, it works just fine. I regularly visit sites behind load balancers that multiplex at layer 5; in fact, there are often tens of thousands of major websites sharing a single IP.

Our current IPv4 problem is entirely greed-based and artificial. There is absolutely no reason we can't solve the problem. With fewer than 100,000 registered active autonomous systems on the internet, we certainly should be able to make do with a few hundred thousand /24 networks.

0
1

Microsoft ups Surface slab prices for Brits. Darn weak pound, eh?

CheesyTheClown

Supply and demand?

I'm pretty sure that the people at Microsoft report their quarterly results in dollars. When they sell to customers in other countries, they account for value added tax where applicable, shipping if necessary, cost of support (employing locals), regionalization (spelling checkers with colour and favourite), etc...

If the value of a local currency drops too drastically relative to the value of the dollar, Microsoft must increase prices to cover the exchange rate related losses.

If the market can't or won't bear the adjustment, they will incur a different set of losses and choose to stay and fight, or give up and leave.

Microsoft probably waited for the pound to reach a level they expect to be stable and made one big, painful adjustment that should compensate for possible further minor shifts, allowing the U.K. market to adjust to the change and go on as normal. I also assume they are not sitting and celebrating this change, or even taking pride in it.

Consider that, as someone living in Norway, our currency devalued by 50% during the oil crisis and hasn't recovered even though oil more or less has. We feel your pain, but also understand that $1 is $1, and it takes more crowns to make a dollar today than 3 years ago.
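
To put rough numbers on the mechanism (the prices and exchange rates here are hypothetical, purely for illustration):

# Hypothetical figures, just to show how a currency drop forces a price rise.
$usdTarget = 999      # what the vendor wants to clear in dollars
$oldRate   = 1.50     # dollars per pound before the drop (assumed)
$newRate   = 1.20     # dollars per pound after the drop (assumed)

$oldPrice = $usdTarget / $oldRate   # ~666 pounds
$newPrice = $usdTarget / $newRate   # ~833 pounds -- a ~25% rise

"{0:N0} -> {1:N0} pounds, just to keep the dollar take flat" -f $oldPrice, $newPrice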

2
1

HPE, Samsung take clouds to carriers

CheesyTheClown

What the?

Network function virtualization is a standard component of Windows Server and OpenStack. I think Nutanix even has something that could be considered NFV if you ignore what NFV actually is. By using it with Docker and/or Azure apps, it's entirely transparent. Why the hell would anyone pay for this? More to the point, why the hell would anyone ever use any platform that doesn't make this a minimum standard feature?

0
0

Dell's XtremIO has a new reseller in Japan – its all-flash rival Fujitsu

CheesyTheClown

Blu-ray vs. HD-DVD?

Remember when Blu-ray won the format wars? It was hilarious. Sony won the war when HD-DVD just died because its backers stopped pressing the discs and stopped making the players. Sony was sure they would get rich because the whole world would flock to their format; what really happened is that Sony should probably have stopped making Blu-ray too, because the world had already ditched discs and moved to download services. Instead, Sony went all in and now has almost no presence in the consumer video market to speak of. The moral is: neither Blu-ray nor HD-DVD won, but the HD-DVD guys lost less because they knew when to pull out.

Dell/EMC, NetApp, Hitachi, HP, etc... are all going all in on storage and all-flash, believing that they can win and take the cake using things like NVMe, but in reality they're all hanging on to something which is already being forgotten.

SANs made a lot of sense in a time when file systems and operating systems lacked the ability to provide the storage needed for server farms and, later, virtualization. Now, with the exception of VMware, who seem to think that storage is a product as opposed to a component, the world is moving away from these technologies. We'll instead use scale-out file servers running on our compute nodes, which provide performance and redundancy with none of the bandwidth problems SAN has. We'll use clouds and version-controlled file systems to provide backups as well. It gives us substantially lower TCO, better support, better integration and a clear long-term path for growth in capacity and performance, without the massive lost investments SANs are doomed to.

So, while the dozens or hundreds of storage companies battle it out, the hypervisor vendors will simply localize the storage and provide something better, eliminating the need or desire to use these dinosaurs.

I wonder which companies will be the smart ones and realize the ship has sailed without them. I think Dell's merger with EMC will be interesting, because the only thing of value they appear to have gotten from the deal is VMware, and that company is so plagued with legacy customers demanding support that Dell will probably miss the boat on too many other opportunities by trying to force VMware to become something else.

0
6

Stallman's Free Software Foundation says we need a free phone OS

CheesyTheClown

Isn't he cute?

Stallman managed to make it into the news again. And here we thought he was finally gone.

1) You can make the best free phone OS but no one will use it

2) Every vendor will give it a try because ... well why not

3) Every vendor will stop supporting it within days of it being released

The consumers who define the success or failure of a platform don't give a shit about free. They want music, videos and games.

19
7

Ooops! One in three tech IPOs now trading below their starting price

CheesyTheClown

Re: Why?

VMware went to shit when it became board-controlled. All their competitors are miles ahead of them in every area, and VMware, possibly the most innovative company of the first five years of the millennium, has become a "me too... kinda" company. Hardware support for virtualization has eliminated competitive edges in hypervisors. It's become about integration and management, both of which VMware is thoroughly lacking. Even now, they actually sell access to their system APIs, blocking developers from establishing a community and ensuring their vendors will get innovative features first.

Facebook and others actually produce a surprising amount. In the case of Facebook, they provide massive amounts of innovative technology to the community. Oh... and they have managed to monetize the shit out of their platform.

1
0

UK, you Cray. Boffins flex ARM in 'first-of-its-kind' bonkers HPC rig

CheesyTheClown

Re: Interesting opportunity for comparison.

Not really.

1) Supercomputing code is generally written by scientists and runs horribly. I've done multiple tests and found that I can often rewrite their code and perform better on 40 processor cores and 4 GPUs than they do on 3-million-pound computers.

2) We're not comparing ARM to x86 here. That comparison can be accomplished far better with a few desktop systems. Performance-wise, you're making the assumption that performance is related to instruction set. It's generally about instruction execution performance and memory performance. Intel uses more transistors on their multipliers than ARM uses in their entire chip. This may sound inefficient, but it is those things which give Intel an edge. Let's also consider that memory performance is almost all about management of DDR bursts and block alignment; ARM has much tighter restrictions on those things. Also, more often than not, scientific code makes profiting from cache utterly meaningless. Ask a scientist working on this code whether they can describe the DDR burst architecture, or cache coherency within the CPU ring bus, or the process of mutexing within a NUMA environment.

This is about whether shitty code costs less to run on one computer that's 100 times larger than it should be, versus another.

For 3 million pounds, I would imagine they could have bought a gaming PC and a programmer.

1
0

Tintri, thrown on the El Reg grill: We'll support NVMe! We promise!

CheesyTheClown

NVMe fail

NVMe is a protocol for block storage across the PCIe bus. Like SCSI, it is intended as a method of storing blocks in a directly connected system and assumes lossless packet delivery. When Fibre Channel came around, SCSI drives could be placed in a central system, allowing the physical drives of a server to be located in a single box. When this happened, FC was designed to deliver the SCSI QoS requirements across fibre.

A few brilliant engineers got together and found out they could provide virtual drives instead of physical drives over FC and iSCSI while still placing the same demands on the fabric to support SCSI QoS.

This is where things begin to go wrong... people wanted fabric level redundancy as well. This meant designing an active/standby solution for referencing the same block devices. The problem is, SCSI and now NVMe are simply not a good fit for this.

1) The volumes (LUNs) being accessed as block storage ARE NOT physical devices. They are files stored on file systems.

2) The client devices accessing the LUNs ARE NOT physical computers with physical storage adapters. They are virtual machines with virtual storage devices.

3) The computational overhead is wasteful: simulate a SCSI controller in software, translate the block numbers from the virtual machine to a reference in a VMFS or NTFS file system, look up the virtual block to reference in the virtual file system, convert that reference to a virtual file position, look up that block within a virtual file, translate that block to a physical block, and then perform everything in reverse. It consumes power and slows everything down. In addition, it severely limits scalability.

4) Dual-ported storage exists to compensate for limitations in block-based storage. It would be far more intelligent and cost-effective to plug a large number of single-ported drives into a PCIe switch and then multi-master the PCIe bus. This technology dates back 20 years and is solid and proven. The problem is, PCIe is too slow for this: facing NVMe and new storage technologies, the bus would max out at about 32 NVMe devices.

5) Scale-out file servers simply scale out better than controllers. SCSI, and now NVMe, really can't properly scale past two controllers, and since NVMe and FC lack multicast, performance is simply doomed.

The solution is simple... build out one of:

1) GlusterFS

2) Windows Storage Spaces Direct

3) Lustre

Build up each storage node with hottest (NVMe) / hot (SATA SSD) / cold (spinning disk) tiers.

Build 3 or more nodes.

Run NFS, iSCSI, SMBv3 and FC (if needed) on top.

Use proper time markers (not snapshots) for backup.

Be happy and save yourself millions.

PS - Hyper-V, OpenStack, Nutanix and more have this built in as part of their base licenses.
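
For what it's worth, the tiering step above is nearly a one-liner on the Windows side. A minimal sketch, assuming a Server 2016 cluster with Storage Spaces Direct already enabled (the volume size, tier sizes, share and group names are invented examples):

# Carve a tiered volume (hot SSD tier + cold HDD tier) out of the S2D pool,
# then publish it over SMBv3 for the hypervisor hosts.
New-Volume -StoragePoolFriendlyName "S2D*" `
           -FriendlyName "VMStore01" `
           -FileSystem CSVFS_ReFS `
           -StorageTierFriendlyNames Performance, Capacity `
           -StorageTierSizes 2TB, 10TB

New-SmbShare -Name "VMStore01" `
             -Path "C:\ClusterStorage\VMStore01" `
             -FullAccess "DOMAIN\Hyper-V-Hosts"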

0
4

Well, FC-NVMe. Did this lightning-fast protocol just get faster?

CheesyTheClown

STUPID STUPID STUPID!!!!

OK... this is 2016, almost 2017... WE DON'T SEND RAW BLOCK REQUESTS TO STORAGE!!!!

Let's make this very clear: SCSI and NVMe are the dumbest things you could ever put in the data center as an interconnect. Back when we connected physical disks in an array to the fabric, they didn't suck so badly. But now we have things like:

1) Snapshots

2) Deduplication

3) Compression

4) Replication

5) Mirroring

6) Differencing disks

There are tons of nifty things. SCSI and NVMe are protocols designed to talk to physical storage devices, not logical ones. There are two needs when talking to a storage array:

1) a VM is stored on the array

2) a physical host is stored on the array

When you install 5 to 500,000 physical hosts with VMware, Linux or Windows, you will use the exact same boot image with a fork in the array. This is REALLY REALLY easy, and with some systems (like VMware) which can do stateless boot disks, you can use the exact same boot image without forking at all.

When you install 5 or 50 million virtual machines, you do roughly the same thing: clone an image and run sysprep, for example.

What does this mean? The hosts or virtual machines DO NOT talk directly to the disks and therefore don't need to use a disk access protocol. Instead, a network adapter BIOS or system BIOS able to speak file access protocols will be far more intelligent.

There is simply no reason why block storage protocols should EVER be on a modern data center's network. Besides being shit to begin with (think of the major SCSI SNAFUs), block storage protocols generally don't provide good security, they don't scale, and you end up building impressively shitty networks... I'm sorry, fabrics, in pairs, because FC routing never really happened.

iSCSI almost doesn't suck... but it's just an almost.

People are saying "NVMe is about latency..." blah blah blah... no it isn't. It's about connecting Flash memory to motherboards. It's basically PCIe. It's a system board interconnect. It is not a networking protocol and should never be used as one.

If QLogic is actually bent on making something that doesn't suck... why not make an Ethernet adapter which supports booting from SMBv3 and NFS without performance issues? I should be able to saturate a 100Gb/s network adapter on two channels when talking to GlusterFS or Windows Storage Spaces without using any CPU.
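
For the VM case at least, the file-protocol approach already works today. A sketch of a Hyper-V guest living entirely on an SMBv3 share, with no LUN and no block protocol on the wire (the server, share and VM names are invented):

# A VM whose configuration and disk both live on an SMBv3 share.
# With SMB Direct (RDMA) underneath, the data path barely touches the CPU.
New-VM -Name "web01" `
       -Generation 2 `
       -MemoryStartupBytes 4GB `
       -NewVHDPath "\\sofs01\VMStore01\web01.vhdx" `
       -NewVHDSizeBytes 80GB `
       -Path "\\sofs01\VMStore01"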

0
0
CheesyTheClown

Re: I remember...

FCoE was not really that great. From a protocol perspective, it had tons of overhead. Reliable Ethernet was absolute shit because it depended on a slightly altered version of the incredibly broken 802.3 flow control protocol. Add to that that FCoE is still SCSI, which actually needs reliable networking, and it's a disaster compounded on top of another disaster.

iSCSI was about 10,000 times better than FCoE, since the overhead was roughly the same but it implements reliability at layer 4, which is highly tunable and not network-hardware dependent. Add good old-fashioned QoS on top and it's better.

Better yet, why not stop using broken-ass block storage protocols altogether and support a real protocol like SMBv3 or NFS? They are actually far more intelligent for this purpose.

0
0

Trump's 140 characters on F-35 wipes $2bn off Lockheed Martin

CheesyTheClown

F-35 is about jobs

It's been said by others, but the US government has been quite successful at not only providing a lot of jobs by siphoning funds into defense contractors, funds that get spread out far and wide, but doing it under the heading of national security, which always goes unchallenged. More importantly, they forced every NATO country to buy some as well, feeding more money into the US economy. In the end, the F-35 program has probably been the most successful economy builder in the US for decades. And the best thing is, the cost of owning an F-35 is so ridiculously high that it will draw money into the US for decades.

That said, for aerial combat, drones will probably take over. There's really just no point in spending that much money on a jet which, while quite cool, puts the pilot's life in danger. You can build 2,000 armed drones for the cost of a single F-35. While an F-35 may be more effective in battle than a drone, a fighter against 2,000 drones probably won't do so well.

1
0

HPE 3PAR storage SNAFU takes Australian Tax Office offline

CheesyTheClown

Problem with SAN in general

I was recently told by a colleague of mine that his company was about to upgrade the firmware on their SAN controllers due to performance problems on a nearly exabyte-scale SAN. I asked, "Do you have a mirror?" He said they have backup but not a mirror. I asked how long it would take to restore the backup, and the number was nearly a month. I asked whether they had fully verified the contents of their backup, and he said not recently, because it would take a month just to stream the data from the backup.

The problem with SAN is that it centralizes all problems. It's a single point of failure. The performance of even the fastest NVMe SANs is very, very slow compared to distributed file systems.

They managed to do the upgrade; it will now take about 6 weeks to run the rebuild on the array. The rebuild is destructive, and they will have no idea whether the problem is fixed until it is done. They also don't know what caveats will be introduced by the upgraded firmware.

I don't experience these problems because I run two distributed file systems: one for performance and one for transaction-oriented journaling. I have about 1Tb/s of bandwidth between the two systems, which can easily be saturated during transfer operations. What's best is that my system cost less than a tenth of what his system cost per byte, and instead of adding new disk shelves, I add disk, bandwidth and performance with each expansion. Instead of replacing SANs, I simply remove obsolete nodes and add newer, more efficient ones.

Trick one: don't use VMware. Linux-based GlusterFS systems only work with VMware over iSCSI or Fibre Channel, which is slow and doesn't scale. VAAI NAS isn't available for Linux because of VMware's stupid policy of locking out open source developers.

Trick two: if you absolutely must run VMware, use Oracle Solaris for storage. Unlike EMC, NetApp, 3PAR, etc... it can actually do proper scalability for performance and capacity. Consider Oracle InfiniBand for the storage interconnect. Take classes on ZFS. Use Oracle servers. If you can afford $15,000 per blade for VMware, you can afford Oracle servers for storage. Oh... and don't use InfiniBand for networking VMware or NSX; the CPU cost is too high.

1
8

'Toyota dealer stole my wife's saucy snaps from phone, emailed them to a swingers website'

CheesyTheClown

Re: Unless you're the FBI...

I regularly have conversations with my children about this exact problem. I explain that they should never want any photographs on their phones they don't want out in the wild. This has nothing to do with right and wrong. As an example, a conversation at breakfast this morning: we were discussing with our 13-year-old daughter and 14-year-old son their friends using snus, drinking and vaping. I explained that while I don't condone these activities, under no circumstances are they ever to walk home alone or use a normal taxi while drunk. They are to pick up the phone and have me come get them, or send an Uber to them, since it's safer than a random taxi driven by the owner's brother-in-law. Also, they are never ever ever allowed to take a sip of a drink they haven't seen poured or have had out of their eyesight for even a second.

It is not right I should have to have these conversations with two children. But it's right that I do. Just because people shouldn't do bad things doesn't mean they won't.

So, while I agree with you, your point is overly altruistic and not meaningful, because these things will happen and the best advice is... don't store pictures like these on any electronic device.

Oh... and damn... lucky pastor.

32
1

Ford slams brakes on sales spreadsheets after fire menaces data center

CheesyTheClown

Re: DR done right

Did we read the same article? This was a piss-poor example of disaster recovery. All I could think while reading the article was "Sounds like Ford".

Any company managing their own servers should have a minimum of 3 data centers spread out geographically. Their systems should have 100% (not 99.999) uptime and they should be thoroughly embarrassed by any announcement of this type. If I were in PCI enforcement or banking regulation enforcement, I would open a case to investigate gross negligence.

Ford should really outsource their data center to someone competent with technology. They have proven for nearly 100 years that anything with electronics designed by them is going to constantly suffer failures.

0
4

Good God, we've found a Google thing we like – the Pixel iPhone killer

CheesyTheClown

Uh... what?

I tend to hear this walled garden thing only from Android users who have locked themselves into Google's infrastructure for life.

Android is just as much of a lock-in as Apple.

That said, I can easily take all my Apple media and strip the DRM and play it on any phone or PC.

As for apps, they only work on the OS you bought them for.

7
5

Solidfire is 'for people who **CKING HATE storage' says NetApp Founder Dave Hitz

CheesyTheClown

Re: Scale up vs. scale out

I'll grant you have many good points. I work with quite a few different workloads. Agreed that NVMe is simply a method of remote procedure calling over the PCIe bus, as well as a great method of command queuing to solid-state controllers. It is designed to be optimal for single-device access and has severe limitations in the queue structure itself for use in a RAID-like environment. In fact, like SCSI, it has mutated from a single-device interconnection protocol into a role it really sucks at.

If creating virtualized devices in ASIC, there are extreme issues regarding upgradability. If implemented in FPGA, there are major issues with performance, as even extremely large-scale FPGAs have finite bandwidth resources. In addition, even using the latest processes for die fabrication, power consumption and heat issues are considerable. A hybrid design combining a high-performance/low-power crossbar with FPGA for implementing localized storage logic could be an option, though even with the best PCIe ASICs currently available, there will be severe bandwidth bottlenecks as expandability is considered.

PCIe simply does not scale well in these conditions. Ask HPC veterans why technologies like InfiniBand still do well in high-density environments for RDMA when PCIe interconnects have been around for years. SGI and Cray have consistently been strong performers by licensing technologies like QPI and custom-designing better interconnects, because PCIe simply isn't good enough for scalability.

So NVMe is great for some things. For centralized storage... nope.

As for storage clustering, I'm not aware of any vendors that cluster past 8 controllers currently. That's a major problem. Let's assume that somehow a vendor has implemented all their storage and communication logic in FPGA, or dreadfully within ASIC. They could in theory build a semi-efficient crossbar fabric to support a few dozen hosts with decent performance. It is more likely they have implemented their... shall we say business logic in software, which means that even with the biggest, baddest CPUs from Intel, their bandwidth at scale will be dismal. There are only so many bits per second you can transfer over a PCIe connection, and there are only so many PCIe lanes in a CPU/chipset. Because of this limitation, high-performance centralized storage with only 8 nodes will never be a reality. Consider as well that due to fabric constraints in PCIe, there will be considerable host limits on scalability without implementing something like NAT. This can be alleviated a bit by running PCIe lanes independently and performing mostly software switching, thereby mostly eliminating the benefits of such a bus.

Centralized storage has some benefits, such as easier maintenance, but to be fair, if that is the issue, you have much bigger problems. A scale-out file server environment configured with tiers still makes use of centralized clusters of servers for DR, backup, snapshots, etc... You may choose to use a SAN for this, but that just strikes me as inefficient and very hard to manage. When configuring local storage properly, there is never a single copy of data; it is accessible from all nodes at all times, with background sharding that copes well with scaling and outages. If there is an SSD failure, the blade failed and should be offlined for maintenance. This is no different than a failed DIMM, CPU or NIC. These aren't spinning disks; we generally know when something is going to die.

You're absolutely right about blades and PCIe lanes. Currently, so far as I know, no vendor is shipping blades like this, which is why I have been forced to use rack servers. Thankfully, my current project is small and shouldn't need more than 100 per data center.

I am actually doing a lot of VDI right now. But that's just 35% of the project. The rest is big storage, with a database containing a little over 12 billion high-resolution images and about 50,000 queries an hour requiring indexing of unstructured data (image recognition), with the intent of scaling to 200,000 queries an hour. I am designing the algorithms for the queries, from request through processing, with every single bit over every single wire in mind.

I have worked with things as simple as server virtualization in the past, on small and gigantic scales. With almost no exceptions, I have never achieved better ROI with centralized storage than with localized, tiered and sharded storage.

The only thing centralized storage ever really accomplished is simplicity. It makes it easier for people to just plug it in and play. This is of great value to many. But I see centralized NVMe becoming an even bigger disaster than centralized SCSI over time.

0
0
CheesyTheClown

Scale up vs. scale out

Scale-out exists not because you want to have more storage; it's because storage array controllers and SANs are too slow to meet the needs of high-density servers. Storage has become such a major bottleneck that it's no longer possible to populate modern blades and actually expect even mediocre VM performance; it's like running a spinning disk on a laptop. It's just horrible.

Local storage scaled out is far better, so internal tiered storage works pretty well. You get capacity and performance in a single package. It doesn't scale up as well as a storage array... unless you buy more blades. Instead, it's pretty damn good at making sure your brand new 96-core blade isn't sitting at 25% CPU usage because all the machines are waiting on external storage to catch up.

Scale-out in a SAN environment is just plain stupid. Even with the fancy attempts by some companies to centralize NVMe, which is done using PCIe bridge ASICs, the problem is that you'd need dedicated centralized storage for each blade to make use of that bandwidth. Additionally, NVMe is quite slow: it generally only uses 4 PCIe lanes. Using local storage, I can use 32 PCIe lanes, which is a small but noticeable improvement.

Scale-up is still quite useful. Slow and steady wins the race. Cabinets that specialize in storing a few petabytes are always welcome. You really wouldn't want to use them for anything you might need to read often, but an array that can provide data security would be nice. So maybe NetApp should be focusing on scaling up instead of out. Cluster mode was kind of a bust; it's just too slow. 8 controllers with a few gigs of bandwidth each don't even scratch the surface of what's needed in a modern DC.

0
1

'Geek gene' denied: If you find computer science hard, it's your fault (or your teacher's)

CheesyTheClown

Great idea, but where's the actual research?

I know this is the Reg; the headlines are always basically clickbait. So far as I can tell, there's nothing in the research which can in any way be considered conclusive on whether genetics can impact this. That would require identifying a specific sequence to be tested, and even then the results would simply say, "We can't clearly identify whether this genetic sequence does or does not impact aptitude."

I'm quite sure there is a strong tie between genetics and aptitude. The gene involved is related to some form of obsessive-compulsive disorder. Nearly all the "nerds" (not geeks) I know, and I know a lot, are people who:

1) Possess the ability to grind obsessively until they understand something

2) Possess incredible ability to recall information generally having cataloged it through associations

3) Have a weaker sense of community than others. This means they are perfectly willing to forego interpersonal interaction in favor of grinding on a problem.

4) A very high percentage show varying levels of Asperger's, ranging from appearing somewhat absent-minded to having absolutely no interest in other people's perception of their behavior.

A nerd is generally someone who shows great aptitude (meaning willingness to work his/her ass off to learn something) in one or more topics, thereby establishing a strong "genius-like" ability in the topic. A nerd is generally quite confident in themselves for having achieved mastery in a field; as such, they eventually pursue other "hobbies"... very commonly an art (like guitar) or a sport (maybe soccer/football). This is where they'll establish their community and often attempt to mate.

A geek, on the other hand, generally has no particular aptitude for anything. They favor learning the "lingo" of something generally considered intellectual without actually achieving much more than a rudimentary knowledge of the topic. They present themselves as nerds and even take pride in being permitted into social gatherings among nerds. The reason is that it allows them to establish a sense of community. This happens from an early age. A person without the obsessive need to study and learn, who doesn't have their own community because they are not athletic, or maybe don't see themselves as pretty enough or important enough "to hang with the popular kids", latches onto the "brainy kids", who at that age are generally less interested in personal presentation than academia. They see the "brainy kids" as having some sort of innate talent for being brainy, believe those skills come from "being born smart", and as such see the gift as similar to beauty or athletics. The geeks, however, take an interest in whatever their friends are involved in and become something of an "accessory to the crime", for lack of a better phrase. The nerds, often quite happy to have a friend without the need to work hard to earn or keep one, then accept the geek into their "circle".

Generally when puberty occurs, nerds will, for the purpose of "satisfying their needs", attempt to groom themselves better, show interest in other topics (surprisingly, music and marijuana are incredibly popular, as neither generally requires physical prowess) and start blending into other cultures. Geeks, on the other hand, are generally what's left behind, appearing to the masses as the intellectuals. In reality, the geek is simply someone who by this time shows a real interest in a given topic and, seeing that the nerds "dropped out of the game", takes over. This could mean pushing the projector carts around the hallways or working in the library; generally just things that let them look how the nerds should look, while in reality being just an awkward person with a strong (though often misused) vocabulary learned by osmosis from being in proximity to the nerds for so long.

A geek in modern times (not the old Greek sense) was a person who joined the circus looking for safety in numbers during more dangerous times (like the old west), when a generally awkward person would be in danger from predators (like the more dominant males). Though these people had no talents of the kind that require hard work learned over time, they would join as a "freak" among the other outcasts. And while the person in question wasn't particularly freakish, they would perform freakish acts... namely biting the heads off of live chickens. As such, they established themselves within a community for safety.

I'd like to believe I'm a nerd... though who knows... maybe I'm a geek.

0
0

M.2 SSD drive format is under-rated. So why no enterprise arrays?

CheesyTheClown

Gbit/sec?

Maybe missed an order of magnitude somewhere?

4
0

Google tries to lure .NET devs with PowerShell cloud bait

CheesyTheClown

Jury is out?

I was pretty sure that Azure had kinda proven itself already.

The real question is whether the public cloud will survive now that you can build an entire Azure Stack, capable of running ten thousand users, in a few rack units. It's now officially cheaper to run Azure Stack than Azure, AWS or Google public clouds.

I have a 26U rack with eight 16-core blades with 192GB each, 80Gb/sec of networking to each blade, and 8 terabytes of scale-out storage pumping over a million IOPS. I also bought a NetApp FAS2020 for near-line backup storage.

The total cost of deployment for the entire system was about $10,000 on eBay. I tend to keep only 3 blades running at a time, since I only have 100 VDI users at a time. It spins up new VDI systems in about 13 seconds each. It has IIS, load balancing, SDN, SDS, etc... I tend to be at about 8% capacity on the three blades for normal office loads with 100 users.

Currently, it's a development pod and qualifies as lab equipment under the MSDN terms.

Getting Azure Stack up initially was a pain. Now I've scripted the whole thing: a laptop with a fresh Windows 10 installation can download all the ISOs and deploy the entire Azure Stack in about an hour. I'm not using any fancy tools, just PowerShell. Since prepping ISOs as VHDs needs WAIK anyway, there was no point using anything except PowerShell. I wrote it all object-oriented and implemented a simple command queue pattern to deploy the entire system, with test-driven development.
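
The command-queue idea is nothing exotic. Here's a bare-bones sketch of the shape of it in PowerShell 5 classes (the step names and bodies are placeholders, not the actual deployment script):

# Each deployment step is a command with a Run action and a Verify check,
# executed in order; a failed verification stops the run.
class DeployCommand {
    [string]$Name
    [scriptblock]$Run
    [scriptblock]$Verify
}

$queue = [System.Collections.Queue]::new()
$queue.Enqueue([DeployCommand]@{ Name = 'Download ISOs'; Run = { <# fetch media #> };       Verify = { $true } })
$queue.Enqueue([DeployCommand]@{ Name = 'Prep VHDs';     Run = { <# WAIK conversion #> };   Verify = { $true } })
$queue.Enqueue([DeployCommand]@{ Name = 'Deploy stack';  Run = { <# install services #> }; Verify = { $true } })

while ($queue.Count) {
    $cmd = $queue.Dequeue()
    Write-Host "Running: $($cmd.Name)"
    & $cmd.Run
    if (-not (& $cmd.Verify)) { throw "Step '$($cmd.Name)' failed verification" }
}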

Now, Microsoft update does the rest.

6
3

'We already do that, we’re just OG* enough to not call it DevOps'

CheesyTheClown

DevOps works... But only if you know how

Step 1) Avoid CVs/resumes of people with DevOps on them

Step 2) Avoid technologies and products claiming to do DevOps

Step 3) Stop trying to teach IT guys how to code. They have more than enough to do just figuring out what should be coded

Step 4) There is no such thing as a DevOps degree. You're looking for computer science grads.

Step 5) Stop letting vendors try and tell you how to do DevOps

Step 6) Plan a project, build a high level design. Perform a PoC and document in detail step by step how to verify the system works.

Step 7) Write code to roll back the system when it fails

Step 8) On a whiteboard, make a REALLY clear plan of what changes are to be made and in what order

Step 9) Make the plan reflect zero downtime

Step 10) Write a script which can make the changes.

Step 11) Prepare for rollback, run the changes, verify the changes worked, verify the rest of the system didn't die, roll back when it screws up. Repeat until the change works without screwing everything up.

This is not complex. Any university comp-science grad can do this all day and night. We call it test-driven development. Use PowerShell to avoid stupid shit like 500-language syndrome. No, don't use Python, Puppet, Chef, etc.; you'll spend 99% of your time trying to figure out how to make PowerShell work from inside them. Steps 7 through 11 boil down to something like the sketch below.
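
A minimal sketch of the change/verify/rollback loop (the change, rollback and health-check bodies are hypothetical placeholders to be filled in per deployment):

# Apply a change, verify it, and roll back automatically if verification
# fails or the change itself throws.
function Invoke-Change {
    param(
        [scriptblock]$Change,
        [scriptblock]$Rollback,
        [scriptblock]$Verify
    )
    try {
        & $Change
        if (-not (& $Verify)) { throw "post-change verification failed" }
        Write-Host "Change applied and verified."
    }
    catch {
        Write-Warning "Rolling back: $_"
        & $Rollback
    }
}

# Example invocation -- the server name and step bodies are made up.
Invoke-Change -Change   { <# make the planned change #> } `
              -Rollback { <# restore the previous state #> } `
              -Verify   { Test-Connection -ComputerName app01 -Quiet -Count 2 }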

12
2

Is VMware starting to mean 'legacy'? Down and out in Las Vegas

CheesyTheClown

VMware can live and eat well off of legacy

I am about to deploy a 120,000-user VDI PoC on Hyper-V/Azure Stack. I never even considered VMware for the project, since it's just not well suited to VDI. I work with about 40 customers in 15 countries; for new deployments 3 years ago, they were 100% VMware. Now, 75% deploy about 80% Hyper-V and 20% VMware, and the other 25% are 100% VMware.

The first reason is simple: price. If you have to pay $12,000 per blade for Windows licenses and $7,500 per socket ($15,000 per blade) for VMware, you might as well use Hyper-V and skip paying for another VMM.

Memory consumption. Linux containers and Hyper-V integrate tightly with the guest virtual machine's memory manager and allow substantially denser guest deployments than ESXi. VMware still insists on simulating an MMU as the API for interfacing with the SLAT; Hyper-V and LXC instead integrate via "system calls" between the guest virtual memory managers and the host. This tends to cut the memory footprint of VMs on average by at least 60% compared to ESXi.
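
On the Hyper-V side, the user-visible knob for that guest/host memory cooperation is Dynamic Memory. A minimal example (the VM name and sizes are invented):

# Let the guest's memory manager negotiate its allocation with the host
# instead of pinning a fixed footprint.
Set-VMMemory -VMName "vdi-001" `
             -DynamicMemoryEnabled $true `
             -MinimumBytes 512MB `
             -StartupBytes 1GB `
             -MaximumBytes 8GB `
             -Buffer 20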

Management. vRealize as a suite looks like an absolute joke written by a retro software freak next to Azure Stack and Ubuntu's OpenStack management systems. If VMware would quit competing against themselves and focus on doing it once and doing it right, they could get somewhere.

vCenter... Let's be honest: vCenter is the best tool on the planet if you plan on automating absolutely nothing. No other product gives you that "I'm an NT 4 sysadmin" feel better than vCenter. But if you actually want to manage more than 50 VMs, you don't manage them from there. That's what vRealize, UCS Director, Nutanix, etc... are for.

Storage. Am I the only person who looks at VMware's storage solutions and wonders "Did EMC tell them they can't make anything that might compete with their stuff?" and "Did someone tell VMware that storage is something you can charge for?" Cisco released HyperFlex with a third-party storage solution, which I think is just GlusterFS and pNFS configured for scale-out plus a VAAI NAS plugin. It blows the frigging doors off anything VMware makes, and most of it is open source and freebies. Are you seriously telling me that VMware couldn't have made that a stock component within a few months of work?

Networking. NSX was SOOoOooo cool 8 years ago. It was revolutionary. Then VMware bought it and kept it hidden for years, and by the time it shipped, the entire world had moved on to far better solutions. It's not even integrated with VMware's other stuff; it's like running a completely third-party tool. What's worse is that it's REALLY slow, and they ended up implementing microsegmentation because the other VMware management tools were so broken and unusable that you couldn't have more than a few dozen port groups before things just fell apart. So, instead of fixing their other stuff, they basically just hacked the shit out of NSX and broke the whole SDN paradigm. Oh, did I mention that NSX costs an absolute fortune when SDN is free with every other solution?

Graphics. Nvidia GRID is absolute-friggin-lutly spectacular on Hyper-V. It's like a ray of sunshine blown from the bottom side of angels every time you start a VM. RemoteFX is insane. I'm not kidding: adding a GRID card to a Hyper-V host nearly tripled my VDI density. When I tested the same card on VMware, it was agony. I got it working... kinda. It wasn't too bad; once the drivers on the guest finally recognized the card, it was nice, but they were generations behind and the improvement was about half that of Hyper-V. I speculate it's because on Hyper-V communication with the card is bare metal, while on VMware a software arbiter running on a single core is required, which is killing the CPU ring bus or QPI. The behavior even suggests it might be maintaining cache coherency through abuse of critical sections, which across sockets can be so slow it's almost silly.

So... should VMware be scared? Are they obsolete? Hell no. They are legacy. I work with hundreds of people who like installing Windows by hand over and over. I regularly work with a team of 150 people who are paid to work 8 hours a day manually provisioning virtual machines as change requests come in. They are kind of like people being paid to lick stamps and put them on envelopes because peel-and-stick is too "fancy and fangled" and they don't want to figure out this new stuff.

VMware will be needed and loved and sold as long as people are 100% focused on "it's always worked for us doing it this way." Think of VMware as the COBOL of PC virtualization; Micro Focus is still banking big bucks on COBOL. I think the worst thing VMware could do is to be better. There are still tens of thousands of small-organization mindsets out there, and a VMM that can be fully configured to a "good enough" state in 30 minutes should always be around.

9
0

How many zero-day vulns is Uncle Sam sitting on? Not as many as you think, apparently

CheesyTheClown

The department discloses... What about the hackers?

Seems to me that hackers are asked to hack. As such, they may or may not be asked to make the hacks they use part of the official catalog. So a simple workaround is to tell the hackers to only report the zero-days that were low-hanging fruit.

0
0

NASA peers through its SpeX: Aha! Jupiter's globe-warming hotspot

CheesyTheClown

Can't get internet?

If NASA can communicate with the probe and NASA has Internet, shouldn't it be possible to route it there?

1
0

Ex-Citibank IT bloke wiped bank's core routers, will now spend 21 months in the clink

CheesyTheClown

Caused congestion?

Unless he configured new links/routes, the routers and switches in the network should have been running in pairs and the links should have been somewhat redundant in their design.

It appears that someone on the network team was doing a REALLY poor job if the loss of a link causes congestion. Don't get me wrong, this guy should be shot, but... I would be seriously embarrassed to publicly pronounce that the loss of a single link could make my network "unusable" due to congestion, in the banking industry of all places. I know he shut down 9 routers... but unless there was a total rat's nest in the infrastructure, the congestion would recur whenever a single link went down.

Sue the guy for intentionally threatening the stability of the network; don't air your dirty underwear like this.

1
1

Third time unlucky? HPE in redundancy talks with UK services staff

CheesyTheClown

What's left of HP?

HP - Sells laptops that don't fill needs. Kind of a Packard Bell. Sells printers which don't really work. Their consumer line has endless problems burning through ink, and their large-format printers (until the Latex series) score amazingly low on cost and quality.

HP (then Agilent, now something new) sells the stuff which made HP awesome to begin with.

HPE sells classy servers like NonStop and HP-UX Superdomes, etc... They also sell substandard blade servers which don't function for shit in the data center (Java 6 required to manage the blades). They sell two dozen different and mostly incompatible network equipment lines. They sell storage that is so hellbent on Fibre Channel that it performs at about 1/10th the speed of a similar NAS solution from respectable companies. They sell management software suites that universally increase CapEx and OpEx by so much that ROI is not achievable.

CSC - Sells services provided by an organization so silo-driven that the network guys can't even spell hard drive.

Isn't it time HP dumps someone who has a nice wardrobe in favor of someone who has some actual knowledge?

2
0

Starbucks bans XXX Wi-Fi

CheesyTheClown

What?

Honestly... Who sits in a Starbucks surfing porn?

11
0

Blighty will have a whopping 24 F-35B jets by 2023 – MoD minister

CheesyTheClown

Re: Why?

It could go into westernizing immigrants to assimilate them to Western European behavior, and into financing the growth of a financial empire that will profit England by allowing immigrants to establish British businesses, move product through the UK (at least on paper), strengthen ties with Middle Eastern and Eastern companies, and strengthen the economies of Africa (which will have to become emerging at some point) and South America.

Alternatively, they could use the money to cover the massive financial issues caused by trying to alienate themselves from the rest of the world via Brexit. The U.K. clearly does not understand that the rest of the world sees Brexit as an elitist movement that denounces the rest of us as "less than a Brit". As a result, it drives us to avoid business with the English for fear that they will consider defaulting on our agreements justified because they "want to screw us before we screw them".

Norway bought 50 of these planes as a membership fee to NATO. I don't know for sure, but I believe we have historically never owned so many aerial war machines, as we would prefer not to be paranoid assholes. Of course, now we'll have them, and we'll probably put them on display, as they are so damn expensive to operate that it's just not worth keeping more than a handful in the air. So 10 to play with and 40 for spare parts.

On the other hand, the jobs created by dumping trillions of dollars into the world economy are probably worthwhile.

1
0
CheesyTheClown

Finally an F-35 article that represents it properly

The F-35 program has been wildly successful to date: they have basically made the most useless plane ever. By the time it is actually in proper full production, remote-controlled unmanned drones will have far surpassed its capabilities, and based on recent testing, autonomous drones consistently outperform all human-flown jets in dogfights. The problem with drones is that they're too easy and don't require so many people to produce and maintain. It won't be long before drones can be strategically positioned in the high atmosphere on Zeppelin drones or solar-powered propeller drones. When this happens, it should be possible to launch an attack and reach targets, eliminating the carriers hosting F-35s before even one jet has a pilot in the cockpit.

So why is the program wildly successful? Because, as the article says, the U.K. government can employ 1,000 people for each F-35, which translates to a realistic number of about 8,000 jobs per jet if you include the guy working at the local 7-Eleven who is in business because the workers need coffee. The U.S. and the U.K., instead of embracing socialism openly, create jobs through programs oriented on fear. They are so scared of openly building and supporting private sector companies that, in order to feed their citizens, they need to make up bullshit excuses related to fear and hate. So long as the US and UK can continue to negotiate favorable terms with other nations regarding the expansion of their national deficits, forcing a devaluation of their currencies (hence why you can sell your house for more than you paid), they can spend trillions on new prisons and new militaries and such. The people just have to be scared enough into thinking they need these services, or at least unable to do anything about the government forcing them on them.

The F-35 program has nothing to do with defense. More or less every first-world country can easily build drones that render the F-35 completely obsolete. The program is about job creation and government-sponsored economic stimulus.

Good job author ;)

4
2

Wannabe Prime Minister Andrea Leadsom thinks all websites should be rated – just like movies

CheesyTheClown

Nice idea and good spirit but impossible to implement

There are website rating systems already in place from companies like Check Point and Cisco as part of their firewall services. On the modern web, with the advent of HTTP/2 and with largely randomized URLs, implementing such a system would require application-layer inspection and filtering.

Even with data-center-scale computing, deploying clusters of tens of thousands of firewall instances, it would be computationally impossible to filter all web traffic effectively enough to make such a thing matter.

Add "dark web" resources (which I think means Tor) which simply requires the download of a free and public web browser to use and inline filtering would be absolutely impossible.

This sort of solution would instead depend on DNS filtering, which doesn't work since most users don't actually use British DNS servers.
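
For what it's worth, DNS filtering is conceptually trivial, which is exactly why it's trivially bypassed: point your machine at any non-British resolver and the filter never sees you. A minimal sketch in Python (the blocklist entries are hypothetical):

import socket

# Hypothetical blocklist; a real one would hold millions of entries.
BLOCKLIST = {"blocked-example.test", "another-blocked-site.test"}

def filtered_lookup(hostname):
    """Refuse to resolve names on the blocklist; resolve everything else."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        raise PermissionError(hostname + " is blocked by policy")
    return socket.gethostbyname(hostname)

print(filtered_lookup("theregister.co.uk"))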

In the end, while she has a good heart and spirit and is trying to recommend something she believes could have a healthy and positive impact on her country, it would be simply wasted breath and resources to try to push such legislation into effect.

1
0

SPC says up yours to DataCore

CheesyTheClown

Why use an array of any type anyway?

I definitely want near-line storage, and for that I'm trying to design a new controller with power control so that a full 42U array shouldn't consume much more than about 400W at any given time.

What I don't understand is why anyone would want to run centralized storage anymore. It was a terribly failed experiment on so many levels. With a theoretical average of 90,000 IOPS per SATA SSD and an average of 24 drives per server configured as a mirror, let's assume a little less than 2 million IOPS per server. Then distribute that out using RDMA over Ethernet with SMBv3 and Scale-Out File Server. Then consider that the servers themselves will have 4 or 8 40GbE Ethernet ports.
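
A quick back-of-the-envelope check (sketch only; the 4K I/O size is my assumption) shows why those port counts are enough:

# Rough aggregate figures for one such hyper-converged server,
# using the theoretical numbers above. Sketch only.
iops_per_ssd = 90_000                 # theoretical average per SATA SSD
drives = 24
raw_iops = iops_per_ssd * drives      # 2,160,000 before mirroring overhead
server_iops = 2_000_000               # "a little less than 2 million"

io_bytes = 4096                       # assume 4K I/Os
gbps = server_iops * io_bytes * 8 / 1e9
print(f"~{gbps:.0f} Gb/s needed; 4x 40GbE already supplies 160 Gb/s")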

Centralized storage doesn't have the performance to cope with modern data center servers. The goal of data centers is higher density and a lower power footprint. I'm in 12U these days for what used to take us six racks even three years back.

DataCore, NetApp, Hitachi... they're all selling crap you just don't need anymore. Get three good servers for each data center, configure some proper networking equipment (stuff that accelerates DC tasks; I recommend Cisco ACI) and then just use the storage built into the hypervisors. And don't waste money on enterprise SSD. Just buy a few boxes of Samsung 850s.

As for centralized storage, build a big-ass server with lots of spinning 8-10TB disks and run Veeam on it.

I've been testing Windows Storage Spaces Direct in a small data center on relatively old hardware with relatively wimpy networking. It's still hitting insane IOPS. And that's 4 Windows Server 2016 Hyper-V blades with 6 SSDs per blade on an archaic Dell C6100 cloud system from eBay for $1,500 (all blades included). Each machine has 8 cores and 96GB RAM, an Intel X520-DA2, and 6x Crucial 250GB SATA SSDs (deal of the day).

I will NEVER EVER EVER go back to centralized storage. Unless someone tells me a way I can guarantee network and IOPS scaling for every blade in the data center using centralized storage, it's just a crap solution.

P.S. On VMware, I use centralized storage... Virtual SAN is a little too 1990s, or 1970s depending on how you see it. Their underlying storage architecture is simply not something I would be willing to depend on. When they learn what sharding is, I'll reconsider.

0
0

Non-US encryption is 'theoretical,' claims CIA chief in backdoor debate

CheesyTheClown

From 55 Countries?

Let's be fair for a moment. The encryption standards as they stand today originate in the US. There are many, many good encryption techniques, many of which are likely stronger and better than AES, RSA and DH. The issue, however, is that nearly every product in the report about encryption coming from 55 countries uses those standards.

We use standard ciphers because at some point we believed they were strong enough to keep us safe. Some people who call themselves security experts think they're unbreakable; that is sheer vanity and silliness. There have been many enhancements made to AES, for example, which strengthen it, but the AES block cipher itself isn't particularly strong.

The reason we still use these ciphers has more to do with dependence on things like hardware and software for encryption. Intel and ARM CPUs have acceleration engines for the standard ciphers as well as some of the more popular non-standard ciphers. Processing the encryption in software is not practical for most applications. For example, running full-disk encryption in software would take that awesome SSD and make it feel like an MFM drive from the 80s.
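
To show what "using the standard cipher" looks like in practice, here's a minimal sketch using the third-party Python cryptography package (pip install cryptography); where the CPU has AES acceleration, the OpenSSL backend underneath generally picks it up automatically:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256 key
nonce = os.urandom(12)                      # 96-bit nonce; never reuse per key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"secret message", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"secret message"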

For messaging and mail and basic storage encryption, software can be used. But which cipher should we trust?

AES became a standard after a massive amount of peer review and a great deal of experimentation by thousands of researchers, mathematicians and hackers. Finding a suitable replacement would require a European or UN effort of similar scale. Even now, there is a certain belief that unless a cipher is blessed by the NSA or the Israeli Mossad, it should be considered weak. There are many cipher researchers outside the US and Israel, but it is unlikely they are as public or as well funded as those guys.

Is it time for something better? Sure... we just need to run a competition like they did for AES, find a suitable review board and pass European laws mandating that Intel and ARM can't ship AES acceleration unless they also support the new standard... this could take a few years.

0
0

Microsoft releases open source bug-bomb in the rambling house of C

CheesyTheClown

Re: C is not an applications programming language

C and assembler were the way to go for everything back then. Assembler was actually used as an application language by many people. When a CPU could realistically process 75,000 instructions per second, we counted cycles even when we were drawing text on the screen. When a properly coded language like C reached performance levels where we could do less in assembler, we mixed the two. It wasn't that better languages for apps didn't exist; it was that they were too slow to be useful.

4
0

Freeze, lastholes: USB-C and Thunderbolt are the ultimate physical ports

CheesyTheClown

SCSI anyone?

Wireless is nifty, but wired is and always will be better. As for connector types, there will be plenty of upgrades to come. I just hope they're not smaller.

20
0

HPE bolts hi-capacity SSD support onto StoreServ

CheesyTheClown

Purpose?

Where does a product like this fit?

While I think all-flash is nifty, where's the value in arrays today?

Distributed file systems far outscale and outperform arrays in every category. Even with custom ASICs, unless an array has 100Gb/s of physical access per server, it will be a gigantic bottleneck feeding data to the servers.

Oracle and MS SQL have supported sharding for a long time. GlusterFS and ReFS, as well as many others, also have sharding now. Add data tiering for near-line on a hybrid array, and HP's new product isn't just obsolete before it even ships; it's also far more expensive and less efficient than "hyper-converged".
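
For anyone unfamiliar, the core of sharding fits in a few lines: hash the record key, route to a node, and reads and writes spread across machines instead of funneling through one array. A minimal Python sketch (the node names are made up; real systems use consistent hashing so nodes can be added without reshuffling everything):

import hashlib

NODES = ["node-a", "node-b", "node-c"]   # hypothetical storage nodes

def shard_for(key):
    # Hash the key so records spread evenly across the nodes.
    digest = hashlib.sha256(key.encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

print(shard_for("customer:42"))   # the same key always routes to the same node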

A few NVMe drives on each server combined with a 160Gb/sec network will far outperform arrays with centralized storage.

I am training a major British corporation this week in precisely this topic. Shared drives and centralized storage just can't compete with distributed data. The interconnect is too slow, the lack of application support makes backup unintelligent, the disk latency is too high, and the scalability is nearly non-existent.

Better to spend money on good, solid, cheap, non-proprietary storage.

Let me guess... the system is super nifty 32Gb/s Fibre Channel? That's ridiculously slow.

0
3

Windows 7, Server 2008 'Convenience' update is anything but – it breaks VMware networking

CheesyTheClown

Re: I am not surprised at anything MS do anymore

So... This is a dirty trick? Can't possibly be that they fixed something in the driver model which incidentally broke VMware's driver?

I tried to obtain the SDK for VAAI NAS from VMware. It's probably about 13 pages of PDF when it should have been a simple public REST API. VMware wanted to charge me $5,000 and force me to sign an NDA that would not permit me to open-source my code. I would have paid $1,000 without such a contract. But documentation and header files for 10 functions, for $5,000 and restrictive terms, was thievery.

In case you're not aware, writing a network driver with good performance and the flexibility of VMXNET3 (which is not so much a driver as an RPC mechanism for making networking function calls across protection rings) takes something of a small miracle if you don't have source access to the guest OS kernel. The same is true in reverse: you can't make a change to the guest kernel without likely breaking a virtual driver or two.

The end result is that VMware will release a patch within a few days. They just need to boot Windows Server 2008 in Workstation, run their unit tests via Visual Studio, find what changes they have to make, and release a new driver ISO.

The real question is... why hasn't VMware implemented a proper network driver for Windows yet? They still use almost archaic (and SLOW!!!) methods of implementing networking.

10
6

SELECT features FROM bumf... What's new in MS SQL Server 2016

CheesyTheClown

Re: I'm sure it's lovely but

Postgres and Maria are great products and if you are happy with them, good luck.

Some of us depend on things like full support for scalability. MSSQL scales like crazy. Postgres and Maria do pretty well these days too, but if you need to store 400,000,000 records replicated across 50 data centers and processed by 400 transaction servers running stored procedures... they're pretty lightweight. I suppose it could be done, but it would probably be almost impossible to manage.

MSSQL is actually pretty good for massive scale. It's just another DB for normal scale. It's sweet for embedded. What I don't know is why anyone would use Oracle.

10
2

Surface Book nightmare: Microsoft won't fix 'Sleep of Death' bug

CheesyTheClown

Stopped happening on mine

I was an early adopter and got mine a few days after the i7/512GB model started shipping... I had the problem A LOT, and now I don't. No idea why it went away, but ever since it did, it's been the best machine I've ever owned. I knew going in I should expect some bumps... I must admit though, I really miss Windows 8... it was the best OS Microsoft ever made :( Unfortunately, a bunch of whiner babies said "I'm too stupid to live without my start menu"... so Microsoft killed it with 10 :(

1
2

$10bn Oracle v Google copyright jury verdict: Google wins, Java APIs in Android are Fair Use

CheesyTheClown

Re: Phew...

Agreed but... the code copied was a published API in the sense that it was open source at the time.

What makes this a problem is that, so far as I know, the Java APIs are not published as separate headers and source but as plain Java files. I don't recall the interface definitions being kept intentionally separate from the code which implements the API. So while a script could easily extract 99% of the API from a .JAR file with no assistance, getting the whole thing probably required extracting it from code.
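
To illustrate the "99% with a script" point: a .JAR is just a zip archive of .class files, so enumerating the classes in an API surface takes a few lines of Python (the jar path is hypothetical; recovering full method signatures would additionally need javap or a bytecode parser):

import zipfile

# List the fully qualified class names packed into a jar.
with zipfile.ZipFile("some-api.jar") as jar:
    classes = [name[:-6].replace("/", ".")
               for name in jar.namelist() if name.endswith(".class")]

print("\n".join(classes[:10]))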

1
0
CheesyTheClown

Re: Google must have paid big bucks...

Google is a REALLY REALLY scary company these days. There are endless ethical issues with Google... far too many to count. This is a case where they are actually doing something good instead of unintentionally evil. Others have covered the API issues... I'll dig into some others.

1) Cartography... we are so insanely dependent on Google Maps these days that if Google shut them off, there would be airplanes completely lost.

2) Search... they're the only company which understands that people want search results that actually give them what they're looking for. As a result, they mine data, no matter how unethical that mining is, to build a search result listing that prioritizes what you mean over what you say. As such, Google watches absolutely everything you do and keeps it secret as long as it can, because they don't even want you knowing.

3) Broadband... Google already carries insane amounts of data worldwide. In addition to being perhaps the biggest or second-biggest tier-1 service provider at the moment, they're running fiber to data centers and to the home, and offering predatory pricing to get you to switch. Their prices are so low and the packages so good that no business with responsibility to its shareholders can justify not switching. If Google shut down their transport network, the Internet would actually break.

4) DNS servers... by this time, chances are you, your ISP, or your ISP's ISP are using Google's DNS servers. As such, Google is tracking absolutely everything everyone does, whether they use Google or not.

5) Google car... Google will probably give away 90% of their self-driving technology to as many vendors as possible in exchange for as much guaranteed information tracking as possible.

6) Google Docs... if you start using them, and really use them, you're locked in... you can't move your documents back home again. OpenOffice/LibreOffice etc. suck terribly, and there's far more to Docs than just what's compatible and open. An open file format doesn't mean every program which implements it implements every feature. It's pretty much guaranteed lock-in once you start.

7) Google Mail... an awesome service most of us couldn't live without. But Google doesn't give this service away for free out of the goodness of their hearts... tracking, tracking, tracking.

I can keep going for a LONG WHILE!!!

Google is basically Big Brother. For the moment, Google is run by a group of people who seem to have good intentions. So long as the company grows and turns a profit, it will stay like this. Once the stock stagnates, the board will step in and replace the CEOs with shareholder representatives with no interest in the customers themselves. Things like selling your tracking information to others will start to sound like a good and profitable idea.

What's worse is that Google is too big to fail now. It makes absolutely no difference what Google does. We honestly can't manage in 2016/2017 without depending on them. There are no alternatives and no replacements, and if Google went bye-bye and just shut off all their toys... there wouldn't be much left of the Internet. It would be far more devastating than when Infoseek changed their search engine back around 1995 and we couldn't find ANYTHING anymore. There are major systems around the world which will simply stop working if Google shuts down.

0
0
CheesyTheClown

Re: @tekHedd - I haven't downvoted a post in a long time...

Actually, unhandled exceptions seem to be the problem here. The stack trace in this case might be the only means this user has of getting support from Cisco TAC in resolving the issue.

My opinion on this case is that:

1) The sysadmin has a point but doesn't know what his point is, and is therefore not well suited to problem solving. He's the type that simply asks the wrong question due to lack of knowledge. Instead of learning more about what stack traces are and finding the right thing to complain about, he picks the first thing he doesn't understand and focuses his hate on Java unfairly.

2) Cisco, as always, has done a shit job of writing code. I can see a mom-and-pop operation or an open source project dumping a stack trace without an exception handler. But Cisco should have enough developer resources to not only implement program flow but also implement error management, even if it's just a top-level exception handler for "unknown exception" (see the sketch after this list).

3) Cisco ACS is an old product that only receives updates begrudgingly. It was written during the dark era of Cisco, which means their products were meant to be used but never seen by anyone. Cisco has a long and glorious history of absolutely disgusting user interfaces. They're getting a little better.

4) This won't be a big problem in the future. Cisco has more or less moved to Python. Cisco does programming languages like a flake does religions: Java used to be king... now it's Python. The marketing and management at Cisco know even less about what an API is than Oracle's lawyers. Believe it or not, all new Cisco courses this past year have at least one slide making a huge deal out of what a northbound and a southbound API is. They also all seem to try to fit in a plug for programming in Python. So... without exception, Cisco HAS NEVER made a single point as to why someone would need an API, but instead makes a huge deal of incredibly bad scripts that do absolutely nothing, in a programming language that is not particularly well suited to calling the APIs as opposed to implementing them. You should see the shitload of new material on "YANG", which must be the most insanely lazy and somewhat sloppy approach to API development in history.
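
Regarding point 2: a top-level handler is genuinely a few lines. A sketch in Python (the same idea applies in Java; the log filename and error message are made up) that keeps the full trace for TAC while showing the user something sane:

import logging
import sys
import traceback

logging.basicConfig(filename="support.log", level=logging.ERROR)

def main():
    raise RuntimeError("something unexpected")   # stand-in for the real work

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Keep the stack trace for support, spare the user the raw dump.
        logging.error("Unhandled exception:\n%s", traceback.format_exc())
        sys.exit("Unknown error: details written to support.log for TAC.")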

Java is a perfectly OK language... it's just not a particularly good runtime environment anymore. The JVM could use a massive update, and Google did a little of that with Dalvik. Sadly, there are no good, modern extensions to Java for things like assisted auto-vectorization or OpenMP-style multithreading assistance. Its libraries are impressively bad for threading in general; there are far too many conflicting models in Java for multitasking programming. Due to REALLY REALLY bad GUI support in Java, there's no decent runtime for handling calls to the UI thread. Microsoft "solved this" in C# by building a serialized delegate mechanism which doesn't suck... too badly. Even better were the Task<> structures attached to the language's async mechanisms. Java has sort of died in this area. There are solutions, but you can't tell which one to use or how to intermix them when needed.

After this case, Google should probably make a replacement for Java which operates on the Java class libraries but forks them substantially enough that they can extend and improve the language as much as possible.

0
0
