It's 1.49bn, so it's not a Mega fine, it's a Giga fine.
I wonder how much of this relates to customers going "damn, that's getting expensive, let's do without it", rather than the previous iterations where people purchased software and used it as necessary.
The cloud models are often based on the view that it "only costs you £x per day", which is great if you live in that software, but useless when you use it once a week or once a month; then the value proposition looks very different.
@John from Nutanix here.
Bad maths for RF3 - hmm, sounds like sales BS to me. The concept is very simple: three copies of each block, as opposed to RF5 with five copies of each block. Lose a node with RF3 and you only have two copies of the critical data; do maintenance on another node at the same time and you're down to a single copy of the precious customer data. Further, the response on storage-heavy nodes missed the point: if your resilience strategy is to avoid RAID and replicate data manually across the nodes in a cluster, then nodes that hold a far larger percentage of the storage represent a far higher risk to data integrity when they fail. This is just basic maths.
The same applies to small node-count clusters, where unbelievably you claim that "there are local redundancies". Perhaps you can explain that claim, given that there is no hardware offload (which often provides resilience) and it's a very well-known design principle that single systems are not resilient against any of a large number of failure modes. This is where the concept of n+1 comes from; even for one node, that results in two. Again, basic maths.
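The "basic maths" can be written out explicitly - a trivial sketch, nothing vendor-specific:

```python
# Back-of-envelope for the replication-factor point above.
rf3, rf5 = 3, 5                     # copies of each block under RF3 / RF5

# RF3: lose one node, then take a second one down for maintenance...
after_failure = rf3 - 1             # 2 copies left after a node failure
during_maintenance = rf3 - 2        # 1 copy left with a second node down
assert during_maintenance == 1      # one more failure from data loss

# RF5 still holds 3 copies through the same double outage
assert rf5 - 2 == 3
```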
I was touched by your thought that I would be installing systems - nope, one of your SIs was doing that - but thanks for the blame-the-customer tactic.
I also note that you completely ignored a number of questions / observations in previous posts - perhaps those are the hard questions. On reflection, re-reading the whole thread, the main thrust of your posts is just to claim that these views are wrong / uninformed while not providing any referenceable information sources or answers. For those of us who have worked with the product, reflect that this might be why the sales are down. Sales claims only work for a while; when the workload hits the metal, customers get to see the warts in the design, and that makes them consider their options when outages and performance issues hit the planned scalability.
@John from Nutanix Here.
So, no sweet spot, but a reference to the nutanixbible site (which is an interesting read). The 5-node minimum is real - needed to get acceptable levels of resilience for the data (inc. metadata) in the cluster; it even says so on the above web site.
Then you go and say "We have options for 1 and 2 node clusters if that better fits." A 1-node system isn't a cluster and it offers no resilience at all, so it's not enterprise ready.
Secondly, 2 nodes is below the RF3 you talk about on the nutanixbible web site.
Then you go on to imply that cluster size is a function of workload. Well, if it scales properly then you should be able to scale out linearly, unless there is some bottleneck that prevents this - otherwise why wouldn't customers do this in all cases ?
I challenge your "no 8 node limit" statement. I've had several designs come across my desk whilst at different customer sites and they never go above 8 nodes. When I ask what happens next when we expand - it's always a new cluster. Now why would that be ?
Could it be the age-old mantra that interconnect traffic increases as the node count goes up, so the bang for your buck decreases as you scale out, making it less and less viable ? Is this also the reason why you have to have storage-heavy nodes to offset the disk performance issues ? How many of those do we need for performance AND resilience with RF5 ?
You also talk about broad hardware compatibility, but again, first-hand experience: incompatible hardware derailed one site build when a standard, built-in, mainstream vendor card wasn't compatible with the software and it refused to install with a cryptic error message.
So, my position on hyper converged hardware - plenty of hype all round.
Hardware offload, sounds like Dwarf is connected to SimpliVity.
Nope, Dwarf is not affiliated with or connected to any vendor; I make up my own mind and I'm good at digging into products.
The hardware offload question is a very simple one. One clock tick of a microprocessor does one or fewer operations, depending on the complexity of the operation being performed. Dedicated hardware for specific tasks is well established - RAID controllers, TCP offload engines for NICs, encryption cards that handle the complex encryption calculations - all increase performance, because hardware can always outperform software; software runs on hardware, after all. Think what other vendors are doing - MS stuffing FPGAs into their cloudy servers. Now why would they bother to do that ?
There's also the little point that any CPU cycles spent on things not directly related to the user experience are effectively wasted, so minimising this wastage by handing the work to a specialist piece of hardware results in better user performance.
The concept goes far further back - all processor architectures have DMA capabilities, taking the CPU out of the equation when moving large blocks of data around. Offload cards DMA the data into themselves; a CPU-based linear read is going to be slower from the outset.
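A rough software-only analogy of the offload argument, assuming nothing about any vendor's hardware: compare a dedicated, optimised implementation (zlib's C `crc32`) against doing the same arithmetic in the main loop. The naive CRC below is the standard bitwise CRC-32, here purely for illustration:

```python
# Analogy to hardware offload: a dedicated implementation (zlib's C crc32)
# versus grinding through the same arithmetic on the general-purpose path.
import timeit
import zlib

data = b"x" * 100_000

def crc32_pure(buf):
    """Naive bitwise CRC-32 (poly 0xEDB88320) - the 'do it in the loop' path."""
    crc = 0xFFFFFFFF
    for byte in buf:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Both paths compute the same answer...
assert crc32_pure(b"hello") == zlib.crc32(b"hello")

# ...but the dedicated path is orders of magnitude faster.
t_pure = timeit.timeit(lambda: crc32_pure(data), number=1)
t_zlib = timeit.timeit(lambda: zlib.crc32(data), number=1)
```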
So, when I see a vendor pushing their product and saying it's as fast as / faster than offload cards, I smell BS. Pulling storage blocks from other nodes across the wire without dedicated storage connectivity means congestion for other user-facing IO, so performance can't be better than when that traffic is not there - yet we're told it's fine, nothing to see here. Then there's the lack of public benchmarks - now why would vendors do that unless they had something to hide ?
Google for it.. plenty of articles out there..
Then wonder why the sales people never specify clusters larger than 8 in any designs.
Then look at the minimum configuration of replication factor RF3, but the recommendation of RF5, so new clusters need 5 nodes minimum for acceptable resiliency.
So, the sweet spot is 5-8 nodes per cluster, which is quite a cost step when you need to increase capacity. Anything less than that and you accept failure modes that will adversely affect resiliency; anything more than 8 and you need to roll a whole new cluster again.
...analysts grilled the HCI vendor over inadequate marketing spend and sales hires.
Just because analysts and sales droids want to sell things doesn't mean that people want to buy things.
Too much voodoo, too little use of hardware offload and too narrow a hardware compatibility list to ever make it onto any of my proposals.
Oh, and be open about that 8-node limit that we all know you have but won't talk about. Let people do performance testing of your platform rather than restricting their ability to do so - unless of course you have something to hide.
Anderson was ordered to pay a total of $12,804 to cover the costs of getting the two government websites patched and back online.
Er, isn't half the problem that their sysadmins should have been doing that patching anyway, and that's part of the reason they had a problem in the first place !!
Doesn't this send the wrong message to lazy companies: that they can still bill for doing a rubbish job in the first place ?
How can any MS cloud sales droid seriously expect customers to believe that their blue cloud thing is resilient if they can't even make their own platforms work reliably on it ? It's not like this is the first time something has gone down on it, after all.
I’m running out of fingers and toes counting all the “blue sky of death” events - where there is not a cloud to be seen.
How about this workaround to get the business back on its feet again:
1. Change the date back on the system to a point several years ago.
2. Reboot the server / restart the application. The licensed product will probably come up OK.
3. Change the date back to the correct value.
4. Talk to vendor to get a longer term fix.
5. Talk to management about resolving the issue permanently.
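For illustration only, here is the kind of naive expiry check the workaround above exploits. The function name and dates are made up, not any real vendor's licensing code:

```python
# Sketch of a naive licence expiry check that trusts the system clock.
# The names and dates here are hypothetical, for illustration only.
from datetime import date

def licence_valid(today, expiry):
    """A check that compares the (trusted!) system date against expiry."""
    return today <= expiry

expiry = date(2018, 12, 31)

# With the real date the check fails and the product refuses to start...
assert not licence_valid(date(2019, 6, 1), expiry)

# ...but roll the system clock back (step 1 above) and it passes again.
assert licence_valid(date(2017, 6, 1), expiry)
```

This is exactly why it is only a stopgap: the clock has to go back to the correct value (step 3), and the real fix has to come from the vendor (steps 4 and 5).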
Given that "user friendly" can mean so much to so many people, can you be a bit more precise on what you don't like on Linux and when was the last time you tried it ?
Consider that there are several user interfaces you can choose between and different apps that provide similar functionality if you find there is one you specifically don't like.
Only 4 people complained.
Well, that kinda missed the point didn’t it.
Most are probably non-technical and wouldn't know how to report things or understand what this means; then there are the hacking types - well, they aren't going to look a gift horse in the mouth and start complaining, are they ?
Oh, and there is the tiny little issue that they overlooked - THIS SHOULDN'T HAVE F'KIN HAPPENED IN THE FIRST PLACE. Have fun explaining that when you submit the paperwork for the GDPR breach. Personal information is personal information, after all.
There's still one very useful .com file that still fits in far less than 64K...
Stick the following into notepad and save it as eicar.com
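For reference, the standard 68-character EICAR test string, as published at eicar.org, is:

```
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
```

(Your AV may well eat it the moment you save the file, which is of course the point.)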
Your AV software should detect this standard test sequence and destroy it. If it just displays the message, which is readable in the string itself, then it's time to go and buy yourself some better AV software.
For those not familiar with the above, it's a benign test sequence used to check that AV software works, without anything malicious being involved; it's available from EICAR.ORG. What's clever about it is that it's an executable piece of code encoded in such a manner as to consist solely of printable ASCII characters, which means you can cut-and-paste it without needing any advanced tools or the ability to download executables from the Internet - which most corporate platforms will prohibit.
Yet another MS app that is not being used by businesses, so they have to hook it up to the industry-leading platforms so that they can show some business use.
If only they had listened to customers instead of know-nothing internal experts, then the downhill spiral could have been avoided.
You have to wonder if anyone inside MS has put 2+2 together and worked out that the future is not Microsoft shaped.
But surely the DoD should have two providers for the days when AWS goes tits-up.
That's the reason that all the main cloud providers have multiple regions and multiple availability zones, so that a failure in one area can be worked around. Nothing new here, multiple sites that are physically separate and logically linked has been the norm for a very long time even for on-prem data centres.
Going multi-cloud means you can't take advantage of the specifics of any one cloud vendor, but have to settle for lowest-common-denominator functionality, which rather defeats the purpose of having access to a vendor's new sparkly technologies and the benefits they can bring.
Security, yep, we’ve heard of that.
Good job nobody relies on them for the security and privacy of their data.
Are we still expected to trust them on the separation between customers in their cloudy thing when they can’t even secure their internal systems ?
I wonder how many other systems are poorly configured and yet to be discovered.
PC sales are down a lot (mostly due to Win 10 being a bag of nails that tries to advertise to you and sell your soul). Most have moved to tablets and mobile phones, Macs and Linux machines. Your Mac figure is clearly wrong - get on a train and look at what people use. Go to a university or college and watch what students have. Go and work in a corporate with a BYOD policy and count the number of Macs vs Windows laptops and Linux machines used.
Most enterprises are delivering a lot more on Linux servers rather than Windows-based platforms, but many are stuck (for the moment) on Windows PCs due to the software legacy. Virtually all modern apps are web based, though, so the dependency on Windows and the Windows desktop is being eroded all the time - as people (including those with very large corporate budgets) are fed up with it.
In regard to reliability, it depends on what you are measuring. Focusing on the topic here, which is activation: I can't recall a single event where any Android, Mac or Linux machine decided to deactivate licences on a global scale. Windows has chalked up two now - one last year for Windows 10, one this year for Windows 7.
Broadening it out to reliability in general, Windows has had an atrocious record in recent years compared to the competition. Virtually every week there is a major problem, or something that worked yesterday is borked by the forced update - with an ETA for a fix several weeks or months out.
I've not seen anything like that on Mac, Android or Linux, mostly because they let the user decide when to update. Apple is not perfect, but even they test things to minimise impacts and react fairly quickly when things go wrong.
... And they wonder why people are moving away from Windows where they can - you simply can't rely on Windows to work reliably any more.
One or two errors could be put down to bad luck, but the more often it happens and the bigger the impact of the problems, the more it looks like incompetence.
Reminds me of that "Where do you want to go today?" strap-line that got the response "Safely back to where I started this morning" - then for some reason MS stopped using it.
If only they would test things better, then they may still have feet connected to those stumps at the bottom of their legs.
"Turning off our legacy technologies is a critical to ensure we are investing for the future, reducing our energy costs and making sure our customers are able to take advantage of the latest broadband services."
So, exactly how much of the C&W network is still a separate network within Vodafone, given that you purchased them 7 years ago ?
Last time I was talking to people, they were both very separate and it sat in the "too complicated to move" box. I think it was just the badge over the door that had been replaced, and not a lot more.
If you make an API available for external use, then you should expect it to be used. If not, then secure it appropriately to prevent its use in other manners. This is not misuse !!
Note to sales and engineering teams. If you make a technology that people find useful, don't be surprised when they use it and tell others what it can do. This will in turn lead to additional sales and good reviews.
Conversely, if you make something cool and lock it down to make it unusable, then don't be surprised if people shun your product in favour of the others that do it better and your sales pile into the ground.
Don't forget that APIs are there to allow people to extend products in ways that make them better, where the OEM decided it was too much trouble to make the product work properly in the first place.
The other one-liner is that you reap what you sow.
The point is that you are less likely to need to change direction completely, and for no apparent benefit to the customer.
If the new-fangled way is better, why can’t they migrate the existing configuration automatically so that it doesn’t impact customers ?
We used to call this backwards compatibility; it wins big brownie points with management when we need to change things.
Has it got a self righting mechanism ?
Can't see how it would recover well when it falls over, if for example someone were to give it a shove.
What about all the normal obstacles that people have to deal with - gates, steps, ramps, muddy paths, etc.
It also looks like the sort of thing that dogs would really like to claim as theirs with a little squirt.
Having speed is fine, but it means nothing without reliability or coverage to match.
When travelling on the trains, you have to pre-plan what you are doing before getting to certain places on the trip - both for voice and data services. 4G is nice one minute, then 3G, EDGE and then nothing...
App vendors need to better consider the unreliability of coverage in their apps. Why, for example, can't the YouTube app download locally so that I can watch later, rather than having to be on-line all the time ??
Why does Spotify decide that if I open the app after 30 days of not using it on that device, and I happen to be in a not-spot at that point, it should drop ALL the locally cached music ? Why not keep the cached content and just not let me access it ? It's the same effect at that point, but far easier to recover from once I'm connected again, as I don't have to waste loads of my monthly quota re-obtaining what I already had.
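The gentler policy argued for here is a one-branch change. A minimal sketch with made-up names (nothing here is Spotify's actual code): when the periodic re-check can't run because the device is offline, lock the cache instead of purging it.

```python
# Hypothetical cache policy sketch - names and structure are invented.
def on_app_open(cache, online, licence_ok):
    """Decide what to do with locally cached content at app start."""
    if online and licence_ok:
        cache["locked"] = False     # normal case: full access
    elif online and not licence_ok:
        cache["tracks"].clear()     # genuinely revoked: purge the content
    else:
        cache["locked"] = True      # offline: keep the bytes, deny playback

cache = {"tracks": ["a", "b"], "locked": False}

# Open the app in a not-spot: nothing is deleted, playback is just locked,
# so reconnecting later costs no re-download quota.
on_app_open(cache, online=False, licence_ok=False)
assert cache["tracks"] == ["a", "b"] and cache["locked"]
```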
The other concern is that the mobile networks appear to take bandwidth from yesterday's technology to make space for the new ones - which is fine in principle, up to the point where it makes an existing on-contract device slower or less reliable than it was when you bought it. Presumably this is done to simplify things for the network provider and to try to force people onto newer devices.
Backwards compatibility matters more in my book than the latest whizz-bang technology that nobody has yet, since its price is loaded by the manufacturers. I'll generally only buy it when it's at a sensible price, so if you want earlier adoption and customers moving over faster, then don't price-load the technology. We've already been paying monthly charges for a service, so fund it from that instead.
Joe Belfiore, corporate veep of Windows, announced the plan, "Ultimately, we want to make the web experience better for many different audiences," he said.
One of those audiences may be macOS users, who despite not clamoring for Edge should have access to Microsoft's browser at some point: Belfiore said the company expects to bring Edge to other platforms like macOS.
Translation - our browser is shit, but we have plans to push it onto other platforms anyhow.
So, how about you fix your own platform before you try and screw up other ones ?
For clarity, I mean the whole platform, not just the browser. Look objectively at the browser, Skype and Windows 10, to name but three.
Don’t forget that all OSes already have a choice of good and reliable browsers; it’s just that none of them have Microsoft logos on them. Do you really expect that people will want to pollute their already-working platform with a runt-of-the-litter browser from Microsoft ?
Then I see the author name at MS and realise that they have been spouting crap for years.
That's what I was trying to picture in a vending machine.
Put another way, the Hog Roast-O-Matic, just with a vending machine function too that dispenses the goodness after it's all done.
Clearly I failed in painting the picture, or possibly all your brains decided to picture reference material from real world experience, rather than something promised from a product that doesn't exist yet. As you can probably tell, I don't work in marketing, but can see an engineering opportunity.
Well, I suppose V1.0 was the pork scratching vending machine, and this innovation is indeed a step in the right direction. However, now it's time to upgrade their great concept to something far, far better - the hog roast vending machine.
Imagine that: you see the spit and get the smell for hours beforehand, then at the correct time, pure porcine perfection with taste, fat and none of that "garnish" that people insist on putting on the plate.
The second option is an on-demand, just-cooked proper bacon and egg roll with thick English bacon, fresh eggs and your choice of soft or crusty roll and brown or tomato sauce, oh and a napkin to catch all that juicy and tasty fat.
These innovations would be an absolute money-spinner, and both get very close to proving that from a sow's ear you can indeed make something of value, even if it's only pork scratchings.
I suddenly feel the need to go for a pint at the local pub for the appropriate accompaniment.
No, firewalls perform the same function on IPv4 and IPv6. If you have an insecure network design, then that has nothing to do with the protocol version.
If you are implying some security benefit of, say, NAT to protect things, then refer to all the public guidance that NAT is not a security technology. It never has been and it never will be.
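As a concrete illustration, a single nftables ruleset in the `inet` family applies the same stateful policy to IPv4 and IPv6 alike - a minimal sketch, not a complete firewall config:

```
# /etc/nftables.conf - sketch only; the inet family covers v4 and v6 together
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept   # stateful filtering, both protocols
    iif "lo" accept                       # always allow loopback
  }
}
```

The security comes from the stateful drop-by-default policy, not from address translation.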
Biting the hand that feeds IT © 1998–2019