* Posts by Trevor_Pott

6991 publicly visible posts • joined 31 May 2010

Brit teen who unleashed 'biggest ever distributed denial-of-service blast' walks free from court

Trevor_Pott Gold badge

Re: @Ledswinger Serious.

"If he is receiving leniency because of mental illness then the public has a right to know the details"

Not if he was a minor while the crime was committed.

PLUTO SPACE WHALE starts to give up its secrets

Trevor_Pott Gold badge

Re: Call me simple

"It seems to me that the current classification system for planets does not give enough detail as to what a planet is and would benefit from the addition of information about size and makeup at the very least."

This. 10,000 times this. There are some basic classifications that are informally used.

Brown Dwarf Star: not actually a star, this is a really large gas giant that outputs substantial heat but does not sustain fusion of hydrogen. Brown dwarfs can be extremely heavy and may well undergo hydrogen fusion in smallish amounts (in addition to the massive amounts of fission they are undergoing) but never quite "ignite" into burning balls of plasma in the sky. Can range from a radius similar to or smaller than a superjovian to nearly the size of a red dwarf. May actually have a habitable zone, but many questions about radiation belts and magnetospherics remain.

Superjovian: large enough to output minor radiation and potentially notable heat to its inner moons, but not large enough to be classified as a "brown dwarf star". Heat is primarily fission-based, magnetodynamic (depending on stellar flux), gravitational, and left over from its accretion. Can be absolutely huge planets, but they don't get much bigger than Jupiter in radius even as they approach 80 Jupiters in mass. Gravitational interaction with most moons will cause internal heating.

Jovian: Gas giants. They range from large but "fluffy" and not very dense planets (like Saturn) to several times Jupiter's mass. They do not emit any noticeable radiation or heat past their Roche limit; however, they are not massive enough for their magnetosphere to deflect stellar radiation completely outside their likely moon orbits. Thus most moons will pass through a belt of deadly radiation similar to the Van Allen belts here at Earth. Gravitational interaction with larger moons may cause internal heating of moons.

Neptunian: These planets have a Roche limit that is far enough away from the planet that many planets this size will be capable of supporting ring systems. Planets are large enough to capture smaller terrestrial planets as moons. Gravity is high enough to hang on to hydrogen in the atmosphere. Expect ammonia in the atmosphere as well as water.

Superterrestrial: Planets larger than Earth but unable to hang on to hydrogen unless it is bound into a heavier molecule like water or ammonia. May have ring systems but highly unlikely. Unlikely to have captured moons. Can potentially be habitable. Very thin atmosphere compared to Neptunians.

Terrestrial: Planets just large enough to hang on to an atmosphere. As with all larger planets, silicates and metals form the core/mantle/crust. Very thin atmosphere. Marginally habitable in that they are continually bleeding away lighter gases into space. These planets require life to continually recycle molecules in order to have an atmosphere containing the lighter molecules that are critical to complex life.

*****Special case terrestrial: Metal Planet. Metal Planets can form very near a star where metals are hyper-concentrated in the core of a large gas giant as part of the regular formation process. The gas giant's atmosphere is then blown off (typically by the sun expanding to engulf the planet as a red giant for a billion years or so) and the metallic core is left behind, typically orbiting a dwarf star.

Subterrestrial: rocky planets that are too small to hold on to much of an atmosphere at all. Likely to cool after only a few billion years and not have much of a magnetosphere. May briefly be habitable. May briefly sustain a hydrosphere.

Dwarf: smaller than a subterrestrial. Not likely to ever have a hydrosphere. Not likely to ever have much of an atmosphere. (Minor outgassings and capturing of an inch or two of solar wind aside.) Dwarfs are separated from subterrestrials mostly because of density. They may be nearly as large as a subterrestrial, but they contain a lot more ices. Some dwarf planets at the fringes of a system may be mostly or all ices.

The big issue is drawing firm lines between the classifications. While informally in use by many astronomers, formal definitions require drawing arbitrary lines, and this causes much consternation and debate.
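To see just how arbitrary those lines get, here's a toy sketch of what a formalized classifier might look like. Every threshold in it is invented for illustration (only the ~13 Jupiter-mass deuterium-burning figure is commonly cited), and every single one of them would be argued over:

    # Hypothetical classifier. Every cutoff below is an invented, arbitrary
    # line - which is precisely the problem with formalizing the scheme.
    EARTH_MASS_KG = 5.97e24
    JUPITER_MASS_KG = 1.898e27

    def classify(mass_kg, mostly_ice=False):
        m_earth = mass_kg / EARTH_MASS_KG
        m_jup = mass_kg / JUPITER_MASS_KG
        if m_jup >= 13:     # commonly cited deuterium-burning limit
            return "brown dwarf"
        if m_jup >= 2:      # invented cutoff
            return "superjovian"
        if m_earth >= 50:   # invented cutoff
            return "jovian"
        if m_earth >= 10:   # invented cutoff
            return "neptunian"
        if m_earth >= 2:    # invented cutoff
            return "superterrestrial"
        if m_earth >= 0.5:  # invented cutoff
            return "terrestrial"
        if m_earth >= 0.05 and not mostly_ice:  # density splits dwarfs out
            return "subterrestrial"
        return "dwarf"

    print(classify(5.97e24))                  # Earth -> terrestrial
    print(classify(1.3e22, mostly_ice=True))  # Pluto-ish -> dwarf

Move any of those cutoffs and Neptune, Saturn or half the Kuiper belt changes category. That's the debate in a nutshell.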

Plus ça change...

Trevor_Pott Gold badge

Re: Call me simple

Vesta, Ceres, Orcus, Quaoar, Eris and Makemake are, off the top of my head, round. Eris is bigger than Pluto. Sedna is suspected to be round. Haumea is an oval. Eris, Haumea, Quaoar and Orcus - at a minimum - all have moons.

Do we have 8 planets and many dwarfs? Or do we have dozens of planets? With 8 planets + dwarfs we accept that "planet" comes in gradations and we start to be able to reasonably classify them in a useful fashion.

With "there are dozens of planets" we're just lumping anything that happens to be round and orbiting a star - as opposed to another planet - as a planet.

For that matter, Pluto isn't a planet even by the "forget the orbital path clearing" criteria. Vesta, Ceres, Orcus, Quaoar, Haumea, Eris and Makemake all are, but Pluto isn't. It's a double planet. Or a double moon, depending on how you want to look at it.

Pluto and Charon both rotate around a common barycenter that is outside of Pluto proper. Pluto is Charon's moon. Charon is Pluto's moon. Add in Styx, Nix, Kerberos and Hydra and it's not really so much "planet" or "double moon" as "pile of rubble that somehow hasn't all collapsed in on itself yet".
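Don't take my word on the barycenter, by the way; it's a one-liner to check with rough published figures (the numbers below are approximate):

    # Rough check that the Pluto-Charon barycenter lies outside Pluto.
    # All figures are approximate published values.
    m_pluto = 1.30e22    # kg
    m_charon = 1.59e21   # kg
    separation = 19600   # km, centre-to-centre
    r_pluto = 1188       # km, Pluto's radius

    # Distance from Pluto's centre to the two-body barycenter:
    d = separation * m_charon / (m_pluto + m_charon)
    print(d > r_pluto, round(d))   # True, ~2100 km - well outside Pluto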

Now let's have a conversation about Luna. And the Galilean moons. And Titan. And Triton. These are all worth considering as "planet-sized" bodies in some definition or another. Titan has a denser atmosphere than Earth. Triton is almost certainly a "captured planet". Hell, Saturn alone has 7 moons in hydrostatic equilibrium!

Being round does not make a ball of space debris special.

Anywho. The solar system, eh? Endless wonder. Endless debates to have about categorization and classification.

I cannae dae it, cap'n! Why I had to quit the madness of frontline IT

Trevor_Pott Gold badge

Re: That includes the firmware.

Except that in the scenario discussed the items are coming from the factory to you for internal use. You can then reload clean firmware (actually, every end user should be doing this for every product anyways) and run your test suite on the product before you deploy.

Since you own the company that designed it and employ the devs that code it and the multiple teams that develop the testing suites there should be no way for tinkering to go undetected.

Trevor_Pott Gold badge

Re: That includes the firmware.

So you get someone in the factory. And? Precisely what do you think they will do there that can't be tested for elsewhere?

Trevor_Pott Gold badge

I have recently been told Audi may have been the better analogy and been given a host of reasons why. I am investigating.

Trevor_Pott Gold badge

Re: @ Trevor

"I meant 'bomproof' figuratively. With the sort of distributed supplier chain you're describing, compromising it would be relatively easy, if the assailant also had a decent budget. It'd be another "You have to be lucky all the time; whereas the other team would only have to be lucky once" scenarios. Depends greatly on what sort of product, who the customers are and what useful information is passing through."

Interesting viewpoint. I'd love to buy you a beer and go back and forth about it. I think there's merit in both approaches. There are just two different things that need defending against: 1) external actors attempting to "poison" the supply chain and 2) loss of one or more elements of the supply chain. (I'd argue that a "poisoned" supply chain component can be treated the same as one that is destroyed or embargoed.)

So what is the optimal distributedness versus risk solution? I could spend quite a bit of time in front of a whiteboard having fun with that calculation. :)
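For flavour, the toy version of that whiteboard exercise looks something like this - every number in it is a made-up placeholder, and filling them in properly is the actual fun part:

    # Toy model: more suppliers cost more, but shrink the odds that the
    # whole chain is poisoned/lost at once. All numbers are placeholders.
    def expected_cost(n_suppliers, p_fail=0.05, overhead=10_000,
                      breach_cost=5_000_000):
        # Treat a poisoned supplier like a destroyed one: that source is gone.
        # The chain only fails outright if every source fails together.
        p_total_failure = p_fail ** n_suppliers
        return n_suppliers * overhead + p_total_failure * breach_cost

    for n in range(1, 6):
        print(n, round(expected_cost(n)))
    # Expected cost plummets, then per-supplier overhead dominates; the
    # crossover is your "optimal distributedness" point.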

Trevor_Pott Gold badge

"You are absolutely sure that the VHDL compiler they used to cook up the masks for all the chippery or the microcode for your CPU is not compromised?"

No. Which is why I talked about ensuring that you write firmware that presumes you can't know this and tries to compensate. By doing that you have done everything humanly possible and can stand up in front of a judge and say so.

"hat Japanese factory making the tantalum's didn't place a transmitter inside?"

Yes. Yes, this one I can think of at least three ways to verify conclusively.

"You have to assume that the operation is compromised and then work out what the consequences are and how to mitigate this."

Which is exactly what I said. But you also work to minimize the number of different points of compromise so that you have fewer potential holes through which nasties can get you. Security is a comprehensive affair that should be done in depth.

Trevor_Pott Gold badge

Re: Nothing to see here move on

"Secondly, Potty actually says he likes to think he can make a system secure, but goes on to state later that he can't. "

I never said I could make a system 100% secure. I said I could build the best network out there, given enough resources. Nowhere does that mean that the network will be 100% secure. It means that it will be as secure as is possible and that it will be able to cope with the inevitable security breach.

Learn to read.

Trevor_Pott Gold badge

Re: IT Sales Problem

"In order to be good at IT you must first master business, sales, marketing, procurment, etc. etc. BTW, the pay is shit, you get no respect and people on the internet will tell you you're a failure if you aren't capable of doing the job of 15 departments full of people for less than the average mortgage payment with a smile on your face."

Thanks for reminding me why I quit, Justin.

Trevor_Pott Gold badge

Re: @ Trevor

"I'd guess at automated probes back, followed possibly by throwing a few exploits at whatever you detected"

Nope. Nope nope nope. That is illegal on so many different levels I just...that's a great big huge bucket of nope.

"Incidentally, I've been hatching evil plans to get at your bombproof "networked product" factory ever since you mentioned the concept; favourite so far is build the nastyware into the case."

Why bombproof? Hardware is cheap and cheerful. Source from multiple suppliers. Write your own firmware. Have it checked by teams in different countries and foster some spirit of competition between them.

Expect that people will try to compromise your hardware (building nasties into the CPU/ASIC?) and try your hardest to write stuff that will detect it. Shut down if required, work around if possible. Have network gear that doesn't trust what's attached; always look for suspect traffic, etc.

No single point of failure. Not even in your supply chain. Someone bombs your motherboard factory? That's why you source from multiple places! Etc.

Trevor_Pott Gold badge

Re: @ Trevor

The stuff I've seen is a little bit more advanced than that. Some of it's in beta with stealth companies. Some of it is just tip-top-secret squirrel because it actually works for now and the current "big fish" customers pay the startups a truly obscene amount of money to ensure that only a very limited number of people have access to the technology. (I.E. not their competitors.)

Security products and services are a disgusting business. Companies aren't willing to spend a lot to defend themselves, but holy crap will they spend money to thwart their rivals.

Trevor_Pott Gold badge

Prisoner's dilemma.

Trevor_Pott Gold badge

Re: Wearing my rubber-soled shoes

"That network you've got protecting you... surrounding you... do you really think it'll stand up against the big gangs? We are everywhere living in the worst parts of the worst big city. Be ready to run like hell. Individually if need be."

Absolutely not. But that's why proper security is about far more than prevention. You need the following covered, at a minimum:

1) Prevention

2) Detection

3) Monitoring

4) Mitigation*

5) Incident response**

6) Penetration testing

7) Randomization

*Compartmentalisation/isolation/segmentation of data so that no one breach can pwn your whole network or all relevant data.

** You will be pwned. Accept this. Have plans of action to deal with it.

You will eventually have a security breach. Make that reality part of your security plans. Too many IT "professionals" think that security stops at "prevention". There's a hell of a lot more to security than patching and firewalls!
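For a taste of what the detection piece looks like at its absolute smallest, here's a toy sketch - the log path, pattern and threshold are all placeholder assumptions, and a real deployment is vastly more involved:

    # Toy detection sketch: flag repeated failed SSH logins per source.
    # Log path, regex and threshold are placeholder assumptions.
    import re
    from collections import Counter

    failures = Counter()
    with open("/var/log/auth.log") as log:   # placeholder path
        for line in log:
            m = re.search(r"Failed password .* from (\S+)", line)
            if m:
                failures[m.group(1)] += 1

    for src, count in failures.items():
        if count > 20:                       # arbitrary threshold
            # This is where incident response kicks in: alert, isolate,
            # investigate. Detection without response is just logging.
            print("possible brute force from", src, ":", count, "failures")

And that's one sliver of item 2 on the list. Multiply accordingly.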

Trevor_Pott Gold badge

Re: @ Trevor

Read the article. Nowhere did I say I would solve every security need.

I merely said that I could build the best network that has ever been built, if the resources were provided. That includes counters for every known security problem, policies/procedures that limit new problems for occurring, incident response plans to mitigate damage when breaches do occur and resolution plans to deal with breaches once they have occurred.

Now, bad code, state actors slipping things into hard drives/switches/etc...these are all easy to solve. Expensive, yes, but these are known issues that can be worked around. Automated testing can be built to look for them. Mitigation programs designed to handle them. If you know about an attack vector you can plan for it, assuming the resources are there to do so.

This includes social engineering. It even includes some things I can't talk about related to automated incident response, because I'm under NDA with several companies developing next generation technologies.

Suffice it to say that yes, security is actually not that hard. It's spectacularly expensive, and the experts required to implement the things you need to be properly secure are in high demand, but it's all perfectly doable.

That's the problem. It is doable. Worse: I know how it's doable. I can detail for you every single corner cut, every compromise made, every bent copper clawed back in exchange for deepening the risk pool.

You can't guard against what you don't know, but you absolutely can put in place mitigation and response, compartmentalization and...and...and...FUCK IT. ENOUGH! I'm not going down this goddamned rabbit hole one more time.

Look, companies aren't willing to pay money to secure themselves. Sony wasn't. The US Government wasn't willing to. Many health care providers aren't willing to. Over and over and over and over, up and down the whole damned list.

Every week I have sysadmins from the largest companies on earth telling me very blunt, honest tales about how they have raised flags about things they KNOW are issues, but which management utterly refuses to address. They want me to write about these things in The Register, but somehow keep them completely isolated so that nobody can trace the leak of info back to them.

Government malpractice? Pick a fucking country! SMBs? Cloud providers? SaaS providers? Startups? You name a segment, I'll tell you tales of cut corners that will make your blood run ice cold. Corners they know they are cutting, but take the risk to cut anyways because they delude themselves into thinking that the risk of an incident is low.

Christ man, you read about these things here in The Register every single week! It's now gotten to the point that most of us just tune it out because the frequency and scope of the digital apathy and ignorance is so astoundingly staggering that we, as practitioners of the art, can do nothing but weep.

Then we go to work and pretend that same restrictive penny-pinching bullshit approach to everything is somehow not leaving our precious networks vulnerable. Or we fellate marketing (and ourselves) with the trumped up idea that by using public cloud computing we will somehow offload all risk and responsibility to a third party provider, without, of course, reading the EULA, which very explicitly is Nelson Muntz saying "ha ha" with both middle fingers in the air.

It is not naive to think that with the right resources a competent administrator can build the best network on earth. Not impenetrable, but damned close: well monitored, segmented, compartmentalized, isolated and with incident response for when it is inevitably compromised.

What is naive is thinking that anyone will ever be given even a fraction of the resources required to do so, or that any of us are even remotely secure unless and until we do.

And who takes the blame when the hammer falls? When you don't have the incident response you should have? When you are pwned by a known vulnerability, or you didn't have the latest security measures due to budget cuts? Your boss? Accounting? The shareholders?

Nunh uh.

You. The systems administrator. Every single person reading this comment lacks the resources to secure their networks enough to be able to stand in front of a judge and say "I did everything I could, your honour". The best that they can hope for is to document each and every incidence of resources being denied, log strenuous objections and keep paper copies of it all locked away in case they end up in front of that judge.

And if you don't? You just leave room for the attorneys of your employer to blame you. You should have known. That's your job. By not objecting you either didn't know - and are thus incompetent - or you knew and didn't object, and thus committed malpractice. Either way, it's your fault.

But no, sir. Nobody is willing to pay "big $$$ to secure themselves". That's the whole goddamned problem right there.

Trevor_Pott Gold badge

Re: Been there done that - To some extent money and time are the easy bits ............

It..but...

...goddamn it...

Trevor_Pott Gold badge

"Yeah but how can you be *sure* that those companies haven't been got at?"

By owning the companies. If you own the companies you own the code. Do external code audits...like you should be doing with all the code you own anyways. Never trust anyone. Not even yourself. Everything and everyone is a potential point of failure. Build as many checks and balances as you can with the resources you have. Then try to get more resources.

Trevor_Pott Gold badge

Re: I also agree, but...

There are three stages here. 1) Learning the truth. 2) Accepting the truth. 3) Being in a position to do something about it. It's only in the past few years that I started to get in a position to exit, and doing so without screwing over some good people in the process took time to orchestrate. I'll not dwell on how long it took me to go from "learning the truth" to "accepting the truth" because that's more than a little depressing.

Trevor_Pott Gold badge

"A state level attacker who has the capacity to subvert the firmware on hard disks, routers and the like in transit, if not before they leave the factory."

If that is sincerely your concern then addressing it requires controlling those elements of the supply chain yourself: either by having the ability to write your own firmware/replace the operating system on your routers, or by buying a firm that makes them from the ground up and rolling your own from scratch.

I never said it would be cheap. I said I could do it. And you know what? There are plenty of companies out there who make their own routers and an ever increasing number that make their own flash drives/flash arrays. That includes the firmware. So yeah, it's doable.

OCP supporters hit back over testing claims – but there's dissent in the ranks

Trevor_Pott Gold badge

Re: Cole is Delusional

"Feel free to call me delusional but I can't imagine any company the size of Facebook not testing the gear that goes into their data center"

What, what whaaaaaaaaaaaaaaaaaaaaaaaaaaaaaat? You are expecting each company that buys OCP hardware to do their own tier 1 class testing? What?

How does that make sense at all? OCP is about driving down cost. Testing is a cost that should be centralized so that it doesn't have to be replicated.

More to the point: lots of companies that aren't the size of Facebook have come to rely on OCP gear. What are you doing to the open compute project? Why are you doing it?

Why the USS NetApp is a doomed ship

Trevor_Pott Gold badge

Re: Baby, I'm just gonna shake, shake, shake, shake, shake I shake it off, I shake it off

Mainframes are still around. Should that be NetApp's ambition? To become the mainframe (circa 2015) of storage?

Trevor_Pott Gold badge

Re: where will other vendors be?

Yes.

Maybe it wouldn't be so bad if they were innovating beyond their flagship much, but they aren't. And they really push any consideration of anything else aside. That is very much a huge, huge, huge problem.

Next?

Trevor_Pott Gold badge

Re: New technology - jets

Don't apologize, I like learning! Keep up the great comments, please!

Trevor_Pott Gold badge

Re: Nothing special about it??

Nothing special about it. NetApp was cool 5 years ago. In case you hadn't noticed, the storage world exploded in that time. There's a lot of options that can take NetApp for one hell of a ride, thanks.

"An enterprise storage product that is mature in serving all major protocols"

*Yawn*

"(and FYI, retrofitting enterprise NAS on other systems is insanely hard, which is why nobody's done it)."

Funny, others seem to be doing just fine at it. Maybe your assertions are based on dated knowledge.

"With no downtime for pretty much any operation, including things that would mean major downtime for other systems."

Some other systems. Old systems. Shitty low end SMB systems. Some legacy stuff from other fossils that haven't grown up in the past five years. But there's a hell of a lot out there that is perfectly okay with doing all sorts of major changes without downtime. *shrug*

"With the best application integration tools on the planet."

Let's agree to disagree.

"The best heterogeneous fabric management system in the world (OCI)."

I'm neutral on this. More testing required. Mind you, a storage fabric isn't the only way to do scale out, scale up storage. At least, not fabric in the same way NetApp goes about it. But I'm willing to be convinced on this point.

"Amazing automation (WFA)."

*shrug* Automation is increasingly table stakes for enterprise storage. Enough do it that it's really not making me excited.

"Great performance."

Granted. But almost everyone's performance got way the hell past "good enough" some time ago. We're well past worrying about that. Now it's about driving down latency, heaping on the features, building for failure and driving down cost.

"Insane scalability."

Define "insane scalability". Almost everyone has scalability. Is it rational to think about going to hyperscale with NetApp? The cost of that alone turns most people pale. NetApp is hugely, hugely, hugely expensive and thus is left out of conversations where "insane scale" would be a serious consideration.

NetApp is a great tier 0 storage platform, especially for legacy workloads. If I had a workload that I knew wasn't modern and designed for failure, that needed life-or-death class reliability, and that I needed to know I could scale up to a moderate-sized array, NetApp would be in my top five companies, no question.

But the number of these tier 0 workloads out there is diminishing. New applications aren't written as legacy Win32-style monolithic single points of failure. Scale is increasingly cognate with price sensitivity in today's storage market, and so I am not sure exactly how scalability helps the NetApp marketing pitch. It just focuses the mind on what that much NetApp would cost and immediately starts one thinking about alternatives.

"Technology that literally keeps the lights on (part of the control chain of many power distribution systems)."

"Or deployed in life or death situations. By the most paranoid organizations in the world."

See above: great tier 0 storage for legacy workloads. No question. That's why it's still relevant for this refresh cycle. But it isn't the only storage capable of performing here...and increasingly competitors are earning their stripes as being reliable on this level. Even as virtually everyone is moving away from the requirement for this kind of storage.

Is that what NetApp wants to be? The mainframe of storage? Because that's what I was warning about in my article: that slow fade to niche, maybe, ultimately, complete irrelevance.

NetApp's reliability is a huge plus...but it's not enough to make it more than a point consideration for specific niche workloads.

"That's the storage foundation behind the largest companies in the world."

Actually, the storage foundation behind the largest companies in the world is either OCP-style home-rolled storage (Google, Facebook, etc) or EMC. NetApp has a few wins as primary, but mostly NetApp serves the purpose of being a credible second-string provider used to threaten EMC during negotiations and you know that.

The damning thing is that many startups are eating your lunch when it comes to playing "beat EMC over the head and drive down margins". See: EMC versus Pure Storage as one example.

"That's nothing special?"

Not overly, no.

"I'd love to see what you consider special. Must really be something."

Go to VMworld. It really should be renamed "storage world". You'll see all sorts of lovely things there. Marvelous and entertaining things. Storage to wow and amaze. Storage to run screaming from. Storage and compute that works together. Storage, compute and networking and automation and orchestration and NFV and hybrid extensions and more that work together all in one SKU.

Maybe you should buy one or two. Incorporate their fresh thinking into your company after you buy them, instead of driving them out. Maybe you can bring to their products the level of QA and testing that makes your products so stable and reliable while gaining a much needed DNA infusion.

What I find impressive doesn't exist yet. I'd link you to a much newer, more comprehensive article but sadly, that one hasn't been published yet. (Written, but not out the door, quite yet.)

I'm rarely impressed by the past. Having the ultimate solution to fighting the last war is as irrelevant as the development of the F-22 Raptor. Just who exactly did the US expect they would be fighting with those things, hmm? That cold war wet dream was sure useless in clearing out a bunch of entrenched resistance fighters from hastily constructed bunkers in the middle of the desert.

That is how I see NetApp. Not just the technology - that can be excused - but the corporate culture. The best and brightest laser focused on solving the last war's problems, completely ignoring the one currently being fought or the one brewing on the horizon.

Alas, I feel that conversation may be falling on deaf ears.

Trevor_Pott Gold badge

"Who the hell do you think you are, Trevor?"

The honest truth? I wrote the article because of the number of NetApp people stampeding around conferences, forums, Twitter, comments sections and literally everything else claiming overwhelming superiority of NetApp over all things. Now, that's fine and good when they can back it up, but it really did not line up at all with what everyone who was not from NetApp was saying...and quite frankly what many who worked inside NetApp were saying behind closed doors.

What really got me digging on this was discussions by VARs, MSPs and other people who have to actually sell storage. The tales are many and varied, but they rarely end with NetApp winning. Some of this I can (and do) discard as bravado and smack talking, but the sheer volume of different sources started to make me think that NetApp was presenting to the world an image of itself that was untrue.

Very specifically I feel that NetApp is not a secure long term bet for companies, for all the reasons in the article and here in the comments section. This is a problem. NetApp is more than "just a filer". There is an entire data management ecosystem that NetApp is trying to sell people on. If NetApp fails to innovate and evolve at a pace required to keep up with a rapidly exploding storage market then their customers will find themselves locked in to a stagnant storage architecture that will ultimately place them at a disadvantage.

Okay, so we have my views on the matter. But what does that matter, in the grand scheme of things? Why tell the world what I think? It all goes back to the "full court press" by NetApp evangelists and marketdroids. I hate stretched truths, half truths and outright lies. I cannot abide them.

So I wrote about NetApp. Someone had to talk about the elephant in the room. The conversation needs to be had. It needs to be had for NetApp's customers - they need to think about all of this and decide for themselves if I am right or wrong...and what that means for them and the long term strategies around storage and datacenter architecture in their environments.

NetApp needs to think about it too. If I am right - and based on the evidence I've assembled, I believe that I am - NetApp needs to make changes if it is to thrive...maybe even if it is to survive. NetApp has been in full-on denial mode about these issues for quite some time...having someone who isn't the same list of enterprise storage bloggers and storage journalists speak up about it might well shock NetApp into contemplation.

And oh, look, here you are.

So the "why" I wrote the article isn't really simple and easy to write off. Nobody asked me to write it. I don't gain anything by writing it beyond the word rate I get for writing. I could have written about the memetics of cats on the internet or how to build a cantenna or a treatise on deterministic lockstep usage in video games. I get paid the same word rate either way.

I don't think I made too many friends amongst the storagerati by writing this piece; we've all had this conversation at conferences a dozen times before. You don't get storage bloggers in a bar without eventually someone laughing about NetApp and then moving on to something less depressing. I know I probably made some enemies.

But with the CEO change and the "great purge of those who disagree/think differently", I felt the time had come to speak up. If not now, when? If not me, who? Everyone else who makes a living as a "storage blogger" or an analyst or whatever the box it is I am being put into now can't openly say mean things about storage companies. They rely on those selfsame companies for income. They need you to like them, or their livelihoods collapse.

I don't. Oh, don't get me wrong; I make my money doing analyst-y things, writing whitepapers and so forth just like they do...but I'm not nearly so monofocused as most of the others. If I get banhammered by the storage kingpins for telling the truth as I see it then I just write about SDN, or automation, or maybe I'll take up robots.

I'm not so long gone from the coalface that I've given up my generalist tendencies.

I haven't gotten invited to a NetApp briefing. I've certainly taken the time out to get the spiel from a number of folks. Watched all the webinars I could find and had quite a few great discussions about some really nitty gritty technical details with all sorts of different folks.

You ask what I do for a living. I investigate for a living. I test hardware and software. Sometimes I set it on fire. I snoop in nooks and crannies and listen in on conversations I shouldn't. I pay attention to what everyone says. I write notes.

I talk in public about things that others only talk about in private.

I don't know what you call that. People call me to get the honest straight goods on a topic; lately, storage is a popular one. I write about it. I review things.

Your competitors didn't ask me to write this article. I can think of no way in which I benefit from writing it (as opposed to having written something else). Yet I felt it had to be written. Maybe - just maybe - that's the thing to ponder here.

I don't unload on companies without reason. Not even Microsoft. It's just not worth it to do so. I do it when I feel that the balance has been disrupted. When the needs of the many are better served by raising a fuss, or when something grievous has been done and a voice deserves to be raised in protest.

The balance is off. Seemingly everyone can see it but NetApp. NetApp has carefully constructed their world so that dissent is not voiced, and opposing views are not heard.

Count me honoured to have pierced the veil, however briefly.

Trevor_Pott Gold badge

Re: Doomed really?

Yes. Doomed. Read the article. In it I said the following:

"NetApp made a great product, but that one product is all that they have, and in today's storage world there's really not a heck of a lot that's special about it anymore. NetApp solves the storage problems of last refresh cycle, and might be relevant for part of the next, but the company as a whole isn't doing anything substantial to address the fact that the compound annual growth rate (CAGR) of the storage array market is about to go negative."

Nothing that any of the NetApp staffers who have commented in this thread has said addresses the actual point of the article.

That you are making money today means nothing regarding tomorrow. Ask Novell. Or SCO. NetApp in 2015 is Novell in 1995. On top of the heap and thinking they'll stay there out of sheer inertia. Novell didn't see their utter annihilation coming either.

Trevor_Pott Gold badge

Re: where will other vendors be?

"you've stated in comments that it may take 15-20 years for Netapp's demise. Do you really think there will be no innovation or absorption into another company by then?"

Okay, let's break this into three bits.

1) In order to be able to be relevant 5 years from now you need to have been investing in innovation 5 years ago. You need to be putting out innovative product into people's hands about now and 5 years from now you'll have enough adoption to be able to compete. This is something NetApp has cataclysmically failed at.

2) You cannot simply buy a company today and be relevant 5 years from now unless you have the greatest luck integrating them. So far, NetApp's track record for integration is shittastic. They also don't seem to keep the thinkers and innovators around very long after absorbing their tech. Bad, bad, bad. You pay how many hundreds of millions or billions to get a product that isn't going to evolve much? You'd better be able to get enough revenue off that new widget to deal with your ongoing lack of innovation and the decline of traditional markets! Not seeing this as a NetApp strength.

3) NetApp is so large that it could survive irrelevance in 2020 and still be around (in diminished form) in 2025. I don't think anyone disputes that. Inertia is powerful in tech purchasing. But for NetApp to be relevant in 2025 they need to start innovating now, and that just isn't happening either.

So NetApp is wielding the ONTAP hammer of singularness while spectacularly failing to buy the right companies and integrate them well. That combination doesn't bode well at all. NetApp desperately needs new DNA, but I honestly don't think their corporate culture is capable of coming to terms with that fact.

Trevor_Pott Gold badge

Re: Beware of Confirmation Bias

Dimitris: for there to be confirmation bias you have to actually have something you're looking to confirm.

Here's a quick hint: I don't give a bent damn who "wins" storage, or why. None of this crap - NetApp, EMC, Nutanix, Tintri, Pure, any of it - is stuff I'm going to be buying for my own use any time soon. So I research storage and the people who buy it and look at what they are buying, why they are buying it, and I ask all sorts of nice, probing questions about what they like, don't like, the why of each and what they are looking for from future purchases.

I do this at companies from the 5 man SMB all the way up to the largest Fortune 500s and government purchasers. I aim to get some pretty comprehensive information.

Then - and here's the real interesting bit for me - I talk to people inside the vendors. I get a view into what corporate culture is like. Where R&D is being spent. How new ideas by staff are treated and how bureaucratic the companies are. I start doing my research and I talk to people who worked at other companies in the past during periods of success and periods of failure.

I learn what makes different types of companies fail and what makes them succeed. I learn how in (or out) of touch different companies are with their potentially addressable market. I look at the CAGR of their competition and do ROI calculations (and estimations) based on various competing technologies and from all of the above attempt to extract a reasonable trajectory for businesses.

I do this in storage because people tell me they want me to do it in storage. Here's a hint: I loathe storage. Always have. But I got sucked into it a few years ago (thanks, Maxta!) and haven't been able to escape. There's just so much brutal, merciless warfare by desperate companies trying to murder each other that there is almost literally endless work doing research and analysis in this area.

So no, there's not much confirmation bias. I don't care who wins. I think the vicious axe-murdering of the storage industry is hilarious and I hope you all gnash each other to bits for my own personal amusement.

If you want the area where I absolutely *do* have religion, and need to be *very* careful about confirmation bias, it's endpoint management. (I'm a Ninite fanboy, so sue me!) I get very religious about it, to the point that if I am asked to write about it I ensure that I am getting my articles vetted by at least a dozen people before publishing, instead of the usual 3-4 for storage or SDN.

That said, if you or yours want to take the time to walk through your portfolio and explain where you think I am so very desperately wrong about everything, I'll take the time out and listen carefully. That's my job.

And, also: good luck with the storage wars. As I said, I don't care who wins, so if you prove me wrong and end up not only surviving, but thriving...good on you! I think it would be great fun to see a come-from-behind victory. The storage wars could use a good upset; they're getting a little stale.

Cheers,

--Trevor

Trevor_Pott Gold badge

Re: CV-6...

Yeah, the whole "she wasn't turned into a museum ship" thing was something I never understood.

Trevor_Pott Gold badge

Re: Baby, I'm just gonna shake, shake, shake, shake, shake I shake it off, I shake it off

"To the author - Why is Isilon, a company nearly 15 years old, is still described by you (and others) as "the future""

Where did I describe Isilon as "the future"? Link please?

Also: hi, we're from IT. IT obsoletes way the hell faster than aircraft carriers. Aircraft carriers periodically get refurbished and have most of their tech stripped out and replaced. What about your datacenter? When was the last time you replaced the technology, not just swapped out one ONTAP full of disks for an ONTAP with a slightly larger number of disks? Hmm?

I await your description of how you've invented the IT equivalent of a rail gun and implemented it, changing "datacenter warfare" forever.

Trevor_Pott Gold badge

Re: The slowest death ever?

"How long will their death take? 10 years?"

At least, if not 15 or 20. I mean, hey, Novell are still around...but the Novell of today doesn't have a bent fraction of the relevance of Novell in the 90s. $deity be still, SCO are still around, despite everyone's fervent efforts to murder them.

The existence of a company doesn't mean that company is anything but a shadow of its former self.

As I said in the article: NetApp are relevant for today's refresh. Maybe the next one. But not beyond that. And the problem is that they aren't preparing for the future. They aren't making the radical changes required to be relevant past the immediate term.

Worse: NetApp are not very good at integrating acquisitions, so they can't simply buy innovation and hope it all goes to plan. Their toxic "ONTAP is the only hammer you need" attitude kills any new hotness they buy in a right hurry.

NetApp doesn't need to innovate at some nebulous point in the future. They need to innovate now in order to be ready for that future! Sadly, that is not possible given NetApp's current corporate culture.

Trevor_Pott Gold badge

Re: Seems like an article set up for bashing NetApp, but...

Sorry mate, but you're just wrong. I "bash" everyone. The Register doesn't pull punches, mate...and NetApp quite frankly isn't pulling their weight. You strike me as someone rather invested in them. If not financially then certainly emotionally. Sorry if the truth is upsetting, but it is what it is.

Talk to NetApp about adapting. Once they manage to start making real movement in that direction perhaps there won't be so many negatives to talk about. Sorry about your feelers.

Trevor_Pott Gold badge

Re: Seems like an article set up for bashing NetApp, but...

"But I don't know of a hyper-convergence solution that has a filer"

Nutanix. Announced at their .NEXT conference. Everyone else I know in the HCI field is actively building filers into their products for future releases as well.

10 years goes by fast. What happens when these "smaller companies" have, in fact, been around for 10 years and NetApp has done sweet fuck all in the meantime? Hmm?

NetApp isn't innovating now. That means they are already dead and they just can't admit it.

Trevor_Pott Gold badge

Re: Beaten?

"Beaten" as in "some beat upon them" not "beaten" as in "defeated".

And yes, the US Navy had the shit kicked out of it. But ho! You should see the other guy...

Samsung's latest 2TB SSDs have big hats, but where's the cattle?

Trevor_Pott Gold badge

Gotta disagree. A few terabytes a month written is really not that big a deal. "Consumer" drives basically mean "everything that is not server". You shouldn't have to put server drives into your business desktops. An endpoint drive is an endpoint drive.

Maybe what they should do is market this as "for Aunt Tilly who never uses her computer for anything except the internet". You know, people who think the little blue "e" is the internet, that the box under their desk is "the hard drive" and who smack the monitor when the computer doesn't do what they expect?

And then they can sell real endpoint drives for people who actually use their computers for everything from CAD to entertainment, and who aren't limited by the speed of their internet connections.

Trevor_Pott Gold badge

Watching videos does write, in that the player buffers the video locally (not just to RAM). We switched to one of these because streaming 1080p over WiFi can be quite shit.

Games do write. They have *massive* patches on a regular basis. Character files and save games are written locally...in many cases the save games can be quite big.

Browsing the web writes all sorts of temporary files locally. Hell, Google Earth alone is pretty big; my Earth directory is something like 20GB of cache.

Windows updates. All the applications run updates. Over and over and over...it adds up.

I have maybe created 100-150GB of new content on this drive in the past week and a half, but the system just doing its thing has churned through rather a lot of writes.

Maybe you lot should actually update things once in a while...

Trevor_Pott Gold badge

I examined all 4 endpoints in my house and we've averaged about 2.5TB written per endpoint since the beginning of July. We use these for watching videos, browsing the web, playing games and doing work. (Mostly writing, photo manipulation and very rarely some minor video work.) We occasionally do some data recovery for a friend, but that's about it.

We do use the computers intensively. 8-16 hours a day, as they are our work boxes, as well as our home entertainment systems, but 150TB - which is what one of the new Samsung 2TB drives guarantees for writes - is just not enough. Even 300TB is low.

Maybe ever-shrinking processes aren't the answer for SSDs. Maybe we need more chips on a larger process to keep the write limits up.
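The back-of-the-envelope math, for anyone who wants to check me (the observation window is my assumption; the other figures are from above):

    # Endurance maths. The ~3 week window is an assumption ("since the
    # beginning of July"); the other figures are quoted above.
    tb_written = 2.5        # TB per endpoint, observed
    weeks_observed = 3.0    # assumed observation window
    endurance_tb = 150.0    # Samsung's guaranteed write figure

    tb_per_year = tb_written / weeks_observed * 52
    years_to_rating = endurance_tb / tb_per_year
    print(round(tb_per_year, 1), "TB/year ->",
          round(years_to_rating, 1), "years")
    # ~43.3 TB/year -> ~3.5 years before exceeding the rated writes.

Three and a half years on a premium 2TB drive, from ordinary use. That's why even 300TB is low.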

Everything you know about OpenStack is wrong

Trevor_Pott Gold badge

Re: Skeptical

Well, I'm not going to lie to you. If you try to use Openstack by going out and assembling the various open source components and trying to build up a distribution yourself, you are going to have a terrible experience. The downside to Openstack being so modular is that it doesn't come with a nice, simple installer, a great UI and all the things that make it easy to consume gratis.

In fact, I'll be perfectly blunt and say that without using a commercial distribution, Openstack is more difficult to install and configure than System Center and about as friendly to manage and maintain.

That said, many of the commercial distributions have free or cheap versions/programs for home/lab use.

Trevor_Pott Gold badge

Re: Skeptical

"Terrible UI. Basic functionality in the web UI, rest is via shell"

Default UI is awful. Some of the commercial replacements are quite good. OpenNebula can be beaten into shape with enough effort.

"Very complex. The open source boffins running it were in over their heads and struggle to understand it"

No question. Anyone who tries to use Openstack without using a commercial distribution is in for a real headache. Anyone looking to use Openstack to "save money" is probably in for a real surprise. Pay the fee, get a commercial distribution and a lot of the problems go away.

"Modularity is nice - powerful, but also brings with it complexity"

Proof? In my experience swapping out components in Openstack has been pretty easy.

"Performance was poor using Ceph. Although the 3rd party says this is due to a defragmentation that was running at the time. Regardless the disk IOPS are much worse on the VM on OpenStack with Ceph than a Hyper-V host and a Lefthand SAN."

Ceph is shite. Right terrible shite. But just about every storage widget on earth has a Cinder driver, and many have Swift drivers. Openstack screams when you use a Tintri as your Cinder storage! Maxta, SimpliVity or Nutanix make great hyperconverged solutions for Openstack, and I really like Nexenta Edge for Swift.
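To give you a feel for how pluggable that is: swapping Cinder backends is mostly a cinder.conf exercise. A rough sketch using the stock LVM reference driver is below - the backend name is a placeholder, and vendor driver options (Tintri, Nexenta, etc.) vary by release, so check their docs:

    # cinder.conf (sketch) - point Cinder at whichever backend you prefer.
    # "lvm-gold" is a placeholder name; vendor drivers slot into the same
    # enabled_backends mechanism.
    [DEFAULT]
    enabled_backends = lvm-gold

    [lvm-gold]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    volume_backend_name = LVM_iSCSI

Swap the driver line and the backend section, restart cinder-volume, and you've changed storage vendors. Try doing that with a monolithic stack.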

"The overall experience I got had me going away thinking it's very immature"

It sounds to me like someone deployed (badly) the reference implementations of the various modules, then tried to say "here you go, it's Openstack". That's like someone installing a basic Gentoo Linux and saying "here's your Linux server!"

Commercial versions make life a lot easier and have all the sharp edges smoothed off. Just like Linux can be greatly improved by simply adding in Webmin, Openstack works a hell of a lot better if you put a proper UI on the thing, replace Neutron's NFV with something that isn't awful and toss out Ceph for pretty much anything else.

Openstack isn't a product. It's a framework. The reference implementations are never going to be as good as the commercial products that slot into the various pieces of that framework.

And really, that's kind of the point. The interchangeability. If you want "free" then you'll get your money's worth. But if you are willing to pay for commercial products, but with the escape hatch that any one of them can be replaced if the vendor pisses you off, Openstack is the right choice.

Trevor_Pott Gold badge

Re: openstack not ready

Openstack isn't ready if what you want it to be is an exact copy of the infrastructure you have today, but free. It will probably never be that.

But in terms of standing up a reliable, working infrastructure that behaves in the way Openstack was designed? It takes a couple of hours tops to get going and the commercially supported distributions are stable through every test I've been able to throw at them.

I have spent the past several months trying very - very - hard to break commercial Openstack distros through any sort of regular operation (including using automated configuration and deployment tools) and they are actually quite solid. They sustain failures, misconfigurations and all sorts of other bizarreness.

But yes, Nate, you are absolutely correct in that Openstack is not by itself a replacement for VMware in running your traditional legacy applications. That's one of the reasons you can manage ESXi with Openstack. You can also manage traditional metal servers in the same fashion.

Openstack isn't all things to all people for all workloads. But it is damned good at what it does, and you can bring your old workloads into the Openstack fold and have a single point of management.

Maybe you should spend some time with it and find out for yourself that it really isn't anywhere near as scary as you think...and that what you are expecting it to be and what it is designed to be aren't at all the same thing.

Take Openstack for what it is, not what you want it to be, and you'll find it's actually pretty good.

Samsung stuffs 2 TERABYTES into flash drive for ordinary folk

Trevor_Pott Gold badge

I just checked all 4 endpoints in my house. We have written an average of 2500GB per endpoint since the beginning of July. That's not through torrenting or video creation. It's just through regular use: updating, playing some video games and so forth. (For the record, temporary/cached file storage for web usage seems to be really a lot heavier than you'd think, as does cache usage for video players when we're streaming videos from the media server.)

We're writers. We don't exactly have demanding requirements for our endpoints. We're not "abnormal" in our usage.

So your whole premise is kind of out the window.

Trevor_Pott Gold badge

150TB write life. What. How is that remotely adequate?

I have all of the sads. Every single sad.

Pro-privacy titan Caspar Bowden dies after short cancer battle

Trevor_Pott Gold badge

@Caspar, wherever you may be.

You will be missed. I hope we have learned enough from your example to continue adequately in your name.

I, for one, will miss our late night debates. Good journey, sir.

Office 365 prices 'to rise by up to 13 per cent'

Trevor_Pott Gold badge

I told you so.

How a Cali court ruling could force a complete rethink of search results

Trevor_Pott Gold badge

Re: Irritating

"When I jump into the comments I expect it to be rant free and on topic!"

You may be interested in this introduction to The Register's comment section in .gif form. It may explain some things to you.

Ginormous HIDDEN BLACK HOLES flood the universe – boffins

Trevor_Pott Gold badge

Re: Couple reasons why this can't be the missing mass

"Personally my money's on theories needing to be revisited. Not necessarily MOND"

Pretty much guarantee it's not MOND. That's been almost completely ruled out by now. I don't think you'll get much disagreement that the standard model needs revisiting, but the question remains by how much. Supersymmetry is a candidate that would require almost no revisiting.

The big one is going to be explaining the matter/anti-matter imbalance while also explaining the extra mass (and dark energy). Preferably without getting into too many additional dimensions. (Damn you, string theory!)

Trevor_Pott Gold badge

Re: so...

No, this doesn't account for the missing mass. Black holes are (sort of) baryonic in nature, and we long ago predicted how many there would have to be. We have a rough handle on how much baryonic matter there is in the universe. The "missing mass" of dark matter is non-baryonic mass, and we don't know what that might be. (Though some have ideas. See: WIMPs as a great place to start learning.)

Migrating from WS2003 to *nix in a month? It ain't happening, folks

Trevor_Pott Gold badge

Re: Sorry...

"Right, so even you know it's not true and that isn't my position."

No, I think you're capable of compartmentalized thinking. You seem perfectly capable of believing one thing but rabidly espousing another. Typical of religious types, actually.

"All I was hoping for was that folks would spend 5 minutes having a look at plan B because it may work out well for them. I don't think that counts as zealotry and I don't really think it's worth having a flamewar over either because it's common sense."

Which you failed to make a case for, especially given the timeframes involved, all while freaking out because extreme edge case scenarios weren't given equal billing to realistic solutions for the overwhelming majority.

"It is you who is behaving like a "religious wanker" (insults, misrepresentation, pretending you know what other people think, asserting you know best with zero evidence to back it up, intolerance), and as a rule dogmatic loudmouths don't tolerate competition, so that comes as no surprise."

I love competition. It keeps us all on our toes. If and when any shows up, I'll gladly engage with it. I've proven that by cheerfully having comment thread discussions that have gone on for dozens of comments about different ways to solve problems. Where the commenters involved are looking to actually debate (as opposed to proselytize) I genuinely enjoy such discussions. They're the reason I read El Reg.

As for "knowing what you think", I haven't a clue what's going on in your mind. I do know what you are saying in your comments. And what you are saying in your comments is completely disconnected from what you claim to be thinking, so either you are absolutely awful at expressing yourself or you are engaging in some pretty perverse compartmentalized thinking.

"... because being around someone who misrepresents folks and then flames them on the basis of that misrepresentation isn't fun, it's just plain old bullying and bullshit."

I haven't misrepresented what you are saying at all. In fact, I'm decreasingly sure that you're even sure what you are saying. You want people to think about Linux as an alternative, that's fine. Nobody here will give you shit for that. Everyone in this entire comment thread agrees that Linux is a viable alternative that should be considered when and where possible.

But you don't end there. You have gone off into crazy freak-out land about the fact that the article didn't present Linux as a viable enough alternative, which is completely false: I gave Linux far more column inches than it is due, given the insignificant number of workloads that can be reliably migrated in the one month timeframe under discussion in the article.

You seem to believe that I - and others - are somehow bashing Linux, saying Linux shouldn't be considered and otherwise pissing in your religion-flavoured cheerios when absolutely fucking nobody here is doing that. The discussion we're having is about when and where it's viable, in what timeframes and for what workloads.

You're acting as though we killed your dog because we aren't championing Linux as the first, last, and only solution regardless of the viability, practicability or you'll-get-fucking-sued-for-taking-those-risks involved. Which is nuts. Wonko. Loony Tunes.

Your whole approach to commenting here has been off. You have basically just walked up and taken a dump on my lawn, claiming you did so because you're just standing up for the beleaguered natural fertilizer industry, whom I neglected to adequately endorse in my article about how to winterize your perennials before the first frost. Completely disregarding the part where two paragraphs of the article were devoted to the fact that for some plants it's a good idea mix in some natural fertilizer for the overwinter process.

Telling you to stop shitting on my lawn isn't bullying. Chasing you off my lawn with a rider mower would be.

Blurred lines: How cloud computing is reshaping the IT workforce

Trevor_Pott Gold badge

Re: Here we go again...

"I think that most people here realize that the cloud is a little bit more complex than that basic definition."

I have no such illusions about the competence of commenttards. I might once have believed that, but I've recently become quite disillusioned in the basic level of technical understanding of the readership.

Trevor_Pott Gold badge

Re: i have a question

"Can somebody tell me the difference between Cloud and Thin Client?"

Two things:

1) If you are talking about "Software as a Service" consumed through a browser (which is the main part of "cloud computing" that end users typically see) then the big difference is thin client versus browser.

Thin clients are thin. In the most traditional of thin clients (which we now term "zero clients" today) they have just enough brains to pass KVM back to the server and bring a screen to you. Modern "thin clients" are actually full "fat clients", but heavily locked down. That's a whole other discussion, as this is largely unnecessary, but done because it's easy.

Browsers are fat. They are basically their own operating systems at this point, running on top of another operating system. Browsers are the most common means to consume "cloud" apps, but not the only one. Which leads us to number 2.

2) Cloud is about self service of IT resources. Those resources can be anything from a place to stick a fat, traditional Win32 application, to Desktop as a Service to a backup destination to a browser-based Software as a Service application.

The key differentiator between cloud-based and traditional architectures is that "cloud" removes people from the provisioning portion of the exercise. Developers can call resources through APIs. End users or administrators can create resources through a web interface. You don't fill out a request on a piece of paper, submit it to a bureaucracy and wait months.
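For a concrete taste of that, here's a rough sketch of self-service provisioning using the openstacksdk Python library - the cloud name, image, flavor and network below are all placeholder values:

    # Sketch: provisioning a VM through an API instead of a ticket queue.
    # Cloud name, image, flavor and network are placeholder assumptions.
    import openstack

    conn = openstack.connect(cloud="mycloud")   # credentials via clouds.yaml
    server = conn.compute.create_server(
        name="demo-vm",
        image_id=conn.compute.find_image("ubuntu-14.04").id,
        flavor_id=conn.compute.find_flavor("m1.small").id,
        networks=[{"uuid": conn.network.find_network("private").id}],
    )
    server = conn.compute.wait_for_server(server)
    print("provisioned:", server.name, server.status)

No paper forms, no bureaucracy, no months of waiting. That's the "cloud" part.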

Clouds can be internal to an organization (private cloud), consumed from an outsourcer (public cloud), or they can have the ability to use resources in either place (hybrid cloud).

That last is something that needs to be remembered. "Cloud" does not have to mean "the public cloud". You can build clouds in your own home if you want. Just start layering self-service interfaces on top of your existing network and voila: a private cloud.