* Posts by Trevor_Pott

6593 posts • joined 31 May 2010

Server storage slips on robes, grabs scythe, stalks legacy SANs

Trevor_Pott
Gold badge

Oh there's some debate here. So Gartner (and some internal EMC projections) say that hyperconverged solutions will have 51% of the market by 2018. I disagree and think it's going to be 2020. The Wikibon people seem to be somewhere in the middle.

What nobody seems to understand when they do these calculations is that - with the exception of NetApp - array vendors will adapt. EMC is already doing so. Tintri is doing so. Others are at least slowly trying to change.

With the exception of Nutanix, hyperconverged vendors are still in startup mode. They don't have the R&D capacity to really go toe to toe with someone like Dell. Array vendors will start to add value by acquiring new startups (like copy data management experts) and raising the bar for enterprise storage functionality. This will force hyperconverged players into a feature war they may well not win.

The end result will be a thinning of the herd on both sides. I ultimately think that hyperconverged vendors will win, but I am expecting a rally by array vendors around the end of 2016 that will buy them a couple of years before arrays are finally reduced to a niche.

The war is already over, but arrays will fight to the last man to keep their margins. And they'll ultimately lose.

1
0
Trevor_Pott
Gold badge

Re: You forgot a player

Gridstore's cool, but has a few problems

1) Next to no sales. Who has ever seen a Gridstore in the wild? Half the storage analysts I talk to are convinced they're functionally a myth. I'm not entirely convinced they're real myself.

2) Nutanix does Hyper-V. And they do it damned well. SimpliVity, Maxta and many, many others will be there very soon. (I expect by end of year for most of them.)

3) Marketing. Gridstore's budget for marketing and community engagement appears to be the square root of negative fleventy. This goes back to "who has ever seen a Gridstore box in the wild?" These things aren't in front of the kinds of people who give talks at user groups or Spicecorps or what-have-you. Gridstore has virtually no mindshare amongst the technorati, so even people who know about it tend to forget when it comes crunch time and they have to choose a solution. This leads us to...

4) Really terrible channel support. Gridstore may have a channel strategy. If so, I haven't been able to detect it. If they do have someone out there kicking the channel in the ASCII then those channel monkeys aren't doing their job. (See: 3.) They aren't pushing Gridstore as a solution when customers come to call and this is hurting them.

I can't comment much on price - I seem to recall vaguely that it was actually not bad - or functionality - the last time I saw a demo it seemed to do what was required in a reasonable enough fashion - but the fact that I can't summon that information immediately when it is essentially my job to know this stuff just reinforces how ineffective Gridstore has been at remaining "sticky" with mindshare.

By all accounts Gridstore seems a good product, but the company that sells that product is about to get absolutely pwned by the fist of a dozen angry gods as they all turn their eyes from KVM to Hyper-V. Everyone has an ESXi hyperconverged solution. They're all finishing up with KVM/Openstack. Hyper-V is next. After that: Xen.

Gridstore doesn't seem ready to go to war. They don't seem to even understand what is about to happen to them, let alone be remotely ready for it. Too bad, really. They seemed like nice folks.

0
0

You've tested the cloud – now get ready and take a bigger step

Trevor_Pott
Gold badge

Network connectivity is one of the major barriers. Just how do you know which part of the web is causing a problem? How do you test and manage response times and latencies, especially if your audience now includes users on mobile devices?

www.thousandeyes.com. Solved.
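For a crude sense of what tools like that measure, here's a minimal latency-probe sketch. It's a toy, not anything a monitoring vendor actually ships: `fetch` is a stand-in for any zero-argument callable that performs one real request (an HTTP GET, a DNS lookup, whatever you care about).

```python
import time

def probe_latency(fetch, attempts=5):
    """Time repeated calls to `fetch` and summarize the observed latencies.

    `fetch` is any zero-argument callable that performs one request.
    Returns min/avg/max wall-clock times in seconds.
    """
    timings = []
    for _ in range(attempts):
        start = time.monotonic()  # monotonic clock: immune to wall-clock jumps
        fetch()
        timings.append(time.monotonic() - start)
    return {
        "min": min(timings),
        "avg": sum(timings) / len(timings),
        "max": max(timings),
    }

# Stand-in for a real request: a fixed ~10 ms delay.
stats = probe_latency(lambda: time.sleep(0.01), attempts=3)
```

The real problem, of course, isn't timing one endpoint; it's knowing which hop in the path is the slow one, which is what the commercial tooling exists to answer.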

Is Azure, Amazon or one of the hundreds of different cloud providers offering the right platform for your particular application?

Nope. Not unless you're A) American, B) picking a public provider in your jurisdiction or C) building a private cloud.

“To burst into the cloud sounds like a great vision but how do you actually do it? How do you implement it? How do you set it up to be flexible? You need clever bits of software and people who can manage that software,”

Not really. The software is relatively commonplace and the skills for it are mundane. You just have to be prepared to spend. And spend and spend and spend. People with those skills are smart, and they won't be treated like crap. Companies with that software charge a lot. And you need great reliable internet connectivity and your ISP is going to take it out of your genitals. With prejudice.

Sounds easy? Maybe not, but if you plan your transition to a hybrid cloud setup correctly you will retain control over your IT. You get to choose how you customise your environment for your workloads.

No, your ISP does. They control the pipe and you do exactly as they say. Same with your hypervisor/management tools vendor. And your tin shifter. And your storage overlords. And - above all else - your government, who may well demand that any company large enough to be seriously looking at hybrid cloud computing build in back doors that let the spooks pwn us all, the better to nose out political dissidents.

You are not in control. Everyone else is in control. You just give them money and hope they leave you alone long enough to retire. Even if you're a Fortune 1000 company.

So design your networks with that in mind. If your duty of care - and your legal obligations - run towards the protection of your customers'/employees' data, then you absolutely must treat your ISP, your vendors and your government as hostile agents who are just as likely to try to cause compromise as any outside hacker. They'll use different means, but you need to be prepared to defend against them nonetheless.

5
0

Ad slingers beware! Google raises Red Screen of malware Dearth

Trevor_Pott
Gold badge

Re: Ads in the Pot meet Kettle world.

Can we trust Google

Probably not. But more than we can trust most governments. Absolutely more than we can trust almost any other company in the tech industry and probably more than we can trust any other Fortune 2000 company.

Google are awful, thieving, sociopathic kleptocrats, but of the options available to the hoi polloi they're still the fuckwits likely to do the least amount of damage.

3
1

ARM servers look to have legs as OVH boots up Cavium cloud

Trevor_Pott
Gold badge

Want.

0
0

India reveals plan to fix poverty by doing ANYTHING-as-a-service

Trevor_Pott
Gold badge

India also has a big bump in its demographic bell curve, as more than half the population is under 35. That smartphone-toting generation is just the kind of demographic to stir up strife if under-employed.

Part of the solution is to stop having so many goddamned children.

*sigh*

1
1

Mozilla loses patience with Flash over Hacking Team, BLOCKS it

Trevor_Pott
Gold badge

Re: The best bit is....

Flash insecurity does not go up from those nasty guys to regular otherwise properly secured websites that happen to be using flash

Yes it does. Infected ad networks are, in fact, a thing.

2
0
Trevor_Pott
Gold badge

Re: The best bit is....

If a major news site stopped supporting flash, the ad houses would fall into line.

You're funny.

0
0
Trevor_Pott
Gold badge

Re: The best bit is....

two reasons news and/or entertainment sites use flash:

1) third party advertisement houses use it. If you want the revenue, you post the ads.

2) DRM.

I don't even want to attempt to debate any of the many sides of either of those.

0
0

Why the USS NetApp is a doomed ship

Trevor_Pott
Gold badge

Re: Seems like an article set up for bashing NetApp, but...

How can you not be sure who I am referring to when you wrote what I was referencing. I am not twisting your words like you are with me. Your writing has no real spec.

You make an educated guess and then you verify. It's how grown-ups learn things.

How can you not be sure who I am referring to when you wrote what I was referencing. I am not twisting your words like you are with me.

If it walks like a duck and quacks like a duck, clearly it's a pony. Clearly.

Maybe you cannot be direct and because of this you believe everyone has ulterior motives.

Not everyone. Just the people who are so overwhelmed with rage that they feel the need to comment. Commenters make up less than 1% of the readership, and among them most aren't quite so angrily tart as you've proven to be. So when you get to fractions of a single percent of pure internet rage, yeah, I start asking some questions.

0
2
Trevor_Pott
Gold badge

Re: Seems like an article set up for bashing NetApp, but...

You think I work or have worked for NetApp

No, I think you're affiliated with NetApp because of your "Clearly NetApp is not superior over all things and any sane person knows this," attitude. It's the sort of thing NetApp has been very careful to ensure is the only view allowed internally.

Why can't you just do your job and go into the specific details of how storage requirements are changing and where specific NetApp portfolio (not my) gaps exist?

Who says that's my job? You? And how do you know I'm not putting together such a piece already?

Why is this your motivation and not what is really going on in the data center?

Because there's usually an interesting reason why the smell of bullshit is stronger in some places. Besides, what people are doing in the datacenter today is not really all that relevant. They're doing that with stuff they've already bought. What matters is what people will be doing in their datacenters tomorrow, as that drives innovation, competition and - most importantly - sales.

Are you sure all of those people work at NetApp? Not saying they do not but you think I do when I do not.

Not sure to whom you're referring, but yes, I am absolutely positive that some of my sources work at NetApp. As for you, if you don't work at NetApp what rational reason would you have to be so frothing? Why should I assume anything other than a fairly direct association?

Now, being an anonymous commenter on the internet you're fairly useless for analysis or quoting purposes, but as far as "frothing commenter in a comment thread" goes, it's fairly safe to presume affiliation or insanity. It's not polite to presume insanity, so I choose to presume affiliation.

but you had to write a passive aggressive article under the guise of analysis?

Reading comprehension is important to all people at all times. The article is tagged "comment", not analysis.

Anything else you would like to vomit on the carpet? Or are we done here?

0
0
Trevor_Pott
Gold badge

Re: Seems like an article set up for bashing NetApp, but...

Your customers are whiners? The people trying to sell your stuff are whiners? I'm glad to see you hold the opinions of the individuals and corporations who purchase NetApp's products in such high regard.

I'll be sure to link them to this post with your views on the subject.

But thanks for proving my point so spectacularly.

0
0
Trevor_Pott
Gold badge

Re: Beware of Confirmation Bias

I don't see why even if they were in the dire straits you describe they couldn't turn it around - they have the capital and base to work with. They probably could simply buy one of the smaller, newer storage companies if they need to.

Then you must work for NetApp. As has been described ad nauseam in this thread - and in the article - the big reason NetApp can't succeed is because the people who work at NetApp are some of the only people out there who can't see why it is that NetApp is doomed. They simply cannot comprehend the problem.

An inability to recognize there is a problem limits the likelihood of buying the right companies to plug the gaps. And if, by some chance, NetApp does buy the right companies, history tells us that NetApp will do a piss poor job of integrating them, kill the best products, destroy the remaining products and drive out all the innovative talent from the acquired companies in short order.

But, of course, the people who work at NetApp can't see that this is what happens. Everyone else can. Which leads us back to...

1
1
Trevor_Pott
Gold badge

Re: Baby, I'm just gonna shake, shake, shake, shake, shake I shake it off, I shake it off

Mainframes are still around. Should that be NetApp's ambition? To become the mainframe (circa 2015) of storage?

1
0
Trevor_Pott
Gold badge

Re: where will other vendors be?

Yes.

Maybe it wouldn't be so bad if they were innovating much beyond their flagship, but they aren't. And they really push any consideration of anything else aside. That is very much a huge, huge, huge problem.

Next?

1
0
Trevor_Pott
Gold badge

Re: New technology - jets

Don't apologize, I like learning! Keep up the great comments, please!

0
0
Trevor_Pott
Gold badge

Re: Nothing special about it??

Nothing special about it. NetApp was cool 5 years ago. In case you hadn't noticed, the storage world exploded in that time. There's a lot of options that can take NetApp for one hell of a ride, thanks.

An enterprise storage product that is mature in serving all major protocols

*Yawn*

(and FYI, retrofitting enterprise NAS on other systems is insanely hard, which is why nobody's done it).

Funny, others seem to be doing just fine at it. Maybe your assertions are based on dated knowledge.

With no downtime for pretty much any operation, including things that would mean major downtime for other systems.

Some other systems. Old systems. Shitty low end SMB systems. Some legacy stuff from other fossils that haven't grown up in the past five years. But there's a hell of a lot out there that is perfectly okay with doing all sorts of major changes without downtime. *shrug*

With the best application integration tools on the planet.

Let's agree to disagree.

The best heterogeneous fabric management system in the world (OCI).

I'm neutral on this. More testing required. Mind you, a storage fabric isn't the only way to do scale out, scale up storage. At least, not fabric in the same way NetApp goes about it. But I'm willing to be convinced on this point.

Amazing automation (WFA).

*shrug* Automation is increasingly table stakes for enterprise storage. Enough do it that it's really not making me excited.

Great performance.

Granted. But almost everyone's performance got way the hell past "good enough" some time ago. We're well past worrying about that. Now it's about driving down latency, heaping on the features, building for failure and driving down cost.

Insane scalability.

Define "insane scalability". Almost everyone has scalability. Is it rational to think about going to hyperscale with NetApp? The cost of that alone turns most people pale. NetApp is hugely, hugely, hugely expensive and thus is left out of conversations where "insane scale" would be a serious consideration.

NetApp is a great tier 0 storage platform, especially for legacy workloads. If I had a workload that I knew wasn't modern and designed for failure, needed life-or-death class reliability and I needed to know I could scale it up to a moderate-sized array, NetApp would be in my top five companies, no question.

But the number of these tier 0 workloads out there is diminishing. New applications aren't written as legacy Win32-style monolithic single points of failure. Scale is increasingly cognate with price sensitivity in today's storage market, so I am not sure exactly how scalability helps the NetApp marketing pitch. It just focuses the mind on what that much NetApp would cost and immediately starts one thinking about alternatives.

Technology that literally keeps the lights on (part of the control chain of many power distribution systems).

Or deployed in life or death situations. By the most paranoid organizations in the world.

See above: great tier 0 storage for legacy workloads. No question. That's why it's still relevant for this refresh cycle. But it isn't the only storage capable of performing here...and increasingly competitors are earning their stripes as being reliable on this level. Even as virtually everyone is moving away from the requirement for this kind of storage.

Is that what NetApp wants to be? The mainframe of storage? Because that's what I was warning about in my article: that slow fade to niche, maybe, ultimately, complete irrelevance.

NetApp's reliability is a huge plus...but it's not enough to make it more than a point consideration for specific niche workloads.

That's the storage foundation behind the largest companies in the world.

Actually, the storage foundation behind the largest companies in the world is either OCP-style home-rolled storage (Google, Facebook, etc) or EMC. NetApp has a few wins as primary, but mostly NetApp serves the purpose of being a credible second-string provider used to threaten EMC during negotiations and you know that.

The damning thing is that many startups are eating your lunch as regards playing "beat EMC over the head and drive down margins". See: EMC versus Pure Storage as one example.

That's nothing special?

Not overly, no.

I'd love to see what you consider special. Must really be something.

Go to VMworld. It really should be renamed "storage world". You'll see all sorts of lovely things there. Marvelous and entertaining things. Storage to wow and amaze. Storage to run screaming from. Storage and compute that works together. Storage, compute and networking and automation and orchestration and NFV and hybrid extensions and more that work together all in one SKU.

Maybe you should buy one or two. Incorporate their fresh thinking into your company after you buy them, instead of driving them out. Maybe you can bring to their products the level of QA and testing that makes your products so stable and reliable while gaining a much needed DNA infusion.

What I find impressive doesn't exist yet. I'd link you to a much newer, more comprehensive article but sadly, that one hasn't been published yet. (Written, but not out the door, quite yet.)

I'm rarely impressed by the past. Having the ultimate solution to fighting the last war is as irrelevant as the development of the F-22 Raptor. Just who exactly did the US expect they would be fighting with those things, hmm? That cold war wet dream was sure useless in clearing out a bunch of entrenched resistance fighters from hastily constructed bunkers in the middle of the desert.

That is how I see NetApp. Not just the technology - that can be excused - but the corporate culture. The best and brightest laser focused on solving the last war's problems, completely ignoring the one currently being fought or the one brewing on the horizon.

Alas, I feel that conversation may be falling on deaf ears.

1
2
Trevor_Pott
Gold badge

Who the hell do you think you are, Trevor?

The honest truth? I wrote the article because of the number of NetApp people stampeding around conferences, forums, Twitter, comments sections and literally everything else claiming overwhelming superiority of NetApp over all things. Now, that's fine and good when they can back it up, but it really did not line up at all with what everyone who was not from NetApp was saying...and quite frankly with what many who worked inside NetApp were saying behind closed doors.

What really got me digging on this was discussions by VARs, MSPs and other people who have to actually sell storage. The tales are many and varied, but they rarely end with NetApp winning. Some of this I can (and do) discard as bravado and smack talking, but the sheer volume of different sources started to make me think that NetApp was presenting to the world an image of itself that was untrue.

Very specifically I feel that NetApp is not a secure long term bet for companies, for all the reasons in the article and here in the comments section. This is a problem. NetApp is more than "just a filer". There is an entire data management ecosystem that NetApp is trying to sell people on. If NetApp fails to innovate and evolve at a pace required to keep up with a rapidly exploding storage market then their customers will find themselves locked in to a stagnant storage architecture that will ultimately place them at a disadvantage.

Okay, so we have my views on the matter. But what does that matter, in the grand scheme of things? Why tell the world what I think? It all goes back to the "full court press" by NetApp evangelists and marketdroids. I hate stretched truths, half truths and outright lies. I cannot abide them.

So I wrote about NetApp. Someone had to talk about the elephant in the room. The conversation needs to be had. It needs to be had for NetApp's customers - they need to think about all of this and decide for themselves if I am right or wrong...and what that means for them and the long term strategies around storage and datacenter architecture in their environments.

NetApp needs to think about it too. If I am right - and based on the evidence I've assembled, I believe that I am - NetApp needs to make changes if it is to thrive...maybe even if it is to survive. NetApp has been in full-on denial mode about these issues for quite some time...having someone who isn't the same list of enterprise storage bloggers and storage journalists speak up about it might well shock NetApp into contemplation.

And oh, look, here you are.

So the "why" I wrote the article isn't really simple and easy to write off. Nobody asked me to write it. I don't gain anything by writing it beyond the word rate I get for writing. I could have written about the memetics of cats on the internet or how to build a cantenna or a treatise on deterministic lockstep usage in video games. I get paid the same word rate either way.

I don't think I made too many friends amongst the storagerati by writing this piece; we've all had this conversation at conferences a dozen times before. You don't get storage bloggers in a bar without eventually someone laughing about NetApp and then moving on to something less depressing. I know I probably made some enemies.

But with the CEO change and the "great purge of those who disagree/think differently", I felt the time had come to speak up. If not now, when? If not me, who? Nobody else who makes a living as a "storage blogger" or an analyst or whatever the box it is I am being put into now can openly say mean things about storage companies. They rely on those selfsame companies for income. They need you to like them, or their livelihoods collapse.

I don't. Oh, don't get me wrong; I make my money doing analysty things, writing whitepapers and so forth just like they do...but I'm not nearly so monofocused as most of the others. If I get banhammered by the storage kingpins for telling the truth as I see it then I just write about SDN, or automation, or maybe I'll take up robots.

I'm not so long gone from the coalface that I've given up my generalist tendencies.

I haven't gotten invited to a NetApp briefing, but I've certainly taken the time to get the spiel from a number of folks. I've watched all the webinars I could find and had quite a few great discussions about some really nitty-gritty technical details with all sorts of different folks.

You ask what I do for a living. I investigate for a living. I test hardware and software. Sometimes I set it on fire. I snoop in nooks and crannies and listen in on conversations I shouldn't. I pay attention to what everyone says. I write notes.

I talk in public about things that others only talk about in private.

I don't know what you call that. People call me to get the honest straight goods on a topic; lately, storage is a popular one. I write about it. I review things.

Your competitors didn't ask me to write this article. I can think of no way in which I benefit from writing it (as opposed to having written something else). Yet I felt it had to be written. Maybe - just maybe - that's the thing to ponder here.

I don't unload on companies without reason. Not even Microsoft. It's just not worth it to do so. I do it when I feel that the balance has been disrupted. When the needs of the many are better served by raising a fuss, or when something grievous has been done and a voice deserves to be raised in protest.

The balance is off. Seemingly everyone can see it but NetApp. NetApp has carefully constructed their world so that dissent is not voiced, and opposing views are not heard.

Count me honoured to have pierced the veil, however briefly.

1
0

Infosec bigwigs rally against US cyber export control rule

Trevor_Pott
Gold badge

Re: if only

That is going to depend entirely on whether or not we kick out that nutjob Harper while still managing to keep the traitorous coward Trudeau from office. If either wingus or dingus get in charge, we're pretty much screwed.

1
0
Trevor_Pott
Gold badge

Re: if only

Who says it's alcohol? The US is drunk on its own overinflated sense of exceptionalism!

4
1
Trevor_Pott
Gold badge

Go home, USA, you're drunk.

4
1

OCP supporters hit back over testing claims – but there's dissent in the ranks

Trevor_Pott
Gold badge

Re: Cole is Delusional

For guest pieces, content is more important than style and form.

As for not being technical in nature, I agree (to a limited extent). That said, there's still a lot of research. We're treading ground that vendors have to walk, so how do they do it, and why? What corners do they cut? What lessons have they learned? Can OCP implement the difficult stuff and leave the easy stuff as a todo for buyers?

And if OCP doesn't move downmarket beyond Facebook-class deployments, what's the relevance? There are only a handful of Facebook-class entities that will ever exist at any one time, and I'm not remotely sure that systems integrators have the capability to take up the slack.

If they do, what's in it for them? What's the business case for them to do so? Will it help save them in the face of the public cloud, or just draw out an inevitable painful death?

At the same time, large vendors are moving towards massive "black-box" vertically integrated endgame machines. Is OCP - and for that matter systems integrators - relevant in the face of that sort of market shift?

As developers cut their teeth on cloud tech (private, public and hybrid) first, is OCP still relevant? Will regular enterprises even be able to field sysadmin teams and developers who code to anything other than the black-box style clouds?

And these are just the questions off the top of my head.

0
0
Trevor_Pott
Gold badge

Re: Cole is Delusional

"why not run it by" I mean could El Reg run an opinion piece?

Well, of course El Reg can. It becomes a question of who is qualified to write it. If you wanted to write something I could get you in touch with the relevant people to see about a guest piece.

As for myself, the truth is that I don't know enough about all the nooks and crannies of this just yet to open my big mouth in print. There's a lot of research to be done and many opinions and views to gather before I weigh in.

OCP is a different world from the one I normally inhabit. Perhaps more to the point, VMworld is a month and a half out and the vendors are on fire and their content is on fire and I'm on fire and everything's on fire and air travel is hell. I'm full up for the next while and don't have time to learn a whole new world until after the big game. (I just learned OpenStack and am putting my free time to SDN/NFV at the moment.)

I think the problems presented are deep and complex. They deserve a full research and analysis treatment. Ideally, I'd like to see the OCP become much more important and central to how we all procure IT, and I fear that going off half-cocked writing about it could do far more harm than good.

0
0
Trevor_Pott
Gold badge

Re: Cole is Delusional

I grok the liability argument, I really do...but I think centralized testing is core and critical to economies of scale. There has to be a balance between "certifying everything works together" and "meh, I hope it all goes to plan".

I think that balance rests on testing for established standards, e.g. meeting JEDEC standards for your memory channels/traces/controllers/etc.

In a perfect world I envision the OCP as essentially becoming the "reference implementation" of various hardware standards. If your RAM doesn't work in an OCP box then chances are you screwed up and didn't meet spec because OCP verified that their widgetry meets the published specs.

The other side of it is that if the testing is to be left up to the customer, then I think those folks behind OCP should open source testing tools relevant to all elements, as well as procedures for using them and expected results for the tests. This would let any Tom, Dick and Harry assemble OCP gear, select parts from various suppliers and verify it all works to plan before ordering it by the datacenter load.
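To make the idea concrete: an open acceptance-test harness of the sort described would boil down to "measure, compare against a published threshold, report failures." A hypothetical sketch, with invented check names and numbers standing in for real spec values (e.g. JEDEC figures):

```python
# Hypothetical acceptance-test harness. Each check is (name, measure, minimum):
# `measure` probes the assembled hardware, `minimum` comes from a published
# spec. All names and thresholds here are invented for illustration.

def run_checks(checks):
    """Run every check; return a list of (name, measured, required) failures."""
    failures = []
    for name, measure, minimum in checks:
        value = measure()
        if value < minimum:
            failures.append((name, value, minimum))
    return failures

# Fake measurement callables standing in for real probes.
checks = [
    ("mem_bandwidth_gbps", lambda: 68.0, 60.0),   # meets the (made-up) spec
    ("nic_throughput_gbps", lambda: 9.2, 10.0),   # falls short
]
failures = run_checks(checks)
```

The value isn't in the trivial loop; it's in centrally maintaining the checks and thresholds so every OCP buyer runs the same suite instead of reinventing it.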

If OCP is to be just some plans for someone to (apparently badly) put together some motherboards then what's the point? It becomes something you can't trust to do the job and ultimately doesn't drive down the costs, because instead of centralising the costs of testing, verification and R&D those costs have to now be replicated by each and every company implementing OCP systems!

Lots of companies don't feel the need for the liability portion of the equation to be taken by the vendor. And, to be frank, that's a huge part of the cost. But making sure that at least basic quality is dealt with and that testing R&D is central and open is essential.

The OCP doesn't have to be "a cheap Tier 1" vendor. We have Supermicro for that. But OCP should also be more than a PR exercise or a way to offload hardware engineering on "the community". The community will contribute back if there is a great base to start from. That starts at verifying standards compliance and making available the ecosystem of testing tools and procedures required for companies to do testing in house.

At least, that's my take on it. I understand entirely that others may well see it differently.

0
0
Trevor_Pott
Gold badge

Re: Cole is Delusional

Feel free to call me delusional but I can't imagine any company the size of Facebook not testing the gear that goes into their data center

What, what whaaaaaaaaaaaaaaaaaaaaaaaaaaaaaat? You are expecting each company that buys OCP hardware to do their own tier 1 class testing? What?

How does that make sense at all? OCP is about driving down cost. Testing is a cost that should be centralized so that it doesn't have to be replicated.

More to the point: lots of companies that aren't the size of Facebook have come to rely on OCP gear. What are you doing to the open compute project? Why are you doing it?

0
0

Micron re-furtles its data centre SSD offering

Trevor_Pott
Gold badge

The M500DCs have served me very, very well. Looking forward to using the 510s for new deployments, as they look like a solid upgrade. Keep 'er steady, Micron!

0
0

Proxyham Wi-Fi relay SUPPRESSED. CONSPIRACY, yowl tinfoilers

Trevor_Pott
Gold badge

"Graham finds – and so does Vulture South – the idea that the FBI would hit the roof about simple and basic technology “implausible”."

You're talking about an organization led by a cryptography denier. Sorry, but <i>any</i> level of stupidity is plausible for them.

1
0

Microsoft's Surface Hub mega-slab DELAYED 'cause you demanded it

Trevor_Pott
Gold badge

The tabletop didn't use a multitouch film. It used IR sensors that tracked fingers from underneath. It was actually designed completely differently.

0
0
Trevor_Pott
Gold badge

Good for Microsoft. Hope it sells well.

4
3

I cannae dae it, cap'n! Why I had to quit the madness of frontline IT

Trevor_Pott
Gold badge

Re: Inherent vice

It is useful to think of it as another version of the tragedy of the commons.

That may be the single most interesting thought in the whole thread.

1
0
Trevor_Pott
Gold badge

Re: That includes the firmware.

Getting the right core team together would be the make-or-break of the whole enterprise.

No, no, no and 10,000 times no. This is absolutely wrong. The whole point of security by design is to design out any single point of failure, including the failure of individuals. You don't need a stellar core team to run a secure, successful business. You need one to run a business that will rock Wall Street and perpetually exceed expectations.

There are literally thousands of examples of large enterprises around the world that are well run, stable, steady businesses that do things in a secure fashion. They don't make the news because they aren't prima donnas and they aren't high-stakes Wall Street derivatives plays, but many of them are household names.

If you design your business to rely on the charisma and personality of individual members of your corporate team you have already failed at information security. Everyone in a company is disposable. Even the CEO. That's proper security. Nobody can be indispensable. Nobody can be in a position to "leverage" the company. No one person - not even the CEO - can be allowed to have full security access to anything.

Policies, procedures and best practices determine how operations are carried out. Changes to those policies, procedures and best practices are researched, audited, vetted and tested before being implemented.

It means the company evolves slowly. It means they will never be on the bleeding edge. But it can mean - assuming the design is correct - that they will be secure.

Anyone who is "exceptional" is a threat to the stability of such a company. Exceptional individuals have no place in the smooth running of an organization. They may be useful in research and development, but not on the implementation side.

None of this is a dig, by the way. I'm almost certainly worse at this hoo-man stuff than you are. People can also be considered as exploitable flaws, however, and a bit of introspection does no harm.

People are exploitable flaws. But the biggest risks are in ongoing operations (and the people making those operations go). New equipment can be vetted and tested and verified before being put into service. Any behaviour that deviates from modeled behaviour can be/should be analyzed. Equipment can be deployed in test/simulation environments before going into real ones.
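As a toy illustration of that "deviates from modeled behaviour" check — all numbers here are hypothetical, and real monitoring would model far more than one metric — a simple baseline comparison might look like:

```python
import statistics

def deviates(baseline, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the mean of the modeled baseline."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return abs(observed - mean) > threshold * sd

# Hypothetical baseline: packets/sec observed while vetting a device.
baseline = [100, 103, 98, 101, 99, 102, 100, 97]
print(deviates(baseline, 101))  # False: within modeled behaviour
print(deviates(baseline, 250))  # True: deviation worth analyzing
```

The point isn't the statistics; it's that vetting gives you a model, and anything outside the model gets looked at.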

Individuals responsible for design of equipment should be isolated from those designing testing. Those implementing testing should be separate from those implementing production and from those who designed the tests. Those who deliver the goods should be separate from everyone. There should be a "chaos monkey" group internally whose job it is to try to break things. Talk to Netflix about it and you'll understand the benefits.

But the people who are doing day-to-day production. Who are working the help desk, who have access to backups, administrative privs, commit privs, push privs, deploy privs...all these people are threats. They need to be categorized. They need to be maintained. They need to be well cared for, kept happy and - above all - their activities need to be closely monitored and documented so that if they attempt to screw the company over you not only know about it, but you can replace them at a moment's notice.

That said, that doesn't mean you have to be the evil overlords. You make it clear to people up front that you are a secure environment. They will be monitored. The company doesn't care if they watch porn while waiting for something to break. The company doesn't care if they listen to music or drink coffee at their desks.

The company does have issues with communications with the outside world during office hours unless they agree to allow that communication to be monitored for corporate secrets getting out. If they want to type sexy somethings to their significant other, that's fine: but it's going through the corporate network, not their cell phone, and the content will be analyzed by computers.

Make sure the corporate policy doesn't prevent them from typing sexy sweet nothings, and that corporate policy prevents anyone other than security teams from accessing those messages. Respect privacy as much as possible and provide as relaxed an environment as possible, but make it clear that there are concessions to security.

If they don't act against the company's interests then they are guaranteed a job as long as they perform adequately. If the systems detect them acting against company interests, a specially qualified, vetted individual trained in discretion and personal privacy ethics will examine their suspect events/traffic and determine if they pose a risk to the company. The individual will be informed of the event, and information about whether the detection was a false positive will go back to the algorithm team to make the machine better.

That's the best design I have for keeping operations teams satisfied, but I am still not sure if it manages the balance quite well enough. And it is here that, if there is a failure in my design or a breach in the company, it will occur.

This is why I would personally bring experts in to pick apart various stages of my design.

That said, the design is based on a lot of research: failures and successes of other companies. Every single security expert I've talked to - and most that I've read - is adamant that the biggest risk to any company is ongoing operations. Not procurement.

What's more, the procurement design discussed here ad nauseam is one that aligns not only with the best expert advice, but with game theory as well. I simply do not understand why you seem so obsessed with the idea of compromising devices as opposed to compromising the people who will be safeguarding and using those devices every day.

0
0
Trevor_Pott
Gold badge

Re: That includes the firmware.

Pretty sure that your biggest problem is going to be people

So you design everything in such a way that no one is 100% trusted. Which is what I've been saying all along...and I was really hoping for a great discussion on how one might go about that.

I mentioned earlier that IT knowledge and loyalty-inspiring charm weren't exactly synonymous and we spent subsequent posts proving exactly that. Now you can mitigate that with good working conditions; but that only goes so far.

Where did I claim to have loyalty-inspiring charm? That's a purchasable commodity. A network architect doesn't require it directly. Picard didn't have to deal with children on his ship: he had Riker for that. Etc.

And that goes for parts of the supplier chain you own.

Holy fuck, you're back on that again.

As you said in the article, a project of this type requires leadership and a company-owning leader who pisses people off may quickly end up in a worse position than someone more personable who has never been near the place.

Well, yes and no. You see, grown-ups don't sabotage their workplaces just because they don't like their boss' boss' boss' boss. Good HR can root out most of those who would before they do, and good network design can limit the access of the hoi polloi, with only the most trusted (and well vetted, well compensated) individuals having deep knowledge of more than their segment of the network and/or access to multiple areas.

In addition, quite frankly, when someone is as obstinate or stubborn as you have been - not to mention unable to read - I'd simply let them go.

A better approach might be to send in a personable member of the team to negotiate a fixed-term contract that includes all the data/diagrams that you need.

Uh, no. That would leave your designs in the hands of someone else and raises all sorts of interesting rights issues. The goal is total control. You can't control what's in peoples' minds, but you can bind them by contract not to disclose and you can expunge their access to any design materials when they are not actively working on a design-related project.

and as a bonus, anyone who is trying to attack you by spiking hardware and the like will have to do it all over again at regular intervals; thus making it more expensive

Wrong. It makes it easier for them to do so, because you've created multiple soft targets that have information about your designs. You're far better off controlling the supply chain and still designing your testing and validation regimen to expect potential compromise. Reduce the risk of compromise at the outset, but test for it anyways.

I put it to you (and this is not a snark, although it absolutely would have been if I'd said it 8 hours ago) that "If you own the folks who make the devices" is the worst possible way of approaching things...you're already inciting revolution (or at least mumbles of "fuck that guy rhubarb, rhubarb, rhubarb") before you've even plugged in your first box.

You're really, really bad with people, aren't you? Funny how it seems to be fairly easy to get qualified, talented, relatively loyal people to work for you if you pay them well and get them to work on projects they enjoy. I don't have problems with "rebellion", and hundreds of millions of other businesses don't have problems with rebellion. There are entire disciplines devoted to how to treat people right so you don't have rebellion. You could also - holy shit - listen to your staff regularly and find out if they feel they need anything.

And there's trust (although this may just be a different name for the people problem). You have testing and monitoring kit; but who wrote it? The software stack it runs on? The hardware? Do you need monitoring software to monitor the original monitoring software? Who writes it? And so on.

As discussed eleventy billion times already: multiple independent teams who are given the design materials and tasked with coming up with independent testing regimens. They are not related to the original design team at all.

This is ground I've been over dozens of times. And you're still obsessed with the kit going into the business as the point of attack. Holy wow, man. Holy fucking wow.

<i>Questions along these lines end up in a recursive loop and your brains running out of your nose.</i>

No, they're really quite straightforward. As a matter of fact there are quite a few very simple bits of game theory that apply here. They even give you the optimal number of independent teams, etc. Verifying the supply chain is not hard if you own it. It's really, really not.

<i>For an enterprise of this type, it'd probably be better if you took the Merlin role and appointed someone else to do the King Arthuring.</i>

No, Merlin had to read.

You'd also need someone truly stellar in HR...one of those rare ones who are very good at reading people.

The ability to read English and retain it would be where I'd start. We would then proceed to see how much charisma was required from there...but honestly that skillset is not that difficult to find. There are entire business schools full of them.

0
0
Trevor_Pott
Gold badge

Re: That includes the firmware.

You did say that the network would have monitoring, mitigation and all the rest. First order of business for a potential evil attacker is gaining some sort of access. Thinking of ways to do that and sidestep as much security as possible sounded like a fun thought experiment

I agree that it is a fun thought experiment. I certainly enjoy kicking holes in such designs. I'm merely saying you're attacking the wrong end. If you own the folks who make the devices then guarding against tampering with the devices themselves is trivial.

This means that if you want to attack the network you need to get closer. Your attacks - and your reconnaissance - need to be at the point of deployment, not at the point of procurement.

I'm positive there are vulnerabilities here. There is, after all, only so much isolation and mitigation you can do. Systems have to interact in some fashion. How do you do that so they can, but are still as isolated as possible? How do you do this in a manner that can change in an automated fashion so no one person can know the whole design?

These are the weak points.

My poking at you was to get you to see this. To do a broader security analysis of the design and move your focus away from the easily defended procurement and towards the areas of the network where there actually are real questions.

That would have been a much more interesting debate because there are some very real limits to what's possible with today's technology. Even "new" application designs present problems. Any legacy apps would be huge security holes.

I have some thoughts - application proxies, mainly - but it's the area where my knowledge hits its limits and I would have to bring in a series of specialists to help me work out the fine details.

0
0
Trevor_Pott
Gold badge

Re: That includes the firmware.

What I thought we were doing was wargaming a 'perfect' networking product to look for chinks as a fun thought experiment. Bear in mind that I do not know the size; shape; purpose or likely customers of your product; nor do I have any idea what precautions you are taking. From the above post, I gather it's some sort of data centre. Up until then, I was assuming some sort of mass-produced physical product.

Which is kind of the point. I had mentioned on several occasions in our thread that a) I had never claimed 100% ability to prevent all attacks and b) the purpose of the exercise was to ensure that you, as the implementer of technologies, could ensure that your own network was secure. And that you would do so - in part - by owning the supply chain for your widgetry.

Just a second there matey. You said you had that bit pretty well covered at the start; so I was trying for ways of accessing the kit without tripping any alarms.

No, I didn't. Please go back and actually read the thread. You were arguing your own points and not what it was that I was actually discussing.

Based on what you said in the above post and going with tradition; possibly a combined attack might work...simultaneously getting at one or two of your star-chamber auditors and also trying to spike your monitoring software.

The likelihood of your being able to get both (or more) auditing teams as well as the designers and/or the manufacturing is pretty small. And if you did - and the super secret squirrel one that I threw in for funsies - I can still stand in front of a judge and say "I did everything humanly possible, ma'am." You're up into "living to see the asteroid that wipes out humanity" territory of unlikely there.

Either way, I still accept it as a possibility in my design and there are still countermeasures and mitigation and incident response. Because it's actually possible to implement those things without spending too much (on an enterprise budget).

But again - it depends upon the product and the customers. Dribbling selected data over a time period might be the best payoff; or maybe showing up with a couple of guns and emptying your warehouse into the back of a lorry.

And you're back to "you can somehow pwn my network if you can pwn the stuff en route". Wrong. You'd have to get the stuff en route, get both public audit teams, get the secret squirrel audit team, have the previous version's testing software also have been pwned, and find a way to bypass auto-mitigation and auto-incident response to get the data out. Even if you did, you'd get a small amount of data. That's a whole fuck of a lot of effort for not very much.

Whatever. Anyway, I could return the serve with some personal abuse and aspersions on your professional competence or I could wander off and do something different. On this occasion, I shall go for the latter, I think.

Oh, please, don't leave everyone hanging. Do demonstrate your ability to pwn the proposed design. But, before you do, please actually read what the idea is, so you're arguing what I'm discussing and not your own, completely unrelated idea of what I'm discussing.

Cheers.

0
0
Trevor_Pott
Gold badge

Re: That includes the firmware.

I was talking of spiking your product after it has left the factory en-route to the market. You have to do it sometime and that's a weak point.

Except it's not. If you have known-clean firmware to load on the device after it arrives at its intended location, and a set of test software/hardware devices to look for compromises, then it doesn't matter if someone compromised it en route; you'll be able to detect that, because you know exactly what it is supposed to behave like and you can both load clean firmware and test its behaviour under all conditions.

You seem to feel that owning a company is some sort of ward against evil. I would suggest that that is not the case.

And you're wrong. Owning the company means you own the designs of the hardware and the software. You can have those designs and software independently audited. You thus know what clean firmware should look like and how devices should behave under all circumstances. So even if you are compromised by the workers in your assembly plant, or compromised en route you have the means to test for compromise and even to potentially correct it.

There is no requirement to trust anyone. In fact, if you are doing it right, you don't trust anyone. You establish multiple teams each in different jurisdictions, each serving different masters and with different personality mixes whose job it is to find flaws and compromises. There is no single point of trust or failure throughout the entire procurement process.

You're outsourcing all over the place and each time you do that you're introducing potential holes. It gets to be a fractal coastline thing very quickly; impossible to police properly

Wrong. My procurement design doesn't rely on trusting anyone. It doesn't even need to have many suppliers. Two for any given component will do, each with their own logistics arrangements. Two (preferably three) teams of auditors auditing firmware, designs and creating test suites, each in different jurisdictions. Done. The chances of compromising all of them is effectively zero, and you can always add a hidden auditing team that the rest don't know about for extra paranoia.

When stuff arrives onsite you look for compromises. You absolutely don't trust that it somehow wasn't compromised en route.

If you're assuming an essentially unlimited budget then you have to assume that for your potential attackers. How about if your opponent had a secret chip fab and somewhere to make custom batteries? They could get up to loads of shenanigans

I do assume an unlimited budget for potential attackers. If they compromise things, regardless of whether that compromise is done in firmware or in hardware, a proper suite of tests designed by people with access to the original designs of the hardware and the source code of the firmware will be able to detect the compromise. Doubly so if there are multiple independent teams creating separate tests.

You seem obsessed with the idea of prevention as the sole means of doing security. This means you are terrible at security. Nowhere are you looking at detection, monitoring, mitigation and incident response. The fact that an item might be compromised isn't the end of the world. You simply need to detect the compromise before it is put into production, or during production if it slipped by. You need to mitigate the damage that any compromised item can do, and you need to plan for the fact that you will inevitably miss some and how you will deal with individual breaches.

I can reduce the possibility of compromised equipment being put into my datacenter to damned near zero if I own and operate the supply chain. (Prevention)

I can detect almost all compromises that do slip through by having access to the hardware designs and the source code of the firmware and software. (Detection)

I can further reduce the impact of any compromised equipment by designing my network so that no individual component or piece of software has access to all of my data. (Mitigation)

I can continuously test and monitor all equipment, firmware and software to ensure that it is behaving as expected and immediately trigger alerts if it does otherwise. (Monitoring)

I can immediately lock down my network and trigger security audits and/or alerting of authorities/customers/insurance/etc if a compromise is detected. (Incident response)

You will not catch 100% of incidents this way. You will catch 99.9999%+ of incidents this way. You will also be able to stand in front of a judge and say that you have done literally everything humanly possible to protect your customers' data from harm.

What's more, that's all perfectly doable on the budget of a mid-sized enterprise. None of it requires "unlimited" budget to accomplish. None of it is even particularly hard.

You just have to understand something about information security and best practices, which your comments lead me to believe you do not.

1
0
Trevor_Pott
Gold badge

Re: That includes the firmware.

Except that in the scenario discussed the items are coming from the factory to you for internal use. You can then reload clean firmware (actually, every end user should be doing this for every product anyways) and run your test suite on the product before you deploy.

Since you own the company that designed it and employ the devs that code it and the multiple teams that develop the testing suites there should be no way for tinkering to go undetected.
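A minimal sketch of that "known-clean firmware" check, assuming you hold a golden digest produced from the audited build (the image bytes below are placeholders, not real firmware):

```python
import hashlib

def firmware_digest(blob: bytes) -> str:
    """SHA-256 digest of a firmware image."""
    return hashlib.sha256(blob).hexdigest()

def verify_firmware(image: bytes, known_good_digest: str) -> bool:
    """Compare a dumped image against the digest of the audited build."""
    return firmware_digest(image) == known_good_digest

# Hypothetical images standing in for real firmware dumps.
clean = b"\x7fELF...audited-build-1.2.3"
tampered = clean + b"\x90\x90"  # e.g. bytes altered in transit

golden = firmware_digest(clean)
print(verify_firmware(clean, golden))     # True: matches the audited build
print(verify_firmware(tampered, golden))  # False: quarantine and investigate
```

A byte-for-byte digest is only the first gate, of course; the behavioural test suites described above catch anything that slips in before the golden image was cut.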

1
0
Trevor_Pott
Gold badge

Re: That includes the firmware.

So you get someone in the factory. And? Precisely what do you think they will do there that can't be tested for elsewhere?

1
0
Trevor_Pott
Gold badge

I have recently been told Audi may have been the better analogy and been given a host of reasons why. I am investigating.

0
1

Hacking Team: We're the good guys, but SO misunderstood. Like Batman

Trevor_Pott
Gold badge

Trevor, what do you do for a living?

I say in public things others only say in private.

3
1
Trevor_Pott
Gold badge

Dear Hacking Team,

You are not the good guys. You are evil, sociopathic asshats. I hope you rot in prison for your crimes. Or, at least, that's the politically correct way of saying things. The truth is that I am much, much more hateful towards you. What you have done is evil. Worse, you not only show no remorse for your crimes, you seem to honestly believe that you have done nothing wrong.

There is a part of me that believes you are entirely beyond redemption. There is a part of me that is truly hateful.

Hacking Team: I hope you get cholera and shit yourselves to death.

There, I've said it. It probably makes me a bad person...but if my "badness" comes in wishing ill upon those who have actively sought to help remove rights from the majority of people in the world, I'm oddly okay with my own incivility. May you reap in full everything you have sown.

12
0

Someone at Subway is a serious security nerd

Trevor_Pott
Gold badge

Subway devs employ security by design

About time. I hope the chaps behind this app get bought up by some top end development houses and spread their approach far and wide. Preferably attached to salaries large enough to buy themselves private islands.

Good job those folk.

29
2

Pan Am Games: Link to our website without permission and we'll sue

Trevor_Pott
Gold badge

Re: Okay... let me be the first to do this here...

The first rule of Toronto - nobody can talk about Toronto

Why would anyone want to?

1
0

KERR-PAO! Reddit interim CEO Ellen quits amid Redditor revolt

Trevor_Pott
Gold badge

Re: "she and Altman both criticized the sometimes vitriolic comments leveled at Pao by reddit"

Where did I say leadership was a requirement of her job? She was hired to be the Gestapo. Stamp out the wretched hives of scum and villainy, attract the ire of the hivemind for doing so and get ushered out so that the hivemind would feel they had "won"...all the while Reddit ends up a less controversial place (and friendlier for advertisers) because of her guillotine work.

There were (and sadly, still are) some pretty dark corners on Reddit. She was to clean them up and take the blame for it. She didn't get to finish before this little bit of WTF exploded and they needed an emergency sacrificial lamb.

Remember: it wasn't Pao who sacked Victoria or generally made Reddit crappy. It was Alexis Ohanian. If Ohanian had not been such a goddamned idiot, Pao could have finished cleaning up Reddit then taken the fall, left, and Redditors would feel they'd have struck a blow for free speech and victory and all that good stuff by driving her out.

Now Reddit still has to clamp down on the dark corners of the site but they don't have an obvious fallperson, and Ohanian has proven to be a wild card running around doing completely batshit crazy stuff that ruins everything for everyone.

2
0
Trevor_Pott
Gold badge

Re: "she and Altman both criticized the sometimes vitriolic comments leveled at Pao by reddit"

Her job was in making money from an Internet forum that hosts people's comments, bad or not.

Wrong. Her job was to crack down on Reddit's commenters by banning the worst of the worst and guiding the rest towards a more civilized self-moderating community that wouldn't be so controversial. Then take the fall if they lashed out at her, which they did.

Her job was not to make money from vitriol and hate. It was to stamp it out so that Reddit would be more attractive to advertisers.

4
3
Trevor_Pott
Gold badge

Trump

Upside-down piece of candy corn in a wig made of used medical gauze.

1
0

Canadian dirtbag jailed for SWAT'ing, doxing women gamers

Trevor_Pott
Gold badge

Re: What a difference...

It certainly ought to cut his narcissism down to size and adjust his oversensitivity.

You, like pretty much every other commenter in this thread, know absolutely fucking nothing about psychology or psychiatry.

I'm appalled at the lot of you.

12
2

PLUTO SPACE WHALE starts to give up its secrets

Trevor_Pott
Gold badge

Re: Call me simple

There are thousands of Jovian trojans in the same orbit as Jupiter. Does that constitute "in the neighbourhood"?

No. They aren't orbiting in Jupiter's orbital path except at the Lagrange points. Those asteroids are essentially trapped by Jupiter's gravity. All the rest of the crap that was in its orbital path has been eliminated. There aren't a bunch of free-floating rocks careening out of control, behaving willy-nilly. There are a handful of rocks being dragged around at fixed points by Jupiter itself.

0
0
Trevor_Pott
Gold badge

Re: Call me simple

It seems to me that the current classification system for planets does not give enough detail as to what a planet is and would benefit from the addition of information about size and makeup at the very least.

This. 10,000 times this. There are some basic classifications that are informally used.

Brown Dwarf Star: not actually a star, this is a really large gas giant that outputs substantial heat but does not sustain fusion of hydrogen. Brown dwarfs can be extremely heavy and may well undergo hydrogen fusion in smallish amounts (in addition to the massive amounts of fission they are undergoing) but never quite "ignite" into burning balls of plasma in the sky. Can range in radius from similar to (or smaller than) a superjovian to nearly the size of a red dwarf. May actually have a habitable zone, but many questions about radiation belts and magnetospherics remain.

Superjovian: large enough to output minor radiation and potentially notable heat to its inner moons, but not large enough to be classified as a "brown dwarf star". Heat is primarily fission-based, magnetodynamic (depending on stellar flux), gravitational, and left over from its accretion. Can be absolutely huge planets, but don't start getting much bigger than Jupiter in radius until they've gotten to be about 80 Jupiters in mass. Gravitational interaction with most moons will cause internal heating.

Jovian: Gas giants. They range from large but "fluffy" and not very dense planets (like Saturn) to several times Jupiter's mass. They do not emit any noticeable radiation or heat past their Roche limit; however, they are not massive enough for their magnetosphere to deflect stellar radiation completely outside their likely moon orbits. Thus most moons will pass through a belt of deadly radiation similar to the Van Allen belts here at Earth. Gravitational interaction with larger moons may cause internal heating of those moons.

Neptunian: These planets have a roche limit that is far enough away from the planet that many planets this size will be capable of supporting ring systems. Planets are large enough to capture smaller terrestrial planets as moons. Gravity is high enough to hang on to hydrogen in the atmosphere. Expect ammonia in the atmosphere as well as water.

Superterrestrial: Planets larger than Earth but unable to hang on to hydrogen unless it is bound into a heavier molecule like water or ammonia. May have ring systems but highly unlikely. Unlikely to have captured moons. Can potentially be habitable. Very thin atmosphere compared to Neptunians.

Terrestrial: Planets just large enough to hang on to an atmosphere. Like all planets larger than it, silicates and metals form the planet's core/mantle/crust. Very thin atmosphere. Marginally habitable in that it is continually bleeding away lighter gases into space. These planets require life to be able to continually recycle molecules in order to have an atmosphere that contains lighter molecules that are critical to complex life.

Special case terrestrial: Metal Planet. Metal planets can form very near a star, where metals are hyper-concentrated in the core of a large gas giant as part of the regular formation process. The gas giant's atmosphere is then blown off (typically by the sun expanding to engulf the planet as a red giant for a billion years or so) and the metallic core is left behind, typically orbiting a white dwarf.

Subterrestrial: rocky planets that are too small to hold on to much of an atmosphere at all. Likely to cool after only a few billion years and not have much of a magnetosphere. May briefly be habitable. May briefly sustain a hydrosphere.

Dwarf: smaller than a subterrestrial. Not likely to ever have a hydrosphere. Not likely to ever have much of an atmosphere. (Minor outgassings and capturing of an inch or two of solar wind aside.) Dwarfs are separated from subterrestrials mostly because of density. They may be nearly as large as a subterrestrial, but they contain a lot more ices. Some dwarf planets at the fringes of a system may be mostly or all ices.

The big issues are drawing firm lines between the classifications. While in use by many astronomers, formal definitions require drawing arbitrary lines and this causes much consternation and debate.

Plus ça change...

1
0
Trevor_Pott
Gold badge

Re: Call me simple

Vesta, Ceres, Orcus, Quaoar, Eris and Makemake are, off the top of my head, round. Eris is bigger than Pluto. Sedna is suspected to be round. Haumea is an oval. Eris, Haumea, Quaoar and Orcus - at a minimum - all have moons.

Do we have 8 planets and many dwarves? Or do we have dozens of planets? With 8 planets + dwarves we accept that "planet" comes in gradations and we start to be able to reasonably classify them in a useful fashion.

With "there are dozens of planets" we're just lumping anything that happens to be round and orbiting a star - as opposed to another planet - as a planet.

For that matter, Pluto isn't a planet even by the "forget the orbital path clearing" criteria. Vesta, Ceres, Orcus, Quaoar, Haumea, Eris and Makemake all are, but Pluto isn't. It's a double planet. Or a double moon, depending on how you want to look at it.

Pluto and Charon both rotate around a common barycenter that is outside of Pluto proper. Pluto is Charon's moon. Charon is Pluto's moon. Add in Styx, Nix, Kerberos and Hydra and it's not really so much "planet" or "double moon" as "pile of rubble that somehow hasn't all collapsed in on itself yet".
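The barycenter claim is easy to sanity-check with rough published figures for the masses and separation (approximate values, good to a couple of significant figures):

```python
# Rough figures: masses in kg, distances in km (approximate published values).
M_PLUTO, M_CHARON = 1.303e22, 1.586e21
SEPARATION_KM = 19_570      # Pluto-Charon centre-to-centre distance
PLUTO_RADIUS_KM = 1_188

# Distance from Pluto's centre to the Pluto-Charon barycenter.
barycenter_km = SEPARATION_KM * M_CHARON / (M_PLUTO + M_CHARON)
print(round(barycenter_km))             # roughly 2,100 km
print(barycenter_km > PLUTO_RADIUS_KM)  # True: the barycenter sits well above Pluto's surface
```

Compare Earth and Luna: the same arithmetic there puts the barycenter well inside Earth, which is why nobody calls Earth-Luna a double planet.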

Now let's have a conversation about Luna. And the Galilean moons. And Titan. And Triton. These are all worth considering as "planet-sized" bodies in some definition or another. Titan has a denser atmosphere than Earth. Triton is almost certainly a "captured planet". Hell, Saturn alone has 7 moons in hydrostatic equilibrium!

Being round does not make a ball of space debris special.

Anywho. The solar system, eh? Endless wonder. Endless debates to have about categorization and classification.

4
0

Brit teen who unleashed 'biggest ever distributed denial-of-service blast' walks free from court

Trevor_Pott
Gold badge

Re: @Ledswinger Serious.

If he is receiving leniency because of mental illness then the public has a right to know the details

Not if he was a minor while the crime was committed.

6
1

Forums