Re: seti @ home
Explain to me exactly how the NSA is changing the readout on this in real time to match their usage?
Your power bill is how.
Capitalism doesn't work so much as "devolves". But that's a discussion for another time...
"Why claim victory for "server SAN" instead of the broader category, except as a marketing move?"
A) "Server SAN" was coined by Stuart Miniman. An analyst, not a marketing bloke, because we needed something to call "lashing together storage from multiple servers and presenting it to the cluster" that was shorter than "lashing together storage from multiple servers and presenting it to the cluster".
B) Because not all "clusters of storage lashed together" are the same. Object storage is, for example, going its own way, despite being something we could reasonably call a "server SAN" as a technicality. The big money isn't in object storage. You can't charge the big margins for it. It's things like VSAN, Nutanix, etc that are winning out and going to form the "default" for enterprise workloads.
C) I calls it like I sees it. Arrays, clusters of arrays and "lashed together clusters of storage that don't run workloads on the same nodes as the storage" are simply not winning out over more modern "hyperconverged" (and boy, do I loathe that term) setups.
And it's enterprise workloads that matter, mate. They're where the money is. They're where you get the margin.
5-10 minutes is a long way from a half hour. Also: for the record, you don't tend to see such long boot times in server SAN boxes, because you don't need RAID cards so complicated that they have to load an OS from the future when they init.
Of course, you could also be seeing extended boot times because you're using 1.5TB of RAM and doing extended memory tests on each boot. Dell in particular seems to like really indulgent mem tests.
Compute workload *on*? Oh, you miserable, non-tactile touchscreen craptasm! Why isn't there an edit button in the mobile UI?
The mainframe is ultimately what everyone wants, but IBM refuses to price it reasonably.
Server SANs are hyperconverged. Lash together storage from multiple servers, present it to the cluster as shared storage. Or, more to the point, hyperconvergence is one of the possible means by which a server SAN can manifest.
Server SANs can be done without running a compute workload on the same node. Then it's not hyperconverged. Run a compute workload on the node and it is hyperconverged.
Marketing terminology. It are the dumbs.
Who the metric fuck has the resources to fork an app that's a half million lines of code? What about once it's a million? Two? Ten?
Past a certain point, the fact that it's GPL means nothing.
You are basically making the argument that the American people have power over their government because the second amendment gives them the ability to carry around AK-47s. Ignoring completely the part where the government has everything from tanks to helicopters to drones, and there isn't a hope in hell of "the people" rising up against the government without being beaten back down as domestic terrorists.
The same basic principles apply to open source. Past a certain point, the complexity, integration and sheer size of a project make it functionally impossible to fork, unless you can convince the overwhelming majority of the original developers to move to the fork. (LibreOffice, MariaDB, etc.)
SystemD is a fucking cancer. The goal behind its ongoing metastasization is nothing more than control of the Linux ecosystem in its entirety. You can lay down any excuses you want, attempt to dismiss any criticism with a half truth or a technicality, but you can't change the fact that the thing has been rammed up our asses against the express wishes of a huge chunk of the community and with the developers openly and proudly refusing to engage with the community about any aspect of either design or implementation.
It's the same arrogant egofuckery that the Gnome team displayed and it has no fucking place as a hydra-like unkillable monster at the core of open source upon which everything is expected to hang.
There is room for exactly one Linus Torvalds in the Linux ecosystem, and he's right where he belongs: keeping the kernel sane.
We don't need a second goddamned kernel running in userland.
"It is still possible for systemd to be a welcome part of Linux, if the project can listen to the users, respond and change accordingly. Are they capable of that?"
Emphatically and overwhelmingly no. Which is - above and beyond all other reasons - why so many don't want this shitpile of a viral fuckup on their systems.
It might have been different, had the people in charge of systemd/gnome not been in charge of these projects.
"If this is the beginning of the end for Debian and Devuan doesn't get enough traction to be a viable alternative what is the consensus on the best alternative?"
Slack and Gentoo are both probably going to succumb in the next year or two. The BSDs are moving to launchd, which shares many similar issues with systemd.
If Devuan fails, we're pretty fucked.
"If systemd is so bad... ...then why are so many distros using it?"
It's called blackmail.
RedHat are behind the whole thing. They spend the money that makes a lot of critical pieces of your average Linux distribution work. Now those things won't work without systemd and/or getting them to work without systemd is a right bitch/there are roadmaps to make them not work without systemd in short order.
The short version of this whole thing is that Poettering - and with him, RedHat - are trying to take the kernel away from Linus Torvalds. They are doing so by creating another kernel in userland that everything depends on. Once they have enough stuff jacked into Poettering's matrix, they'll use it to leverage Torvalds out of the picture and finally take the whole cake for themselves.
Systemd is nothing more than a cynical play for domination and control of the entire Linux ecosystem. To "own the stack" of a modern distro. And since RedHat has managed to co-opt so many core projects, there is precious little to stop them.
"Linux" as we think of it today is on life support. Android/Linux and systemd/Linux are now looking to be the two dominant entities. Traditional Linux - one that adheres to the Unix philosophy - is all but dead. Hopefully, Devuan can save it.
"IF the leaders of GNOME & systemd were more community minded, this would have never happened. Unfortunately, it's hard to see how this will resolve itself, it would require adult behavior & conciliation, both of which are sadly lacking."
If the leaders of Gnome and systemd were community minded we would have ended up with a rational compromise: something that was a proper (and badly needed) replacement for SysV init, but didn't have binary logs, a registry, tentacle dependencies and wasn't trying to become a userland kernel.
It would have been a Good Thing, and we could all move forward happy, and in harmony. Unfortunately, Poettering is an ass and he leads a small nation of other asses on their grand quest to "pacify" the fuzzy wuzzies.
"The Devuan developers are going to have to address the resource issues as noted above. Good luck to them though if they succeed and give some people what they want. "
I've donated. Money where my mouth is, etc. I can only hope the rest of the community follows. Those of us who can't code still have our part to play in keeping Linux free/libre.
Where/how did you pull from anything above that the application or guest OS should be the one moving things around? Whaaaaa?
"Watch this space". ;)
If you've questions after the fourth article is out, I'll be glad to fill in the gaps. Cheers!
There are two more installments in this series.
But, if I may, please understand that I am aware that there is more to our industry than "Trevor Pott and people who think like Trevor Pott." I absolutely won't be using Docker until it has things like FT, HA and vMotion...but I am largely a keeper of "legacy" workloads. Traditional applications; not the sort that are optimized for, for example, cloud computing.
Those with money - startups with VC funding, governments and enterprises - absolutely are rewriting extant applications to take advantage of "cloud" technologies. These recodes translate almost directly to being "good for Docker". Also, a huge chunk of all new application development follows that model.
In the old world of the sorts of "legacy" applications that I herd - back when applications were applications, not "apps" - you would have a few components to worry about: file storage, database, the application itself and the client. Eventually "the application itself" and "the client" became more or less one thing as things went to web-based applications. But we still had these three things that absolutely had to be up 100% of the time. Both scale up and scale out were (and remain, for us legacy herders) A Great Big Bitch Of A Problem.
"Redundancy" comes from VMs, or from NonStop servers. The database isn't allowed to go down. Clusters only work if your database app supports it, and you usually have to convince the vendor (who may not even exist anymore) to recode some chunk of the app/database. If the devs that are left even know how to do that!
Modern "apps" are totally different. They're built from the start to be able to scale out and up. They can collapse down to one core copy of the DB/Files/App or scale out to thousands.
In a modern app, you only need to keep the master copy safe. Everything else can spawn some unlimited number of copies as needed.
In Theory. Truth is, doing so in practice is still A Bitch, but it's not quite A Great Big Bitch.
Then along comes Docker. Docker makes the "scale out" part of the modern apps thing Creepy Meerkat Simples. That's grand. So if you want to run Netflix-class infrastructure, you can basically put your core stuff on a NonStop server, then spawn unlimited Docker instances out on a bunch of cheap metal. AMD SeaMicro, HP Moonshot or Supermicro MicroCloud, anyone?
Voila: a use case for the next generation, albeit not one that I will myself be using any time soon. And - just by the by - it's a use case worth hundreds of billions of dollars. Mind you, so is keeping those legacy workloads running.
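For illustration, the "spawn unlimited Docker instances" bit really is just a loop over `docker run`. This sketch only prints the commands it would issue - the image name `example/webapp:1.0`, the container names and the ports are all made up - so it doesn't need a Docker daemon to run:

```shell
#!/bin/sh
# Sketch: generate the commands that would spawn N identical, stateless
# front-end containers from a single image. Nothing here touches a real
# Docker daemon; it just prints what the scale-out loop would run.
N=3
CMDS=""
for i in $(seq 1 "$N"); do
  CMDS="${CMDS}docker run -d --name web$i -p $((8080 + i)):80 example/webapp:1.0
"
done
printf '%s' "$CMDS"
```

Real deployments wrap this loop in an orchestrator, but the core idea - one image, N disposable copies - is that small.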
Our industry is diverse. And Docker has added to the choices before us. It is only one tool in the toolbox. It is kinda odd and non-standard to us old fogies...
...but I promise you, it will be the #2 Phillips screwdriver of the generation that's just cutting their teeth today.
Edit: let me add that the above should read: containerization will be a multi-billion-dollar industry. Whether or not Docker specifically wins this war is as yet undetermined. That said, containerization's time has come.
@J3, wipe them out. All of them. Viral warfare. Fire. Destruction of the planet. I don't care what it takes!
"Very well, please explain to several species why they are ordained by our idiocy to extinction. Then, explain to those who predate upon them for survival."
Okay, I will. Dear things that eat mosquitoes, and only mosquitoes: you had to die because your food source was a massively annoying vector for various horrible diseases. So sorry.
See, that wasn't hard, now was it?
"Most of the SMBs I know aren't running VMs even now, they run their applications on a miscellany of elderly hardware held together by good fortune and the occasional visit of the part-time finance director's second cousin."
Then you don't know SMBs, period. "Small to medium business" covers 1 to 1000 seats, generally. With enterprise being above 1000 seats. (Depending on which government is doing the counting.) The bulk of those companies are in the 50-250 seat range, and as an SMB sysadmin by trade, I promise you they've been virtualised for some time now.
"And as soon as containerisation makes it properly to Windows, those people will be taking it up in their droves, because there's no point running multiple OSs if you don't have to - even just from a licensing point of view."
Wrong again. Ignoring the rest of your prejudiced (and false) remarks, you don't understand at all <i>why</i> most companies use VMs. It is to obtain the benefits of redundancy, reliability and manageability (including snapshots, backups, replication, live workload migration during maintenance, etc) that hypervisors provide. Containers, at the moment, don't provide that.
SMBs want far more than just the ability to run the maximum number of workloads on a given piece of hardware. They want those workloads to be bulletproof. They need them to be something that can be moved around while still in use because there aren't any "maintenance windows" anymore. There's always someone remotely accessing something. That's just life today. Hell, that was life 5 years ago. It's like you have a picture of SMBs stuck in a time warp from 1999 and you imagine that they've never evolved.
"Containers aren't just a packaging technology - they depend on the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor."
Everything depends on "the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor". Whether running on metal in its own OS, in a container, or in a hypervisor. I don't understand how this precludes containers from being "just a packaging technology".
"And while Docker may have a little way to go (but I think 30 months rather than 30 years will see some big changes), I think you'd have a hard time persuading the people on non-x86 hardware that their WPARs and Zones are manifestly harder to work with than VM solutions."
No, I wouldn't. Because you are completely ignoring the desired outcome portion of the equation. Containers provide what companies desire when the hardware underneath the container provides the required elements of high availability, workload migration and continuous uptime during maintenance. Run containers on an HP NonStop server or an IBM mainframe and you get all the bits you want while getting the extra efficiency of containers.
But, shockingly enough, most businesses don't have the money to spend $virgins on mainframes or NonStop servers. So they use hypervisors to lash together commodity hardware into what amounts to a virtual, distributed mainframe. They then package up their applications in their own OSes and move them about.
Are containers relatively easy to deploy and somewhat easy to manage? Sure. I'll even go so far as to say they're way easier to deploy than VMs are, but I will remain adamant that VMs are currently easier to manage. What you're missing, however, is that hypervisors democratize all the other things - portability, heterogeneity, high availability and so forth - that are requirements of modern IT. Containers don't provide mechanisms for that, unless you burn down your existing code bases and completely redesign.
"Even IBM praises the benefits of WPARs (containers) over LPARs (hypervisor) in the majority of use cases, even though it supports both and the latter has rather more hardware support than the typical x86 VM. I can't really improve on their reasoning:"
Of course IBM is touting WPARs over LPARs. They sell the pantsing mainframes that make containers a viable technology. And they only ask for the firstborn of your entire ethnic group in order to afford it!
"Better resource utilisation (one system image)"
Nobody is debating this one. Containers are more efficient.
"Easier to maintain (one OS to patch)"
One OS reboot takes down 1000s of containers. Also, you get the lovely issue of having to deal with workloads that may react badly to a given patch being mixed in with workloads that might need a given patch, all running on the same OS instance. Funny, container evangelists never talk about that one...
"Easier to archive (one system image)"
Oh, please. We're not using Norton Ghost here. Ever since Veeam came along nobody in their right mind has had trouble doing backups, DR or archives of VMs.
"Better granularity of resource management (CPU, RAM, I/O)"
That depends entirely on how shit your hypervisor is. Funnily enough, VMware seems to be quite good at providing granularity of resource management.
So, of IBM's four-point path to victory, the only thing that really shines as rationale is "efficiency". And it carefully sidesteps some pretty significant issues ranging from price (we can't all afford mainframes or NonStop servers) to manageability (1000 workloads sharing a single OS can actually be less desirable than, say, 10 OSes, each with 100 workloads.)
@dan1980 except you leave out the part of the Office 2007/2010 licensing that pertains to remote/VDI usage whilst attempting to sing Microsoft's praises for the licensing of that product. Though you at least acknowledge the media-based madness of the time.
Microsoft doesn't do "rational".
When, in the history of our industry, has Microsoft licensing been connected to rational thought?
Let's have that conversation after they've finished integrating Docker into the OS and decided on licensing, eh? I've heard so many different things out of Redmond that I can't put credence to any of it, tbh.
And yet, in practice, for the workloads I run, I see only 10% improvement. I wholeheartedly believe that for certain workloads, you can get a 10x improvement. Hell, why not. Do you know what I can do with benchmarking tools, when motivated?
But the question isn't about the headline improvements. It's about the average improvements for everyday workloads, for average companies.
Also, along those lines, I absolutely do not buy the latency claims you state. I run a fairly significant lab full of stuff and I have been testing virtualisation and storage configurations for about 4 months solid, 8 hours a day. Every configuration I can get my hands on. From various arrays to hyperconverged solutions to local setups. I've run the same workloads on metal. I've used SATA, SAS, NVMe, PCI-E and am getting MCS stuff in here sometime in the next couple of weeks.
The long story short? Take the ****ing workload off the network and you get your latency back. And there are plenty of ways to go about doing that today. I suspect you'd be shocked at the kinds of performance I can eke out of my server SANs, to say nothing of the kinds of performance I get when using technologies like Atlantis USX, Proximal Data or PernixData!
I respect that you have found a way to use containers to great effect, sir. Yet I find I must humbly submit that your use case of them may well be abnormal when we consider the diversity and scope of workloads run by companies today.
I think it is fairer to say that under the right circumstances, containers can deliver manyfold increases in density and perhaps even performance; however, they are not likely to deliver this for all - or even most - workloads today. Containers are something you have to build for, just like the public cloud. With the advantage that many public cloud workloads can be migrated to containers with relative ease.
We still don't know how licensing will shake out. I suspect every Docker container will require an OS license from Microsoft. Otherwise, I agree with you 100% here, sir.
Some problems with your viewpoint:
1) Not everyone values efficiency over ease of use or capability.
2) You give up a lot of ease of use and capability in order to get the efficiency gains of containers.
3) Legacy workloads will take decades to go away.
4) Many/most legacy workloads, as well as a significant portion of new workloads (for at least the next decade of coding) will not have application-based redundancy or high-availability. They'll rely on a hypervisor to provide it.
What you say only makes sense if you assume everyone is going to move to public-cloud style scale-out workloads. We're not all Facebook/Google/Netflix. Industry-specific applications don't make that jump well. Ancient point-of-sale apps, CRMs, ERPs, LOBs, OLTPs, etc won't make that jump...and migration is crazy expensive.
That's if you can convince a company that is absolutely dependent on a 30 year old POS application whose every quirk they know by heart that they should ditch it and migrate to a new one. Because...Docker?
There are 17,000 enterprises in the world. Maybe a few hundred thousand government agencies that could be considered as large as one. There are over a billion SMBs in the world, and they're not going "web scale" with their applications any time soon.
Hypervisors displaced metal because they offered immediate benefit without being too disruptive: you didn't have to recode applications for them, you didn't have to really make huge changes of any kind. As infrastructure got denser, datacenter designs changed, but that was dealt with as part of the regular refresh cycle.
Docker, like public cloud PaaS scale-out apps, requires burning what you have down and restarting. Maybe one day, 30 years from now, containers will have displaced hypervisors. If so, I will bet my bottom dollar that the containers of 30 years from now look a hell of a lot more like a hybrid between the containers of today and the hypervisors of today than just a straight up continuation of the current container design.
Good questions. I will do my best to answer.
"Why should containers do this, surely that is the job of the hypervisor?"
Containers shouldn't do this. They will likely try anyways, as a great many of those who are invested in containers and adjacent technologies want to see them replace hypervisors.
The argument goes that containers are so much more efficient than hypervisors that we should do away with hypervisors altogether and use containers for everything. Based on that, we'll either have to throw all our workloads away and recode everything (not bloody likely, especially since there are any number of workloads that can't be "coded for redundancy" in the container/cloud sense) or containers will have to evolve to add these technologies.
As it stands now, I know a number of startups, still in stealth, working to bring hypervisor-like technologies (HA, vMotion, etc) to containers. We are at the beginning of mass-market container adoption, not the end. Thus the technologies the mass market enjoys for its current workloads will have to be recreated in the new one, just as they have been in every technology iteration before this.
Does this mean it's rational or reasonable to do so? I argue no. If I run 4 VMs on a node and each VM contains 100 Docker workloads, I am giving up a small amount of overhead in order to virtualise those 4 Docker OSes, but gaining redundancy for them in exchange. Meanwhile, I can load up that server to the gills and keep all its resources pinned without "wasting" much because I'm using containers.
To me, it makes perfect sense to have a few "fat" VMs full of containers. Resilient infrastructure, high utilization. But this is considered outright heresy by many of the faithful, as - to them at least - containers are about grinding out every last fraction of a percentage point of efficiency from their hardware.
"Is it the cost of requiring both that keeps you from using containers 'till the above condition is met?"
No, it is the pointlessness of using both that keeps me from using containers. Right now, today, most of my workloads are legacy workloads. They don't convert into these new age "apps" that "scale out", as per Docker/public cloud style setups. If I want redundancy, I need a hypervisor underneath.
So, I could do what I talked about above and put my workloads in containers and then put the containers in a hypervisor. That would increase the efficiency of about 50% of my workloads, ultimately dropping the need for an average of two physical servers per rack where a rack contains about 20 servers.
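Back-of-the-envelope, using only the numbers in the paragraph above (nothing measured here, just the claim restated):

```python
# The post's own figures: a ~20-server rack, and containerising roughly
# half the workloads frees an average of 2 physical servers per rack.
servers_per_rack = 20
servers_freed_per_rack = 2

fraction_freed = servers_freed_per_rack / servers_per_rack
print(f"hardware freed per rack: {fraction_freed:.0%}")  # prints "hardware freed per rack: 10%"
```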
That's a not insignificant savings, so why not jump all over this?
1) Even with the workloads that can be mooshed into containers it will take retooling to get them there. It is rational and logical to move them into containers, but that is a migration process akin to going XP --> Windows 10. It takes time, and is best done along the knife's edge of required updates or major changes, rather than as a make-work project on its own.
2) If I start using containers I need to teach my virtualisation team how to use these containers. That's more than just class time; it takes some hands-on time in the lab and the chance to screw it up some. That is scheduled, but I'm not going to adopt anything in production until I know that I can be hit by a bus and the rest of the team can carry on without me.
3) Politics. Part 4 of this series will talk about the politics of Docker. Not to give anything away, but...the politics of containerization is far from settled. I don't want to be the guy who builds an entire infrastructure on the equivalent of "Microsoft Virtual Server 2005" only to have all that effort made worthless a year or two later. Been there, done that.
4) 2 servers out of 20 isn't world-changing savings for me. Oh, that's Big Money if you're Rackspace, but at the scale of an SMB that only has a few racks worth of gear, there's an argument to be made for just eating the extra hardware cost in order to defer additional complexity for a year or two.
Really, in my eyes, it's Docker versus the cloud...sometimes in the cloud.
If I was building a brand new website today, I would have a really long think about Docker. Do I use Docker, Azure, AWS, Google or someone like Apprenda?
The choices are likely to be informed not by the technical differentiators between these platforms, but by business realities: Freedom of Information and Privacy regulations, marketing success around Data Sovereignty, and the cost and availability of managed workloads.
Do I run my workload in one of the NSA's playground clouds, pick something regional, or light it up on my own infrastructure? Is the particular set of applications I am looking at deploying into my Docker containers available from a source I trust, and likely to be updated regularly, so that I can just task developers to that application and not have to attach an ops guy?
New applications make good sense to deploy into Docker containers. And Docker containers in the hands of a good cloud provider will have a nice little "app store" of applications to choose from.
But if I am lighting them up in the public cloud, do I really care if it's in a Docker container? Those cloud providers have stores of stuff for me to pick from in VM form as well. And I don't care if what I am running is "more efficient" when running on someone else's hardware; that's their problem, not mine.
I'm not against using Docker in the public cloud, but I see no incentive to choose it over more "traditional" Platform as a Service offerings either. If for whatever reason we decide the public cloud is the way to go, I'll probably just leave the decision "Docker/no Docker" up to the developers. The ops guys won't really have to be involved, so it's kinda their preference. I really don't care overmuch.
So from a pragmatic standpoint I really only care about Docker if it's going to run on my own hardware, either as part of my own private cloud, or as part of a hybrid cloud strategy. As we've seen, there are layers of decision-making to go through before we even arrive at the conclusion that a given new workload is going to live on my in-house infrastructure. But let's assume for a moment we've made those choices, and the new workload is running at home.
This is where we loop back to the top and start talking about inertia.
All my workloads are on my own private cloud already. They're doing their thing. If I don't poke them, they'll do their thing for the next five years without giving me much grief. My existing infrastructure is hypervisor-based. My ops guys are hypervisor-based.
If I simply accept that - if I give in to the laziness and inertia of "using what I know and what I have to hand" - then my new applications require no special sauce whatsoever. I can let the hypervisor do all the work and just write an app, plop it in its own operating system, and let the ops guys handle it. What's one more app wrapped in one more OS?
Change for change's sake offers poor return on investment. So for me to move to Docker there has to be incentive. Right now, today, at the scale I operate, the ability to power down 8 servers isn't a huge motivation. I could pay for the electricity required to light those servers up by writing 4 additional articles a year.
Two years from now, I may have a dozen applications in the cloud, all coded for this "scale out" thing. I may have gotten rid of one or two legacy applications in my own datacenter and replaced them with cloudy apps. Five years from now...who knows?
It would be really convenient for those new applications to be written to be Docker compatible, scale-out affairs that provided their HA via the design of the application rather than the infrastructure. But I don't know for sure that Docker will be the container company that wins.
For that matter, the hypervisor/cloud companies could see Docker as a threat in the next two years, declare amnesty and agree to a common virtual disk format.
Docker offers a means to make my apps more-or-less portable. Ish. As long as there isn't too big a difference between the underlying systems, they'll move from this server to that one, from private cloud to public. If I kept the OSes at the same patching levels on both sides, I could move things back and forth...though not in an HA or vmotion fashion. That has some appeal.
But a common virtual disk format would allow me to move VMs between hypervisors and from any hypervisor to any cloud. Were this to happen, I'd really lose most of my incentive to use Docker. At least at the scale I operate.
All of the above is a really roundabout way of saying this:
Docker is cool beans for big companies looking to make lots of workloads that require identical (or at least similar) operating environments. (See: scale-out web farms like Netflix.)
Hypervisors are just more useful to smaller businesses.
I'm way more likely to care about a technology that lets me easily move my workloads from server to server and from private cloud to public cloud (and back) than I am one that will let me get a few extra % efficiency out of a server. Docker could do this one day. So could hypervisors. Neither really does it today.
Hope that helps some.
"You're not "giving up" high-availability - that's provided by the design of your production infrastructure and software architecture."
Which is exactly what I said.
Containers let you get more workloads per node, but they don't of themselves give you a means to provide high availability for those workloads. You either use production infrastructure (hypervisors or NonStop servers) which provide you redundancy, or you redesign your applications for it.
Thus containers are not competition for a hypervisor. They are not an "or" product. They are an "and" product.
Considering the hype and marketing of the Docker religious, that absolutely does have to be said to the world, spelled out explicitly and repeatedly.
Seems to me they're the only thing restraining the excesses of your xenophobic and increasingly authoritarian police-state worshiping rulers, yes.
"The art of what I need to do is make it look like something magical is being achieved (the perception bit) "
This is fucking exhausting. It's a full time job in and of itself. If you're an introvert by nature, well...you'll end up burnt out. Sad, but true.
"the techies will always be needed regardless of this business/social skills bullshit"
Why? What do you do that can't be automated?
To be fair, "a large busted red head able to operate a laptop" is welcome anywhere and everywhere.
More'n just that. Companies want you to devote your life to them. Bleed their colours, think always with the company's interests in mind. They promise you everything from stock options to raises, from a seat at the table of decisions to little things, like being allowed to put a fish tank in the office.
But everything they say is a fucking lie. What they want is the most possible work out of you, with absolute devotion for the least amount of money. And when you're burnt out, and you can give no more, they discard you.
The problem is that businesses have an entitlement issue. They expect absolute devotion from their employees whilst offering nothing in return. The only way they retain staff is to keep them so busy they can't realistically go forth and look for a better job.
That's not "us" versus "them", because they never even consider "us". We simply don't occur to them. Not on that level. Not even on a level enough to keep promises.
And then you leave, do something else, build your own office and get the goddamned fish tank.
The man is literally - not figuratively - the prototypical gentleman and scholar of our era.
"On the contrary, if I were to meet him (and recognise him), I would stop him, shake his hand and thank him very much for all that he has produced and to please keep on doing what he does so well."
Very much my reaction when I got to meet the fellow behind Ninite.
I don't know as I'll ever get the honour to repeat that experience (meeting one of me heroes), but should I ever meet Mr. Munroe, I will try to be less of a gibbering idiot than I was with Mr. Kuzins.
As with Mr. Kuzins, I expect that meeting Mr. Munroe would be an experience wherein the ancient axiom "never meet your heroes, you discover they have feet of clay" would not apply. I expect, from all that I have read, that Mr. Munroe is as humble and awkward, voraciously curious and stunningly intelligent as he is reputed to be.
And that, if nothing else, he'd accept my heartfelt gratitude. Not for the wit, or the feeling of belonging, or the unity across the globe with other nerds his works have provided. Not for the humour, the intelligence, or even for taking the time to let me thank him.
I expect he'd understand when I said that it is knowing that someone else in this world deals best with confusion, grief, fear, loneliness, anger and even despair through art, thought and seeking to help others. Knowing that Mr. Munroe - who may well be one of the brightest minds of our generation - has the same reaction as I do when confronted with these emotions makes me feel less alone.
He's a private guy. I respect that. He has, however, shared with us glimpses into the difficulties with which he struggles. That he can continue to be witty, humorous, helpful and kind through all of that is of itself amazing.
That he occasionally lets us see the humanity of the artist makes me feel less alone.
My apologies for an inability to express myself appropriately. Better to be mushy here where he's unlikely to read it than to gibber at the man in person.
"Are you seriously trying to tell us that the coders behind the NHS patient records debacle or the MOD procurment process are somehow better then a couple of gifted and motivated amateurs? In fact, can you name a single piece of not-shit software that can be credited to a nation-state? Even BT's fucked up Phorm was written by a private entity."
As a general rule, the "not shit" coders working for a state end up working for the spooks, or the banks. Unimportant things like health care get the mediocre of what's left. They don't really pay all that well to code for health care.
"Hundreds" of infected machines. Why tap the undersea fibers? Get some men in bright vests to dig a hole in the ground outside the company in question and just put the taps in there. Then the packets aren't leaking across the internet for everyone to see.
"The 451 organisation surveyed 1,000 users"
Were they all American?
You're both wrong and you're both right.
Tablets will sell in large volume. Tablets will sell for less money. The number of tablets is flat-to-growing. The sale price of tablets is plummeting.
I can say with about 75% certainty it's a problem with Chrome's Flash player. Right click on the "new tab" button and select "Task Manager" in Chrome. You'll see which tabs are eating the CPU. In every case where Chrome turned to crap for me, it's been tabs with Flash. Even though I have FlashBlock on, the fact that there's a Flash element somewhere in the source makes those tabs go nuts.
But it's not consistent. It can go days without a problem. Then one day, I take the notebook out of sleep mode, and *bam*, the problem's there. Exit Chrome, restart, let all the tabs reload, and 50/50 the problem will recur. Reboot the system, restart Chrome, and the problem's gone again for a few days.
But always, it is the flash tabs that start this chain.
"Riiiiiight. So what exactly do you mean? ADFS from Microsoft, PING, CA, Oracle and to some degree Shibboleth have been translating AD authentication to the cloud via standardized federation protocols for over a decade. These were not lackluster attempts at separate online authentication systems. Federation solutions from vendors have been working with AD for a long time and a LOT of people implemented them."
These solutions have mostly been translating between enterprise applications and/or customer built stuff (including websites that could, yes, be hosted "in the cloud") and AD. They are not generally "cloud based authentication systems" that then tie back to AD. Rather, they take the opposite approach, living mostly behind the corporate firewall and then extending a tendril to hosted applications one at a time.
The goal behind these sorts of applications is "single sign on". Basically what FIM was supposed to do, but never quite got right. They still (generally) rely on either loading client software onto clients or having client systems connect to the corporate network via VPN, etc.
Azure Active Directory takes a different approach. It basically hangs the authentication system out on the internet and says "address me via API from wherever you are." Instead of custom coding each interconnected tendril into each third party app, website or so forth, Microsoft expects everyone else to code for AAD. And they'll probably do so.
But it also means that when you combine it with technologies like Direct Access, the whole concept of having to manage a client system with agents, VPNs or other tools of "get thee behind my corporate firewall" goes away. Everything lives facing the internet, and the internet becomes the common point of communication, not the corporate network.
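To make "address me via API from wherever you are" concrete, here's a minimal sketch of the kind of OAuth2 client-credentials request an app would make to AAD's token endpoint. The tenant name, client ID and secret are hypothetical placeholders, and this only builds the request rather than sending it:

```python
from urllib.parse import urlencode

# Hypothetical tenant and application credentials -- placeholders only.
TENANT = "contoso.onmicrosoft.com"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_SECRET = "app-secret-goes-here"

def build_token_request(tenant, client_id, client_secret, resource):
    """Construct (but don't send) the OAuth2 client-credentials POST an
    application would make to Azure AD to obtain an access token."""
    url = "https://login.microsoftonline.com/%s/oauth2/token" % tenant
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    })
    return url, body

url, body = build_token_request(TENANT, CLIENT_ID, CLIENT_SECRET,
                                "https://graph.windows.net")
print(url)
```

The point is that any SaaS app, anywhere on the internet, can do this - no VPN, no agent, no tendril behind the corporate firewall.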
"Maybe you are referring to the efforts of Facebook (via their Graph API) and Google that pioneered the OAuth protocols for web based authentication. In fact Microsoft has only recently started to implement these standards into Azure AD."
Yes, as a matter of fact, when talking about "everyone else - including Microsoft's past incarnations - have all been lackluster attempts to create what amounts to a separate online authentication system only very loosely coupled to AD" that was exactly who I was thinking of.
The major online authentication systems - Facebook, Twitter and Google being the primary examples - have done very poor jobs of enterprise integration. And they are the only things that are, to my mind, directly comparable to Azure Active Directory. Why? Because their primary purpose - like that of AAD - is authentication of online services. They are there to live "in the cloud" and serve as a central point of identity using a globally addressable network that isn't directly controlled by the enterprise.
This is completely different to SSO software setups that seek to make the enterprise authentication system (AD) primary, and extend that piecemeal into selected applications and services. The two approaches are polar opposites.
"Again, you should really find a contact at Microsoft (Vittorio Bertocci would be a good contact for you) who can help you understand the history and current implementation of Azure AD. It wasn't "AD thrown into the cloud". Azure AD is a brand new code base. File -> New -> Project...."
The code base doesn't matter. The APIs do. How much of the functionality is exposed. How much legacy is maintained, how much isn't. What can integrate with ease and what can't. My understanding is that AAD will never fully replicate traditional AD. It's a clean break, with only the minimum required to get the job done held over from the old AD. The goal isn't to authenticate devices anymore, it's to authenticate SaaS apps and various other services.
Microsoft basically took the AD APIs, threw them into the cloud, cut back to the bare minimum they could get away with then started growing it in a whole new direction from the on-prem stuff.
"Microsoft's solution for hybrid auth is the SAME as everyone elses."
False. It was similar. The latest iteration has changed that.
"In fact some would argue that they have a old architecture for the bridge from Azure AD to on premises AD. Microsoft uses ADFS for the federation of authentication between Azure AD and on premises. Just like Oracle,PING, CA and IBM (and so on)."
False, dirsync is moving away from this.
"The only piece that Microsoft has that is fairly unique, is the synchronization of identity attributes. Built on a 10 year old legacy system, it requires the deployment of slight (with DirSync/AADSync) to significant (with FIM) on premises software."
Start here: http://blogs.technet.com/b/educloud/archive/2013/06/03/new-azure-active-directory-sync-tool-with-password-sync-is-now-available.aspx and continue through the various links and research until you get the difference between a "federated" auth system and a "managed" one.
With Dirsync, AAD and WASD are not simply federated SSO systems. It's more appropriate to think of the local auth system as slaved to AAD. The architecture is different, which introduces its own benefits and its own drawbacks.
"Google has it's own similar solution, with GADS (Google Active Directory Sync). Heck even Salesforce has a way to sync AD data into it's own cloud identity platform."
And this is where I start to seriously doubt your self-declared (anonymous) authority on the topic. GADS is horrible compared to AAD. Not that I'm overly a fan of either solution, but that's like holding up a Windows Phone and declaring it a perfect substitute for a proper desktop.
Implementation matters. What strikes me is that you are holding up a whole bunch of completely unconnected solutions here that behave completely differently, have shockingly different design philosophies and radically different thresholds for ease of use, and basically saying "they're all the same".
PING, CA, Oracle, Google, Facebook and AAD all live in the same box in your mind? Really? Do you also mentally cluster together a Caterpillar, a semi truck and a Smart Car because they can all be used for transportation?
Look, let me make this simple for you:
AAD is the easiest of all the options available to set up. AAD is the easiest of all the options available to maintain. AAD is one of the most miserable to integrate with traditional enterprise applications or your own home-rolled special sauce because it doesn't conform to your enterprise apps, you conform to AAD. (Or you use FIM, but FIM is...touchy.)
AAD has a lovely API for everyone who wants to conform to AAD to do so. Microsoft is big enough to convince most of the world to do exactly that...and they're well on their way to getting Everyone Who Matters onboard. They'll bribe or bully whomever else remains.
AAD is comfortable, familiar, easy to use and already has quite a few SaaS apps and service providers on board. Perhaps more to the point, it's affordable and doesn't require specialists to work with. Every SMB in the world can use it tomorrow, and afford to do so.
Companies have trusted Microsoft to be their identity provider for 15 years now; AAD is the natural extension of that...and they finally have it done right.
Active Directory became the basis of modern identity systems a while back. Most applications talk to it natively, and don't need a third party SSO application. Hell, man, even PHP has libraries for talking directly to AD (http://adldap.sourceforge.net/)!
Yes, some applications - or rather the vendors seeking control over the customer who write those applications - still need some form of third party SSO. There will probably always be such folks in the world. But the majority of new applications out there will code for AAD, not for Oracle, Ping, CA, Google or whathaveyou. (Well, maybe Google.)
Like it or not, when it comes to identity, Microsoft can bully through a standard by sheer heft. And by making AAD easy to deploy, integrate with and maintain, I argue they've done exactly that.
"Trevor, please stop trying to declare that Azure AD is the worlds leading cloud identity platform when you clearly have little to no knowledge of other existing solutions or the identity industry in general."
Active Directory is the world's leading identity platform. Azure Active Directory is the cloud extension of it that will dominate the online identity market. It is inevitable, and there is no one out there capable of preventing it.
The deal is done, the die is cast and it's all over except for the screaming.
The better question is: who are you, Mr Anonymous Coward, and what is your interest in all of this? Not meaning offense, but your posts strike me as similar to several I read around 10 years ago on usenet, in tech magazines, etc. They were by the staff (and sometimes executives) of hosted e-mail services/webmail etc who spent rather a lot of time telling anyone who would listen that Google wasn't a threat.
Well, Google did change email forever. The old model of charging a monthly fee for a few dedicated megabytes of storage evaporated overnight. Google commoditised email. They offered the entire world a means to get an e-mail address that wasn't tied to your ISP and didn't go away when you switched providers. More to the point, you could store all your e-mail, forever, and it didn't go away when your hard drive crashed.
Here, now, Microsoft is commoditising joined-up identity services. They are also changing the focus from "identity behind your corporate firewall" to "identity in the Cloud". You might not like this - hell, I don't like this - but it is what is happening.
And really, why is that such a bad thing? A single common referent for future development could be very useful.
Azure doesn't have to "win the internet and cloud". Non-requisite. Microsoft must maintain a cloud presence. Size will be determined by success. But it will be there from here on out. Part of that is Azure AD. That's not going anywhere.
Azure AD can be a smash hit success and power most of the internet's authentication even if Azure itself remains a relatively minor cloud player.
100% agree. And the fact that it's such a mess - remember, Microsoft has tried to "own" online authentication with its own online mechanisms at least three times prior to this - is what gives Microsoft the advantage.
Everyone else - including Microsoft's past incarnations - have all been lackluster attempts to create what amounts to a separate online authentication system only very loosely coupled to AD. This time, Microsoft basically took AD and threw it in the cloud, then cut back as much of it as possible until they could declare a compromise made between security and functionality.
Microsoft then lashed it together with the onsite version and voila: a hybrid auth system that A) works and B) stands out from the pack. Everyone else has essentially the same auth system, just backed by a different player. This is your old, familiar auth system "stretched" into the cloud.
That will give it a hold that no other auth system will be able to match. Like it or not, Microsoft are still the 800lb gorilla of enterprise authentication. Now they have a real product for handling people outside the corporate firewall.
Everyone else who is out there trying to extend a consumer identity system into something enterprises will accept might as well just pack up and go home. This game is over.
Now, who will own the consumer identity space...that's a whole other question. But if Microsoft gets enough uptake from cloud services for Azure Active Directory, they may well win that too.
"To see what happens to the DNA when it gets hot, rather than all that rocket nonsense, couldn't they just heat that stuff up in an oven?"
Doesn't tell you anything about ablation, vibration, sonic disturbances, coping with the impact, etc.
"Also, wouldn't a real meteor be arriving at at least the Earth's escape velocity of 11 km/s?"
No. Most likely a real space rock would hit the atmosphere, explode and send shards of itself all across hell and gone. Many of those make it down to Earth just fine, and they aren't going at anything like 11 km/s. They're travelling at terminal velocity, just like the rocket. Not being as aerodynamic as the rocket, their terminal velocity would be lower.
Also: the outside of those objects gets melted during entry, but the insides have been known to stay quite cold. See: ablation, and its effect on cooling.
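A rough back-of-envelope on terminal velocity shows why fragments arrive so slowly. This sketch assumes a spherical stony fragment near sea level; the drag coefficient, density and mass are illustrative guesses, not measured values:

```python
import math

def terminal_velocity(mass_kg, density_kg_m3, cd=0.5, air_density=1.2, g=9.81):
    """Terminal velocity of a sphere: v = sqrt(2*m*g / (rho_air * A * Cd))."""
    radius = (3 * mass_kg / (4 * math.pi * density_kg_m3)) ** (1.0 / 3.0)
    area = math.pi * radius ** 2  # cross-sectional area
    return math.sqrt(2 * mass_kg * g / (air_density * area * cd))

# A 10 kg stony fragment (~3300 kg/m^3 is typical for chondrites):
v = terminal_velocity(10, 3300)
print("%.0f m/s" % v)
```

For a fragment like that it comes out on the order of 100 m/s - roughly two orders of magnitude below 11 km/s.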
"The question whether or not viable fragments of DNA, or even whole organisms landed here is moot. If it was a whole organism, we know that it cannot have been more complex than the most basic chemotrophes, because that is the stuff our earliest fossil record indicates existed at the Dawn of Life. Nothing more complicated than that could have possibly landed here."
Actually, you are incorrect. Since the most likely Source Of Modern Life On Earth is "soft" panspermia (comets seeded volatiles and amino acids which gave rise to life) technically it's entirely possible that relatively complex alien life did land on Earth and native-grown Earth life simply killed it all off. We'll probably never know.
For that matter, maybe alien blue-green algae landed on Earth last night via some space rock, and the local fauna had a tasty snack. Again, we'll never know.
"We had everything needed for life on earth before it started."
No, we didn't. When Theia whacked into Earth it blew off our atmosphere and liquefied the entire fucking planet. Heavy elements sank to the core and it took millions of years for the crust to reform. Volcanism ejected stupid amounts of CO2 and methane, but nowhere near enough H2O and virtually no N2.
The nitrogen that makes up most of our atmosphere came from the late heavy bombardment, as did a lot of our water. And it is most likely that the chemicals which gave rise to the first life on Earth were seeded here during the late heavy bombardment. This is known as "soft" panspermia, and it is the best fit for the evidence we have today.
Earth didn't simply "have all the chemicals required" for life and then life arose. That's radically unlikely, due to the formational history of our planet. Instead, it's far more likely that those chemicals were deposited on Earth as fully formed amino acids during the late heavy bombardment, along with the majority of our volatiles.
But the Earth, as a planet, existed for quite some time before the late heavy bombardment. Indeed, it had already gone through one fairly major event (the Big Whack) before we ever got that far. We were a dead world until some comets brought us life.
Who cares that it wasn't life in the form of alien cells? It was precious, precious volatiles and the amino acids that made us.
Life doesn't have to get here on meteorites. Just the chemical building blocks. Methinks if DNA can make it intact, so can various amino acids. And I'm thinking that, really, if we have C, N, O, Fe, Mg, P, Ca and H2O along with a few amino acids and temperatures in the right ranges, we'll get abiogenesis. At least, if you give it a few hundred million years.
A whole alien cell doesn't need to get here intact. But if some interesting enough chemicals happen along, it can really speed up that whole "abiogenesis of metabolism" thing.
And fuck you too. Half the commenters on this site qualify as hackers. Myself included. And no matter the colour of their hats, nobody deserves to be killed for cracking a system, mate. You're also failing to distinguish between ethical hackers, hacktivists, white hats, black hats, grey hats, mercenaries and so forth.
Kill off all the hackers and you'd wipe out 95% of the top talent in our industry. Fancy going back to running society on TI-83s?
That's not how it's analysed by CxOs. They ask "how likely is it to happen to us?" Then they balance cost versus perceived risk.
So let's say that this whole fiasco would have cost $35M to avoid in the first place by doing security right. Currently costs are at $43M, with that likely to reach $250M by the time it's all done.
So, take that same $35M and invest it in something else - let's say Apple stock - over the 10 years it would have taken to spend it all and evolve their systems into something properly secure. (These sorts of security issues are cumulative, the result of organic growth and a lack of joined-up planning.) Is the rate of return equal to or higher than $250M? And what is the likelihood you'll actually be hacked, even with bad security?
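The CxO's arithmetic can be sketched in a couple of lines. The figures here are the hypothetical ones from above, not Home Depot's actual numbers:

```python
def required_annual_return(principal, target, years):
    """Compound annual growth rate needed to turn principal into target."""
    return (target / principal) ** (1.0 / years) - 1

# $35M up front vs. an expected $250M breach cost, over 10 years:
r = required_annual_return(35e6, 250e6, 10)
print("%.1f%% per year" % (r * 100))
```

It works out to roughly 22% a year - a high bar, but one that some cherry-picked investments (Apple being the obvious example) have cleared over 10-year windows. Discount that by the chance you never get hacked at all and the "do security right" option starts losing the spreadsheet argument.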
Understand that Home Depot may well still be financially ahead after this hack, despite the high headline numbers. That's what's the most horrible about all of this.