Re: What about Load Dynamix?
Until you commented here, I'd never even heard of it, to be perfectly honest. I guess I'll add it to my list of things to investigate in 2015...
Hi, maybe I can help some.
I think the answer to your question is "either/or". For the VCs - and typically most founders - "succeed" in the market means "have a profitable exit". That exit can be IPO, or it can be acquisition. But the point is that you get 5-10x out what you put in (time- or money-wise, where time is calculated at what you could command working full time for someone else).
Remember that most founders don't stay with their "baby" past the contractual period after acquisition. They have their money, and they're going to go roll the dice one more time and try to make another startup, get more money.
The thing you have to remember is that the people who start SV startups chafe under the rules and restraints of big business. They want to be free to innovate, make their own choices...and mistakes. The startup life is a lifestyle as much as anything. It's about the freedom as much as the money.
So for VCs, all that matters is the money, but for execs the money is the means to the end of going back and doing it all over again.
Sysadmins can't do anything about it. Every major new tech that comes out with a significant ease of use change means fewer sysadmins are required than were before. That's why ease of use matters; it gets rid of the need for priests to tend the temple and allows a single contracted janitorial team to handle hundreds of companies as they do their rounds every night.
If you're looking for "how can docker make me, as an Ops guy, better off" the chances are that it won't. Oh, if you master it, you could probably become a "containerization consultant" and be one of the janitors tending multiple businesses, but since it is the equivalent of moving from "having to carefully manage each install of each app separately" to "click install in the app store", it removes a lot of why you might need to be there.
And that, right there, is why it is valuable to business. Unfortunately, in this instance, "good for the business" is "bad for the ops guys".
Red Hat are any better? VMware? Oracle? Google?
Not a lot of honour amongst IT companies.
Think of Docker as being "Steam for enterprise applications and web development platforms" and you might begin to grok what it's bringing to the table. Stop thinking of things in terms of "I'm a sysadmin with a strong ops background who can figure this out if you pay me" and start thinking of it from the standpoint of business owners and developers who don't want to have to pay ops guys at all.
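To make the "app store" comparison concrete, this is roughly what packaging an app for that store looks like. A hypothetical Dockerfile (the app, file names and base image here are invented for illustration):

```dockerfile
# Illustrative only: turn a small Python web app into a self-contained,
# shippable image that any Docker host can run with a single command.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Build it once, push it to a registry, and anyone can pull and run it without ever touching the app's dependencies. That's exactly the ops work that goes away.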
I gotta be honest when I say I can't picture actually using IIS in production. I just can't. Apache is ground into my bones. I've redone those configs so many times that I don't even need the comments in httpd.conf anymore.
That said, I have seen what httpd.conf does to newbies. It's about what IIS does to me. So whenever I picture starting Apache from scratch I imagine someone asking me to take all my websites and move them to IIS. Then I quickly think of something else before that becomes a desire to self-harm.
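For the curious, this is the sort of stanza I mean: a bare-bones name-based virtual host that Apache admins end up rewriting from memory (hostnames and paths here are illustrative):

```apache
# Minimal name-based virtual host; the kind of block httpd.conf
# veterans can type blindfolded. All names and paths are examples.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot "/var/www/example"
    ErrorLog "logs/example-error_log"
    CustomLog "logs/example-access_log" common
</VirtualHost>
```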
Don't forget OpenIndiana. Or BSD Jails. Or Virtuozzo. Or OpenVZ. Or Rocket. Or...
Just because you have a containerization tech doesn't mean you have momentum, hype, a community, backing, industry support, an "app store", community input to that "app store", cloud provider adoption, etc. etc. etc.
That makes Docker quantitatively as well as qualitatively different from any of the other containerization techs that have gone before. Technology doesn't matter here nearly so much as politics, damned politics and "moolah, moolah, moolah, moo-la-haaaaaaaaaaa..."
Parallels has virty tech, but they aren't a threat to VMware. VirtualBox is groovy, but nobody runs a large datacenter on it. Solaris/OpenIndiana/illumos zones are awesome and even have enterprise support...but there isn't a heck of a lot of cloud provider adoption, hype or community support.
That isn't to say it couldn't happen. It's just that these are projects by engineers, for engineers. And that means they likely won't succeed where the marketdroids and moneymen walk.
Funnily enough, I am working on an article about exactly that.
Capitalism doesn't work so much as "devolves". But that's a discussion for another time...
Agree 100%. Containers are useful to small businesses. But they can't give up the benefits of hypervisors either. They'll be deploying containers inside VMs almost exclusively. Best of both worlds!
Only those who are dedicated at a religious level will be deploying to metal. They need/want every erg of efficiency possible. SMBs aren't in it for the efficiency; they need ease of use way more than efficiency.
"Most of the SMBs I know aren't running VMs even now, they run their applications on a miscellany of elderly hardware held together by good fortune and the occasional visit of the part-time finance director's second cousin."
Then you don't know SMBs, period. "Small to medium business" covers 1 to 1000 seats, generally. With enterprise being above 1000 seats. (Depending on which government is doing the counting.) The bulk of those companies are in the 50-250 seat range, and as an SMB sysadmin by trade, I promise you they've been virtualised for some time now.
"And as soon as containerisation makes it properly to Windows, those people will be taking it up in their droves, because there's no point running multiple OSs if you don't have to - even just from a licensing point of view."
Wrong again. Ignoring the rest of your prejudiced (and false) remarks, you don't understand at all <i>why</i> most companies use VMs. It is to obtain the benefits of redundancy, reliability and manageability (including snapshots, backups, replication, live workload migration during maintenance, etc.) that hypervisors provide. Containers, at the moment, don't provide that.
SMBs want far more than just the ability to run the maximum number of workloads on a given piece of hardware. They want those workloads to be bulletproof. They need them to be something that can be moved around while still in use because there aren't any "maintenance windows" anymore. There's always someone remotely accessing something. That's just life today. Hell, that was life 5 years ago. It's like you have a picture of SMBs stuck in a time warp from 1999 and you imagine that they've never evolved.
"Containers aren't just a packaging technology - they depend on the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor."
Everything depends on "the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor", whether running on metal in its own OS, in a container, or in a hypervisor. I don't understand how this precludes containers from being "just a packaging technology".
"And while Docker may have a little way to go (but I think 30 months rather than 30 years will see some big changes), I think you'd have a hard time persuading the people on non-x86 hardware that their WPARs and Zones are manifestly harder to work with than VM solutions."
No, I wouldn't. Because you are completely ignoring the desired outcome portion of the equation. Containers provide what companies desire when the hardware underneath the container provides the required elements of high availability, workload migration and continuous uptime during maintenance. Run containers on an HP NonStop server or an IBM mainframe and you get all the bits you want while getting the extra efficiency of containers.
But, shockingly enough, most businesses don't have the money to spend $virgins on mainframes or NonStop servers. So they use hypervisors to lash together commodity hardware into what amounts to a virtual, distributed mainframe. They then package up their applications in their own OSes and move them about.
Are containers relatively easy to deploy and somewhat easy to manage? Sure. I'll even go so far as to say they're way easier to deploy than VMs are, but I will remain adamant that VMs are currently easier to manage. What you're missing, however, is that hypervisors democratize all the other things - portability, heterogeneity, high availability and so forth - that are requirements of modern IT. Containers don't provide mechanisms for that, unless you burn down your existing code bases and completely redesign.
"Even IBM praises the benefits of WPARs (containers) over LPARs (hypervisor) in the majority of use cases, even though it supports both and the latter has rather more hardware support than the typical x86 VM. I can't really improve on their reasoning:"
Of course IBM is touting WPARs over LPARs. They sell the pantsing mainframes that make containers a viable technology. And they only ask the firstborn of your entire ethnic group in order to afford it!
"Better resource utilisation (one system image)"
Nobody is debating this one. Containers are more efficient.
"Easier to maintain (one OS to patch)"
One OS reboot takes down 1000s of containers. Also, you get the lovely issue of having to deal with workloads that may react badly to a given patch being mixed in with workloads that might need a given patch, all running on the same OS instance. Funny, container evangelists never talk about that one...
"Easier to archive (one system image)"
Oh, please. We're not using Norton Ghost here. Ever since Veeam came along nobody in their right mind has had trouble doing backups, DR or archives of VMs.
"Better granularity of resource management (CPU, RAM, I/O)"
That depends entirely on how shit your hypervisor is. Funnily enough, VMware seems to be quite good at providing granularity of resource management.
So, of IBM's four-point path to victory, the only thing that really shines as rationale is "efficiency". And it carefully sidesteps some pretty significant issues ranging from price (we can't all afford mainframes or NonStop servers) to manageability (1000 workloads sharing a single OS can actually be less desirable than, say, 10 OSes, each with 100 workloads.)
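That manageability point is easy to sketch with back-of-envelope numbers (all of them invented for illustration):

```python
# Failure-domain sketch: rebooting or patching one OS instance takes down
# every workload that shares it. All numbers are invented for illustration.
total_workloads = 1000

def affected_by_one_reboot(os_instances: int) -> int:
    """Workloads impacted when a single OS instance goes down,
    assuming workloads are spread evenly across instances."""
    return total_workloads // os_instances

print(affected_by_one_reboot(1))   # one shared OS: every container goes down
print(affected_by_one_reboot(10))  # ten VMs of 100 containers each
```

Same total number of workloads either way; what changes is how many of them you lose when one OS has a bad day.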
@dan1980 except you leave out the part of the Office 2007/2010 licensing that pertains to remote/VDI usage whilst attempting to sing Microsoft's praises of licensing that product. Though you at least acknowledge the media-based madness of the time.
Microsoft doesn't do "rational".
When, in the history of our industry, has Microsoft licensing been connected to rational thought?
Let's have that conversation after they've finished integrating Docker into the OS and decided on licensing, eh? I've heard so many different things out of Redmond that I can't put credence to any of it, tbh.
And yet, in practice, for the workloads I run, I see only 10% improvement. I wholeheartedly believe that for certain workloads, you can get a 10x improvement. Hell, why not. Do you know what I can do with benchmarking tools, when motivated?
But the question isn't about the headline improvements. It's about the average improvements for everyday workloads, for average companies.
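To put hypothetical numbers on that distinction (all the shares and factors below are invented, not measured):

```python
# Back-of-envelope: a 10x headline gain on a sliver of the fleet barely
# moves the fleet-wide average. Every figure here is invented.
workload_mix = [
    # (share of fleet, density improvement factor from containers)
    (0.05, 10.0),  # the benchmark-friendly 10x showcase workloads
    (0.45, 1.1),   # everyday workloads seeing ~10% improvement
    (0.50, 1.0),   # legacy workloads that can't be containerized at all
]

average_gain = sum(share * factor for share, factor in workload_mix)
print(f"fleet-wide average improvement: {average_gain:.2f}x")
```

Change the mix and the answer changes; the point is that the average company's average workload is not the benchmark workload.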
Also, along those lines, I absolutely do not buy the latency claims you state. I run a fairly significant lab full of stuff and I have been testing virtualisation and storage configurations for about 4 months solid, 8 hours a day. Every configuration I can get my hands on. From various arrays to hyperconverged solutions to local setups. I've run the same workloads on metal. I've used SATA, SAS, NVMe, PCI-E and am getting MCS stuff in here sometime in the next couple of weeks.
The long story short? Take the ****ing workload off the network and you get your latency back. And there are plenty of ways to go about doing that today. I suspect you'd be shocked at the kinds of performance I can eke out of my server SANs, to say nothing of the kinds of performance I get when using technologies like Atlantis USX, Proximal Data or PernixData!
I respect that you have found a way to use containers to great effect, sir. Yet I find I must humbly submit that your use case of them may well be abnormal when we consider the diversity and scope of workloads run by companies today.
I think it is fairer to say that under the right circumstances containers can deliver manyfold increases in density, and perhaps even performance; however, they are not likely to deliver this for all - or even most - workloads today. Containers are something you have to build for, just like the public cloud. With the advantage that many public cloud workloads can be migrated to containers with relative ease.
We still don't know how licensing will shake out. I suspect every Docker container will require an OS license from Microsoft. Otherwise, I agree with you 100% here, sir.
Some problems with your viewpoint:
1) Not everyone values efficiency over ease of use or capability.
2) You give up a lot of ease of use and capability in order to get the efficiency gains of containers.
3) Legacy workloads will take decades to go away.
4) Many/most legacy workloads, as well as a significant portion of new workloads (for at least the next decade of coding) will not have application-based redundancy or high-availability. They'll rely on a hypervisor to provide it.
What you say only makes sense if you assume everyone is going to move to public-cloud style scale-out workloads. We're not all Facebook/Google/Netflix. Industry-specific applications don't make that jump well. Ancient point-of-sale apps, CRMs, ERPs, LOBs, OLTPs, etc won't make that jump...and migration is crazy expensive.
That's if you can convince a company that is absolutely dependent on a 30 year old POS application whose every quirk they know by heart that they should ditch it and migrate to a new one. Because...Docker?
There are 17,000 enterprises in the world. Maybe a few hundred thousand government agencies that could be considered as large as one. There are over a billion SMBs in the world, and they're not going "web scale" with their applications any time soon.
Hypervisors displaced metal because they offered immediate benefit without being too disruptive: you didn't have to recode applications for them, you didn't have to really make huge changes of any kind. As infrastructure got denser, datacenter designs changed, but that was dealt with as part of the regular refresh cycle.
Docker, like public cloud PaaS scale-out apps, requires burning what you have down and restarting. Maybe one day, 30 years from now, containers will have displaced hypervisors. If so, I will bet my bottom dollar that the containers of 30 years from now look a hell of a lot more like a hybrid between the containers of today and the hypervisors of today than a straight-up continuation of the current container design.
Good questions. I will do my best to answer.
"Why should containers do this, surely that is the job of the hypervisor?"
Containers shouldn't do this. They will likely try anyways, as a great many of those who are invested in containers and adjacent technologies want to see them replace hypervisors.
The argument goes that containers are so much more efficient than hypervisors that we should do away with hypervisors altogether and use containers for everything. Based on that, we'll either have to throw all our workloads away and recode everything (not bloody likely, especially since there are any number of workloads that can't be "coded for redundancy" in the container/cloud sense) or containers will have to evolve to add these technologies.
As it stands now, I know a number of startups, still in stealth, working to bring hypervisor-like technologies (HA, vmotion, etc) to containers. We are at the beginning of mass-market container adoption, not the end. Thus the technologies the mass market enjoys for its current workloads will have to be recreated for containers, just as we have for every technology iteration before this.
Does this mean it's rational or reasonable to do so? I argue no. If I run 4 VMs on a node and each node contains 100 Docker workloads, I am giving up a small amount of overhead in order to virtualise those 4 Docker OSes, but gaining redundancy for them in exchange. Meanwhile, I can load up that server to the gills and keep all its resources pinned without "wasting" much, because I'm using containers.
To me, it makes perfect sense to have a few "fat" VMs full of containers. Resilient infrastructure, high utilization. But this is considered outright heresy by many of the faithful, as - to them at least - containers are about grinding out every last fraction of a percentage point of efficiency from their hardware.
"Is it the cost of requiring both that keeps you from using containers 'till the above condition is met?"
No, it is the pointlessness of using both that keeps me from using containers. Right now, today, most of my workloads are legacy workloads. They don't convert into these new-age "apps" that "scale out", as per Docker/public cloud style setups. If I want redundancy, I need a hypervisor underneath.
So, I could do what I talked about above and put my workloads in containers and then put the containers in a hypervisor. That would increase the efficiency of about 50% of my workloads, ultimately dropping the need for an average of two physical servers per rack where a rack contains about 20 servers.
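The arithmetic behind that "two servers per rack" figure, with my assumptions spelled out (the density gain is a guess, not a measurement):

```python
# Rough consolidation math: assume half the workloads in a 20-server rack
# can move into containers, and containers pack those ~25% denser.
# Both assumptions are illustrative, not benchmarks.
servers_per_rack = 20
containerizable_fraction = 0.5   # share of workloads that convert cleanly
density_gain = 1.25              # hypothetical packing improvement

containerized = servers_per_rack * containerizable_fraction
still_needed = containerized / density_gain
servers_freed = containerized - still_needed
print(f"servers freed per rack: {servers_freed:.0f} of {servers_per_rack}")
```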
That's a not insignificant savings, so why not jump all over this?
1) Even with the workloads that can be mooshed into containers, it will take retooling to get them there. It is rational and logical to move them into containers, but that is a migration process akin to going XP --> Windows 10. It takes time, and is best done along the knife's edge of required updates or major changes, rather than as a make-work project on its own.
2) If I start using containers I need to teach my virtualisation team how to use these containers. That's more than just class time, it takes some hands on time in the lab and the chance to screw it up some. That is scheduled, but I'm not going to adopt anything in production until I know that I can be hit by a bus and the rest of the team can carry on without me.
3) Politics. Part 4 of this series will talk about the politics of Docker. Not to give anything away, but...the politics of containerization is far from settled. I don't want to be the guy who builds an entire infrastructure on the equivalent of "Microsoft Virtual Server 2005" only to have all that effort made worthless a year or two later. Been there, done that.
4) 2 servers out of 20 isn't world-changing savings for me. Oh, that's Big Money if you're Rackspace, but at the scale of an SMB that only has a few racks worth of gear, there's an argument to be made for just eating the extra hardware cost in order to defer additional complexity for a year or two.
Really, in my eyes, it's Docker versus the cloud...sometimes in the cloud.
If I was building a brand new website today, I would have a really long think about Docker. Do I use Docker, Azure, AWS, Google or someone like Apprenda?
The choices are likely to be informed not by the technical differentiators between these platforms, but by business realities ranging from Freedom of Information and Privacy regulations to marketing success around Data Sovereignty to the cost and availability of managed workloads.
Do I run my workload in one of the NSA's playground clouds, pick something regional, or light it up on my own infrastructure? Is the particular set of applications I am looking at deploying into my Docker containers available from a source I trust, and likely to be updated regularly, so that I can just task developers to that application and not have to attach an ops guy?
New applications make good sense to deploy into Docker containers. And Docker containers in the hands of a good cloud provider will have a nice little "app store" of applications to choose from.
But if I am lighting them up in the public cloud, do I really care if it's in a Docker container? Those cloud providers have stores of stuff for me to pick from in VM form as well. And I don't care if what I am running is "more efficient" when running on someone else's hardware; that's their problem, not mine.
I'm not against using Docker in the public cloud, but I see no incentive to choose it over more "traditional" Platform as a Service offerings either. If for whatever reason we decide the public cloud is the way to go, I'll probably just leave the decision "Docker/no Docker" up to the developers. The ops guys won't really have to be involved, so it's kinda their preference. I really don't care overmuch.
So from a pragmatic standpoint I really only care about Docker if it's going to run on my own hardware, either as part of my own private cloud, or as part of a hybrid cloud strategy. As we've seen, there are layers of decision-making to go through before we even arrive at the conclusion that a given new workload is going to live on my in-house infrastructure. But let's assume for a moment we've made those choices, and the new workload is running at home.
This is where we loop back to the top and start talking about inertia.
All my workloads are on my own private cloud already. They're doing their thing. If I don't poke them, they'll do their thing for the next five years without giving me much grief. My existing infrastructure is hypervisor-based. My ops guys are hypervisor-based.
If I simply accept that - if I give in to the laziness and inertia of "using what I know and what I have to hand" - then my new applications require no special sauce whatsoever. I can let the hypervisor do all the work and just write an app, plop it in its own operating system, and let the ops guys handle it. What's one more app wrapped in one more OS?
Change for change's sake offers poor return on investment. So for me to move to Docker there has to be incentive. Right now, today, at the scale I operate, the ability to power down 8 servers isn't a huge motivation. I could pay for the electricity required to light those servers up by writing 4 additional articles a year.
Two years from now, I may have a dozen applications in the cloud, all coded for this "scale out" thing. I may have gotten rid of one or two legacy applications in my own datacenter and replaced them with cloudy apps. Five years from now...who knows?
It would be really convenient for those new applications to be written to be Docker compatible, scale-out affairs that provided their HA via the design of the application rather than the infrastructure. But I don't know for sure that Docker will be the container company that wins.
For that matter, the hypervisor/cloud companies could see Docker as a threat in the next two years, declare amnesty and agree to a common virtual disk format.
Docker offers a means to make my apps more-or-less portable. Ish. As long as there isn't too big a difference between the underlying systems, they'll move from this server to that one, from private cloud to public. If I kept the OSes at the same patching levels on both sides, I could move things back and forth...though not in an HA or vmotion fashion. That has some appeal.
But a common virtual disk format would allow me to move VMs between hypervisors and from any hypervisor to any cloud. Were this to happen, I'd really lose most of my incentive to use Docker. At least at the scale I operate.
All of the above is a really roundabout way of saying this:
Docker is cool beans for big companies looking to run lots of workloads that require identical (or at least similar) operating environments. (See: scale-out web farms like Netflix.)
Hypervisors are just more useful to smaller businesses.
I'm way more likely to care about a technology that lets me easily move my workloads from server to server and from private cloud to public cloud (and back) than I am one that will let me get a few extra % efficiency out of a server. Docker could do this one day. So could hypervisors. Neither really does it today.
Hope that helps some.
"You're not "giving up" high-availability - that's provided by the design of your production infrastructure and software architecture."
Which is exactly what I said.
Containers let you get more workloads per node, but they don't of themselves give you a means to provide high availability for those workloads. You either use production infrastructure (hypervisors or NonStop servers) which provide you redundancy, or you redesign your applications for it.
Thus containers are not competition for a hypervisor. They are not an "or" product. They are an "and" product.
Considering the hype and marketing of the Docker religious, that absolutely does have to be said to the world, spelled out explicitly and repeatedly.
By "array" I typically mean SAN and NAS. DAS stuff is usually not called an array. It's just called DAS. (Or JBOD). It's a separate thing. It's usually many more disks than you'll find in a server SAN, but it's not shared across multiple systems...or at least not enough systems to make more than a two or three node cluster.
DAS is a very Microsoft thing, at least where virtualization is involved. I know it's still a thing for those few running workloads on metal, but only Microsoft really thinks it's remotely viable for hosts with hypervisors on them. But, hey, it's Microsoft. Being trapped 15 years in the past is quite an advancement for them.
I...I don't think you understand how these work? Especially how they're installed in enterprises.
As a general rule, enterprises used to buy hardware dedicated to a specific project or workload. Each DB had its own SAN, its own servers, etc. But then we found that this was spectacularly inefficient and led to massive underutilisation of resources. Private clouds - or at least virtualisation setups that were closeish - began to become the order of the day.
Resources began to be purchased and pooled based on cumulative predicted need, not based on the individual project or workload. Now the question has become "how best to maintain these sorts of environments."
Something like a Nutanix or VSAN cluster rarely goes beyond 16 nodes, sometimes to 32. You get multiple clusters in a virtual datacenter. You are highly unlikely to have nodes in the cluster that are different speeds/capabilities because clusters tend to live and die as a group. We've seen that even in non-VSAN clusters thus far. Clusters are born, they live and they die as one.
But in the rare instance where clusters are mixed - I run a mixed cluster myself - sysadmins can simply tell workloads to keep their copies on "like" nodes. If you have PCI-E storage on nodes A-D and only SAS storage on nodes E-H, then you can "segregate" the cluster into two.
In theory, you could end up with a workload split along the storage plane, but only if you'd lost enough of one type of node that rebuilding would cause it to put the second copy on the other class of node. As soon as you've repaired the servers in question, policies would take over and make sure your workloads go where they are supposed to.
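In rough Python terms, the "keep copies on like nodes" policy amounts to something like this (node names and storage classes are invented for illustration):

```python
# Sketch of storage-class-aware replica placement: replicas go to nodes
# with the same class of storage as the primary. Purely illustrative.
nodes = {
    "A": "pcie", "B": "pcie", "C": "pcie", "D": "pcie",
    "E": "sas",  "F": "sas",  "G": "sas",  "H": "sas",
}

def replica_candidates(primary: str) -> list[str]:
    """Nodes eligible to hold a copy: same storage class, not the primary."""
    wanted = nodes[primary]
    return [name for name, cls in nodes.items()
            if cls == wanted and name != primary]

print(replica_candidates("A"))  # the other PCI-E nodes
print(replica_candidates("E"))  # the other SAS nodes
```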
If your assertion is somehow that server SANs are unable to support SQL or OLTP workloads, well...you're just wrong. You're wronger than wrong.
Believe it or not, server SANs have been around long enough to evolve to handle the concept of diversity in workloads...and to handle workloads that are as demanding as anything you could throw at a traditional SAN. Indeed, I'd challenge a traditional SAN to keep up with the all-flash server SANs. The MCS setups in particular are utterly spectacular.
Exactly why data governance is the new hotness, and new ways to get disks into the datacenter are not. :)
Did you know that the closest land-based relative to cetaceans (a family that includes whales) is the hippopotamus?
"Why claim victory for "server SAN" instead of the broader category, except as a marketing move?"
A) "Server SAN" was coined by Stuart Miniman - an analyst, not a marketing bloke - because we needed something to call "lashing together storage from multiple servers and presenting it to the cluster" that was shorter than "lashing together storage from multiple servers and presenting it to the cluster".
B) Because not all "clusters of storage lashed together" are the same. Object storage, for example, is going its own way, despite being something we could reasonably call a "server SAN" as a technicality. The big money isn't in object storage. You can't charge the big margins for it. It's things like VSAN, Nutanix, etc that are winning out and going to form the "default" for enterprise workloads.
C) I calls it like I sees it. Arrays, clusters of arrays and "lashed-together clusters of storage that don't run workloads on the same nodes as the storage" are simply not winning out over more modern "hyperconverged" (and boy, do I loathe that term) setups.
And it's enterprise workloads that matter, mate. They're where the money is. They're where you get the margin.
5-10 minutes is a long way from half an hour. Also, for the record: you don't tend to see such long boot times on server SAN boxes, because you don't need RAID cards so complicated that they have to load an OS from the future when they init.
Of course, you could also be seeing extended boot times because you're using 1.5TB of RAM and doing extended memory tests on each boot. Dell in particular seems to like really indulgent mem tests.
"Compute workload om"? On! On, you miserable, non-tactile touchscreen craptasm! Why isn't there an edit button in the mobile UI?
The mainframe is ultimately what everyone wants, but IBM refuses to price it reasonably.
Server SANs are hyperconverged. Lash together storage from multiple servers, present it to the cluster as shared storage. Or, more to the point, hyperconvergence is one of the possible means by which a server SAN can manifest.
Server SANs can be done without running a compute workload on the same node. Then it's not hyperconverged. Run a compute workload on the node and it is hyperconverged.
Marketing terminology. It are the dumbs.
"...expanded on his cloud strategy to get Microsoft apps on every internet-equipped device on the planet."
"Talk about making pigs fly. Does anyone in their right mind even remotely think that this is doable?"
"Does you leccy meter that sends your reading to the mothership need a Word Interface? Does it even need to run Windows of any shape or form? Does it need an SQLServer Client or even client access to CRM, BizTalk or any other MS product?"
No, but it probably could report its data back to Azure, or run a little Microsoft Research-developed micro OS (they have a couple) that is efficient, small, and designed for embedded devices. I think one of them even runs in less than 2MB of RAM.
"My current smart meter does not. I know that it runs Linux."
"Does he think that every Router on the planet is going to have some MS software in it?"
Why couldn't it?
"The list of area where MS will fail is very long."
I can say this about (almost) every OS or software company, if we're being honest.
"Perhaps he should become a Politician. They always promise stuff that is clearly impossible to deliver (if you have half a working brain that is) but they make great sound-bites don't they?"
He's the CEO of one of the most important corporations on the planet. He is a politician now.
So...anyone with a genetic predisposition towards any of a number of different diseases will pay more to live? Anyone who is poorer (and thus can only afford bulk processed foods that aren't good for us) will pay more to live? Anyone who has had an accident (I can give you a list of people who have had spectacularly costly - read: millions of dollars and counting - worth of medical issues due to being hit by a drunk driver) will pay more to live?
You can't control your genetics. Poverty is a vicious circle that very, very few escape from. Lots of things can happen to you where you have absolutely zero control or ability to prevent them.
And we're going to place people at a disadvantage for this? There is a thin line here between medical insurance tracking bracelets and eugenics. It's maybe a slower process to use the bracelets, but the end result is the same: pushing those who are "impure" into a position of significant disadvantage such that they will eventually just die off.
Where did I say that marijuana was safe for everyone? Hmm? No chemical is. And yet, that doesn't mean you lock it up and away from everyone. It means you put your time and effort into education. Into making sure you can tailor drugs to the individual, etc.
People with peanut allergies know to stay away from peanuts. People with certain genetics should know to stay away from marijuana. People on the schizophrenia spectrum need to know to stay away from amphetamines. People on the autism spectrum need to stay away from antipsychotics.
We are all different, and it is up to the individual to know their own selves; what they can tolerate, and what they can't. We can test for this stuff now. It's not that hard. I mean, hell, you need to get a blood pressure test before getting birth control pills, so why the hell can't we mandate a genetics test before being cleared for marijuana/amphetamines/etc.?
There are a huge number of drugs withheld from the market that could do real good in the world. These are drugs that could change the quality of life for tens of millions of people, but are held back because less than 1% of people would experience serious negative side effects.
This is stupid and wasteful, as is obvious to anyone with an IQ bigger than their shoe size. Those drugs can not only improve life for people, in many cases they can save lives.
But no, they're locked away from everyone because of shortsighted fearmongers who are terrified that they will get into the hands of the less than 1% of people they would truly harm. Any attempts to work out a middle ground - for example, make those drugs prescription only, put money into developing commoditized tests to ensure that taking that drug is okay, etc - are stamped out in furious anger by the fearmongers.
Troglodytes. Troglodytes that care nothing for the suffering of others so long as it allows them to impose their narrow, limited worldview on everyone else. I hope each and every one of them suffers greatly from a perfectly preventable illness before dying a miserable, lingering, horribly painful death. One that a drug withheld because of fear of how it would affect less than 1% of people could have mitigated or cured.
There is a corrections link at the bottom of the article, in the left column. Thank you.
You mean the end user experience has been awful because the install you're working with, specifically, cut a shitload of corners and/or you don't know what you're doing.
That's a hell of a different thing from generalizing to "this is how it is for everyone."
"It can be, and has been, construed that the fact that someone is breaking into your house constitutes a de facto threat to life and limb. I thoroughly agree with you in that possessions are not worth lives, though I suspect we disagree on the point at which one reasonably might be expected or allowed to use lethal force."
And this is the difference between the American - read: batshit fucking bananas - view of the world, and the Canadian - read: proportional response to events - approach to life.
Americans are xenophobic by nature. Terrified of everything and everyone. They believe that everything and everyone are out to get them, all the time. They believe they are special, important, what they own is important, that everyone wants to harm them, specifically, or just wants to do harm to everyone indiscriminately.
It is very rare to find a Canadian with that twisted worldview. Oh, yes, crime does happen, but there's usually a damned good reason for it. We're taught statistics in school. Repeatedly, over the course of about 8 years.
We understand that while, for example, there are people out there who will kill indiscriminately because they're unhinged, the chances of that are roughly equal to getting hit by lightning twice and then walking out into the street and getting shit on by a bird. Especially because we have the beginnings of a competent mental health care system. (With, admittedly, some lamentable gaps.)
Yes, some dude might be breaking into your house, but the chances that he is going to harm you or wants to harm you are virtually nonexistent. And they are not legally grounds for you to attack him.
Explain to me exactly how the NSA is changing the readout on this in real time to match their usage?
Who the metric fuck has the resources to fork an app that's a half million lines of code? What about once it's a million? Two? Ten?
Past a certain point, the fact that it's GPL means nothing.
You are basically making the argument that the American people have power over their government because the second amendment gives them the ability to carry around AK-47s. Ignoring completely the part where the government has everything from tanks to helicopters to drones, and there isn't a hope in hell of "the people" rising up against the government without being beaten back down as domestic terrorists.
The same basic principles apply to open source. Past a certain point, the complexity, integration and sheer size of a project make it functionally impossible to fork, unless you can convince the overwhelming majority of the original developers to move to the fork. (LibreOffice, MariaDB, etc.)
SystemD is a fucking cancer. The goal behind its ongoing metastasization is nothing more than control of the Linux ecosystem in its entirety. You can lay down any excuses you want, attempt to dismiss any criticism with a half truth or a technicality, but you can't change the fact that the thing has been rammed up our asses against the express wishes of a huge chunk of the community and with the developers openly and proudly refusing to engage with the community about any aspect of either design or implementation.
It's the same arrogant egofuckery that the Gnome team displayed, and it has no fucking place as a hydra-like unkillable monster at the core of open source upon which everything is expected to hang.
There is room for exactly one Linus Torvalds in the Linux ecosystem, and he's right where he belongs: keeping the kernel sane.
We don't need a second goddamned kernel running in userland.
" It is still possible for systemd to be a welcome part of Linux, if the project can listen to the users, respond and change accordingly. Are they capable of that ?"
Emphatically and overwhelmingly no. Which is - above and beyond all other reasons - why so many don't want this shitpile of a viral fuckup on their systems.
It might have been different, had the people in charge of systemd/gnome not been in charge of these projects.
"If this is the beginning of the end for Debian and Devuan doesn't get enough traction to be a viable alternative what is the consensus on the best alternative?"
Slack and Gentoo are both probably going to succumb in the next year or two. The BSDs are moving to launchd, which shares many similar issues with systemd.
If Devuan fails, we're pretty fucked.
"If systemd is so bad... ...then why are so many distros using it?"
It's called blackmail.
RedHat are behind the whole thing. They spend the money that makes a lot of critical pieces of your average Linux distribution work. Now those things won't work without systemd and/or getting them to work without systemd is a right bitch/there are roadmaps to make them not work without systemd in short order.
The short version of this whole thing is that Poettering - and with him, RedHat - are trying to take the kernel away from Linus Torvalds. They are doing so by creating another kernel in userland that everything depends on. Once they have enough stuff jacked into Poettering's matrix, they'll use it to leverage Torvalds out of the picture and finally take the whole cake for themselves.
Systemd is nothing more than a cynical play for domination and control of the entire Linux ecosystem. To "own the stack" of a modern distro. And since RedHat has managed to co-opt so many core projects, there is precious little to stop them.
"Linux" as we think of it today is on life support. Android/Linux and systemd/Linux are now looking to be the two dominant entities. Traditional Linux - one that adheres to the Unix philosophy - is all but dead. Hopefully, Devuan can save it.
"IF the leaders of GNOME & systemd were more community minded, this would have never happened. Unfortunately, it's hard to see how this will resolve itself, it would require adult behavior & conciliation, both of which are sadly lacking."
If the leaders of Gnome and systemd were community minded we would have ended up with a rational compromise: something that was a proper (and badly needed) replacement for SystemVinit, but didn't have binary logs, a registry, tentacle dependencies and wasn't trying to become a userland kernel.
It would have been a Good Thing, and we could all move forward happy, and in harmony. Unfortunately, Poettering is an ass and he leads a small nation of other asses on their grand quest to "pacify" the fuzzy wuzzies.
"The Devuan developers are going to have to address the resource issues as noted above. Good luck to them though if they succeed and give some people what they want. "
I've donated. Money where my mouth is, etc. I can only hope the rest of the community follows. Those of us who can't code still have our part to play in keeping Linux free/libre.
Where/how did you pull from anything above that the application or guest OS should be the one moving things around? Whaaaaa?
"Watch this space". ;)
If you've questions after the fourth article is out, I'll be glad to fill in the gaps. Cheers!
@J3, wipe them out. All of them. Viral warfare. Fire. Destruction of the planet. I don't care what it takes!
Seems to me they're the only ones restraining the excesses of your xenophobic and increasingly authoritarian police-state worshiping rulers, yes.
"The art of what I need to do is make it look like something magical is being achieved (the perception bit) "
This is fucking exhausting. It's a full time job in and of itself. If you're an introvert by nature, well...you'll end up burnt out. Sad, but true.
The man is literally - not figuratively - the prototypical gentleman and scholar of our era.
"On the contrary, if I were to meet him (and recognise him), I would stop him, shake his hand and thank him very much for all that he has produced and to please keep on doing what he does so well."
Very much my reaction when I got to meet the fellow behind Ninite.
I don't know that I'll ever get the honour to repeat that experience (meeting one of my heroes), but should I ever meet Mr. Munroe, I will try to be less of a gibbering idiot than I was with Mr. Kuzins.
As with Mr. Kuzins, I expect that meeting Mr. Munroe would be an experience wherein the ancient axiom "never meet your heroes, you discover they have feet of clay" would not apply. I expect, from all that I have read, that Mr. Munroe would be humble and awkward, rapaciously curious and stunningly intelligent as he is reputed to be.
And that, if nothing else, he'd accept my heartfelt gratitude. Not for the wit, or the feeling of belonging, or the unity across the globe with other nerds his works have provided. Not for the humour, the intelligence, or even for taking the time to let me thank him.
I expect he'd understand when I say that it is knowing that someone else in this world deals best with confusion, grief, fear, loneliness, anger and even despair through art, thought and seeking to help others. Knowing that Mr. Munroe - who may well be one of the brightest minds of our generation - has the same reaction as I do when confronted with these emotions makes me feel less alone.
He's a private guy. I respect that. He has, however, shared with us glimpses into the difficulties with which he struggles. That he can continue to be witty, humorous, helpful and kind through all of that is of itself amazing.
That he occasionally lets us see the humanity of the artist makes me feel less alone.
My apologies for an inability to express myself appropriately. Better to be mushy here where he's unlikely to read it than to gibber at the man in person.