Re: Asteroid belt?
Disrespect Mars and I'll go through you like a door.
7055 posts • joined 31 May 2010
Dinosaurs remain among the most populous animals on the planet. Perhaps you've heard of their modern incarnation. They're called birds.
And there are reportedly half a trillion of them out there.
So next time, pick a better analogy. One, perhaps, that doesn't demonstrate you failed to evolve your thinking to incorporate scientific knowledge that is now decades old.
I should point out that ad block penetration has surpassed 30% already.
Crayon does seem the appropriate MID for Microsoft fans.
Maybe if Microsoft put in a "stop fucking spying on me, you lousy git" button, people would be more inclined to adopt it. At least with Windows 7 I can murder the bloody call home with a hosts file. In Windows 10, they hard coded the bastard in so that even this doesn't work.
Fuck Microsoft. Fuck them with Lennart Poettering.
Come in May. OpenStack Vancouver. It's a great reason to hit the continent, and also it's Vancouver, not Edmonton. That's always better.
It's Red Hat. They'll give the whole thing to Poettering, and he'll build it into systemd. It will end up riddled with bugs and with security as an afterthought's afterthought. When the first logoed vulnerability emerges, he'll blame end users. Nobody will be able to fix anything because all the logs will be binary, and corrupted by the attacks.
Red Hat will snicker and go "who else are you going to buy from". Three more RHEL derivatives will emerge the next day. Oracle will pretend it didn't hear anything about any of this.
So, you know, a standard Tuesday in IT.
Why do you assume CPUs have to be "safe" for people to buy them? Do you honestly think that Spectre has slowed down CPU purchases? Do you think anyone but the handful of nerds that haunt these forums and some security nerds that read the Grugq a lot actually care about any of this?
Make no mistake, Intel is still the seller of server CPUs. They'll keep on being the seller of server CPUs through the lawsuits, and they'll emerge from this as the seller of server CPUs.
I'm as much of a fan of the underdog - and hence AMD - as anyone, but let's be realistic. Intel has an iron-fisted monopoly, and unless someone goes in there with the Almighty Axe Of Anti-Trust and cleans them out, they'll continue being a monopoly for at least the next decade.
You know it. I know it. Everyone except a few deluded die-hards knows it. So let's not pretend about this, shall we?
The people who buy things don't give a fnord about "security", and they never have. If they did, the Internet of Things wouldn't be such a gor'ram security dumpster fire. Nobody would ever buy Cisco or Supermicro again, and the list goes on and on and on.
But you know what else? It's not their asses that end up in front of the judge. It's us. The hoi polloi at the coal face. Nobody sends suits to jail. They make some poor bastard working in ops the lightning rod and ruin his life, and the lives of his family instead.
So yeah, Intel's dominance isn't going anywhere. Nobody's going to do a bloody thing about it. We're going to be responsible if/when it all goes horribly wrong, and we should know about this ahead of time so that we can take precautions and/or run the hell away in terror. (Depending on how you view risk.)
For suits, the only risk they care about is "will this cost me some of my bonuses". For nerds, the risk we need to worry about is "will this land me in front of a judge"? I leave it as an exercise for the reader to work out how likely (or not) they feel this is to affect their chances of negative consequences.
But we should all have eyes open here and understand that this issue isn't going away, and that we, as the plebeians, have no choice but to deal with it.
AMD and some ARM chips are affected by Spectre. But let me be 100% clear on this: AMD is completely irrelevant to this discussion.
AMD chips might power a smallish percentage of endpoints, but they power almost none of the existing fleet of deployed servers. Even if, for some reason, we all decided to buy AMD tomorrow, AMD couldn't deliver. At full ramp, AMD would struggle mightily to put out enough silicon to cover 10% of planetary server capacity during the next refresh cycle, and there is zero indication that demand exists for them to invest in that many wafers.
I'm sorry, but in the real world, AMD just isn't part of any discussion about server chips. There's only one player in that market, and they'll be the one everyone buys replacement chips from.
"And Google's, obviously."
Fuck off with your whataboutism. Who are you, Sean Hannity?
Jibbers garlic-covered Crabst...
Yes, I'm sure. I'm sure because already "serverless" doesn't mean "Amazon". Serverless has a lot of ardent followers, and they're building open source solutions that do what Amazon's Lambda does.
Similarly, machine learning, AI, BI and other BDCA tools are seeing a lot of open source growth. In short: a commercial entity (like Amazon) might come up with the initial concept and get to milk it for a while, but all technology eventually ends up democratized.
The basic approach of serverless - write simple scripts (or have a UI/digital assistant write them for you), and have those scripts simply pass data from one black box to another - isn't going away. Humans don't typically uninvent things, especially in IT.
The increasing adoption of digital assistants also makes this seem like a permanent thing to me. The black boxes used by serverless types are really no different than the "skills" one can build for Alexa. Indeed, many of those skills are nothing but serverless scripts that call black boxes, and I've already seen Alexa used to create new serverless apps, which could be published as Alexa skills...
The whole thing has already reached critical mass and started a cascade. While a bunch of whingy nerds who can't disconnect "application development" from "enterprise apps" might not get the importance of an ever-increasing library of digital capabilities that can be used by any Tom, Dick or Harry, non-nerds seem to get the importance really quickly.
So we, IT nerds used to the way things were, we might not see the utility, or ever get around to using serverless to do our jobs. Our kids, however, will use serverless-style tech to do all sorts of stuff. Using - and trusting - those black boxes will be as natural to them as smartphones are to my generation, or staring blankly at VCRs flashing 12:00 was to my parents' generation.
So sure, in the short term I expect some of these black boxes to disappear. That will lead to a backlash, and to standardization, to the development of "skill libraries" and all of the predictable evolution of responses to this problem. Corporate greed is eventually overcome in tech, even if it takes a decade or so for us to get our shit together.
It is very early days for this technology yet, but the basic approach is sound. And those who have used serverless in anger tend to become adherents pretty quickly. Even the disenfranchised and cynical nerds.
@Craigh yes, but you are doing enterprise work. Application development for commercial purposes. What you - and all the rest of the angry nerd mob of commentards - are missing is the part in my article where I very specifically said "It is the commoditisation of retail and consumer application development".
Serverless isn't going to replace traditional development for organizations looking to build their own middleware anytime soon. VMware isn't going to make their next vSphere UI using serverless. That's not where the revolution comes in. The benefit of serverless is not "helping highly trained nerds do what they already do better", despite the inability of the commentariat to conceptualize anything different.
Serverless is going to let people who are not nerds create applications that solve problems that nerds and commercial entities are not normally interested in.
For example: Let's say that I want to grow cannabis plants at home. (In a few months we can legally grow 4 plants per household here.) Cannabis is a particularly persnickety beast to grow. Much more so than the bell pepper plants that I have all over my house.
With serverless, I could take data from my house - images of my plants, or perhaps sensors similar to these - and feed that data into a black box up in the cloud. I could set the thing up so that if A happens, the plants get watered, if B happens, a fan turns on and if C happens, it sends me an alert.
Traditionally, if I wanted something like this I would have a few options:
1) Build something myself involving an Arduino (lots of work)
2) See if a vendor has already built a pre-canned solution, probably involving their own sensor ($$$)
3) Hire a human ($$$$$$$$$$$$$$$$$$$$$)
With serverless, however, I just need someone to write a "black box" that can accept some form of data I can provide (images, sensor data, etc) and spit out simple data about the plant in question. That black box does not, to my knowledge, exist today...but I am sure it will soon. (I could even train my own black box using machine learning, but that's another discussion.)
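To make the "if A, water; if B, fan; if C, alert" idea concrete, here's a minimal sketch of what such a serverless script might look like, assuming a Lambda-style handler. Every name here (`plant_health_check`, `trigger`, the event shape) is invented for illustration, and the "black box" is stubbed out with trivial rules standing in for someone else's trained model:

```python
def plant_health_check(reading):
    # Stand-in for the cloud "black box". Here it's a trivial rule set;
    # in the real thing it would be a model somebody else built and trained.
    if reading["moisture"] < 0.25:
        return "thirsty"
    if reading["temp_c"] > 28.0:
        return "too-hot"
    if reading["temp_c"] < 5.0:
        return "needs-attention"
    return "ok"

actions = []  # records what the script decided to do


def trigger(device):
    actions.append(device)


def send_alert(msg):
    actions.append("alert:" + msg)


def handle_sensor_event(event, context=None):
    """The serverless 'script': pass data into a black box, act on the verdict."""
    verdict = plant_health_check(event["reading"])
    if verdict == "thirsty":
        trigger("watering-valve")     # A happens -> water the plants
    elif verdict == "too-hot":
        trigger("circulation-fan")    # B happens -> turn a fan on
    elif verdict == "needs-attention":
        send_alert("check the plants")  # C happens -> tell a human
    return verdict
```

The point isn't the rules themselves; it's that the entire "application" is a dozen lines of glue around a black box someone else maintains.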
Essentially, serverless is a scripting platform allowing access to an ever-increasing number of "skills" or "capabilities", in a marketplace-like format. A platform I don't envision being used to replace super-niche industry development, but one I envision being used by civilians to automate and enhance their daily lives.
Application development is already a thing that is out there enhancing businesses that can afford qualified nerds. Now it is going to start being something we can use in our day-to-day lives to collect, modify and act on the data around us.
The individual applications won't last long, it's true. But the black boxes themselves can. Oh, they'll change and evolve as they're maintained, but as a general rule, once invented, they won't be uninvented.
If someone creates a black box that does facial recognition, we aren't going to find ourselves 50 years from now without a black box that does facial recognition. We may find that someone has created better facial recognition systems, but we will always have at least the original black box's capabilities.
The digital skills being created accumulate until, one day, you'll be able to say "computer, watch the roses in the garden and water them if they start to show wilt", and it will happen.
That phrase spoken to the computer is a script. It told the computer to gather a dataset consisting of images of the roses in the garden. It told the computer to send those images through a black box that determines if there is wilt. If there is wilt, it will trigger the garden's watering system. That, right there, is serverless. The only difference is the interface used to create the serverless script.
The end user doesn't care if the black box that checks images of roses for wilt is the same today as it was yesterday. They only care that it works. All that matters to them is that a black box that performs that action exists, and that it doesn't stop existing.
If we did call it gluten-free, we'd probably get a whole bunch of new people interested in application development! :)
There's a whole industry full of supposed trained experts with loads of experience and education and all our information is ending up on haveibeenpwned anyways. You'll excuse me if I don't feel that people doing the application development equivalent of drawing lines between black boxes that are designed and maintained by large multinational public cloud providers could possibly do any worse.
Hell, maybe those public cloud providers will actually hire enough security people that the stuff they offer is secure by default. It would be a nice change from the shitpocalypses created by the "I know better" or furiously cheapskate crowds.
Ease of use is the difference. Yes, serverless is basically "scripting for noobs". But the "for noobs" part is really critical.
Yes, really smart, well paid, highly experienced nerds can write a bunch of gobbledygook scripts, or write applications in proper development languages that call libraries or other application components. That's hard. And it requires expertise. And if you are incorporating some black box (application, library or other component) into your solution you first have to find it, download it, install it, make sure it's in the right place to be called, etc.
Almost all of that goes away with serverless. You write what amounts to a script, but you have access to this (constantly growing) library of tools and features supplied by the cloud provider. Making use of those tools and features is generally much easier than a bash script, and you don't have to manage or maintain the underlying infrastructure in any way.
When you take something that requires significant expertise and make it something that nearly everyone can use, that's a pretty big deal.
I'm sure that there were a bunch of nerds who knew how to build and maintain their own electrical generators who also said that national grids were a stupid idea. If they can run their own generation capacity, and wire up their homes/businesses exactly according to their own specifications, why can't everyone? Standardization means everyone isn't using the optimal plug/wire gauge/whatever for the job! There might be inefficiency! A national grid won't be revolutionary or change everything, because people can use electricity now!
I, for one, am glad nobody listened to them.
@Craigh You are correct: there is no reason for various cloud features (such as BDCA tools) to be tied to serverless. On the other hand, serverless doesn't offer you much of any real use except the ability to push data into and pull it out of various cloud features.
The point here is that serverless basically allows one to lash together some pretty complex applications about as easily as one could write a (reasonably simple) batch script.
Real developers will build the various black boxes. Trained IT practitioners will be able to use those black boxes either through serverless or other cloud solutions. But for the masses, serverless will be their gateway to application development.
Right now, today, in order to build serverless applications you need to actually type a few lines of code. But the code you need to write in order to pipe data from A to B to C and back out again is simple. It's the sort of thing that you could build into a drag-and-drop GUI and have 4 year olds use.
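As a sketch of how little glue that actually is: the boxes below are hypothetical placeholders (none of these names refer to a real cloud API), and the whole "application" is one line chaining them together — exactly the kind of thing a drag-and-drop GUI could generate:

```python
def box_a(data):
    # e.g. "fetch the latest image"
    return data + ["fetched"]


def box_b(data):
    # e.g. "run it through some recognition service"
    return data + ["analysed"]


def box_c(data):
    # e.g. "publish the result somewhere"
    return data + ["published"]


def pipeline(data):
    # This is the entire script: pipe data from A to B to C and back out.
    return box_c(box_b(box_a(data)))
```

Each function could just as easily be a node in a visual editor; the "code" is nothing but the wiring between them.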
Nerds think about serverless from the viewpoint of "how does this help me do what I do now"? Non nerds don't do any of this now, and when shown how easy serverless is, say "wow, look at all the neat things I can do that I couldn't do before".
So serverless - in the form of AWS Lambda - is itself pretty useless. But it is this gateway to all the other things that Amazon's packed into AWS. It isn't the only way to use all of AWS's goodies, but it absolutely is the means by which individuals who aren't specialists, trained practitioners or experienced cloud architects will set about making tools for their own needs.
@Sir Runcible Spoon well that comment certainly created a combination of wry smile and crushing, crushing depression. A little bit too on the nose...
Maybe if I had been, you'd have learned a few things. :P
"Minix used it and look how well that didn't do out in the wild"
Minix is easily the most widely deployed OS in the world. It's in every Intel IME, and a whole heap of other embedded systems.
Think before engaging keyboard, mate. There's a lot more to tech in this machine-to-machine world than end-user-facing OSes.
KVM: Now powering two of the big three public clouds. What's that VMware? KVM is "not ready for the enterprise"? I think you know better.
"Camping with a projector to watch a movie of a campfire do you mean ?"
Where's the fun in that? Who doesn't like fire? I like fire. Hmmmm. Fire.
"Given the general lack of flat white walls in the general area when out camping in the woods/fields/wilderness, won't the average millennial fairly-well-off youngster be a bit pissed off lugging an 8ft projector screen around with them?"
I'd just bring a white blanket and hang the thing off the side of the car, but hey, that's me. "Flat" is a luxury. You're camping, eh?
"Isn't the point of camping to rough it a little?"
Depends on whom you ask. I think of camping more as "getting away from people and all the noise of a city". If I could, I'd live on an acreage surrounded by trees all the time. I'm too poor. So I go camping.
Camping with a projector to watch a movie while I poke the campfire? Sign me up.
Now, if we could just extinct mosquitoes...
They're less "technically challenging" than they are "a damned lot of tedious, boring, thankless work". I.E. a miserable pig.
You have to go through a lot of data before you even get to a candidate. Long after the fun work's been done, and the challenges of building the 'scope are long past, there's just crunching image after image. If you're really lucky an algorithm can help some. Even then, that's an awful lot of faint smudges and maths.
G09 83808 isn't the galaxy spotted earliest in the universe's history, however, it is spotted very early on, and we've gotten a chance to get a slightly better look at it than we could with previous instruments. Any and all objects that we can spot which are from about 1B years after the universe formed or earlier will get press, and rightly so. They're a miserable pig to find, and they can tell us a great deal about the early universe.
This is a feature, not a bug.
Oracle, Microsoft, Amazon: they don't have customers, they have hostages.
Mine's the one with the empty wallet -->
While the benefits of current robot-assisted surgery are somewhat questionable, this doesn't mean surgery robots are useless...or at least that they will remain so. Consider, for example, this prototype.
This prototype robot uses machine vision - amongst other modern techniques - to create a (mostly) autonomous surgery bot that is actually better than humans. Yes, it has some bugs, but it's an early prototype. I expect to see some rapid enhancement in this area.
"Should Google fact check Presidential Tweets before showing them on its site?"
Yes. Fuck yes. Absolutely yes. Yes a dozen more times and yes.
So should everyone else.
No they weren't. Drop a sandworm in water, watch it shatter into sandtrout. In relatively short order they would sequester all your planet's water beneath the surface and convert the planet into a desert filled with sandworms.
Do not fuck with Shai Hulud. He will eat your entire planet.
Except, you know, for most of us. Microsoft isn't trustworthy, and likely never will be.
I have some questions. Where can I reach you?
"people like them give women a bad name"
So a handful of female CEOs that screwed up "give women a bad name", but thousands of years and millions of male leaders of companies/governments/transcontinental empires etc. don't give men a bad name?
Look, I question the validity of affirmative action as much as the next guy, but dude, that's some completely unsupportable sexist bullshit right there. What the fuck.
"religion muslim or jewish or some recognized religion other than anything those based on Jesus"
Um...Jesus is an important prophet in Islam...
"Are most IT jobs actually bullshit jobs where the performance in the role actually has little impact on the org?"
I second the "yes" comment.
Speaking as a Canadian - and one who apparently has quite a bit more knowledge of what you're babbling about than you do - I would like to, in the kindest, politest possible way convey my feelings about your inane babble:
No, we can't. And the reason is that politics, privacy and security are tightly wound up into an inextricable morass.
Regardless of your politics, and what side you take in the multi-tentacled debate, politics affects the balance chosen between security and privacy. Or whether the needs/desires of individuals (as opposed to corporations and/or states) should be considered at all.
So put on your big boy pants and welcome to the real world. Politics is everywhere.
Thank you kindly for choosing to engage directly with the community. It is nice to see anyone from any vendor taking the time.
I do have one small question however: does the rest of Microsoft know you're doing this? You're harming their ruthlessly customer hostile image. (Well, not a lot, as the Windows team exudes so much animosity towards literally everyone that it's hard to overcome...but you are denting the evil overlord image a little...)
I am told by some of the hard core gamers in my sphere that if you want to do VR at 240Hz then having 8+ cores @ 2.8GHz or better is usually required. As I'm poor, and still working on a video card from 3 years ago and a Sandy Bridge-era CPU, I cannot confirm this.
Apparently VR is a thing that some people do. I don't understand. Why do you need VR to play Scorched Earth?
Containers = multiple apps, single OSE
VMs = multiple apps, multiple OSEs
In car IT, there are a lot of very specific differences between OSEs for the different apps.
@John 104 Of course not. Though you are demonstrating that far too many of us cling, terrified, to the past.
Looks like someone never discovered composable workloads or composable infrastructure. But Happy Sysadmin Day anyways.
Yeah, I wasn't going to get into pricing with these sorts. Open source stuff like Ceph or LizardFS can handle HCI storage layer, with OpenNebula and many others providing great management UIs. Then we go up through the various smaller contenders like Maxta, Yottabyte and Nodeweaver to the midsized ones like Hypergrid or Scale to the big heavies like SimpliVity, VMware or Nutanix.
The price range varies wildly, and even Nutanix have entry-level gear that isn't that badly priced. HCI isn't expensive. It certainly isn't as expensive as ancient three-tier architecture. That said...
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
"Does HCI actually give you the ability to go to a web interface, spec your server and provision it without ever going near a techie like a cloud platform does?"
Depends on the platform, but yes, several HCI solutions do exactly this. Nerds optional.
"On prem IT really isn't like that, HCI or no HCI, because you still have to scale to peak"
You only have to keep your critical workloads on-prem. The stuff that is actually burstable is likely not mission-critical (it will just run slower if there aren't 'enough' instances) and you can farm that out. To the public cloud or to a service provider. Use the right tool for the job. The world isn't black and white/one or the other.
Use the public cloud only for what it's good at: providing non-mission-critical capacity when you are over peak. It's 2017. We can do this stuff now without having to throw out our ability to sweat our assets through bust times, have control over our own data, or run sensitive workloads in our own legal jurisdictions.
Hybrid IT isn't just some buzzword. It's not even some ideal towards which we are striving in the distant future. It's a thing we do today. Some things on prem, some things in the cloud. It's not rocket surgery. It's just some bloody YAML.
"Economics 101 lets you know that cloud will be less costly and better than on prem"
Let me guess, you also believe - despite mountains of evidence - that trickle down economics works.
And the earth is only 4000 years old. The eye is irreducible complexity. *thud* *thud* *thud* *thud*
"Everyone knows this intuitively. If I were to ask you if it would be less costly to purchase network services from an ISP like Verizon or bury your own fiber lines underneath the ground/oceans around the world, everyone would say that obviously it is going to be a lot more cost effective to just rent lines from the ISP where many people share the cost burden than to pay billions to create your own global network...."
Um...you're demonstrably wrong. Massively, demonstrably wrong.
First off, it's cheaper to build your own global network the instant the cost of laying your own fibre gets to about 1/20th the cost of renting it. Right about there you can go lay multiple strands of cable, use whatever capacity you need and rent out the rest.
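The break-even logic above can be sketched with toy numbers — invented purely for illustration, not real fibre or ISP prices:

```python
rent_per_year = 100_000.0        # hypothetical annual cost of renting capacity
build_cost = rent_per_year * 20  # "about 1/20th": building costs ~20x one year's rent
strands_laid = 12                # lay multiple strands while the trench is open
strands_needed = 2               # how many you actually use yourself
years = 10

# Total cost of renting for the period.
rent_total = rent_per_year * years

# Renting out the spare strands (at the same per-strand rate you were
# paying) offsets the build cost over the same period.
per_strand_rent = rent_per_year / strands_needed
spare_income = (strands_laid - strands_needed) * per_strand_rent * years
build_total = build_cost - spare_income
```

With these (made-up) numbers the build pays for itself several times over, which is the "lay multiple strands, use what you need, rent out the rest" argument in miniature.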
You know, like Google does. Sure wasn't cheaper to just stand up their own datacenters and pay the rent to the ISP. Nope, they laid their own fibre. And yes, they even have a stake in oceanic cable.
I can introduce you to municipalities that also lay fibre for everything from last mile to backhaul. I can introduce you to WISPs and even businesses as small as 10 people who would rather pay the municipal fees to dig a ditch to lay fibre between their location and the local internet exchange than to pay the ISP. Shock, horror...it turned out to be significantly cheaper.
In some cases the ISP is cheaper. In many others it's not. Just like in some cases (you know, those very rare niches where you have "cloud native" burstable workloads) the cloud is cheaper. In many others (such as 24/7 workloads), it's not.
As with everything in IT it depends. You do a needs assessment and you use the right tool - technical and economical - for the job. You don't decide on the tool and then contort all reasoning beyond logic in order to fit what you do to that tool.
Also - and I don't understand why I have to keep repeating this to someone supposedly so smart - there are huge differences between regulated industries (like telcos) and completely unregulated ones (like public cloud services). What my governments impose on telcos here as minimum service quality, pricing caps and more keeps monopolistic behavior in check. There is absolutely nothing keeping monopolistic behavior in check amongst the cartel of public cloud providers.
Also - and again, I can't understand why this is so hard for you to get - when you do the actual numbers on running your own workloads you don't have to be running that many workloads 24/7 before rolling your own is significantly cheaper than public cloud.
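A back-of-envelope version of "doing the actual numbers" — every figure below is invented for illustration, not quoted from any provider's price list:

```python
cloud_per_hour = 0.40          # hypothetical on-demand instance price
hours_per_year = 24 * 365      # a 24/7 workload

server_capex = 6_000.0         # hypothetical server, amortised over 5 years
server_opex_per_year = 1_200.0 # power, cooling, rack space, etc.
amortisation_years = 5


def yearly_cost_cloud(hours):
    # Cloud scales with hours actually used.
    return cloud_per_hour * hours


def yearly_cost_onprem():
    # On-prem is (roughly) flat: amortised capex plus running costs.
    return server_capex / amortisation_years + server_opex_per_year
```

Under these assumptions an always-on workload is cheaper on-prem, while a workload that only runs a few hundred hours a year is cheaper in the cloud — which is the whole "right tool for the job" point.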
Religion. All you're touting here is religion. It is no different than trickle down economics or praying the gay away.
Biting the hand that feeds IT © 1998–2018