* Posts by lamont

12 posts • joined 3 Aug 2010

Tuesday's AWS S3-izure exposes Amazon-sized internet bottleneck


Really this is almost a non-issue to people who aren't perfectionists

Outages are going to happen. If the only time this year your service crashes is when everyone understands that half the internet is offline because S3 is down, then you've got a ton of air cover on this one with your customers, who are probably themselves struggling with S3 being down.

Far from being a rationale to get off of Amazon, this is a good rationale for using Amazon's services. If you're a small shop you can go the Amazon route and only get hosed when Amazon borks up the whole internet -- or you can try to set out on your own, and then when you inevitably screw it up (because you've got vastly fewer resources than Amazon at your disposal) you have nobody else to blame.

There's a third option which people think they have, which is that it's simple enough that they'll solve it all. But inevitably the complexity of your software and systems will bite you hard some day or night, no matter how smart you think you are (and the fact that you think you can reach perfect operational uptime likely indicates that you don't understand the uncertainties, which makes you much less likely to succeed).

And something aspie perfectionist IT people don't understand is that the company which spends enough resources to chase a flawless operational record will get beaten by the company that spends enough to get by and diverts the freed-up resources into other efforts to capture customers. So if your UI hasn't been updated in 10 years but your uptime is 100% and your backups are flawless, you're probably sinking in the marketplace. And if you had the resources to do it all, you'd probably be a Fortune 500 company like Amazon -- the rest of us have to make business trade-offs.

Annoyingly precocious teen who ruined Trek is now an asteroid


Most of the characters on that show had cringeworthy moments. The whole of season 1, long before Wesley showed up, is pretty jam-packed with cringe. He may have ruined a few episodes, but hardly the "whole show".

Canonical accused of violating GPL with ZFS-in-Ubuntu 16.04 plan


Hopefully they do file a lawsuit and this stupid argument gets settled.

Researcher claims Facebook tried to gag him over critical flaw


Re: There is a difference

Agreed. Something to keep in mind here is that in the 90s this kind of "research" would just have been "hacking", it would all have been illegal, and you'd simply get prosecuted and thrown in jail if you were caught.

And what you are doing is exactly like walking up to someone's door and trying to pick the lock. It's pretty enlightened that Facebook and Google have bug bounties and will tolerate these kinds of attacks as long as they're disclosed. If I found someone picking the lock on my door in the middle of the night, I'd just call the cops and get them arrested. I know that the locks on my doors can be picked because I've picked them before; you're not telling me anything I don't already know, and you're behaving like a burglar.

So you've already got lots of leeway to probe into Facebook and Google and other companies like that, in ways that would have gotten you into seriously hot water in decades past.

And the analogy of then going through someone's underwear drawer is pretty appropriate. I was going to say it's like rifling through their fridge (I need to clean mine out, I think some chicken may have gone bad, and I don't need someone to helpfully break into my house to inform me of that...).

It is extremely entitled to think that the bug bounties companies offer mean that you have the right to attack their servers in any way you like as long as you disclose what you did.

Planes fail to find 'credible' candidate for flight MH370 wreckage


Mobile phones don't have remotely enough range

No cell towers in the middle of the ocean.

Agreed, it looks like it was probably a fire that disabled the crew and passengers with fumes but wasn't severe enough to take the plane down. In the first moments they changed course to head to an airport, but then the plane just travelled on as a ghost plane.

So the pilots weren't terrorists or suicidal and all the fantasies about the plane being diverted and landing somewhere were wrong. Just tragically crashed into the ocean.

I still don't understand why they had enough time to turn the jet but not enough to radio a distress call. Still, this makes more sense than anything else I've heard so far.

Security breach at Opscode as attackers download databases


Note that the announcement mentions the user passwords in the database were cryptographically hashed (with a strong hashing algorithm).

The contents of the database were the public wiki and ticket system, which are already indexed by Google.
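For anyone unclear on why hashed passwords matter here: a strong password hash is salted and deliberately slow, so a dumped database doesn't directly yield the passwords. A minimal sketch using Python's standard library (the algorithm and iteration count are illustrative choices of mine, not what Opscode actually used):

```python
import hashlib
import hmac
import os

# A random per-user salt defeats precomputed rainbow tables, and a high
# iteration count makes each offline guess expensive for an attacker.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)

# Verification recomputes the hash from the candidate password and the
# stored salt, comparing in constant time to avoid timing leaks.
candidate = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
print(hmac.compare_digest(stored, candidate))
```

An attacker holding `stored` and `salt` still has to grind through 100,000 hash iterations per guess, per user.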

You've seen the Large Hadron Collider. Now comes the HUGE Hadron Collider


Leptons not Hadrons

The ILC is an electron-positron collider, not a proton-proton or proton-antiproton collider. Electrons and positrons are leptons, not hadrons. 500 GeV per beam is sufficiently above the 125 GeV mass of the Higgs that they expect to be able to create massive numbers of Higgs particles, cleanly, without as much QCD background as the LHC produces. The LHC uses more massive particles so it can reach higher energies, which makes it better for discovery; the ILC would be cleaner and better for precise measurement.
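As a rough sanity check on those numbers -- assuming (my assumption, not stated above) that the benchmark production channel is Higgsstrahlung, $e^+e^- \to ZH$ -- the minimum centre-of-mass energy required is just the sum of the two masses:

$$\sqrt{s_{\min}} \approx m_Z + m_H \approx 91\ \mathrm{GeV} + 125\ \mathrm{GeV} = 216\ \mathrm{GeV},$$

so a machine running at several hundred GeV sits comfortably above threshold, which is why copious, clean Higgs production is expected.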

The Register, as usual, gets all the details wrong in favor of attitude...

Public cloud will grow when experienced IT folks DIE



Private cloud can still be done cheaper. However, we still have a generation of IT managers who lust after "big iron": their idea of a private cloud is huge servers, a huge SAN, VMware vCloud, lots of Cisco 7k-series switches and routers, and massive checks written to VMware, EMC and Cisco. Those dinosaurs need to die.

The next generation needs to be building thin, dense private clouds using OpenStack, largely without SANs, paying nothing to VMware and EMC, and keeping the Cisco checks under control -- then using open source monitoring and management tools. You can absolutely beat Amazon's prices by a factor of 3-4x if you do that.

If you hire a bunch of lowest-common-denominator idiots for the lowest dollar to run your IT, though, you'll find yourself chasing vendors to make up for the idiots you hired, burning money on six-figure IT solutions vendors over and over again, and you'll be roadkill. Hire smart people, build around open source, keep it lean, and private clouds are the way to go.

Cloud doctors, DevOps and unconferences: Pass the Vicodin


The main point of DevOps

Put the SDEs on call for their own software bugs, and close the loop so that they aren't just tossing an unlimited stream of operational problems over the Chinese wall between dev and ops, getting all their own technical debt "off their books", so to speak.

Sysadmins! There's no shame in using a mouse to delete files

Thumb Down

"as long as they understand the fundamentals of how a computer works"

Okay, I'll bite... what fundamentals are you talking about?

The fundamentals of how systems work that I interview for are things like what strace does, what a system call actually is, and what subsystems one would expect to see in a Unix or Unix-like kernel. Explain the TCP 3-way handshake, or explain how traceroute works. Other interviewers like to use the boot-process question.

Generally you find very little understanding of the /fundamentals/ of how a computer works in people who just push the mouse around a GUI. Those are the admins who don't know TCP at all, or who barely stumble through SYN, ACK, SYN|ACK (yes, usually in that order). The ones who know options negotiation, PMTU discovery, SYN cookies, etc. are the ones who also have strong command-line fu.
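Since the handshake comes up so often in interviews, here's a minimal sketch (plain Python sockets over loopback, nothing specific to any particular interview question) that you can point strace or tcpdump at to watch SYN, SYN|ACK, ACK go by in the correct order:

```python
import socket
import threading

# The TCP three-way handshake (SYN -> SYN|ACK -> ACK) happens inside
# connect()/accept(): connect() sends the SYN, the listening socket
# answers SYN|ACK, and the client's ACK completes the connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()   # handshake has completed by the time accept() returns
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

cli = socket.create_connection(("127.0.0.1", port))  # SYN goes out here
data = b""
while len(data) < 5:         # loop in case the 5 bytes arrive in pieces
    part = cli.recv(5 - len(data))
    if not part:
        break
    data += part
t.join()
cli.close()
srv.close()
print(data)
```

Run it under `strace -e trace=network` (or capture lo with tcpdump) and the connect/accept pair maps directly onto the three packets.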

Only using a GUI keeps you dumb.

Google network lord questions cloud economics


Good article

Another saving is that if you're using Amazon you're stuck with the handful of virt memory sizes you can get from EC2: 1.7GB, 7.5GB, 15GB, etc. If you need a bunch of 4GB virts then you're stuck buying 7.5GB RAM servers and either politically wrangling apps onto the same O/S instance, or just wasting RAM.

You'll get better "compression" by being able to size the virts yourself by running your own internal cloud.
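To put a rough number on that waste (the instance sizes are the ones quoted above, and the 4GB requirement is just the example already used):

```python
# If every app needs 4 GB but the smallest instance size that fits is
# 7.5 GB, nearly half the RAM you pay for sits idle. Right-sizing virts
# on a private cloud is exactly the "compression" being described.
needed_gb = 4.0
ec2_sizes = [1.7, 7.5, 15.0]   # 2010-era EC2 memory tiers from the comment

smallest_fit = min(s for s in ec2_sizes if s >= needed_gb)
wasted_fraction = (smallest_fit - needed_gb) / smallest_fit
print(f"smallest fit: {smallest_fit} GB, idle RAM: {wasted_fraction:.0%}")
```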

I've also been doing these calculations repeatedly, because our board of directors seems to be infected with irrational lust for the cloud, and I keep determining that break-even on a cash-flow basis comes after about 3-4 months for 100% duty-cycle instances -- so Vijay is probably being conservative about what can be achieved in the real world.
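The shape of that back-of-the-envelope calculation, with made-up illustrative prices (NOT Amazon's actual rates, and not the figures from any real quote):

```python
# Buying a server outright vs. renting an always-on cloud instance:
# break-even is simply capex divided by the monthly rental cost.
server_capex = 3000.0      # hypothetical purchase price, USD
cloud_hourly = 1.20        # hypothetical on-demand rate, USD/hour
hours_per_month = 730      # ~24 * 365 / 12, i.e. 100% duty cycle

monthly_cloud_cost = cloud_hourly * hours_per_month
breakeven_months = server_capex / monthly_cloud_cost
print(f"break-even after {breakeven_months:.1f} months")
```

With these placeholder numbers the crossover lands in the 3-4 month range claimed above; lower duty cycles push it out proportionally.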

However, if your IT department spends millions on expensive enterprise-class tools and money is flying out the door to VMware, EMC, NetQOS and every other IT vendor out there, then it may be cheaper to go with the cloud -- but in going with the cloud you are getting Amazon's framework built on top of Xen and open source or in-house tools. You are not getting infrastructure engineered to "five 9's"; you are getting infrastructure engineered to "you bet it'll fail -- so spin up another one!"

If you run the numbers on cloud and find that it makes economic sense, you should fire your IT staff and start over from scratch with people who understand cheap open source.

Researchers: Arctic cooled to pre-industrial levels from 1950-1990


Does not contradict the science

Skeptical FAIL.

Climate science has consistently predicted that climate change up until the past 20-30 years was dominated largely by natural forces (orbital cycles, solar cycles, volcanoes, etc.), and that only in the past few decades should the temperature start to respond to increasing levels of CO2 in the atmosphere, due to the integration over time of the historical build-up of CO2 and thermal lag in the oceans.

Anthropogenic climate change from CO2 and other greenhouse gases is also partially offset by anthropogenic emissions of climate-cooling aerosols.


Prior to 1900 anthropogenic climate change is largely negligible, and up until 1960 the aerosol emissions largely balance the GHGs, but it is only post-1960 that anthropogenic GHGs are predicted to have dominated climate change.

So this is like using pictures of astronauts in space to "falsify" gravity. Actually, gravity predicts that orbiting astronauts should experience zero gravity. Similarly, climate science predicts that prior to the last few decades non-anthropogenic forces should have dominated. This article just confirms that climate science has been correct, and is yet another piece of evidence that natural forces dominated in the past and have only very recently become completely decoupled from typical behavior because of the introduction of anthropogenic GHGs.

Of course in the "skeptical" world, black is white, and evidence that is completely consistent with climate science is now evidence that contradicts climate science. All this new study does, however, is contradict the simplistic parody of climate science that editors at The Register hold -- they have, in fact, falsified their own understanding of climate science, but that just underscores that the editors here know nothing at all about climate science.


Biting the hand that feeds IT © 1998–2020