practically next door
It's a bit of trek from T or C, as it's dubbed locally, to SPAAM
A bit of a trek? In New Mexico, 40 miles is right neighborly, unless there's a mountain pass or something in the way.
Also I think you are thinking more of Kansas, Nebraska or Oklahoma than the mountain west.
Oklahoma has lots of nice scenery, particularly in the east, though there's a good bit to see in the west, too (eg the Gloss Mountains). I agree about Kansas and Nebraska, though. Everything west of the Flint Hills in Kansas on I-70 is nightmarishly dull. (Things perk up a bit if you get off the Interstate and on to more-local roads.)
I've never done the entire drive from Denver to T or C, but I've driven pretty extensively in northern New Mexico. Some of the scenery is beautiful. I'd take the scenic route: Denver to Colorado Springs to Raton, Taos, Santa Fe (stop in Española for lunch; the food's much better), Cline's Corners, Encino, then west on US 60 to T or C.
Or, if you want to make it a bit faster (but to my thinking less interesting), go from Santa Fe straight down I-25 through Albuquerque to Truth or Consequences. There's some stuff to see in Albuquerque's older districts. I'd rather detour down US 285, though.
To my mind it's a much better drive than, say, going through central Kansas on I-70, or central Nebraska on I-80. When I head out that way I drive the Oklahoma panhandle just to avoid I-80 in Kansas.
No Blake's I've been to puts "green chile sauce" on their burgers. They take sliced green chiles, slap them on the grill to heat 'em a bit, and put them on the burger. As it should be.
(In New Mexico, "green chile" can refer to the chile peppers themselves, or a stew - not to be confused with "chili"1 - made from green chiles, often with ground pork, and sometimes other ingredients. Green chile the stew, properly prepared, is also delicious, but doesn't work well on a burger because it's too loose.2 Thus green-chile-the-pepper is used directly for that purpose.)
1Chili, though it also names any of a wide variety of stews featuring chile peppers, traditionally contains tomatoes and always employs some starchy thickener - neither of which is generally part of New Mexico red or green chile.
2One can of course make a sort of open-faced sandwich, to be eaten with utensils, from the combination. Along these lines we have the legendary Poor Man's Benedict at Michael's Kitchen in Taos: Fresh bakery roll, sliced; shaved ham; poached eggs; cheese; and green chile stew (made with ground pork) ladled liberally over the whole. You can also get it red or Christmas, of course, and with other variations.
It's not difficult for a competent developer to pick up another programming language, provided it's not one of the evil (aka "esoteric", though the common meaning of that adjective fits JOVIAL) ones, like INTERCAL or Unlambda.
I've never actually written an ALGOL or JOVIAL program, but I've read plenty of ALGOL code (including some discussions of the man or boy test) and it's not a difficult language. The syntax is relatively expressive and should seem comfortable to anyone with decent experience in the large and prominent family of ALGOL descendants (Pascal, C, etc).
JOVIAL permits inline assembly, but S/390 assembly isn't difficult either. It's a pretty CISCy architecture and assembly conventions and APIs are very well documented.
The "it's written in an obscure programming language" argument just doesn't count for much. There are plenty of mainframe programmers around (the "skills shortage" is largely an effect of employers not wanting to pay reasonable rates), and any S/390 or z systems programmer should be able to learn JOVIAL in short order. I can think of four - no, make that five, maybe six - people off the top of my head that I believe could be competent maintainers of a decent-sized JOVIAL application in a month or so. And I'm not counting myself, because I'd have to brush up on some of my mainframe skills first.
Ok, so there should have been a parallel system to failover to.
There is a parallel system, and NATS failed over to it successfully when the initial problem occurred.
There wouldn't have been a publicly-visible incident at all, except for a second failure, this time restoring the network link between the two systems, and an aggravating factor, which is that once NATS downtime reaches a small critical threshold (I think the earlier Reg article said it's about 8 minutes), the overhead of delayed and diverted flights cascades into a major air-traffic interruption.
You can of course argue that there should have been a parallel network link. Or, perhaps better, two backup systems and redundant links among all of them.
There are three problems with that line of reasoning:
- Empirical research shows partitions will still happen, and applications will still have downtime. There's a good article in the September issue of CACM on this topic.
- More components means more complexity and more opportunity for harder-to-diagnose failures.
- Greater costs, in physical assets and staffing.
The underlying issues are 1) air travel is oversubscribed - too many planes in the air, too many flights scheduled, too many people traveling for the system to have much margin for accommodating transient errors. And 2) complex systems are complex. They're expensive to change, particularly when you want the result to be better, by various metrics, than the one it replaces.
There's no simple fix. "They should have a parallel system" is just pointless hand-waving. (On the other hand, Vince Cable is clearly an ignorant jackass, since his diagnosis is orders of magnitude less accurate and useful.)
Then they could remove it form the list altogether.
Yes, that's brilliant. Let anyone get any site removed from Google results by filing copyright-violation claims. There's no way to abuse that process, is there? It wouldn't be at all onerous to implement, would it?
In addition to "Preview" and "Submit", maybe the Reg should put a button at the bottom of the comment form that leads to a short course on critical thinking.
Google should not be manipulating anything, including search results
Without manipulation there are no search results. They're the results of human-designed, machine-implemented algorithms operating on data. They don't exist in nature.
And, of course, the entire value of Google's search results to the end user is in their "manipulation" (i.e., ranking).
The point, which was explained pretty clearly in the article, is that there are government subsidies for rolling out service if they meet the "broadband" requirements.
Well, my post might have been read as sarcasm
Those who forget Poe's Law are doomed to fall foul of it.
Duh. All windows will be replaced by touchscreens, displaying video of what's outside the car. Except when they're displaying something else, of course.
(Don't sneeze while operating the vehicle - attempting to clean the windscreen could activate the self-destruct system, or cause the entertainment system to download and play the entire Bieber catalog.)
Yes, sometimes DVD authoring is simply abysmal. Back in the days when there were video-rental stores, I rented a DVD of (the original) The Italian Job. Turned out it was authored with the entire film as a single title with pause, rewind, and fast-forward disabled. We were three-quarters of the way through the film when we were interrupted by something, and there was no way to return and see the rest of it without watching the whole thing again.
Which wouldn't have been so bad with that particular film, really, except that it was too late that night and I didn't get the chance before I had to return the thing.
Then you have the terrible DVD menus that aren't accessible to color-blind users, or that don't work properly if you have limited controls. (I've run into the latter problem watching DVDs while donating fractional blood products - a process that takes 90 minutes or so and involves needles in both arms, so movement is restricted. Look, folks: for a DVD with multiple episodes of a show, when one episode ends, start playing the next one. Is that really a difficult concept?)
Why would you even need a BluRay player these days?
Good question. I've never figured out why I'd need a BluRay player any day. To play BluRay disks, I suppose, but I've never felt the urge to do that. Seems like on the rare occasions that I want to watch recorded video, it's available on DVD, or on-demand from my cable provider.
"SEO people" would claim that sacrificing livestock to your servers will improve search rankings, if they thought they could get away with it. I dare say they'll jump on board anything they can charge for.
the US even setup [sic] a Parliamentary system in Japan after WW2
Wildly incorrect. Japan had had a parliamentary legislature - the Diet1 - since the Meiji era; it was established in 1889. While MacArthur rewrote the Japanese Constitution and greatly reduced the power of the emperor and the military, the structure of the civil government was largely preserved, and the Diet continued to function during the Occupation.
Never mind it has failed everywhere else it has been tried
Folks in Mexico, Brazil, etc might be surprised to learn their governments have "failed" - at least more so than any number of parliamentary governments.
The US government - with its tripartite Federal structure, bicameral legislature, bureaucratic executive with ill-defined limits, and reservation of various powers to the several States - is certainly very complicated and far from efficient. Whether it's "worse", by any sensible definition, than some other representative government is a question for idle speculation and political philosophy; only a fool would declare it as unqualified fact. Do other nations score better on various metrics? Yes, but those other nations aren't the US; they don't have the same geography, demographics, history, etc. And the same is true if you want to argue the US government is better than many other candidates.
Each nation's circumstances are different. While we can declare with some confidence that some have particularly bad governments, at the more-functional end of the spectrum things are far more nebulous. There are things I'd change about the US Federal government (and that of my state), if I had the power; but I can't in honesty say I know the results would be an improvement (and by what measure?). If you think you can, then more the fool you.
But please don't let facts get in the way of your irrelevant rant.
1Modeled, as the name suggests, on Germany's.
Now I'll have that song by Ed's Redeeming Qualities running through my head all day.
There's an aviation gag about modern planes being designed for a pilot and a dog. The dog's there to make sure nobody touches the controls; the pilot's there to feed the dog.
I remember seeing this joke1 in a press release from Lawson Software in the mid-1990s. Wikiquote suggests Warren Bennis coined it around 1991.
Personally, I think the challenges facing driverless cars are rather greater than those faced by human-supervised fly-by-wire planes. Even given the dismal safety performance of human drivers, I'm in no hurry to try the former. Once they have a record of good performance in widespread deployment over several years, perhaps.
1I use the term loosely.
wouldn't that be legally considered as entrapment?
IANAL, and who knows how courts would interpret it; but I can see an argument that it's not entrapment. You're not enticing someone to commit a crime. First, you're not communicating with the driver directly - you're just publishing a notice via the app. Second, they've already agreed, in principle, to commit a crime, and you're merely suggesting where it might occur.
it's an app, some developers, and a yacht
Based on the behavior of Uber devs and execs, I'd guess they also have a tidy amount of alcoholic beverages available.
They won't repair themselves but if your taxi driver repairs their car you probably should be looking for another taxi driver.
Good luck with that. In the US, at least, apparently it's quite common for taxi companies to hire drivers as independent contractors and make them repair their own vehicles.
From what I could gather from the article, they're talking about hierarchical storage management with a unified address space. Hasn't that been around since, like, 1990?
Maybe some of this is new to commercial data-center technology (that's not an area I spend a lot of time in), but from a computer science, OS-research perspective it doesn't sound very novel. Transparent hierarchical storage was a hot topic for UNIX systems by 1991 or so. The AS/400 had a single address space for all storage objects in 1988. All this stuff sounds nice enough, but hardly mind-blowing.
But perhaps I'm missing something.
but "generic" serving is typically faster with Nginx
Versus which Apache MPM?
More generally, Apache is so configurable that "Apache versus Nginx" is pretty much meaningless, even for a given workload.
I have nothing against Nginx, mind you; but some people seem to regard it as some magical innovation, as if there were no other event-driven servers out there. It's a well-understood architecture that's been around for at least a couple of decades. I've written a commercial event-driven multiprotocol server myself. Nothing terribly remarkable.
When that openness was mandated, DSL went from being an expensive and poorly maintained service to a moderately expensive and poorly maintained service.
To be fair, everywhere I've lived DSL is a pretty terrible service even when it's well-maintained. So maintenance isn't really the issue.
No mention of XGKS or PEX. Whew! In the clear. (None of my other X11 stuff ever made it out of IBM, as far as I know. Thank goodness.)
Or to put it another way, apparently no one's bothered doing a security analysis of XGKS or PEX yet...
(I almost want to dig up the XGKS metafile handling source. I have vague memories of fixing - this would have been around '89, I think - various overflows there, but I strongly suspect now that my "fixes" are rather fragile under deliberate attack. Oh well - it's not part of any stock X11 distribution, and probably isn't used much. Though I see Steve Emmerson migrated it to Sourceforge in 2000. Warms the cockles of my heart, that does.)
I'm sure it would have generated an "unchecked possible use of null pointer" report on the mentioned bug line.
That's a very bad thing to be sure of.
Anyone with extensive experience with static-analysis tools for C knows that they suffer from a lot of false positives, a lot of false negatives, or both. Some are better (sometimes much better) than others, and most are worth using. But they will not catch everything, unless they emit so many spurious warnings that they're useless; and using them against a large, old codebase is a major project.
Dynamic-analysis tools have their own limitations too, of course.
That doesn't mean analysis tools shouldn't be used. As Peter van der Linden wrote in Expert C Programming, one of the worst decisions in C's history was deciding to make lint a separate program; that applies even more for more modern analyzers, including free ones like splint and cppcheck. But they're no silver bullet.
How does a large critical opensource project not use free tools?
But all such projects do use those tools - unless, of course, none of the unpaid volunteer developers who work largely without supervision on projects that interest them personally take the time to do it.
How does anyone drive a car without checking the tires and lights before every journey? It's free, after all.
They're open source. Are you volunteering to do the analysis?
I'm always wary of stories of how much some people claim to be involved with computers in the early to mid 80's, most of these stories are BS...
And you know that how?
There's some BS here, all right, but it's not from those of us who were programming in the '80s.
1987? I don't think there was anything we could call the internet then.
There most certainly were both internets and the Internet in 1987. The IP Internet had existed since 1983, and the NCP Internet before then.
No, there was no HTTP, much less graphical browsers. But the Internet is not the web. Hell, we even had SMTP in '87.
X's biggest problem stems from the era it existed in. Back in the days when the Internet was entirely inhabited by academics and military people.
Neither of those statements is particularly accurate.
X was a product of Project Athena, and was incorporated into the Andrew Project as well. It was initially used by highly-knowledgeable students at MIT and CMU, who were both capable of finding security holes and motivated to do so (if only for their own amusement). X11R1 included quite a range of security features, and there was much discussion of their relative merits. Don't forget it was contemporaneous with Kerberos, also a product of Project Athena; PA made network security a primary goal.
And while the NSFNet backbone prohibited commercial traffic in the '80s, there were other Internet backbones, and some commercial entities used NSFNet for non-commercial traffic. There very definitely were commercial users on the Internet in '87.
According to the announcement linked to in the article, the '87 core protocol bugs are all integer overflows. That class of bug was not visible as a security issue in 1987 (at least in public discussions; who knows what the spooks might have been up to?).
Prior to Levy's "Smashing the Stack for Fun and Profit" article in '96, stack-smashing in general was not broadly seen as a major security threat - despite the Morris worm using a stack overflow as one of its attacks. (There was a perception that stack-smashing was too difficult to leverage in general.) Integer-overflow attacks, typically a stepping stone to memory corruption of that sort, didn't become prominent until the early 2000s.
Argh gonna drown myself in bleach.
No no no. Retcon it as a successful attempt to troll old X hands. Success!
(That said, I admit I upvoted the original post too, since I've been using X11 since, oh, 1988 I think. Probably still have the protocol and xlib reference books around somewhere.)
That's a tempting response. But multiple successful attacks like this over the years have shown that the X.509 PKI simply does not work in practice.
People who control the signing keys for well-known CA roots and intermediaries cannot be trusted to keep those keys under their control and only sign legitimate requests. End users who (often unwittingly) rely on certificates to authenticate input do not enforce good certificate hygiene - which is not surprising, since it's 1) barely understood, 2) generally infeasible (as it requires incessant vigilance), and 3) sometimes outright impossible.
Sony may be the foolish-looking victim of the week, but there have been many before them, and there will be many after them.
Public X.509 PKI is broken and cannot be fixed. The best you can do is decide whether your threat model accommodates it - whether you can say, "we have nothing of sufficient value to make it worth the attacker's cost to break our PKI using any of the many vulnerabilities in the public PKI hierarchy". If the answer to that is "no", you'd better roll your own, and lock it down tightly. (And that's going to be expensive if it's done right.)
It's tough to compromise software that never runs.
This used to be a tick box option in IE IIRC.
You've neatly identified a number of the problems with a typical X.509 PKI implementation:
- IE had multiple checkbox options for PKI, buried in the "advanced settings" where few dared to tread. Some were for CRLs; some were for OCSP. How many people knew what they all meant, and knew how in practice they were likely to interact with real-world threats? Almost none.
- IE settings are per-user, so to use them to improve organizational security, you have to enforce them with group policies or the like. How many system administrators get around to doing that - assuming they understand the problem in the first place?
- IE PKI settings don't control what Windows does with code-signing certificates, so they wouldn't have helped with this attack anyway.
Putting a GUI on top of end-user PKI configuration does not significantly improve security.
Or is connected to the net when the malware is run.
When Windows updates its copy of the relevant CRL, you mean. CRL updates aren't necessarily done at time-of-use.
Now, had you been talking about OCSP...
Of course, the broader point remains. The commercial X.509 PKI system is wildly broken, and revocation is one of the areas in which it's especially wildly broken. Neither CRLs nor OCSP work very well. The mechanisms are fragile, many of the failure modes are unsafe and silent, the incentives are perverse, and most users have not a fucking clue how the whole horrible mess is supposed to work (and why should they?).
That BMJ study looks decent, though (as the authors say) very preliminary. My personal feeling - which counts for absolutely nothing, except to indicate where my biases are - is that this will be found to be a non-causal correlation or confounding (again as the authors suggest it might be).1
They do show results of P<0.01 versus their controls, which is better than the rather sloppy P<0.05 at which far too many preliminary results are announced.
I'm worried about this part of their analysis, though:
For leukaemia, the results of two of the trend analyses were significant (P < 0.01); these analyses suggest the risk might depend either on the rank of the distance category or on the reciprocal of distance. The latter seems more plausible.
Statistically, they see a lower P-value with distance bands than with strict reciprocal distance, but decide the latter is "more plausible". That makes me nervous. And if we're going with what's more plausible, why is the statistical effect weaker when looking at inverse-square - which, as the authors note, is how the field strength will normally2 attenuate?
But this is very much not my area.
1If there does turn out to be a causal relationship, my guess is that it'll be something in gene expression, as that looks to me like the most sensitive component at risk. But man, those fields are pretty weak. The authors cite another meta-study that found evidence of a correlation between increased leukemia risk and extensive exposure to "≥0.4 μT" fields. That's like 1% of the earth's magnetic field.
2Actual field strength at various points will be affected by transmission-line design and environmental factors such as shielding by buildings and so forth.
How does the presence of reflectors on the Moon prove humans personally put them there? The argument is not that we haven't landed things on the Moon, but that we haven't landed people on the Moon.
Obviously the Moon landings were faked to cover up the fact that aliens put the mirrors there, shortly before crashing into New Mexico.
(On a slightly more serious note, I have to say this thread is pretty entertaining, with several people arguing earnestly over the most basic issues in epistemology as if they'd never been considered. Folks, there's no way to prove, beyond all doubt, that the Moon even exists, much less that a handful of dudes kicked about the place. Descartes among others demonstrated that centuries ago. The best we can do is pick an epistemology that seems relatively robust and reduce the amount of trust we have to place in our senses and mental faculties. That's called "science".)
Oh yes you can.
No, you can't. A proof under one epistemological regime isn't applicable under another.
Of course, "prove" and "disprove" are at best terms of art (if not informal infelicities) under scientific epistemology. What the present study has shown (apparently - I haven't read it) is that there's strong evidence against the hypothesis that a particular mechanism might have a carcinogenic effect in humans. That's all well and good,1 but it says nothing about the myth that mobile phones, power lines, etc cause cancer. The latter is not a scientific hypothesis and is not subject to scientific argument.2
Now, one may on occasion convince someone to exchange a non-empirical belief like that for a similar scientific hypothesis, and then present evidence against the latter. Usually we hope to do these two things in the course of a single argument, felling the presumed-false belief in one swoop. We like to imagine this happens rather more often than it actually does (according to methodologically-sound psychological studies). In point of fact, people are rarely convinced to abandon beliefs they hold dear.
1I've never found any of the "EMF causes cancer" hypotheses plausible, personally, but "sounds dubious to me" doesn't carry much weight in scientific epistemology, and for testable hypotheses, that's the best epistemology we have.
2Some will argue that such beliefs don't deserve the label "theory". They're free to take that position, but since they have no means of enforcing it and "theory" is a word in common use as well as a term of art in some disciplines, they haven't much of an argument to make either.
I'm all for more video-chat applications. Makes my "I don't seem to have that one working" excuse more plausible.
Everyone involved in IT in education should read Silicon Snake Oil, too. But frankly a great many people "involved in education" have no desire to foist computers on children. As usual, Andrew is slinging mud at various wild generalizations that haunt his imagination.
They should be earning their keep being stuffed up chimneys surely?
Are they allowed to smoke there? Is it considered "indoors"?
Uber get into air travel? An app connects travelers with private pilots who fly them in their own planes?
That is a terrific idea. I mean, I wouldn't use it myself, because I'm sane; but surely it'd get at least a few self-indulgent idiots off the passenger lists for commercial flights, and that benefits everyone else who flies.
Indeed. I don't even see any significant IP here. At least with Instagram and some of the other hyper-valued app companies, there was some reason to believe a good portion of the user base would remain loyal customers if someone else bought the app. I can't see any reason why Uber users wouldn't switch to another app and firm if Uber crashed - there's no special user experience with the app, as far as I know. (I admit I haven't used it myself, as I have no desire to ever use Uber or the like.)
I suppose you could treat it as Poe's Law, though I prefer to see this instance as the Reg editors once again successfully trolling a significant portion of their readership. And why not? If we give up our time-honored traditions, what will be left?
I found them legible and interesting, and the images of early Notes UIs even more so. And I haven't read any Twitter feeds in years. But then I do have training in this sort of thing.
.NET's not bad for the one large project I've used it on. I'm not thrilled with the tooling (i.e. Visual Studio - but then I am not a fan of IDEs), but the Framework has a decent feature set of basic infrastructure classes, generics, and the like; the managed environment catches a lot of mistakes; the JITting system generally works pretty well; system functionality like AppDomains and GC control is mostly clean; the code-signing mechanism is easy to use.
I wouldn't try to port a lot of legacy code to it, except in an environment where that can be done with minimal changes (like, say, if I had a bunch of CICS COBOL apps I wanted to run under .NET). But for new development I've seen much worse environments.
I still do more work in C than anything else. I like C (which is more than I can say for C++, a language which seems to exceed nearly everyone's ability to write readable, maintainable code). But I like C largely because I generally work with clean, tidy, well-designed C code written by someone who knows the spec (me). If I had to work with a bunch of inferior developers, I'd much rather work in a managed environment that'd catch much of their crap for me, and make refactoring easier.
I admit I've never encountered it, but that's because I found NuGet so awful the first time I tried it, I never went back.
I assume they did not implement the recent findings of the Dutch team, otherwise that would mean another tenfold increase in efficiency is to be had
"another tenfold increase in efficiency"? I'm trying to figure out what that might mean. From 40% to 94%? I have to admit that I'm just a little dubious.