Are they comparing it to the band?
Running ordinary applications with administrative privileges: overrated and unnecessary.
“Nine out of 10 times when we see equipment from that manufacturer, 90 percent of the time, this is the password.”
So 81% of passwords?
Allow me to introduce you to a little thing we call "the noun phrase in apposition". A clever little devil, it closely resembles the adverbial phrase, but its behavior is quite different.
Since SMM in some intel chips has been cracked and SMM can do whatever it damn well pleases and not even a hypervisor can stop it - this is all just playing to the crowds.
Excessively reductive. An IOMMU-protected watchdog still prunes a significant portion of the attack tree, even if SMM represents a way around it. There is certainly plenty of non-SMM-based malware out there, and there will continue to be such malware for the foreseeable future.
Security is about cost transfer under threat models. It's not about perfect solutions. I don't know why some people find that concept so difficult.
No software security mechanism protects against suborning an authorized user. That doesn't mean all software security is a waste of time.
No, no. You misunderstand.
The infamous House Un-American Activities Committee was formed to search out and destroy un-American activities.
The Senate Intelligence Committee ...
A lot of interesting and useful research could be done with a nuclear submarine. Unfortunately navies generally aren't interested, and where they are (e.g. ocean floor mapping) they aren't keen to publish the information. That aside, though, I agree. It's nice to know that whacking great expensive apparatus and small, relatively inexpensive apparatus both still have their place.
you have to ask yourself whether you believe one day trader in a scummy part of London was able to bring down the financial might of the US of A
Since this did not happen, there's really no point in wondering whether you believe it could.
One man can, even momentarily, wipe out a large percentage of the value of the richest country in the world
Ugh. The lack of basic critical thinking among the commentariat whenever this topic comes up is really a bit nauseating.
What does it mean to "momentarily ... wipe out"? Not much of a "wiping", is it? Congratulations; I believe you've coined an oxymoron.
"A large percentage of the value of the ... country"? Where the hell did you get that? The 2010 Flash Crash peaked at about 9% of the DJIA. Now, we'll pretend for the sake of argument that the DJIA actually means something, rather than being the result of a completely arbitrary formula. I wouldn't even call that "a large percentage of that particular market", much less "of the country".
The usual estimate is that at its nadir the drop represented about $1T in nominal market valuation. In 2010, even the money supply dwarfed that - US M3 for May 2010 was around $14T. And that's just the (broad) money in the country. There are a few other assets here, like all the stuff produced and the services rendered and the property and the natural resources. Those add up to a few more dollars. The 2010 US GDP was also around $14T, for example; so even counting just the money supply and that year's productivity, the Flash Crash at its worst moment - and it was only a moment - represented less than 3% of the nation's "value".
And, as I and others have already noted, the FC was not due to HFT per se. The same thing can easily happen with high-volume algorithmic trading at more human frequencies, and variations of it without even involving algorithmic trading.
"HFT isn't the issue"
Really? Being fooled by proposed sales that don't ever take place, and you say that is not a fundamental failure?
I see you have (currently) eight upvoters, so there are at least nine people here who don't understand that neither spoofing - "proposed sales that don't ever take place" - nor "being fooled" is part of HFT.
This scheme did not depend on HFT. As Worstall pointed out, it's been done many times without benefit of HFT.
In cases like this, the "being fooled" part was based on algorithmic trading. Algorithmic trading is necessarily a part of HFT, but it does not require HFT. Regular meatsacks can perform algorithmic trading, and many small investors do. Everyone who has a "system" that involves crunching numbers to make trading decisions is doing algorithmic trading.
And investors can be fooled even if they're not using algorithms. Indeed, many a fool has lost investments thanks to making trades based on intuition or whatever nonsense might have guided his or her decisions.
HFT may be a problem. It is not the problem here.
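To make the distinction concrete, here's a minimal sketch (entirely hypothetical prices and window sizes) of the sort of algorithmic trading rule a small investor might execute by hand or in a spreadsheet - no HFT infrastructure anywhere in sight:

```python
# A toy moving-average crossover rule: "buy" when the short-term average
# rises above the long-term average, "sell" when it falls below.
# Any meatsack with a calculator can run this; it's still algorithmic trading.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    if len(prices) < long:
        return "hold"
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

# Hypothetical closing prices:
print(signal([10, 10, 11, 12, 13]))  # short MA 12.0 > long MA 11.2 -> buy
```

Run it at one trade a week and it's algorithmic trading; run it a thousand times a second against a colocated exchange feed and it's HFT. Same algorithm, different frequency.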
Incidentally, for those interested in more technical details about HFT, there's a nice intro to HFT algorithms here, and an overview of HFT systems here, both by Jacob Loveless. I recommend the latter in particular to anyone who wants to understand what HFT systems look like - regardless of whether you think they're evil. They're in ACM Queue which I believe does not require ACM membership.
Quantum phenomena have to be accepted as just-is. There is currently no explanation for why or how they occur.
Epistemological rubbish. Any "explanation for why or how" of any physical phenomenon is simply an appeal to a theory at another level of abstraction. This is farcically represented by the "purity argument", but as Anderson pointed out back in '72, there are unavoidable information-losing consequences in making such transitions.
Complaining that quantum theories (QED, QCD, whatever) do not answer "how" and "why" questions is simply a complaint that there is no commonly-accepted definition of a lower level of abstraction. And that's because the quantum theories as a body typically define their domain as going all the way down. There are no more turtles; you're at the last one.
If you want to make a meaningful argument that quantum physics lacks a "why or how", you're either going to have to define that next abstraction (to your interlocutors' satisfaction), or argue that we should operate under a different epistemological model. And good luck with that.
It seems extremely unlikely that such a system would work outside the realm of science fiction.
That looks like a wildly inappropriate and implausible evaluation to me. There's a ton of research on attack-monitoring software, self-healing software systems, feature injection ... all the stuff Symbiote is supposed to do. Have you read the paper? Or any of the 15 others Cui is primary or secondary author on?
I'll note that nowhere does it mention doing anything magically. (And that would be fantasy, not science fiction, by most folks' definition.)
I will note that the article refers to him as "Dr" and claims he holds a PhD, but UMI doesn't have a dissertation for him, and his academia.edu page still says he's a grad student.
Oh, and a lot of technical people go to the RSA conference.
But, hey, apart from being completely wrong, insulting, and arguably libelous, good post!
Self-serving? Sure. But supported by historical evidence. None of his claims are extraordinary - except maybe the one about "you'll probably be using .. Symbiote"; I don't know enough about it to evaluate that one.
Given the abysmal state of software security, betting against it is pretty safe, and saying that OEMs need a better class of protection mechanism is hard to argue against. This bar is so low that nearly anything clears it.
Gotta wonder if that programmer/coder from that game company that just had pretty much this exact same type of flaw worked for Avaya, too.
Possibly, but the error in question seems like a relatively probable one for an inexperienced UNIX developer to make when writing a cleanup shell script. And if memory serves, a big NASA study some years ago into redundant software development - where you have multiple teams develop software for the same purpose, in an effort to produce a fault-tolerant redundant system with an overall reduced error rate - showed that teams working independently often produce the same bugs.
Basically, it's not uncommon for developers to independently make the same wrong assumptions. And that's not very surprising.
Anonymization has no useful effect anyway, as the reams of research on de-anonymization over the past few years show. And often the de-anonymizers don't even care whether they eventually connect a target to PII as traditionally defined; they can create reliable identifiers of their own for individuals and use those for their purposes.
Anonymization is dead. It's a sop to privacy advocates who still labor under the misapprehension it means something.
For as long as I can remember, people have been posting that very comment.
...for finally saying it. It's not a law.
At least half a dozen commentators feel obliged to point this out in the comments for every Reg article that includes the phrase. "Moore's Law isn't a law" is such a cliché that some people probably have it as a keyboard macro. This piece is hardly groundbreaking.
Your complaint is a nice illustration of the Gell-Mann Amnesia Effect. "Bah, those fools who are not in my area of expertise don't understand my area of expertise!" You might want to consult a biologist on the predominant ursine defecatory loci.
I have a 23-year-old Panasonic VCR that still works, and that I occasionally use. It was connected to a little TV (don't remember the brand) that I just had to replace a couple of months ago when the vertical hold went.
My daughter has my 19-year-old Magnavox TV in the kid's playroom - still works. I admit I did have it repaired once, a few years into its life, when the power supply died.
Not accurate. Google has written good software which is now superseded; manufacturers, if they cared, could update their devices in some way to the new APIs.
Google has decided to deprecate a protocol they encouraged people to use, only a few years into its existence, because they're utter bastards and this is part of their business model. They do it all the time.
"good software" doesn't come into it. Google created an adequate API, and now they're blocking it, simply to prevent older applications from getting at the data. They did the same thing with, for example, their Outlook Calendar Sync applet. There's no technical reason to do it.
it is not a failure of Google but a failure of the OEMs.
What utter rubbish.
You cannot support aging software forever.
The cost to Google of supporting the old YouTube API is very close to nothing.
Again, this is standard practice for Google: kill stuff off after a few years to force users to upgrade. It's psychological manipulation of the customer base; by forcing customers to keep reinvesting, it ensures they stay trapped in the Sunk Costs Fallacy and other irrational psychological investments in Google products. It's one of the reasons why Google's corporate motto is so obnoxiously ironic.
Yes. Visi-Calc was the original "killer app" which got PCs (mainly the Apple ][) into small business.
For importance in legitimating PC use in business, its only real competitor is probably the IBM PC, which drove PC adoption by larger businesses thanks mostly to the IBM name, and secondly to its ability to replace 3270 terminals on management desks with a more-functional device (because it could both be a 3270 and run spreadsheet and word-processing applications).
It's not clear that many people actually used PCs as 3270s (until much later when TCP/IP stacks and TN3270 clients became common), particularly since third-party hardware was required until the 3270PC came out, but it was one of the marketing points IBM emphasized when selling them to large businesses. It made the IBM PC seem "serious": you could use it to run your mainframe-based management apps.
As always, there are long chains of ideas and innovations in this domain that lead to the present. Joanne Yates' highly readable study of business communications, Control through Communication, describes the major innovations in business DP in the US during the nineteenth century, for example. The transitions from pigeonhole desks to chapbooks to flat filing to vertical filing were hugely important, as were those from manual copying to spirit duplicators to carbon paper to xerography; and those from quills to fountain pens to typewriters. (And, of course, none of these transitions were abrupt; we still have spirit duplicators and mechanical pens, etc.)
Foucault's The Order of Things, though controversial, documents many of the epistemic changes in Early Modern and Modern Europe regarding the nature, manipulation, and organization of information. There are numerous competing views, of course.
If memory serves, the first episode of James Burke's Connections (and first chapter of the follow-on book) concerns the invention of the modern digital computer, hitting such now-well-known highlights as Jacquard's loom and player pianos. While it doesn't have the depth expected of an academic treatment, it's good fun and enlightening if you aren't familiar with all the bits he discusses.
And so on. I have shelves' worth of books that touch on the subject, and it's not one of my research areas. Well-trodden ground.
Instructions are data. They're data that the processing units use to decide what actions to take.
By definition, computation involves manipulating data. As soon as you have a switch, you have data.
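The point is easy to demonstrate with a toy stored-program machine (invented opcodes, resembling no real ISA): the "program" is just a list of data items that the execution loop inspects like any other data.

```python
# A trivial stored-program machine. The program is plain data (a list of
# tuples), and "executing" it is nothing more than reading that data and
# dispatching on it.

def run(program):
    acc = 0
    for op, arg in program:   # instructions read exactly like any other data
        if op == "load":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

prog = [("load", 2), ("add", 3), ("mul", 4)]  # computes (2 + 3) * 4
print(run(prog))  # -> 20
```

Nothing distinguishes `prog` from any other list until the processing loop treats it as instructions - which is the whole von Neumann trick.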
Computing can be reduced to a number of formalisms. Saying "it's all about data" is no more insightful than saying "it's all arithmetic" or "it's all function construction and application" or "it's all compression" or "it's all moving heat around".
In fact, from that set, there are probably more interesting insights to be derived from, say, "it's all compression" than from "it's all data". In particular the rather dubious analogy between tally sticks and writing on the one hand, and structured and unstructured data on the other, that Whitehorn draws, while a decent bit of parlor discourse, seems rather too shallow to illuminate anything significant.
And of course the history of forms of data-keeping was hilariously abbreviated, when not outright wrong, but what can you expect from this sort of article?
Well, "antique" simply means "old", and in some domains the threshold is generally set pretty low. Car collectors usually treat 25 years as the minimum age for an "antique", for example.
MK hasn't made it to that quarter-century mark yet, but even a couple of decades is a long time in video-game terms. 1992 also saw the release of Wolfenstein 3D, for example, which I'm sure many people would consider "antique" these days.
And the ACS column has included stories on the 20th anniversary of Myst and the 15th of Goldeneye.
Buffalo is 2500 KM from the Pacific
It is? Buffalo to San Francisco (your example) is 2309 miles, according to the Great Circle Mapper. San Francisco isn't the closest point on the Pacific coast to Buffalo - glancing at a map suggests that's somewhere in Oregon, and indeed Coos Bay OR is a mere 2270 miles from Buffalo. Nearly 40 miles closer!
But 2500 km from Buffalo will only get you to, what, Cody, Wyoming? Which I'm sure is very nice but not a Pacific beach resort as such.
You can do Kansas City to San Francisco in under 2500 km. And Han Solo can do it in under three parsecs.1
1About 7e-11 parsec, which is somewhat less than 3.
it's quite a useful trick to look at all the made-up cyber-words, cyber-cruft, cyber-dingles then remove the "cyber-" replace with precisely nothing
The prefix "cyber" attached to any root other than "netic" is generally 1) meaningless, and 2) an indication that the author doesn't know what he or she is talking about.
More formally, there's an inverse relationship between the number of "cyber"s and the information entropy of any given message. Just by using it twice here I've made this post dumber.
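For anyone who wants to check the claim at home, here's the usual frequency-based estimate of per-character Shannon entropy, showing that padding a message with repeated "cyber"s does indeed drive the figure down (sample strings invented for the demonstration):

```python
import math
from collections import Counter

def entropy_per_char(text):
    """Empirical Shannon entropy in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = "the prefix is generally meaningless"
padded = plain + " cyber" * 10
print(entropy_per_char(plain) > entropy_per_char(padded))  # -> True
```

Concentrating the character distribution on the six symbols of " cyber" lowers the per-character entropy, exactly as advertised.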
while zero proofs verify the work
In case anyone's wondering, that's "zero-knowledge proofs". The phrase "zero proofs" doesn't appear anywhere in the paper, of course. A ZKP is a way of demonstrating (with arbitrarily high probability) you know a secret without revealing the secret; "zero proof" is how we describe non-alcoholic beer.1
relies on the Quadratic Algorithmic Programs
Or, as we say here on Earth, relies on Quadratic Arithmetic Programs (no "the", and "arithmetic", not "algorithmic").
As you can see in the original Pinocchio paper, QAPs are basically a generalization of Quadratic Span Programs. QSPs represent Boolean circuits (a collection of wires and gates, just like in your Intro to Computer Engineering class). QAPs represent arithmetic circuits in a finite field: so the "gates" are things like addition modulo N, where N is the size of the field.
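As a concrete illustration (a made-up three-gate circuit, not an example from either paper), an arithmetic circuit over a small finite field is just gates computing addition and multiplication modulo the field size:

```python
P = 13  # toy field size for illustration; real systems use large primes

def add_gate(a, b):
    return (a + b) % P

def mul_gate(a, b):
    return (a * b) % P

# Evaluate the circuit (x + y) * z over GF(13):
x, y, z = 5, 11, 7
wire1 = add_gate(x, y)    # 16 mod 13 = 3
out = mul_gate(wire1, z)  # 21 mod 13 = 8
print(out)  # -> 8
```

A QAP encodes the constraints such a circuit imposes on its wire values as polynomials, which is what makes the compact verification possible.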
The paper provides the details, but basically, in Pinocchio, the server computes the QAP that represents a problem and a "verification key". Then it gives the QAP to a worker. The worker computes the result for a given set of inputs and returns it; the server (or anyone else) can then verify that the result is good using the verification key.
The idea is that you want verification to be much cheaper than computation of the result. In other words, the problem should be in NP: worse-than-polynomial time to find the answer, but the answer can be verified in polynomial time. (Obviously linear time, constant time, etc are subsets of polynomial time.)
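A classic illustration of that gap (standard subset-sum, nothing specific to Pinocchio): finding a solution takes exponential brute force over all subsets, but checking a claimed solution is a single sum.

```python
from itertools import combinations

def find_subset(nums, target):
    """Brute force: try all 2^n subsets. Exponential work."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

def verify_subset(nums, target, subset):
    """Cheap check: linear in the size of the claimed answer."""
    return sum(subset) == target and all(x in nums for x in subset)

nums = [3, 34, 4, 12, 5, 2]
answer = find_subset(nums, 9)
print(verify_subset(nums, 9, answer))  # -> True
```

Pinocchio's contribution is making that kind of cheap verification available for arbitrary computations, not just problems that happen to have short certificates.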
The innovation with Geppetto is that it optimizes Pinocchio enough to make it useful in practice. Or so I gather; haven't looked at that paper in detail yet.
1Insert joke here.
>>>> Is too resource intensive and everyone knows it at first glance. Simple.
Wow, you can tell that at a glance??? You're either the cleverest person on the planet... or a chump.
I don't see why. I can tell Windows Phone is too resource-intensive for me without even glancing at it. I know from reviews it does things I don't want it to do, and those things have non-zero cost.
The same is true of Android and iOS, of course. The closest thing to a smartphone OS that wasn't too resource-intensive was Symbian S60.
"Too resource-intensive" is a subjective evaluation; and for the right subject, yes, only minimal knowledge is required to make it. You're arguing the wrong point. It's the "everyone" part of the original post that's stupid.
As someone who has spent some time counting up the pencil-marks on bits of paper in elections
Mark-sense is a pencil-and-paper (or pen-and-paper) system, is automated, and in methodologically-sound tests consistently performs as well as or better than other approaches. Pencil-and-paper doesn't mean "counted manually by human readers".
And mark-sense counting machines can be simple electromechanical devices - easy to audit, relatively difficult to subvert.
While there are computer-based voting systems that offer advantages over mark-sense (such as Rivest's ThreeBallot), they're difficult to explain to the average voter and suffer the same implementation problem (where will you find a reliable vendor?) as other computerized voting systems.
All these comments, and no one's bashed either Whitman or McAllister for not using the perfectly good term "ligature"? I am truly disappointed in the quality of Reg pedantry here.
(And, really, Neil - "crossbars"? There's a typographic term for those, too. I guess we're lucky you didn't describe Whitman as "top decider-person at HP".)
No, it's just someone at Google has just discovered "stateful connections"...
No, as others have already noted, they discovered DTLS (ugh), TLS session resumption (moderately ugh), and reinventing TCP over UDP (a crime against nature).
I'm not particularly worried, though, since I never use Chrome; and if Mozilla insist on putting this in Firefox, it'll almost certainly be easy to disable (as SPDY and HTTP/2 are).
SPDY runs over TCP
And it's not very good.
It largely "form[s] the groundwork for the IETF's HTTP/2 standard" because the IETF tries to standardize existing practice, not invent new practice; and even more so because the IETF was scrambling to get something onto the standards track. The HTTPbis Working Group was apparently going to take forever to get an RFC together, and Google was in effect threatening to do to the IETF what the WHATWG did to the W3C - impose a de facto standard by force of numbers and undermine the standardization process.
But SPDY did not win on technical merit, nor on innovation, nor because it addresses most of the more-pressing issues with HTTP/1.1. It addresses precisely what Google cared about, which is lowering their costs.
The ever-controversial Poul-Henning Kamp has a decent piece about it in ACM Queue. While I can't (ever) agree with everything Kamp says (in this case, for example, that HTTP hasn't changed since 1989; that's just a silly claim), I tend to agree overall with this piece. I wasn't impressed with SPDY, and I'm not impressed with HTTP/2.0.
And I can claim a little expertise in this area. I've been working with a wide range of comms protocols, including the prominent IP-based ones, since the '80s. I've written client and server implementations for several, including HTTP/1.0 and 1.1. I've debated niggling details of HTTP/1.1 in places like comp.protocols.tcp-ip, where other folks with some expertise hung out. I'm not an HTTP specialist, but I've had my hands in its guts more than once.
I want Strange, Charm, Top and Bottom!
And Truth and Beauty - which strike me as more useful than Top and Bottom, though the former might help other correspondents learn How to be Topp.
How about it, Reg? Surely you can respect our desire to acknowledge the strangeness, charm, truth, and/or beauty of our fellow commentards' work.
If it was patched, then it had to have been done before the day was over; there does not seem to be an archive of the joke site available.
Presumably the "patch" involved removing the misfeature in google.com's servers that recognized the igu=2 query-string parameter and responded by omitting the X-Frame-Options header. That's the actual vulnerability, and it's independent of the existence of the joke search page.
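A hypothetical reconstruction of that misfeature (the parameter name comes from the reports; everything else is invented for illustration): the server drops its anti-framing header whenever the magic parameter is present, which is exactly what lets an attacker embed the page.

```python
# Sketch: response headers conditioned on a query parameter. Omitting
# X-Frame-Options allows any site to embed the page in an iframe
# (clickjacking), regardless of whether the framed page is a joke.

def build_headers(query_params):
    headers = {"Content-Type": "text/html"}
    if query_params.get("igu") != "2":
        # Normal case: refuse to be framed by other origins.
        headers["X-Frame-Options"] = "SAMEORIGIN"
    # With igu=2, the header is omitted -- the actual vulnerability.
    return headers

print("X-Frame-Options" in build_headers({}))            # -> True
print("X-Frame-Options" in build_headers({"igu": "2"}))  # -> False
```

The fix is simply to send the header unconditionally, which is presumably what the "patch" amounted to.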
5.1 sound versus TV speakers, I highly doubt you'll prefer TV only for anything except voice only.
I've heard 5.1 systems in stores and at others' homes, and I have no inclination whatsoever to use anything other than the speakers in the television.
I, too, am annoyed at the crap sound mixing that makes BGM and SFX much too loud and dialog too quiet, but putting in a "sound system" just so I can boost the center channel is more effort than it's worth, as far as I'm concerned. I'll just avoid programming that doesn't mix the sound to my liking. The vast majority of it is crap anyway, as far as I'm concerned. (I do think there's actually a lot that's good on television these days, but strangely very little of it suffers from the thundering-SFX problem.)
In my amateur opinion It's not secure, but it's better than nothing.
In my professional opinion, it's a huge waste of resources that accomplishes nothing useful in the vast majority of cases. I don't need to read the Register over anything more secure than plain HTTP. The vast majority of HTTP use is information retrieval for which the additional confidentiality, integrity, and authentication benefits of HTTPS - which are not particularly generous in the first place - provide users with no benefit. Traffic analysis of encrypted conversations gives attackers nearly as much information.
This is Mozilla catering to an ideological position.
I'll bite; how insecure is ssh? With citations..
The SSHv1 protocol was substantially broken. That shouldn't need citation; it's widely known. A web search will turn up plenty of material.
Various issues have been found in the SSHv2 protocol, such as the 2008 CBC SSH decryption vulnerability, and with specific implementations, such as the 2003 OpenSSH server-side buffer-management bugs.
But by far the largest security issue with SSH has always been, and continues to be, poor key hygiene. Many users accept any server key without trying to verify the fingerprint, or get the fingerprint information over an insecure channel. Servers are compromised through other means and keys are stolen. SSH has no standard PKI; it has no standard means for protecting, distributing, verifying, or revoking keys. It puts all of the burden for those things on individual users, few of whom have the knowledge or patience to manage them well.
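For the curious, the fingerprint users are supposed to verify is just a hash of the server's public-key blob. Here's a sketch computing an OpenSSH-style SHA256 fingerprint for a dummy (not cryptographically valid) ed25519 key blob, built in the SSH wire format of length-prefixed strings:

```python
import base64
import hashlib
import struct

def ssh_blob(key_type: bytes, key_bytes: bytes) -> bytes:
    """Build an SSH wire-format public-key blob: length-prefixed strings."""
    return (struct.pack(">I", len(key_type)) + key_type +
            struct.pack(">I", len(key_bytes)) + key_bytes)

def sha256_fingerprint(blob: bytes) -> str:
    """OpenSSH-style fingerprint: 'SHA256:' + unpadded base64 of the digest."""
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# A dummy all-zero 32-byte "key", purely for illustration:
blob = ssh_blob(b"ssh-ed25519", bytes(32))
print(sha256_fingerprint(blob))
```

The hygiene problem isn't computing that string - it's getting the reference copy of it to the user over a channel the attacker can't touch, which is exactly what SSH leaves as an exercise for the user.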
Not that there are better solutions, generally speaking.
We never learn just how small the mini computer veteran is.
We bought IBM AS/400's with integrated everything, and we LOVED it!
Yeah, and the AS/400 came in on the tail end of the minicomputer era (in 1988). The low-end ones were slow - I remember builds taking an order of magnitude longer on the '400 we had (at the time, the very smallest one you could buy) than the same software product took to build on a '386-equipped PS/2 Model 80.1 And the PDM development environment was awkward and limited, not nearly as capable as, say, the VMS TPU editor or CMS XEDIT (much less something like ISPF or the UNIX collection of tools).
But, man, the problems we did not have with that thing. Hardware and software were rock-steady. Built-in UPS and modem that, if you provided a phone line, would dial home if it detected a problem. Bug in your code cause an application job to crash? You'd get a nice message in your terminal message queue with fault and backtrace information. Every command had menu and prompt modes.
I never used the '400's ancestors - the System/38 and System/36 - but I gather they were similar, without the recycled Future Systems aspects of OS/400. I did use VAXes pretty extensively at school, and they were similarly reliable and non-frustrating. And the little time I spent with PDP-11s was also pleasant. Never got to use DG's machines or the other famous minis, alas.
1The source base wasn't identical, of course. This was circa 1990. The software in question was mostly written in C, with platform-specific layers for OS/2, Windows, and UNIX. The only C available at the time for the '400 was EPM C, a rather interesting beast (the later System/C and ILE C were less idiosyncratic), and a bunch of functionality couldn't be implemented in it and had to be done in COBOL, Pascal, or CL, with various odd OS/400 constructs. But in terms of number of modules and lines of code all the variants were pretty close.
like "cloud computing"? Not much difference, really...
True, in the sense that both the service bureau model and the cloud model are examples of utility computing - you pay to have data processed and stored, and the vendor handles all the actual hardware and, generally, the generic software (OS and the like).
Cloud providers emphasize features like on-demand capacity upgrades and geographic co-location. Some, but certainly not all, service bureaus provided those; they were less important as selling points.
For several years I worked for a small software company that had some mainframe software products, and we used a service bureau for all our mainframe computing. It was actually in the same building, so when we needed to cut a tape to ship to a customer, we'd walk over and hand it to the operator on duty. It was a good arrangement.
Also, the last time I checked, modern conventional computers "store data ... in states of microscopic objects". Certainly my eyesight isn't good enough to distinguish the individual transistors and magnetic domains in my computer's chips and drives.
That whole "Unlike conventional computers" sentence had rather a bit of the Fry-nature.
And as an addendum to your item 1: for practical interesting problems, the "bunch" needs to be pretty big. Certainly much bigger than anything anyone's demonstrated so far.
I see that Simon Rockman, the author of the condescending soi-disant expression, is billed as "Mobile and Motoring Correspondent". His description of it as “what we used to use before HTML 5” suggests he's less knowledgeable about programming languages than phones and cars.
Certainly the article has plenty of questionable statements. For one thing, COBOL and BASIC are both acronyms and should be written accordingly (unless you're referring to a specific not-really-BASIC language which unfortunately is called Visual Basic). Fortran, of course, used to be FORTRAN but was renamed in 1990; but "Atlas-Fortran" is properly "ATLAS-FORTRAN" (or better "ATLAS and FORTRAN", since ATLAS is a linear-algebra library for use by FORTRAN or Fortran programs).
We've already discussed the Haskell comment elsewhere.
Yes, Grace Hopper should have been discussed in more detail, but "the disliked Cobol [sic]"? Even if that were her only achievement, creating the language that's still responsible for the vast majority of business transactions seems moderately significant.
I don't know what Rockman's experience of Java is, but his description doesn't match my "day to day experience". Another tiresome generalization from the Java-bashing league. The language has real problems (cough type erasure cough); how about we talk about those?
Omitting APL seems fine to me, and I've actually written APL code, which is probably more than most Reg readers can say. In the long list of omissions by the series, this one must be way at the back. APL enjoyed a certain cachet among actuaries and the like for a while, and has a certain historical importance, but if you want to complain specifically about "mathematical languages" being omitted (and noting Rockman does mention Forth), then Mathematica and Matlab, SPSS, VisiCalc (which was in effect an interpreter for a programming language) and its descendants, and newcomers like R and Julia are better bets.
I could go on, but the whole thing just makes me tired.
BASIC was the programming equivalent of the Bay City Rollers.
If I hate you after I say
Just can't READ I$ any longer
Just gotta tell you anyway
Bye bye BASIC, BASIC goodbye
(Bye BASIC, BASIC bye bye)
Bye bye BASIC, don't make me cry
(Bye BASIC, BASIC bye bye)
Completing the lyrics is left as an exercise for the reader.
having a decade or two of general dominance
"general dominance" by what metric? The wild-ass handwaving one?
Yes, it's pretty silly to omit C and its descendants (other than Java) from a documentary like this. But making claims about "dominant" programming languages is nearly as bad. The simple truth is that we have several programming languages with many years of widespread use; another set that enjoyed briefer heydays and then dropped in popularity but appear likely to hang on forever; and a much larger set of languages that were always more or less marginal but refuse to go away. (And, of course, there's also the considerable set of "esoteric" experimental and joke languages.)
Yes. All respect to Randall,1 while Haskell isn't widely used in industry, it's not unknown either; I've read research papers that discuss commercial systems with components written in Haskell. And it's popular in academia as both a teaching language and for certain types of research.
As you suggest, it's probably the most popular member of the ML family, beating OCaml and F# (though there seems to be a decent amount of interest in F#). I doubt much of anyone uses Standard ML any more. (I have a soft spot for it, myself, but I haven't used it in many years. These days when I write ML-family code it's usually F# and sometimes OCaml.)
In any case, "one of the most popular functional languages" is true for any functional language, since the subset is not well-defined. Or, at best, evaluated lazily. ("Does it include Haskell yet? If not, add the next-most-popular functional language.")
1And Randall Munroe does get all the respect. All of it.
Without Algol, where would we be now?
While Algol was certainly influential, FORTRAN, COBOL, and LISP all preceded it, so it's not like it invented the idea of the high-level language. Many of Algol's syntactic innovations were inspired by mathematical notation, so we might have gotten them anyway. And some of its more prominent semantics were, thankfully, discarded along the way (I'm looking at you, Pass by Name).
Maybe the functional language families - the LISP family and the ML family - would have become more prominent. That would have been OK with me.
I'd argue that the bulk of the world's computer systems and applications rely on C++ at some point
The problem with such a claim is "at some point", which is a weasel phrase (even more so than "the bulk"). At some point? Well, sure, because everything's connected to everything else at some point.
This sort of thing is notoriously slippery anyway. What's our metric? Numbers are hard to come by, but in the early 2000s one estimate was that ninety-some percent of all CPUs in use were 8-bit embedded systems. (That's "CPU cores", not necessarily discrete chips.) Those were mostly programmed in assembly; a minority were programmed in C, Forth, and a handful of other languages.
These days, extremely cheap 32-bit cores have probably displaced the 8-bitters from the top of the pile. Embedded systems are programmed in a host of languages, but C, C++, and Java likely dominate, with Forth and more domain-specific languages like Erlang still making a showing in some domains.
What if we count not CPU cores but cycles? Then HPC is going to boost Fortran's rank significantly. But how much?
What if we count "items of work performed" in some vague, handwaving sense? Business transactions swamp personal computing, and the majority of those transactions are handled by COBOL programs.
I claimed, a few years ago, that by far the most popular computer application is a little thing called "digital clock". I counted two dozen instances of it running in my house. How many versions are implemented in C? In assembly? In discrete logic?
It's also astonishing how many people trot out "GOTO Considered Harmful" without ever having read the paper from which the quote is taken, sans context.
Letter, not paper, technically speaking. At any rate, that's how Wirth - then editor of CACM - decided to publish it. And, of course, the infamous title is Wirth's; Dijkstra's was "A Case Against the GO TO Statement".
(And while I'm being pedantic, Wirth's title was actually "Go To Statement Considered Harmful". Two words, add "Statement".)
The problem is not there; the goto requires a label as its target, and that is where the problem lies.
A useful observation, but irrelevant to Dijkstra's critique. And it's not entirely true. Any statement that can cause premature exit from a basic block complicates program flow of control, both for determining things like data flow and for a human reader studying the code. In C, that includes not just goto but break, continue, and return (which are just syntactic sugar for goto), and longjmp, which for this purpose behaves like a goto optionally preceded by a series of one or more returns. In languages and environments with exception handling, throwing an exception belongs on the list as well.
A human reader studying a block that contains, or may contain, any goto-like construct has to consider premature termination of the block, which means the tail of the block may not be executed. It's easy to see the effect by converting the control-flow graph of a program with explicit gotos into a tree without them.
And people reading, and attempting to understand, source code accounts for around 40% of total software development costs, according to some estimates.
Personally, I like the judicious use of goto in C, particularly the "goto done" pattern to exit early but still perform end-of-function cleanup. (There are good alternatives, such as handling resources in an outer function; it's largely a matter of taste.) But that works precisely because it becomes an idiom and a reader can expect and recognize it.
Also, in languages like C, a label's scope is local to the function or procedure.
Labels are, yes, but then you have longjmp.
This is especially a problem in COBOL where there are section labels and paragraph labels and these may be dropped through, performed and performed thru, or be subject to a goto in various combinations.
By eliminating GO, PERFORM THRU, and SECTION labels, the only constructs for accessing code out-of-line are the PERFORM and the CALL. This eliminates the complexity of determining the logic flow of the code: paragraph labels can be treated as if they were procedures (functions in C), with the only entry from a PERFORM and the return to the caller at the end of the paragraph.
Unfortunately, not always true, due to the complex semantics of PERFORM in some conforming COBOL implementations. That's why we in fact recommend always putting PERFORMed code in a separate SECTION, rather than simply making it a paragraph - SECTION has stronger control-flow requirements. But yes, removing GO and PERFORM THRU from COBOL logic is felicitous, just like using the scope-terminating statements (END-IF et al).