Actually, this is just the sort of thing I'd like to hear at fireworks parties.
No, I'm not any fun either.
It might be paid out of insurance.
But in either case it's still going to hit the police department's budget (directly or in increased insurance premiums), and given the relatively small size of Portland that will bring all sorts of political pressures to bear. It seems to me the point of this ballot initiative is really to send a message: a majority of the citizenry do not want this technology used, and they're prepared to make things quite uncomfortable for any agency that does so.
It's a good precedent, anyway.
There are plenty of alternatives for the lactose-intolerant. I myself have occasionally used almond milk (I find the consistency more familiar, because I was raised with 1%-milkfat milk1), and I quite like the occasional coconut-milk alternative for ice cream. There's rice milk, cashew milk, soy milk - quite a few to choose from, at least here in the US.
Occasionally I have real ice cream, along with a hefty dose of lactase, and, yes, it's probably a bit better than the analogues; it's hard to precisely duplicate the flavor and mouthfeel of a high-quality ice cream, or the nostalgia of soft-serve. But particularly for the severely lactose-intolerant2, cheese is the real casualty.
1The usual story: our pediatrician, concerned about a relatively high heart-attack rate in my mother's family, recommended a low-cholesterol diet for me and my siblings. Now evidence shows no strong correlation between dietary cholesterol and serum cholesterol, much less serum LDL or triglycerides, so we know the whole thing was pointless if not counterproductive. But then much of nutrition "science" is anything but, and medical GPs rarely have the time to follow current research (which is why we have groups like Cochrane doing metastudies and creating clinical recommendations...).
2I can tolerate cheese, at least enough so that I haven't had to give it up. I'd miss cheese a lot more than milk or cream.
Then we'll have the usual health freaks giving it to their babies ignoring the evolutionary reason why milk has so many calories.
Is this actually a problem? Milk alternatives have been available for decades (the article notes those based on soy and nuts).
There's the infamous (and ongoing) Nestlé infant-formula scandal, but that's quite different from your claim; it's about formula being aggressively marketed to poor and undereducated populations, not relatively healthy nutrition-tourists jumping on the latest bandwagon.
Anyone who follows the Jenkins vulnerability announcements knows the Jenkins plugin ecosystem is toxic and riddled with vulnerabilities - many of which remain unfixed long after publication. It's as toxic as other well-known sewers such as NPM and the WordPress plugin collection. You could probably find seven vulnerabilities by printing out a bunch of Jenkins-plugin source at random, pasting it up on a wall, and throwing darts at it.
That said, CodeQL is a good addition to the security tools available to GitHub contributors (though there are plenty of static-analysis tools which people could already be making use of, and very few do). And its approach is different enough from classic static analyzers, and other vulnerability-identification tools such as dynamic analyzers and fuzzers, to provide a different tack on the problem; that helps both with finding different sorts of vulnerabilities and with reducing the fatigue of going through large result sets with a lot of duplicate information.
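To illustrate, here's a hypothetical Python fragment (not from any real project) showing the classic source-to-sink pattern that taint-tracking analyses like CodeQL's queries are built to flag, and that a fuzzer can easily miss because it never happens to supply the magic input:

```python
# Hypothetical example: attacker-controlled input flowing into a shell command.
# A taint-tracking query flags the flow from the "source" to the "sink" statically,
# no test input required.
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/ping")
def ping():
    host = request.args.get("host", "")                              # source: user-controlled
    out = subprocess.check_output(f"ping -c 1 {host}", shell=True)   # sink: command injection
    return out
```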
Pff. That's not "really cold". According to the climate info in Wikipedia, Helsinki compares favorably with Lansing, Michigan, which I rate as "moderately cold" in the winter. Lansing's generally more comfortable in the winter than some other places I've lived, including Boston, Massachusetts and Lincoln, Nebraska. (And winter nighttime lows here at the Mountain Fastness fall pretty damn low too, though the extremely low humidity means that you lose heat significantly slower, so you don't feel it as much. It might be -25 C but it feels like maybe -5.)
Forget that cold record (which doesn't seem to be in the Wikipedia table, but whatever) and look at the normal lows. They're very reasonable.
Personally, if I were single, I'd be very tempted to give this a try.
Good luck with that
Thanks!
I don't know about where you live, friend, but here in the US there are a variety of older cars for sale. We call them "used cars".1
I'm already assuming I'll never buy another new car for myself, since they all seem to come with fucking touchscreens now. I hate touchscreens, and having one (for controls the driver might want to use) in an automobile is the height of stupidity.
1Some people refer to some of them as "pre-owned". Those people should of course be forbidden from speaking or writing until they learn to avoid moronic, unnecessary neologisms.
Not a single driver that I personally asked (in excess of 40) wanted me to vote no. The vast majority of the actual workers
The latter claim may be true, but with over half a million Uber and Lyft drivers in California, N=40 is not a statistically significant sample. Particularly not when your sampling method is probably biased by your location, etc.
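For a back-of-the-envelope sense of scale (Python, illustrative assumptions only): even if those 40 drivers had somehow been a random sample, the uncertainty on any proportion you estimate from them is enormous.

```python
# Margin of error for a proportion estimated from n=40, assuming (generously)
# a simple random sample - which a personal convenience sample is not.
import math

n = 40
p = 0.5                      # worst case for the margin-of-error calculation
z = 1.96                     # ~95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {moe:.1%}")      # roughly +/- 15 percentage points
```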
Personally, I don't find "the drivers are against it" to be a particularly compelling argument anyway. The whole point of the social contract is finding a balance between what an individual wants and what's best for society at large.
In any event, this hasn't made me any more inclined to use gig-economy services.
Yeah, I was quite surprised that high-level execs were physically involved in this, and not just giving orders. They (allegedly) drove to the victims' home? After flying across the country, since presumably they're based at eBay's HQ in California. (And like most of the Boston Metro area, Natick1,2 isn't exactly fun to drive around.)
This whole thing just gets more and more bizarre as the details come out.
1At first I misremembered and thought the victims lived in Nahant, which is more pleasant to drive around, but more tiresome to get to in the first place. Nice beaches, though, by Massachusetts standards.
2For driving around Natick, I recommend listening to "Driving on 9"3 by Ed's Redeeming Qualities. Listening to the cover by the Breeders is also permitted.
3There's some debate about whether "9" refers to Massachusetts Route 9 or the one in California, but the song was released the same year that the band moved from Boston to San Francisco, which suggests to me that it was written before the move. In any case it works for either.
Sigh.
The GitHub issue was disclosed to them 104 days ago: 90 days plus the 14-day grace period. That's how responsible disclosure policies work.
GitHub themselves disclosed technical details about the GitHub Actions vulnerabilities.
Google have disclosed the Chrome issue discussed in the article. They just haven't released technical details.
Are these details really that hard to understand?
People go on and on about "Acrobat" (be honest people, it's just PDF)
PDF != Acrobat. It's entirely possible to have a PDF renderer which doesn't support scripting and much other Acrobat idiocy.
I'm not a huge fan of PDF; for the vast majority of documents I'd prefer HTML1, or Markdown2, or plain UTF-8 text.
But there's a place for proper typographic layout. Book-length works, and even many shorter articles, are far more pleasant to read when they're laid out well. HTML+CSS simply can't do that. It can't do proper ligatures or kerning or digits with descenders or micro-protrusions or any of the other things you'll get with, say, pdflatex output.3 And for those applications, PDF remains the best choice. None of the other widely-available formats really handle that properly.
1Real HTML: POSH, cleanly formatted, with minimal CSS, and no scripting. Minimal scripting which degrades gracefully if it's disabled is acceptable for web pages.
2I generally find Markdown unnecessary, but if for some reason people feel compelled to have some markup and formatting in documents that would work just fine as plain text, it's safer and more readable in source than HTML.
3Yes, in principle, you can get some of those things with CSS and fonts, if you can find suitable high-quality fonts and you go through a lot of trouble. But anyone who lets the browser download arbitrary fonts from arbitrary sources ... well, you might as well use Acrobat.
SMB over QUIC is the future of distributed systems
Good god, I hope not. SMB is a horrible, horrible protocol, and QUIC is only slightly better than the typical "let's reinvent TCP using UDP" attempt.
QUIC solves certain problems, true; that's because it's optimized for different use cases than TCP is. That doesn't mean everything should be switched from TCP to QUIC. And it especially doesn't mean that we should prolong the life of dreadful rubbish like SMB by promoting QUIC as a transport for it.
And, of course, the vast majority of distributed systems don't use SMB, because they're not interested in anything SMB does. Remote filesystems are a niche application, statistically, when the whole of IT is considered.
Agreed. I'm not particularly impressed by Wayland and its orientation toward local, single-user systems.
I was writing X11 applications at IBM in the late 80s / early 90s: clients, a window manager, graphics libraries (XGKS), and extensions (PEX). I wrote the ddx side for some experimental display hardware. While there were some unfortunate choices in the X11 protocol - specifically, it would have been nice if clients could specify strict or relaxed rendering of wide lines and other primitives to make better use of acceleration - X11 was a rather brilliant piece of work.
VNC is just network framebuffers. It's the sort of remote-display technology an undergrad would come up with. It has its uses, but comparing VNC to X11 is like comparing a pedal car to a Ferrari.
I've never looked at RDP closely, but apparently it's based on the ITU's T.120 family of specifications, and those are just as elegant as you'd expect.
No one (who knows anything about it) thinks it's "random" at all. "Deterministic Random Bit Generator", the phrase NIST actually uses, is their (unfortunate) term for cryptographically-strong PRNG.
Everyone always knew Dual_EC_DRBG was a CPRNG, which meant it deterministically generated a bit stream with statistical properties that were indistinguishable from random under a series of assumptions. The concerns around Dual_EC_DRBG were, first, there's no way to tell whether there's a backdoor (i.e. whether the default constants provided by the NSA via NIST1 were chosen to allow someone with an additional piece of information to predict the output2); and second, it's a rubbish algorithm anyway and so there's no good reason to use it.
Ever. Even if you don't think there's a back door. And if there isn't a back door, why recommend it in the first place? Probably just an honest mistake.
1It's worth noting that these constants can be changed, and in fact NIST tells you how to compute a suitable set of alternatives and use them in the DRBG. Of course doing so invalidates any backdoor, and the backdoor is the only reason to use Dual_EC_DRBG.
2Specifically, SP800-90 specifies the form of the DRBG and provides parameters P, the curve's generator, and Q, both points on the curve. It's not explained where Q comes from. The curve group has prime order, so there's some scalar e such that eQ = P. Given only Q, e is hard to find. But say you're proposing an EC-based DRBG, and instead of picking a random point Q, you set Q to be a known multiple of P. Then you can easily compute e. And with e you can recover the internal state of a Dual_EC_DRBG instance by observing about 32 bytes of output. That is a Bad Thing.
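For the algebra-inclined, a sketch of the arithmetic (simplified: the real generator truncates 16 bits of each output, so in practice you try ~2^16 candidate points per block; here s_i is the internal state, n the order of the curve group, and R_i a point lifted from an observed output r_i):

```latex
\begin{align*}
  s_{i+1} &= x(s_i P), \qquad r_i = x(s_i Q) && \text{(state update and output)} \\
  Q &= dP \;\Rightarrow\; e \equiv d^{-1} \pmod{n},\quad eQ = P && \text{(if you chose $Q$ yourself)} \\
  e R_i &= e\,(s_i Q) = s_i\,(eQ) = s_i P && \text{so } x(e R_i) = s_{i+1}.
\end{align*}
```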
Sigh.
The problem is not whether a handful of technically-adept parties who already have a secure channel for key distribution can maintain confidentiality, or even confidentiality + integrity + authentication (and, hey, throw in non-repudiation if it makes you happy). That's always been possible.
The problem is government interference with attempts to address the actual difficult questions, like mass cryptography for non-technical parties, key distribution among large groups with no prior secure channel, authentication where there's no existing relationship, and so on.
Your amateur cryptography is not interesting in this context. It's the equivalent of a pen-and-paper cipher. It might be weak, it might be strong; but it doesn't touch on any interesting problems.
The main problem with security by obscurity is Kerckhoffs's principle: the information you're trying to keep hidden is in effect part of the secret key, and it's a part that 1) has lower information entropy than key material should have, and 2) can't be managed easily, because it's not pure key material. So it's inefficient security at best. Its contribution to security and resistance to attack can't be easily or accurately measured, and there's no recovery from compromise.
In any case, it's not so much the hardware platform as the OS that matters. The only currently maintained OS for Itanium I'm aware of offhand is HP-UX; I don't know if Linux or FreeBSD are still supported (and OpenVMS?). Because HP-UX is obscure relative to the market leaders there's less total reward for exploiting it, and it has an overall smaller attack surface; the same would be true of other non-Linux alternatives.
But I wouldn't even bother mentioning that, if I were in charge of security for these systems. It might reduce exposure to broad attacks - the typical portscanning script-kiddie stuff - but it won't help with targeted ones.
Perhaps it will distract people from their awesome parade of vulnerabilities.
Some of us organize our lives on some principle other than financial compensation.
I work when I want, as much as I want. I happen to be well-compensated for it, and that's useful for the other things I do. But I don't feel the need to extract some fixed amount of money for every second I spend improving my employer's position. What a sad mental state that would be.
Of course, most of my work involves thinking about things. It would be tiresome to try to keep track of that.
The subject of the verb is "none" ("of them" is a prepositional phrase acting as an adjective modifying "none"). Grammatically, "none" has long been either singular or plural depending on the whim of the writer - there's no conventionally dominant number for none in English usage.
And, of course, all of the comments claiming that either a singular or plural verb is "correct" here are prescriptive, and prescriptivist comments on English usage are false pedantry.
In short: none of you is correct, and none of you are correct. Either is acceptable.
No, it converts the output of the layout engine into a serialized display list for the renderer, and sends that over the network.
Whether that's a good idea is a separate question (I'm not a fan), and as discussed above it's hardly a new idea, but it is considerably different from sending HTML.
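To make "serialized display list" concrete, a purely illustrative sketch (Python; the command names and framing are made up, not any real browser's wire format):

```python
# Drawing commands, not markup: this is roughly the shape of what gets
# serialized and shipped to the remote renderer. Entirely hypothetical format.
import json

display_list = [
    {"op": "fill_rect",   "x": 0,  "y": 0,  "w": 800, "h": 40,  "color": "#1a1a1a"},
    {"op": "draw_glyphs", "x": 12, "y": 28, "font_id": 3, "glyph_ids": [43, 72, 79, 79, 82]},
    {"op": "draw_image",  "x": 0,  "y": 40, "w": 800, "h": 450, "image_id": 7},
]

wire_bytes = json.dumps(display_list).encode("utf-8")   # what actually crosses the network
```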
drawing primitives that are then rendered locally is more X-terminal than VT terminal
The 3279 (a 3270-family display) had GDDM support for host graphics rendered locally in 1979. It supported GKS and PHIGS. Then in 1985 IBM came out with the PC-3270/G, which similarly supported GDDM.
If memory serves, the earliest X terminals came out in 1988, with X11R3.
Of course, there are various differences between GDDM and X, such as the latter's openness and availability from multiple sources. In many ways these "hosted browsers" are more similar to X terminals than they are to the 3270 graphics terminals.
And then there were Sun NeWS and Adobe Display Postscript (both based on Postscript but developed independently). Wikipedia gives 1986 for NeWS and 1987 for DPS.
I assume there were other "graphics terminal" protocols in the late '70s and '80s, though none are coming to mind right now.
Yes, Wiener coined "cybernetics" to refer to any self-regulating mechanism.1
Then Clynes and Kline co-opted it for their daft "cyborg" portmanteau, which doesn't even make sense, since all organisms are already self-regulating to some extent. Why they thought "cybernetic" meant "biomechanical" I do not know.
In their article they begin by referring to the "cybernetic aspects" of biological homeostasis, which is fine; but then they coin cyborg to mean "the exogenously extended organizational complex functioning as an integrated homeostatic system functioning unconsciously". Now, I admit the latter phrase is a bit of a mouthful (though I am tempted to drop it into conversation whenever possible2), but surely the "homeostatic" part is not the innovative aspect vis-a-vis the extant a priori organism, as C&K might put it. (Their piece is well worth reading just for the prose, which leaps beyond "turgid" to some new realm of awesomely over-written.)
Anyhoo, Clynes and Kline started the rot in "cybernetic" in 1960 with their "cyborg", which then became popularized by Halacy, Caidin, and others. Donna Haraway3 introduced it to critical-theory circles in 1985, which then trickled down to middlebrow venues. Meanwhile there was some use of "cyber" and other forms in IT; the CDC Cyber range launched in the early '70s, for example. And "CYBER" was a standard Library of Congress index term in 1990, and probably earlier.
But the term didn't really pick up steam until the early 1990s, judging by Google Books Ngram data, as people began to tire of prefixing everything with "e-" to indicate it had something to do with IT. Then it snowballed.
It was always etymologically unfounded for this use, though. And it carries rather a non-technical whiff; someone who sprinkles their conversation with cyber-this and cyber-that rather comes across as the sort of person whose expertise is derived mostly from reading the popular press.
It's long past time to retire "cyber-".
1From Greek "kybernetes" or "steersman", which now of course has been adopted by the Cloud People.
2It's never possible.
3Much of whose work I like. Not this piece, though.
Yes, this is a loathsome product category. Troy Hunt wrote a good exposé on the TicTocTrack last year.
Statistics in the US are hard to come by. Child abductions are not reported in the UCR and the DoJ's transition to the new system (NIBRS) seems to be having some problems (data for 2018 was "supposed to be available by fall of 2019" but the page still hasn't been updated).
A best guess seems to be that parental abduction - either by the non-custodial parent, or by one parent in a shared-custody situation - happens some hundreds of thousands of times a year. Abduction by strangers appears to be in the hundreds per year. So parental kidnappings are around three orders of magnitude higher.
Given that, it's reasonable for some parents (based on the child's custodial situation) to be concerned about a parental-kidnapping risk. It's not reasonable to take anything more than common-sense measures against the stranger-kidnapping risk; that's simply not a rational response. And with around 73-74 million children in the US, the rate even for parental abduction is low - but individual risk will depend very much on the particular situation, so the average isn't particularly meaningful.
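Rough arithmetic, using the ballpark figures above (all of them illustrative, not precise statistics):

```python
children = 73_500_000        # "around 73-74 million children in the US"
parental = 200_000           # "some hundreds of thousands" per year
stranger = 200               # "in the hundreds" per year

print(parental / stranger)               # ~1000x: about three orders of magnitude
print(f"{parental / children:.2%}")      # ~0.27% annual rate, averaged over all children
print(stranger / children * 1_000_000)   # ~3 per million children per year
```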
All that said, even for those most at risk I don't think spyware wristwatches are going to be much help.
What, like a version of Office that you run on your own machine, even if it's not connected to the cloud? Inconceivable!
Though personally I'd be happy if they just stopped at "an offline version of Office 365". I've been using Word (reluctantly, when forced to do so) since a floppy with a preview of MS Word 1.0 was bound into issues of PC World, and Excel (with loathing) since the mid-1990s. If I never had to use them again it would be a minor but significant improvement in my life.
I did a fair bit of work remotely over a 1200bps dialup link (which was prone to dropping spontaneously; saving your work often, or using an editor that could recover a dropped session, was a good idea).
1200bps sync connections, for example to IBM midrange or mainframe machines, were even better, since they had less overhead than async dialup.
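The arithmetic behind that, roughly (framing details simplified):

```python
line_rate = 1200                   # bits per second

# Async dialup framing: 1 start bit + 8 data bits + 1 stop bit per character.
async_chars = line_rate / (1 + 8 + 1)     # 120 characters/second

# Sync links (SDLC/bisync) send characters back to back inside a frame;
# header and CRC overhead is small for reasonably sized frames.
sync_chars = line_rate / 8                # ~150 characters/second, ignoring frame overhead

print(async_chars, sync_chars)            # 120.0 vs 150.0 - about 25% more throughput
```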
I did a little work over 300bps dialup, but to be honest 300bps was very tiresome. 1200 was where it became reasonable.
It could have been done _a_ decade ago. Two decades ago, most people were still on dial-up and paying 3p per minute in phone charges to access the internet.
Conditions in the UK two decades ago might not be an accurate model of those everywhere else.
I started working exclusively (aside from short visits three or four times a year) from home, as a software developer, in the US, in 1992. I was back in the office briefly, from mid-1996 through early 1998. Since then I've worked exclusively from home.
Initially I had an OS/2 machine, an RS/6000, a SPARCstation, and a V.32bis modem, which I used primarily for UUCP file transfers and a SLIP link, directly to the office (Ohio to Massachusetts). Pretty soon I switched to a Telebit Trailblazer at each end of the dialup connection. About a year after that, we put in a 56Kbps dedicated digital line from the local telco.
When I left the office again in 1998, we went with Basic Rate ISDN. I was in Nebraska; I don't remember what corporate location I was connecting to at that point.
In 2002 I moved to Michigan, and there cable (DOCSIS) broadband was available. Bandwidth, latency, and reliability were pretty terrible, relative to what people were typically getting in major US cities, due to poor investment by the small cable company that served the area; but that didn't significantly impede my work, because CVS and ssh don't need a whole lot of bandwidth and I grew up with high-latency connections.
Eventually the cable company was bought out by a bigger firm which did a lot of capital investment in the network (shocking, I know).
At my other house, we started off with crap ADSL from CenturyLink, but the local electric cooperative has been running fibre alongside their power delivery infrastructure and selling residential and commercial Internet access on that, so about four years ago we were able to upgrade to FttP and now things are pretty sweet.
My point, though, is that for developers with a workload similar to mine, in the US, working from home has been quite feasible for nearly three decades.
if people ask you all sorts of questions and you refuse to answer, any reasonable person would infer something from that
For example, a reasonable person might infer that you consider civil rights more important than law enforcement's right to conduct bullshit interrogations.
In the US, the right to silence is absolutely critical and should always be exercised, except as specifically advised by counsel, because the federal government has made any misstatement to federal officers a felony, and is very happy to imprison people based on that principle.
The UK version is an institutionalization of the principle that "only the guilty have something to hide", and as such is inherently immoral. That should be obvious to anyone capable of critical thought and with a decent grasp of the human condition.
It's entirely possible for both parties to be at fault here. What I've read of the case, in the Reg and elsewhere, suggests that is in fact what we have.
That said, the sentence against Hussein seems rather disproportionate to me. But then I think that's true of a great many sentences in US criminal and civil cases. Unfortunately there is little political will to correct the situation.
Actually, all of the credit-card breaches I can recall, or could find in a few minutes of searching, from the past couple of years were the result of one of:
- A skimming attack against POS terminals or backend systems.
- A web skimming attack (Magecart being the most common).
- An attack against an issuer, credit agency, or some other non-merchant.
All the breaches I found that included credit-card data retained by a merchant were from several years ago.
That doesn't mean no merchants retain CC data, but that particular class of exposure seems to have become much less common than physical or web skimming. The move to dedicated payment processors seems to have had more or less the effect claimed by disgustedoftunbridgewells.
Relatively recent (i.e. going back a couple more years) breaches against merchants that yielded stored CC data are mostly against hotels, most notably the big Marriott breach.
I still think we should recommend virtual cards and/or other payment options (I don't personally like Paypal, but it does provide some protection against card-data theft), but more as a defense against skimming. As for whether you let merchants retain payment-method information in whatever form: that's a different part of the attack tree. Some consumers feel it's worth the risk; others don't, or are willing to assume it only in particular cases. But it's not the same as a CC-data-exposing breach, which is a more serious failure because it lets the attacker clone the card and use it at multiple merchants.
I used one of those cards that allows you to relegate a unique card number to each merchant you buy from
Yeah. I've been using virtual cards from privacy.com for any card-not-present transactions for a while now, and I have to say I've been pleased. Create any number of cards, set various limits (per-transaction, daily/weekly/monthly), restrict to a single merchant, various options for being notified of any transactions, and you can use any name and mailing address you like. It's all tied to a bank account, so if you want an additional layer of security, you can open an account specifically for those cards.
They make their money off the merchant fees, so it's no additional cost to the consumer.
The web UI is fancier than I prefer, but it's not too obnoxious. Works fine with non-Chromium browsers.
I don't have any relationship with them beyond being a user of their service.
There is no way we can keep coding local
Oh yeah? Watch me.
This claim is just a variation on "we can't expect developers to have any discipline".
And, of course, there are no failure modes with remote development that anyone might need to worry about. No one ever loses connectivity.
And we have decades of experience with primarily or exclusively remote development to learn from. I still do plenty of remote development today, though I do it properly (ssh to machines several timezones away, GNU screen, bash or ksh, source in Subversion or git, vim, gdb or dbx...). Browser-based IDEs are fine for people who like that sort of thing, I suppose, just like a 1980s Chrysler was fine for people who didn't want a vehicle that was more efficient, reliable, maneuverable, or practical; but to suggest it's the way everyone should write code is typical All-the-world's-an-X myopia.
It's almost as if people are not all identical, and generalizations about them are suspect.
I've been working from home for over twenty years. I've worked remotely from my teams for most of my career - about 5/6th of it.
I get plenty of human interaction: In person from family, neighbors, shopkeepers, doctors, strangers I pass on the street; by phone, text, and email from family and friends; many times a day from my co-workers by various means. I have daily calls with members of two of my teams, and weekly calls with others, and ad hoc calls with all sorts of folks. I get quite a bit of work email, which I genuinely enjoy.1
I used to have face-to-face meetings with some of my teams once a year or so, and I did like that, even if (indeed, partly because) it involved international travel. But do I need it? No, I do not.
I'm sure there are many people who work best in a group setting. That may be true of most people. But people are adaptable, and I have yet to see any reliable evidence that a broad shift to working from home will have the dire consequences some are predicting.
1I realize this is unusual, but I'm a compulsive reader. Two of my degrees are in writing.
The Mountain Fastness is pretty rural - not jake level of rural, but we're sitting on 2.3 acres of agricultural-zoned land, on a private dirt road, with neighbors just in hollerin' distance. We have fiber because the local electric co-op is also an ISP, and over the past several years they've been running fiber on their poles as they do electric maintenance and upgrades. So for much of the county, if you're on the electric grid, you have fiber right at the pole.
It's about $100 to get the drop to the house put in, and a bit more for the terminal. Then you can either buy your Internet access from the co-op, or from various other local ISPs who contract with the co-op for backhaul.
It's not perfect. Redundancy isn't great - a couple of years ago a forest fire took out the fiber trunk, and it was a few days before they got service back up. And the tier pricing is definitely high compared to some places with more competition; but the vast majority of households can get by just fine on the bottom or second tier. (We were on the bottom tier at first, but it turned out that QoS wasn't great with two simultaneous video calls plus web traffic plus our phone microcell, so we bumped it up a notch and it's been smooth since.)
The co-op has an incentive to run projects like this. In particular, they need to get a quorum of members to attend their annual meetings, and anything that makes them interesting to customers helps with that. This sort of thing also builds favor with regulators.
I moved to working exclusively from home (aside from rare in-person team meetings, which more or less ended some years back) in 1999. Initially that was with a single ISDN Basic Rate channel (never got around to bonding two channels), at 64Kbps.
And before that, from '92 to '95, I was by myself in a little satellite office, initially with a Telebit Trailblazer dial-up setup, and then a dedicated 56Kbps digital line.
In 2002 we moved and I got cable with a few MB/s of bandwidth. For the past few years I've had fiber-to-the-home at the Mountain Fastness, with a cap that's something like 64MB/s; I still use the cable setup at the Stately Manor. I've never had a need for more bandwidth.
To be honest, most of the time I could still get by on the old 56Kbps for work purposes. I'm rarely fetching or committing so many bytes of source changes that it takes any significant time to sync with the repository, and 56Kbps would work just fine for ssh. But for video calls, online research, and software downloads, of course, that extra capacity is necessary. (Plus there are the people who insist on attaching megabytes of screenshots or other cruft to emails...)
The primary purpose of any business is to make money.
The specific value proposition of credit agencies is consumer-credit pricing, which is mostly a matter of risk assessment for lenders. Risk assessment is probabilistic and applies to aggregations of borrowers, so there's a certain level of noise - inaccuracy in the data - which is optimal for the credit agency: the point where its effect on their profits is less than the cost of improving accuracy. So they're happy to tolerate a certain amount of borrower-side fraud, such as identity theft. In fact, they've learned to monetize that by selling add-on products such as credit monitoring.
Similarly, there's a point of diminishing returns on protecting the confidentiality of their data from fraudulent customers (i.e. lenders and others interested in credit ratings). Past that point, fraud becomes an externality - it's not worth them trying to prevent it.
The only way to fix that problem is to convert the externality into a direct cost that's greater than the marginal profit of ignoring it. Sometimes market forces can do that, but the oligopoly of credit agencies in the US, and the fact that consumers have almost no effect on which ones are used by lenders and other customers, makes the market a non-starter in this case. That leaves only regulation.
How many languages that appeared 10-15 years ago are still in demand?
For the past 10 years: Rust (2010), Dart (2011), Kotlin (2011), TypeScript (2012), Julia (2012), Swift (2014).
I'm not claiming those are all good languages, but they're all "still in demand" by any reasonable metric. They're all being used for production applications, they're all still present in the trade media and in various surveys, they all still have their proponents and backers.
And I don't think that's a very useful metric anyway. There may not be much demand for AUTOCODER; that doesn't mean it wasn't important. You don't see a lot of new ALGOL projects - that doesn't mean ALGOL wasn't hugely influential. Pascal1 has largely fallen out of favor (pace Stob and other Delphi fans), but it left its mark, too. ML was never much for production apps, but its descendants OCaml, Haskell, and F# - even while they remain niche languages - have had a significant impact. Erlang has never been as popular as it deserves to be, but people keep reinventing it, so it must have done something right.
On the other hand, Fortran, COBOL, PL/I, etc., not to mention various assembly languages, may not be sexy, but they have a hell of an existing code base and there's still plenty of fresh development in those "legacy" languages.
And, finally, why not develop new languages? Kotlin and other JVM languages pushed Java to improve in its expressibility and syntactic sugar. If people are going to continue to insist on developing huge applications in ECMAScript-based languages, then yes, please, let's have some with a bit of type safety and other improvements on the base language.2 Whenever I'm writing something in Managed OO COBOL I'm glad to have generics and type inference and anonymous methods with proper closures - even if I don't need them for whatever I'm doing at the moment.
Language development gives us better languages.
1The programming language, not the mathematician/philosopher, or the Reg regular commentator.
2Yes, I know you can do purely functional programming in Javascript with proper algebraic structures and monads3 to defer side effects. And that's great, since you can then do all sorts of handy reasoning and manual or automated proofs of correctness about the vast majority of your code base. But clearly only a vanishingly small fraction of Javascript programmers are willing to learn how to do this.
3Look, we've explained this already. A monad is a monoid in the category of endofunctors, like a semicolon with side effects. It's so obvious.
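For anyone who wants footnote 2 made concrete, here's a minimal sketch of the deferred-side-effects idea (in Python for brevity; the same shape works fine in JavaScript):

```python
# A toy IO type: the program is built as a value describing its effects,
# and nothing actually happens until run() is called at the very edge.
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")

@dataclass
class IO(Generic[A]):
    run: Callable[[], A]                        # the deferred effect

    def map(self, f: Callable[[A], B]) -> "IO[B]":
        return IO(lambda: f(self.run()))

    def flat_map(self, f: Callable[[A], "IO[B]"]) -> "IO[B]":   # a.k.a. bind
        return IO(lambda: f(self.run()).run())

def put_line(s: str) -> IO[None]:
    return IO(lambda: print(s))

program = put_line("hello").flat_map(lambda _: put_line("world"))
# Nothing has printed yet; `program` is just a value you can pass around and reason about.
program.run()                                   # the side effects happen only here
```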