I guess it's a good time
... to remind people of Devuan
A card where you can't decide to not pay off the full balance at the end of the month is what we call a "charge card" over here; one example is American Express. The card issuer's profit obviously comes not from interest but from the annual fee paid by the client.
Where the updates are initiated by IT because they're needed to patch some risk, or to move off some component that has reached maintenance EOL, and you can't get agreement, go to the top team yourself, point out the risk, and state that you can't accept responsibility for any consequences of postponement.
... and when you do so, make sure to include a printout of the Equifax hack post-mortem. Not a link - a hard copy, so they have no excuse for not knowing the dangers of delayed patching.
Many applications don't use multiple threads very heavily. Yes, because you need either 1) an embarrassingly parallel algorithm (fitting within existing imperative programming paradigms) or 2) a new programming paradigm which limits data sharing between threads. Without either of these, the horizontal scalability of your application is severely limited by Amdahl's law. New languages like Go or Elixir, or frameworks like Akka, go some way towards 2), but few programmers can be bothered.
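To see how hard that ceiling is, here is a minimal sketch of Amdahl's law (the function name is my own, purely for illustration):

```python
def amdahl_speedup(parallel_fraction, workers):
    """Upper bound on speedup when only `parallel_fraction` of the
    work can be spread across `workers`; the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Even a 95%-parallel workload tops out at roughly 15x on 64 cores,
# and can never exceed 20x no matter how many cores you add.
print(round(amdahl_speedup(0.95, 64), 1))
```

The serial 5% dominates very quickly, which is why "just add more cores" stops paying off without one of the two approaches above.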
Are you familiar with the term "tech-illiterate"? That's what most directors at established banks are. And they are the only people with the authority to make architectural decisions. That might not apply to one of the upstart banks like Monzo or Starling, but I have yet to learn more about how they work.
One of my favourite charities "Water Aid" also takes a keen interest in sanitation. Having learned a little about how the world works outside of my immediate surroundings, I can understand why.
Agreed, distributed databases are a hard problem. The fact that they used MySQL does not make it any better. The solution you mention requires a totally asynchronous client, which may not work for the database users. Another solution is to use Convergent Replicated Data Types (CRDTs), and yet another is to simply fail one side due to the lack of quorum.
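For illustration, here is a toy sketch of one of the simplest CRDTs, a grow-only counter (the class and names are mine, not from any particular library). Each replica only ever touches its own slot, and merging takes the elementwise maximum, so replicas converge regardless of the order in which they exchange state:

```python
class GCounter:
    """Grow-only counter, a state-based CRDT: merge is commutative,
    associative and idempotent, so replicas always converge."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node id -> count observed from that node

    def increment(self, n=1):
        # Each node only ever increments its own slot.
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Elementwise max: safe to apply in any order, any number of times.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())
```

After both sides merge each other's state, they report the same total without any coordination; the price is that you only get data types whose operations commute (counters, sets with certain semantics), not general transactions.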
But this time it's different. Red Hat is open source ... which means that the biggest asset RH could possibly bring to the table is the knowledge in the heads of its employees. If they leave, nothing stops them from joining the CentOS team, or perhaps setting up a new distribution based entirely on RH (also with commercial support, because they know how to do it).
But I think this deal is about something else. The winner here is the Power CPU architecture, which will receive "virtually unlimited" support from the favourite distribution of banks and other large (and medium-sized) institutions.
For a government project, you probably don’t want distributed consensus.
Yes and no. Yes, because a centralised (i.e. non-distributed) solution for most problems is what any government would naturally gravitate towards. No, because the ARPANET was a government (DARPA) project, with the explicit goal of creating a distributed, highly available system.
If a government wanted to build a distributed, highly available database for its citizens or for international community, then perhaps blockchain could be a part of the solution. Admittedly, that goes strongly against typical governmental thinking, so there you go.
Actually, there is a use for blockchain. It is a distributed consensus algorithm (or, in other words, a totally ordered broadcast protocol) which also happens to be resilient against Byzantine failures, unlike other consensus algorithms such as Paxos or Raft. Of course, being a distributed consensus algorithm does not make it intrinsically valuable (as some "investors" would like you to believe), but it could potentially be useful for storing and updating a distributed database across a large number of untrusted devices (say, privately owned computers or mobile phones). Or it could be used to track the path of a physical item through a supply chain (where individual suppliers cannot be trusted). It would also be rather inefficient and very laggy. Oh, and the whole "proof-of-work" business is a total non-starter, unless you are into speculation with "instruments" which aren't really instruments.
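To make the proof-of-work point concrete, here is a toy sketch of the idea (my own simplification, not Bitcoin's actual block format): brute-force a nonce until the hash meets an arbitrary difficulty target. Verifying is one hash; finding the nonce is pure wasted compute, which is exactly the objection.

```python
import hashlib

def mine(payload, difficulty):
    """Brute-force a nonce so sha256(payload + nonce) starts with
    `difficulty` hex zeros. Expected cost grows as 16**difficulty."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(payload, nonce, difficulty):
    """Checking the work costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry (expensive to produce, cheap to check) is what makes it a Sybil deterrent among untrusted nodes, and also what makes it ruinously inefficient compared with Paxos or Raft among trusted ones.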
The engineering approach is to start from the assumption that at any given time, some part of the system will be in a "bad" state. If you start from that, then bugfix releases or configuration updates are just variables in the complex equation of "how much more broken could it become if we do (or do not do) that". Of course, the military cannot have that - hence there is no functioning monitoring, no canary releases, no fault tolerance, no regular disaster recovery exercises, no nothing. Just put it all together and hope it holds shape. Because in the military, apparently, "hope" is a strategy. Who would have thought?
... which is controversial for any Linux distro. Luckily the price is small enough to put it under the "support a developer with pizza and a few beers" label, so no big deal. During installation you are expected to create a user with "sudo" rights. The installer adds a WLinux icon to "Universal Applications", which launches the standard Windows console, and you are automatically logged in as the user you created. I had some difficulty figuring out the selection of available packages until I checked the contents of /etc/apt/sources.list - it is mostly Debian stable, with the addition of apt.patrickwu.ml . The fact that this domain is owned by "Mali Dili B.V.", which has only a post box in the Netherlands and apparently owns 227 more domains under .ml, is potentially a security issue. I still need to check this in the WLinux wiki on GitHub.
As for X11, I tried Sublime Text 3 with VcXsrv and it "just worked", although I had to add "export DISPLAY=:5" to ~/.profile (my display is not the default :0).
"... at least it puts a shelf life against stolen credentials"
If passwords are not reused, that should not be a problem. In the case of a genuine password leak, the correct way to enforce password security is via monitoring of user logins. That gives you a much shorter reaction time and also a view of the damage incurred.
I have a six-year-old Brother laser, colour with duplex and a network port. I would like to replace it with a newer model, but it just does not fail, and I don't have the heart to throw away a functioning machine. I did replace its toner a few times (not too often), reset the page count on the toner cartridges a few more times (not too difficult, and thankfully well documented now) and cleaned its insides once (after an apparent black toner leak). It does not look like much, and installing working drivers in Linux is more hassle than I would like it to be (still doable, though), but it works, and the toner is cheap per page (if reset, as it should be).
I am currently reading a great book, titled "Designing Data-Intensive Applications". There are many things in it that I "kind of knew", but was never aware of the details of. The point is, systems like the ones "discussed" here are typically designed by guys (invariably - a woman would have learned first) who "kind of know" how to do it but, in actuality, not quite. They learn on the job, like most of us did. So the server-side services are unresponsive, lose data on occasion, do not offer a clear upgrade path for the client-side app, etc. Things "kind of work", if you squint enough - just not when you need them to. The answer is to learn, but when do you learn, if the project budget has already been eaten up by five project managers and ten consultants, and you are half a year behind schedule?
Biting the hand that feeds IT © 1998–2019