Re: What do MPs understand?
Aren't these the people some of whom thought it was fine to share their login credentials with anyone in the office? Or was that just a particularly dim one? (Or one who was just lying, I suppose.)
I have fond (and probably slightly inaccurate) memories of debugging programs on an 8051 with a logic scope, which you had to queue for time on.
The sad thing is that I have spoken to people who, apparently seriously, said this was an argument for brexit. It may be they were joking but I had to kill them anyway.
I think you have to take them out of the bell jars to use them.
You can use it to slow you down, but not enough. To get into an orbit around the planet you need to leave it on the first pass with less than its escape velocity, and given you are approaching it at a significant fraction of the speed of light you can't do that.
Quantum entanglement -- quantum anything in fact -- does not mean real-time communication is possible and there is no suggestion that it ever will. Please don't spread this silly myth.
Facebook are less bad than Google. They're less bad because everyone knows they're evil whereas a lot of people still fool themselves that Google are the good guys.
Now you have me trying to work out how long you can live off a corpse (probably depends strongly on whether there is refrigeration) and what the resulting population curve looks like (obviously exponential decay with the half-life being how long you can live off a corpse).
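The decay curve described above is easy to sketch. A toy model, with purely illustrative numbers: if one corpse sustains one survivor for `half_life` days, the population halves every such period.

```python
def population(p0, t, half_life):
    """Toy model: population remaining after t days, assuming it halves
    every `half_life` days (how long one corpse sustains one survivor).
    All numbers are illustrative, not data."""
    return p0 * 0.5 ** (t / half_life)

print(population(1000, 30, 30))  # → 500.0
print(population(1000, 60, 30))  # → 250.0
```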
Well, he's made me laugh.
I think that the number of significant figures can (surprisingly) be continuous. You know both how many digits you know, and the uncertainty in the first digit you don't know, which is a continuous quantity. Another way of seeing this is to consider what you're actually trying to represent, which is an interval on the real line.
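A minimal sketch of that idea: given a value and its uncertainty, one reasonable continuous measure of "number of significant figures" is log10(|value| / uncertainty). The numbers here are illustrative only.

```python
import math

def significant_figures(value, uncertainty):
    """Continuous 'number of significant figures': how many decimal
    orders of magnitude separate the value from its uncertainty."""
    return math.log10(abs(value) / uncertainty)

# Halving the uncertainty buys you a fraction of a digit, not a whole one.
print(round(significant_figures(123.4, 0.1), 2))   # → 3.09
print(round(significant_figures(123.4, 0.05), 2))  # → 3.39
```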
Yes, definitely her. It's actually going to be Hawking, of course, but Noether would be a really great choice.
Someone I know pretty well has banked with the Clydesdale since they had a bank account. They were so pissed off with the new online banking thingy, I think because it made setting up new payees painful in some stupid way (the existing OTP device apparently was not enough, you needed one which sent a small payment in the form of a shard chipped from your soul to ~~The Devil~~ Mark Zuckerberg every time you used it or something), that they're in the process of moving everything. Turns out to have been the right decision.
(Not, of course, that I wish to imply that Mark Zuckerberg is in any way satanic, you understand.)
I wasn't really claiming that only hipsters use macbooks, I was just poking fun at the kind of person who might spend their time writing code in a coffee shop. However that's silly, because I write code in coffee shops sometimes, and I'm way too old to be a hipster.
My point really was that things like iPads and iPhones depend for their success on a copious supply of apps, and those apps don't get written on the devices they target, both because those devices lack things like keyboards which people who write code tend to want, and because they don't in fact host the kind of development environment which you use to write the apps at all.
And then, if Apple laptops go away, that leaves either non-practically-portable machines on which to write code, or assumes that the people who write apps for iPhones and iPads will do so on Windows or Linux laptops.
I don't think the non-portable machine option is something that can be taken seriously: I started my career with machines which sat in rooms you had to go to to use (and those rooms were often pretty cold) and the entire trajectory of computing since then has been to make things more portable. I'm 55 and I quite like working in coffee shops: they're, frankly, often nicer than open plan offices.
I think the development-will-happen-on-non-Apple-laptops option is more possible, but is probably long-term suicidal for Apple: it's the exact opposite of the kind of tie-people-in strategy which has worked so well for them.
There's a final option which is that it all happens in the clown and you access the clown via your iPad. You still need a keyboard, so it will be some kind of iPad that looks a bit like a laptop, an Apple version of a Chromebook I suppose. That might just fly, but actually issues of latency, connectivity &c are kind of hard to deal with well (and I do a lot of work-based stuff via VNC so I do quite a lot of what is, essentially, this).
So, in summary, the market for machines on which developers write iPad and iPhone apps is relatively small, but it is absolutely critical for the continued existence of those platforms. I don't think Apple are stupid enough to cut off their own air supply like that: they will keep on making portable development machines -- laptops in other words.
(As a note to this: this is one of the things that I think doomed Sun: they owned the university desktop market (the machines were called workstations, of course) in 1990. But enterprise systems were far more profitable and Windows & Linux started eating their workstation market. So they gave up on it, people stopped writing things for Solaris since they no longer had Solaris machines, and Sun in due course ran out of air. The Solaris-desktop market was tiny, but critical.)
Let's say you're the kind of young hipster type that Apple would like to appeal to (and, in fact, do appeal to). The sort of person who likes to work in the kind of coffee shop where they can pretend to be cooler than they are. Are these people going to start turning up wheeling a trolley with a Mac Pro strapped to it, or what?
No, no, they're not: they're going to turn up with the same kind of shiny Apple laptop in their backpack they turn up with now, which is why Apple are not going to kill their laptop range.
Because AIX and RHEL are the two remaining enterprise unixoid platforms (Solaris & HPUX are moribund and the other players are pretty small). Now both of those are owned by IBM: they now own the enterprise unixoid market.
I presume they were waiting to see what happened to Solaris. When Oracle bought Sun (presumably the only other company who might have bought them was IBM) there were really three enterprise unixoid platforms: Solaris, AIX and RHEL (there were some smaller ones and some which were clearly dying like HPUX). It seemed likely at the time, but not yet certain, that Solaris was going to die (I worked for Sun at the time this happened and that was my opinion anyway). If Solaris did die, then if one company owned both AIX and RHEL then that company would own the enterprise unixoid market. If Solaris didn't die on the other hand then RHEL would be a lot less valuable to IBM as there would be meaningful competition. So, obviously, they waited to see what would happen.
Well, Solaris is perhaps not technically quite dead yet but certainly is moribund, and IBM now owns both AIX and RHEL & hence the enterprise unixoid market. As an interesting side-note, unless Oracle can keep Solaris on life-support this means that IBM now all but own Oracle's OS as well ('Oracle Linux' is RHEL with, optionally, some of their own additions to the kernel).
If you have things run on your machine rather than remotely you get something which can interact with you with a tiny latency rather than however long the round-trip to the far end is, and you also get something which (at least potentially) doesn't vomit your information to some remote system you don't own or probably trust.
Both of those sound like a win to me. The flip side of course is you end up running code you may not trust.
I think a lot of people would define a backup as 'a (possibly partial, in the case of an incremental) copy of something on physically- and logically-independent storage'. In that sense neither a snapshot nor a RAID system is a backup (a detached mirror might be, in the right circumstances).
It's not about what git can do, it's about how people use it, and particularly that they expect there to be a big central system and are lost without it.
I'm also pretty sure that unless you do work to avoid the problem you're also in trouble with git if the central system you rely on goes away because you generally won't have all its commits. The documentation for 'git fetch' says, in part,
Fetch branches and/or tags (collectively, "refs") from one or more other repositories, along with the objects necessary to complete their histories.
which, I think, means it only fetches the commits it needs, and not commits associated with refs you're not fetching. So I think that means that pulls generally don't pull branches &c which you aren't tracking. In a busy repo that could be a lot.
I might be wrong about that but it would be easy to check, I think. I don't know because I'd never use GitHub as my big central repo but have origins which sit on storage I control, and I'm generally very careful about making sure I have complete clones when I need them.
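One way to check, as a sketch: build a throwaway repo with a commit that only exists on a side branch, then compare a restricted clone with a mirror clone (the repo and branch names here are made up; the demo uses --single-branch to make the effect obvious, and `git init -b` needs git 2.28 or later).

```shell
set -e
# A repo whose 'side' branch has a commit not reachable from 'main'.
git init -q -b main demo-origin
git -C demo-origin -c user.email=a@b.c -c user.name=t \
    commit -q --allow-empty -m base
git -C demo-origin checkout -q -b side
git -C demo-origin -c user.email=a@b.c -c user.name=t \
    commit -q --allow-empty -m side-only
git -C demo-origin checkout -q main

# A single-branch clone only fetches refs (and objects) for 'main'...
git clone -q --single-branch --branch main demo-origin plain-clone
# ...while --mirror fetches every ref and all objects behind them.
git clone -q --mirror demo-origin full-mirror.git

git -C plain-clone show-ref        # no trace of 'side'
git -C full-mirror.git show-ref    # both 'main' and 'side'
```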
I pay for GitHub: do I get to complain?
The ZFS approach to checking if a filesystem (pool, really) is OK used to be (and may still be) to attempt to import that pool. In other words all the consistency checks run in the kernel where any kind of error in the checks (I mean a mistake in the code which checks for on-disk errors) is probably going to cause the machine to fall over in some horrible way. That kind of made it obvious to me that the ZFS people had never been very near a production environment and certainly never should be allowed near one.
The problem is that although git can do all that -- you can ship updates by email I'm pretty sure (and not just the git format-patch thing but commits), so the connectivity requirements are tiny in theory -- people (a) really, really want the issue-tracking stuff (b) in practice treat git just the same way they treated subversion and CVS, with a central system which runs everything, and (c) want it to be free. And that central system, for many people, is GitHub, so when it goes away the same doom befalls them that befell them when google code went away and when sourceforge went away before that (I know it, sort of, came back). And there's almost no collective memory -- anything that happened more than a year or so ago is forgotten -- and so the wheel of reinvention turns forever.
'Live storage somewhere else ready to take over' is why banking IT is expensive. My guess is that what most of the cloud people do is, at best, 'storage somewhere else ready to take over which is in a consistent state and no more than a few transactions (of whatever nature: git commits here) behind the current live storage'. Maybe that's enough.
That is false. When OSX on intel came out there was Rosetta, which meant that PPC binaries would continue to run. That's, you know, why the thread has 'Rosetta' in its subject.
Fat binaries work for new applications but completely fail for existing ones. That's why there was Rosetta, and why there would need to be something like that for any new transition.
Anyone running VMs on their Mac is also going to find this interesting: I don't even want to think what the performance of an x86 VM sitting on top of something dynamically translating to ARM is going to be like: absolutely crap I expect.
I don't put it beyond Apple to just make the decision anyway.
I was wondering if it was an acrostic. I'm still not quite sure. The uppercase letters from the middle two paragraphs are "TGUAIMAITSCDPPDFVMSVRWDWCHAOSLOVEQLPMMSCADADADGNH" (this is all the uppercase, not just the initial caps, which may be wrong). And there are words in there although whether there are more words than you would expect, I am not sure.
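Extracting the letters is trivial to do mechanically; a sketch (the sample text here is a stand-in, not the actual comment being analysed):

```python
def uppercase_letters(text):
    """All uppercase letters from a text, in order, for acrostic-hunting."""
    return "".join(ch for ch in text if ch.isupper())

print(uppercase_letters("The Great Unknown Author Is Missing"))  # → "TGUAIM"
```

The harder question, as the post says, is the statistical one: whether the resulting string contains more embedded words than a random draw of letters would.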
I'd rather hope that banks don't lend all their money to startups, most of which fail: that sort of thing is how collapses of the financial system happen, and why VCs are isolated from banks.
Yes, exactly like that. If that thing had actually been reproducible (so if it really was an effect) then GR would be dead now and we'd all be living in a much more interesting world (or people interested in physics would be, anyway).
If something like that was true then it would be a direct experimental test which General Relativity fails: the first such test it has ever failed in more than a century. The same sort of thing that people at CERN are trying to find with the antimatter-falls-which-way? experiments. GR would then be a dead theory.
Anyone who carries out such a test, and publishes it in such a way that it can be replicated, is going to win an automatic Nobel prize and be the most famous scientist of the 21st century.
Strange that no-one does, then.
Having a 'simplistic' library is a bit like having a 'simplistic' version of general relativity or quantum mechanics. In all three cases it turns out that you need a fair degree of mathematical sophistication to actually do anything useful with the thing, and a simplistic version just ends up with the monkey-turning-the-handle-on-the-system-they-can't-understand problem. Of course, if what you want to end up with is 'here's this magic box, feed it some data and it will tell you some things' rather than actually understanding what's happening then that's fine, I suppose.
I think the days when almost all programmers were doing more than turning the handles on systems they do not understand to produce answers they do not understand are long gone.
The evidence from computer science indicates that it's not all the parents' fault, if any of it is. In 1984 something between 35% and 40% of CS graduates ('majors' it says: does this mean graduates?) were women. Today it's under 20%: about a factor of two less.
So unless somehow, starting after 1984, there has been a concerted campaign by parents to discourage their daughters from doing CS, which seems unlikely, something else accounts for this change.
(The figures for physical sciences are much better: I don't know how physics specifically comes out.)
Note also that this dramatic decline also rules out any of the stupid 'but women are just no good at this stuff' arguments: if women were genetically not as good as men in CS then any change would take many generations, as evolution is really slow.
when it turns out he can not just walk all over poorer people merely because he is rich but, like them, has to obey the law.
I was going to say this applied to that orange guy with the weird hairpiece too, but he's not rich, is he?
And that's the point everyone else is missing: this isn't just 'this is what physically-accurate models of people do when they dance' it's what physically-hopeless models except for one part do. And, gosh, which part has all the effort gone into?
The whole 'hardwired' thing is just stupid. No human is 'hardwired' to do physics or maths, or to be able to read or write.
I just knew there would be conspiracy theories about this.
I can tell you with absolute certainty that there are retail banking IT people working in the UK, including for at least one of the banks that has had an outage in the last few days.
Except they didn't sack everyone: there are plenty of UK-based IT staff working for the big banks. There probably are fewer than there once were but there definitely are quite a few still.
I just had my iPhone 6 battery replaced (by Apple). I was bracing myself for the awful decision that spending £200 or something to keep the thing alive for another year was still cheaper than £700 for an 8, however bad it made me feel.
It was £25. I don't remember how much decent replacement batteries used to cost when I had phones with replaceable batteries, but I bet it was ... about £25.
So oddly, Apple seem to be not ripping people off for batteries.
I am struggling to resist making an extremely defamatory comment here.
No, the difficulty of protecting people from other people is not some trivial monotonic function of the number of people you have, if the number is small. When it's one, then there is no difficulty. You can't have a total of two people, because one of them must be the protecting person who then serves no purpose. With two people you need a third protecting person which is very expensive, or you take a risk. Up from there the cost comes down and tends to some fixed proportion (n protecting people per m people being protected), but since you can't supply partial people it comes down rather jerkily and perhaps quite slowly (if you need one person per 20, say, but you only have five people, you're paying way over the odds and may decide not to have anyone at all, meaning there is no protection). So yes, small numbers of people are hard to protect, where 'small numbers' means 'more than one'.
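The jerkiness of that cost curve is easy to see numerically. A sketch, using an assumed ratio of one protecting person per 20 people protected (the ratio is illustrative, not data):

```python
import math

def guards_needed(people, ratio=20):
    """Whole protecting people required for a group, assuming one guard
    per `ratio` people and that you can't hire partial people."""
    if people <= 1:
        return 0            # one person alone needs no protector
    return max(1, math.ceil(people / ratio))

for n in (1, 2, 5, 20, 25, 100):
    print(n, guards_needed(n), round(guards_needed(n) / n, 3))
```

The per-person cost goes 0, 0.5, 0.2, 0.05, then back up to 0.08 at 25 people before settling toward the asymptotic 1/20: it falls jerkily, not smoothly, which is the point above.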
What really happened was probably: they knew someone at the place was downloading something nasty using its internet connection. But they didn't know who: they just knew it was coming from an address in the place, and they certainly did not know enough to nick whoever it was. So they closed it down and sent everyone home. Now they can (a) quietly pore over the machines there to try and work out who it was, and (b) keep watching for the same patterns, this time coming from someone's private address.
But of course it wasn't that, it was aliens, or possibly liberals: liberal aliens?
I'm entranced by your naivety: I keep expecting you to start talking about formal proofs (although that's probably a different branch of the cult).
Yes, approval to back out. The technical person or people implementing the change are absolutely not in a position to make decisions which could influence the functioning of the organisation, especially where the functioning or otherwise of the organisation is going to be in the papers. That's why there are elaborate governance structures in banks.
Quite so. I spend my entire life taking large systems and carefully dividing them down into smaller parts with controlled interactions. Yet still my programs have unexpected bugs: how strange!
Between (7) and (8) there is 'get approval to back it out': that's what takes the time, especially when (7) passed (but the tests were not adequate).
Well, assume that the monitors do go off, and that they go off promptly. If they do, then you don't just reverse whatever change you made: you have to fill in a great mass of forms which describe what you're going to do, apply for the access which lets you do what you're going to do, get approval from a bunch of very cautious people many of whom don't understand what you did to break it, how your proposed fix is going to fix it, or indeed any of the technical details of the thing at all, but who have burned into their brains the memory of a previous instance where someone 'backed out a change' which then took the bank down for days and are really concerned that this should not happen again. This takes time. It would be much quicker if you could just apply the fix you know will solve the problem, but the entire system is designed to make that hard.
Yes, it would be easier, and much quicker, if all this laborious process did not get in the way. But no-one really knows how to do that: the laborious process makes it hard to do bad things as it's intended to do, but it also makes it hard to do anything at all as a side effect. It's like chemotherapy: it kills the bad stuff, but it nearly kills the good stuff too. I think this is an open problem in running large systems: people like Googlebook claim to have solved it ('we can deploy and run millions of machines') but they do that by accepting levels of technical risk which a financial institution would find terrifying (and there's a big implied comment here about people (like banks) moving services into the clown and hence implicitly accepting these much higher levels of technical risk which this margin is not large enough to contain).
It's hard to get right because we're dealing with systems which are at or beyond the ability of humans to understand them.
And 'the companies making mega profits' are companies like Google, Intel, Facebook & Apple: not, for instance, RBS. Of course, those highly profitable companies never ever make mistakes. No company ever shipped several generations of processors with catastrophic security flaws, for instance.
Biting the hand that feeds IT © 1998–2018