2142 posts • joined 10 Jun 2009
Thinner makes it intrinsically more fragile (wrt getting bent). Something I'm sure the manufacturers have thought long and hard about. Convince the punters to pay more for a less durable product. Yes. Oh YES.
If you want a random client to upgrade you to demigod, also carry around a USB to SATA adapter and a stand-alone Linux distro with ddrescue on it. Then when you hear about someone who has lost data on a disk drive that's making clicking noises, ddrescue it. It doesn't always work, some drives die too quickly, but you sure gain a believer when it does work!
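For the curious, the basic incantation is short. A minimal sketch (the device name `/dev/sdX` is a placeholder — check yours with `lsblk` before running anything, and image to a known-good disk):

```shell
# Failing drive attached via the USB-SATA adapter appears as /dev/sdX.
# The mapfile lets ddrescue resume interrupted runs and remember bad areas.
ddrescue -n /dev/sdX image.img rescue.map      # first pass: grab the easy sectors fast, no scraping
ddrescue -d -r3 /dev/sdX image.img rescue.map  # second pass: direct I/O, retry bad sectors up to 3 times
```

Then loop-mount or fsck the image, not the dying drive — every extra read of failing hardware is a read you may not get twice.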
DVD-W still has a niche
One writeable DVD costs less than 20p and holds nearly 5GB. Cheap enough to give away freely and/or stick in the post. 8GB USB sticks are at least twenty times more expensive.
Yes, you can send 4.7GB by network these days, and without pain on a LAN. However, at DSL upload speed? OK, it might complete overnight and it probably will beat the post, but that's not very convenient. Especially if there's more than one person you need to send a copy to.
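The overnight claim is easy to sanity-check. A back-of-envelope calculation (the 1 Mbit/s uplink figure is my assumption, typical of DSL of the period):

```python
# How long does a full single-layer DVD take over a DSL uplink?
dvd_bytes = 4.7e9               # single-layer DVD capacity, ~4.7 GB
uplink_bits_per_sec = 1e6       # assumed 1 Mbit/s DSL upload speed
hours = dvd_bytes * 8 / uplink_bits_per_sec / 3600
print(f"about {hours:.0f} hours")  # about 10 hours, i.e. overnight
```

Double the uplink and it is still an afternoon; first-class post starts looking competitive once there are several recipients.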
Despite this, I don't object to computers lacking a DVDRW drive. I've got a dinky little USB DVDRW drive that powers itself off a single USB socket. If you don't have access to one, there's something wrong with you or your employer.
Re: So precisely do we benefit from discovering higgs?
I'm sure that there were cavemen asking the same question about discovering fire (or rather how it could be moved from one place to another and kept going).
Re: Apples with Apples?
I don't believe tablets are replacing PCs. They're being added as well as PCs. PCs for work that requires serious inputting. Tablets for leisure (and some work) that is almost all output. Also because the tablet is a new device class, there's a huge sales surge going on at present. Just like there was once a huge sales surge for the now moribund netbook format. It lasted until everyone who wanted one had got one. In the fairly near future, everyone who wants an iPad will have got one. Microsoft will probably arrive in competition just as the market is saturated.
Re: "Lets face it, it is rather retro kernel design"
Monolithic kernels may be said to have passed the test of time ... at least if they're put together as well and as flexibly as Linux is. Point me at some other kernel architecture that works half as well. Yes, I'm aware of all the academic arguments in favour of microkernels. On paper, they are quite convincing, but I won't be convinced until I see one working well, across a range of workloads and system types, in the real world.
Personally I think Linux has a lot in common with microkernels. Its software architecture is well modularised. New subsystems are easily integrated and existing ones re-engineered. It's just that the binding is done at kernel build time, not at runtime. It's a bit like the C++ versus scripting language argument. C++ is less easy to develop in, but more efficient. A monolithic kernel is likewise less easy to develop for, but more efficient in production. A kernel is somewhere that efficiency DOES matter.
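That build-time binding is visible in any kernel `.config`: the same subsystem can be bound into the image, built as a runtime-loadable module, or left out entirely. (The option names below are real Kconfig symbols; the particular selection is just my illustration.)

```
# Linux kernel .config fragment (illustrative)
CONFIG_EXT4_FS=y              # bound into the kernel image at build time
CONFIG_USB_STORAGE=m          # built as a module, bound at runtime via modprobe
# CONFIG_REISERFS_FS is not set    (omitted from the build entirely)
```

So the modularity is there; what a microkernel moves to runtime message-passing, Linux largely resolves at link time.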
I have a big problem with neophiles. They think that "old" automatically means bad, without any actual comparison of the relative merits of the old and new products. They don't like "tried and tested and nearly unbreakable". They are also happy to disregard the vast number of man-hours that are wasted when a company like Microsoft replaces (say) the XP UI with the Windows 7 UI, and the Office 2003 UI with the Office 2007 UI. Sure, it may be only a couple of hours of lost productivity per user, but multiply that by maybe a billion users. Personally I think it's much higher. There's no accounting for the cost of the mistakes that are made while someone is thinking about the bloody new interface rather than the work he's trying to accomplish within it. Somewhere out there, I'm sure that the change to Windows 7 has been the triggering event that destroyed marriages, killed companies, and caused deaths (by heart attack, probably). The right way to go is incremental improvement. Slip in the new features in a completely non-intrusive way, so that if you don't yet need the new stuff you never notice that it's arrived. That's what the Linux kernel has been doing very successfully for at least the last decade. (Unlike Gnome developers ... sorry!)
And almost as soon as we get used to Windows 7, Microsoft decides to Metro-ize us. That's a good neologism, by the way. To Metroize. To pull the rug out from underneath a billion users, in a misguided and doomed attempt to increase corporate revenue. To FUBAR by deliberation rather than by accident.
Re: "99% of software is crap"
Should I ask about the 1% of software that isn't crap?
Re: War of the Worlds deserves a place in history
Cybermen don't qualify. They're not aliens, they're technologically-created zombies.
Daleks really should be on the list, though. Any Dr. Who saves-Earth-from-Daleks sequence beats "Independence Day" on every front, including plot intelligence and plot believability.
Re: What the f*ck does this have to do with the IT industry
99% of software is crap
99% of Hollywood alien invasion movies are crap.
Re: What about Chocky?
And there's the comedy version - Gremlins
Re: What about Chocky?
Triffids weren't aliens. We made them.
It's the everyday world that's complicated! The standard model is really quite simple, but obviously not complete. There may be an even more simple underlying theory that so far we have hardly glimpsed.
Lasers and some forgotten alternatives
We rely on lasers for optical disk devices and for data communications. The science of lasers is definitely simple quantum physics. If someone had experimentally discovered a lasing medium in the 19th century, quantum theory would have had to follow along rapidly. As it was, Einstein got the theory right decades before anyone made a laser.
You can have fun imagining a future where computers still run on purely classical vacuum tube technology. (Yes, micron-scale vacuum tubes are possible, as is integrated circuitry containing millions of them!) Or, you could try having the Babylonians or Romans discover pneumatic computers (clock speeds of 100kHz, logic element size a few mm; Rolls-Royce did actually once build one to embed in the hot end of a jet engine). If Babbage had known about pneumatics, today's world would have been utterly different.
There is absolutely no way to determine if the universe is really real, or is just a perfect simulation of its physical laws and an initial state running on a computer within a universe with completely different physical laws. This is pretty much by definition. The perfect virtuality hypothesis also has zero predictive value, so we apply Occam's razor to it.
Note "Perfect". The most dangerous thing physicists could do is to find the bugs in an *imperfect* virtuality, and then tickle them. (There's a variant which says this has already happened many times over).
There's a scarier possibility, that it's our brains and sensoria that are being simulated by distant descendants of real beings much like ourselves. The simulation is running in their university department of pre-digital history. Sometime soon a grad student is going to realize that the simulation has progressed past the dawn of the information age, and is therefore pointless, so he'll stop the run.
(Ever had the feeling that your life has suffered a subtle continuity error, usually simultaneous with the desire never to drink so much again? Now you know why. Both the continuity error and the getting drunk. One's the bug, the other's the fix).
Re: Be careful what you wish for
Science advances in two ways. One is a prediction from a theory, later confirmed by experiment. The Higgs boson is in this class. The other is an observation of something not predicted by any theory, or which contradicts the generally accepted theory.
So assuming there are no more predictions that everyone wants to confirm, CERN should start looking for the unexpected and currently inexplicable. (I think there are also many more tentative theories making predictions that CERN will in due course test).
Eventually, if physics needs particles at higher energies than CERN can provide, a lot of new technologies will have to be developed. A usefully larger circular accelerator would be impossibly large and impossibly expensive. It'll have to be a linear accelerator with operational parameters way beyond anything we know how to build today.
The truth is that most jobs are boring, unless you can really enjoy the everyday detail. Here are some others. Car salesman. Estate agent. Hairdresser. Accountant. Garbage operative supervisor. Bus driver. Solicitor. Hotel manager. Starting to get the picture? Prefer any of those to IT? (the whole job, not just the over-inflated salary that some of them command).
The thing that's sick with society is celebrity culture, the whole idea that everyone should ape the glamorous, the rich and famous, the fashionable. Mostly what it breeds is dissatisfaction, unhappiness, low self-esteem, and a failure to realise that the reward of helping other people is not solely that it gets you paid at the end of the month.
I've found a job that lets me spend a good part of my day solving puzzles (something I enjoy). It could be a lot worse. Also it's my job ... not my life.
Re: So, basically a land hurricane?
More likely, a squall line http://en.wikipedia.org/wiki/Squall_line
Hurricanes cannot form over land. They are driven by hot moist air rising from an ocean surface. They also take several days to get going, so you get at least twelve hours notice that a hurricane is headed your way. Usually longer.
Squall lines give very little advance notice. I've heard tell of a transition from a hot summer day to roofs being blown off an hour later.
But how do you know?
How do you know that by overclocking your system, you haven't created conditions that cause, say, 34387.00*1.01 to compute as 79231.48, that calculation being the decimal representation of something Intel knows is on the critical timing path of the FPU?
Next thing you know, all the grade 3 techs have been paid over twice their usual salary rather than the scheduled 1% pay rise, and the FD wants to see you NOW!
But even if you're just number-crunching a model that you know will unconditionally iterate to correctness (in the mathematical sense), you still can't be sure. Maybe the FP error was in the calculation of the residual error, and the iteration is terminated before the answer is right? Let's hope your Ph.D. doesn't depend on that result.
Or maybe it's not an FP error, it's in one of those rarely-used instructions that only OS kernels ever use (which may be where MS is coming from, though I have my doubts). Consequences: corrupt database? corrupt filesystem? deadlocked system? security compromise?
I won't overclock a CPU for work, period (for fun, OK). Intel knows which logic paths in their billion-transistor chips are timing-critical. I'm sure that if a significant fraction of the dies tested OK at 5% faster on the critical paths, Intel would sell them specified for running 5% faster. Fact: they don't. Because Intel knows this chip doesn't work 100% reliably above that speed. Maybe you'll never crash into the invisible wall, but logically it must be there.
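If you absolutely must run long computations on hardware you don't fully trust, the defensive habit is a periodic self-check: rerun a fixed deterministic workload and compare bit-for-bit against a reference. This is roughly what stress-testers like Prime95 do; the workload below is an arbitrary stand-in of my own, not a real timing-path exerciser:

```python
def workload() -> float:
    """Arbitrary deterministic floating-point workload (illustrative only)."""
    x = 1.0
    for i in range(1, 10_000):
        x = (x * 1.0000001 + 1.0 / i) % 1000.0
    return x

reference = workload()
for trial in range(100):
    # IEEE 754 arithmetic is deterministic on healthy hardware, so any
    # mismatch here points at the hardware, not the maths.
    assert workload() == reference, f"FP mismatch on trial {trial}"
```

It won't catch everything (a marginal path may only fail under particular operands or temperatures), but a machine that fails this has disqualified itself from real work.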
Re: The real problem
have you ever tried running Win 98 SE under VMware on state of the art hardware?
The OS was / is crap, but it sure boots impressively fast!
OT - skyscrapers
You might be surprised to know that most tall buildings have only one support, right in the middle. Everything else is cantilevered off this core. These days, the core has to be designed to withstand airliners colliding with it and large (I don't know how large) explosions.
Re: Bullpat @JDX
I don't believe that Intel or any other CPU manufacturer would knowingly ship CPUs where getting the right result from any particular operation was by design and testing only probable rather than certain.
Of course, there's a thermodynamically large set of states and they cannot test all of them. They do, however, have access to the CPU simulator, and the ability to probe the actual signals at the surface of the die to validate and calibrate it. They therefore know which transitions are speed-limiting, and can design their tests to exercise these in particular. If they don't sell a faster version of a particular die, it is fairly likely that they *know* that for this chip and at that speed, at the maximum operating temperature and worst in-spec chip power supply, there is at least one instruction sequence that is very likely to fail.
Overclocking a game is one thing. Overclocking a financial, scientific or engineering model is quite another. Don't. It's more important that the results are right and the system reliable, than getting an extra few percent of speed.
Re: Laptops more reliable than desktops?
A laptop has a built-in UPS (the battery and charger) rather than crashing if the mains supply glitches. A laptop often has a slower CPU than a desktop, and a slower cooler hard drive. These may tip the balance, depending on what exactly is being measured.
The desktops last longer, but that's often because the laptop's RAM can't be upgraded enough to make it worth keeping, or because it's too expensive or too much hard work to replace its keyboard after something gets spilled into it. Laptop displays are also harder to fix (desktop: throw away the monitor and plug in another one). And of course, desktops don't get dropped onto a hard surface nearly so often.
Re: And as for a Unix Server
I've seen that sort of reliability from desktop PCs crunching numbers. No unscheduled downtime other than those caused by the electricity supply, up until the day that it was decided that a newer system would make better use of the electricity. They weren't even required to be quite that reliable, they just were!
Running Linux, of course. And I'm sure that your IBM's disk subsystem was taking a much greater pounding.
Overclocked vs. Flat-out
An overclocked CPU is a CPU running outside its specification. It's been tested by the manufacturer (who knows the weakest spots w.r.t. timing) at a particular speed and may well have been found wanting at a higher speed. It's blindingly obvious that an overclocked CPU may not be working 100% correctly, and can only be recommended to someone who cares neither about correct results nor about reliability. A gamer, maybe.
Flat-out, on the other hand, should not reduce reliability. With modern CPUs there is a feedback loop to slow down the CPU when the chip temperature limit is reached. I work in an environment where desktop PCs are crunching numbers 24x7 most days of the year, and our desktop systems don't seem noticeably unreliable. By far the commonest failure is a PSU fail, followed by a hard disk fail. Failed CPUs are as rare as hen's teeth and failed MoBos only slightly commoner. In the old (Athlon) days when a CPU didn't slow down and could actually overheat until the heat crashed it, failed CPUs were also as rare as hen's teeth. I'd vacuum the heatsink, replace the fan, and the system would happily reboot and last as long as any other. Too high a temperature slows down the logic gates in the CPU, until it's the equivalent of a CPU that's overclocked one notch too far, and crashes.
Oh yes, and always run memtest overnight on a machine that's randomly unreliable. Low-incidence errors on RAM will do that. It's why servers (and serious engineering workstations) have ECC RAM. If memtest crashes rather than reporting errors, suspect your power supply first (you may or may not see the problem with a DVM).
Re: Its a kernel bug
Don't know the details here, but it's not impossible. Kernel documented to do X, actually does Y which is subtly different. Java is the only widespread app which does something noticeably bad as a consequence. Everything else "just works" much the same under X or Y.
Re: Not a sys admin but...
The greater FAIL was whoever connected and configured it in the first place. With the right jumper on the D25 (or D9) connectors, unplugging the connector would have been treated the same as a modem hangup, and that should have terminated the logged-in session if the software config had security in mind.
A PC wasn't working, looked like PSU fault. Took it back to my office to check and repair. Plugged in. Pressed power button. Very loud bang. Small amount of smoke. Lights went out. Oops.
Outside office, also no lights. OOPS.
100 yards down corridor, still no lights, lots of people peering out of their offices asking WTF. OOOOOPS.
Somehow, the PSU fault hadn't taken out the 13A fuse in the plug (yes, it should have been a 5A fuse, but it did have the mandatory safety-tested passed-by sticker). Not the 30A breaker on the circuit either. Nor the 100A breaker above that. No, when the electricians finally located the fault, it was a 180A fuse in a box high up on a wall that had probably last been looked at when it was installed in the 1920s.
It took a lot of phoning to source such a monster fuse and they paid for a motorcycle messenger to bring it down to London from Leeds. As far as I know it's still there. Too much effort to schedule replacing it with something modern before the next time (in the 2060s?)
The PC was fine after a PSU transplant.
You don't need complicated things to make a FUBAR. The simple things are also out to get you.
I could also tell you about the exploding substation and the need to do a tap-dance to avoid getting burned by globules of molten copper pouring out under the door, but that's got no IT angle at all.
I was called in to fix a workstation in an old Victorian basement bit of the site. Quad 140W Opteron thingy. Expected to have a certain amount of fun sourcing a beefy enough power supply. But the lights were on and fans were whirring.
When I took the cover off, a very strange burned organic smell assaulted my nostrils. A few seconds later I found a dead mouse with its head wedged between the fan blades and one of the heatsinks. I hope the poor wee thing's neck was broken in an instant. I fear otherwise.
After the mouse was removed and the CPU allowed to cool down, it rebooted without a hitch. The mystery remains: how did a mouse get into the case? There was no hole anywhere near big enough anywhere in the metalwork that I could see.
We once purchased a server with Windows Server pre-installed. By the time it was delivered plans had changed and it was reformatted to run Linux. Three months later it broke down. Looked like a simple failed PSU to me, but it was still on warranty, so we called for an engineer.
Some hours later he told us it was OK again and left at a trot. We were surprised that he hadn't left it powered up, and dispatched someone to the machine room to boot it. But it was booted ...
and once again running Windows! The muppet thought GRUB was a hardware error, so he reformatted the disk array and reinstalled Windows. Thank heaven for backups, and that it wasn't desperately mission-critical.
You don't have to outsource to India to get muppets.
My first thought.
My second thought was whether things are any better elsewhere?
My third thought is to wonder whether, after a few months, they won't have learned something from the experience and done at least some of what's necessary to make sure that the lightning strikes somewhere else next time.
I suspect it's a multi-level FUBAR. Someone made a small not very serious error. Someone else got the patch-up for that wrong, and made the hole bigger until a chunk of masonry fell into it. And then someone carried on digging even though he *really* should have stopped, and brought the entire building down. "When you're in a hole, stop digging" is good advice, but these guys seem not to have known a hole when they saw one.
The person who really needs to be shot isn't any of the sods on the ground. It's the person who decided it was OK to get rid of all the experienced staff in the first place. Preferably also everyone upwards from him to the CEO, since it was mission-critical, and to encourage the others. Before we get to find out how much worse it might have been, by experiencing it.
Spot on. It's nothing to do with offshore staff per se. It's to do with replacing long-term staff with proven experience, by cheaper staff with no experience. Staff who quite possibly lied on their CV to get a job, or paid someone else to sit their exam.
It could be worse. I wonder if they're offshoring the control rooms for nuke power stations yet?
Re: Investment in the backbone?
Fired is inadequate.
When an engineer wilfully neglects to design to the accepted standards of his profession and people are killed by the collapse of the resulting structure, he's likely to find himself facing manslaughter charges.
The manager responsible for this almighty F***-up ought to be personally liable for the losses. All of them. Bankruptcy is the least that should happen to him. Jail would be better.
Many Roman bridges and even buildings are still standing after two millennia of use including one of total neglect. This probably has something to do with the Roman approach to quality control. The architect was required to stand under the arches as the scaffolding was removed.
Re: iSeries? @Spartacus
More accurately the equivalent of having your car serviced by a work experience sociology student who's there only because his benefit will be cut if he isn't, rather than an engineer of twenty years' experience who loves cars (who isn't there because the garage "let him go" to save a few pennies in the short term).
And I've just realized that there may be an explanation for the biggest wrong number in physics, that discrepancy of the order of 10^113 (give or take a few) between the observed energy density of the vacuum and the values predicted by all Theories of Everything to date.
The universe is the most successful Ponzi scheme of all time, but no-one has rumbled it yet.
Oh dear ... I predict the End of Everything starts about now.
Also Google "Administratium", which doubtless has a Quantum Bogodynamic explanation awaiting discovery
as in Northern Crock?
The broken bank.
Google "Quantum Bogodynamics"
there's also a joke paper from CERN out there about Quantum Indeterminacy applied to banking. One observation is that it's impossible to know the precise ownership and value of anything at the same time. The value is certain at the time an entity is sold, but the ownership is highly indeterminate. On the other hand if it's been in the family for generations the ownership is near-certain, but no-one has much idea what it's worth.
Re: @Graham Marsden
Santander make it very easy to put your money in. They are total bastards when it comes to getting it out again. Santander is a Spanish bank. Is that the sound of a penny dropping? Good luck!
Re: Single sourced
I've always wondered why nature gave us redundant kidneys, very considerable distributed redundancy in our livers and brains, but only the one heart.
If TS ever really hits TF, paper cash will become worthless. You'd need gold for large sums, silver for smaller ones, and I'd hazard a guess that fifty quid in pound coins would be worth more than fifty quid in tenners because small change would still be needed.
Re: Closing accounts
Have you ever been asked by HMRC for an actual nil-interest certificate on an account with a negligible balance? If they did ask me, I'd say that the account had a negligible balance and that I'd happily surrender all money in the account to HMG if they could only accomplish what I could not -- persuade the bank to close the accursed thing.
I've never been asked for a certificate at all. I ask for or print, and keep, the ones representing significant sums of interest just in case, but I reckon even HMRC has better things to do with its time than ask for proof that I really did have all the tax I owe deducted from 55p or 5p or 0p of interest.
Re: One tactic
I know, but the HSBC - Midland merger was over 20 years ago. Also prior to that HSBC had no UK retail banking operation, so I doubt there was a merger/ transfer of IT systems at the customer end, just a takeover of the management end.
I have a credit card account which I cannot close (translation: the bank does not know how to close) because some ****up means they owe me 22p. I've on various occasions asked them to donate the 22p to charity, to send me a cheque, to just lose it in their error account .... They always say they've fixed it. Three months later I get another statement telling me that they still owe me 22p. I no longer have a card, so I can't go out and buy something and then get the balance to zero second time around.
The problem was latent for a number of years after I thought I'd closed the account, and only surfaced when a UK bank bought all the customers of my onetime credit card company, which was closing down its UK operations. I guess that once a reverse-indebtedness of 22p was transferred from one database to another, there was/is no programmed mechanism for sorting it out.
I guess that on the bright side, my 22p means that the bank has contributed getting on for fifty times that amount to the Royal Mail, which needs all the help it can get! (Wonder what happens when I pop my clogs ... will they still be sending statements to "Executor of yours truly, deceased" in the year 2200? Or perhaps inflation will finally cure the problem when the pound eventually becomes the smallest unit of UK currency?)
Re: "how many of those customers are sufficiently pissed off to move?"
Ever since I had savings, I always took the attitude that I should never keep them with any institution from which I had borrowed money or even with which I had a credit agreement. If you look at the T&Cs, they reserve the right to help themselves to your savings ("offset") if you're deemed to be in breach of your borrowing agreement. Who knows what definition of "in breach" is programmed into their computers? I wonder if even the banks do?
I felt this particularly strongly while I had a mortgage. If things had ever gone tits-up outside my control, the bank could not have seized my savings without first getting a court order (which in practice would have meant bankruptcy proceedings).
You're probably right about moving your account being pointless. A bunch of tenners cached somewhere in your home is probably a better idea.
Re: One tactic
So that's HSBC or Barclays, then?
Re: Single sourced
Good advice, if you have savings.
Trouble is that two overdrafts cost more than one overdraft, and there are an awful lot of people living one unexpected bill away from bankruptcy.
Also if one bank suffered a CAUFU (which this was not), the effects would be systemic and (possibly) the whole UK banking system would be forced to a stop. Indeed, the whole global banking system might be forced to a stop.
Too big to fail
It's another manifestation of the too-big-to-fail problem. Indeed if a majority of RBS's customers jump ship, then we've just gone from the big five to the big four and are just four more f**k-ups away from the big Zero. Gulp.
The answer might be for RBS to set up a new wholly-owned subsidiary with brand new state-of-the-art IT systems. Keep the fact that it is wholly-owned as quiet as possible. Milk existing customers for all they are worth (what's new?) while hoping that they jump ship to the new bank, along with disgruntled customers of their competitors.
Retailers and consumer-product manufacturers are forever doing this. Think PepsiCo just makes cola? Probably not, but can you name all their brands?
I read that as an expired cat!
Have you ever seen the damage that mice will do to the wiring under the floor of the server room?
"You know all those staff you insisted we let go last year ...."