1554 posts • joined Wednesday 10th June 2009 11:28 GMT
Re: Only use Epson if...
With HP, keep the printer in standby even if you won't be using it again for weeks. HP printers wake up to squirt a tiny amount of ink through the nozzles every 24 hours, to make sure that the heads don't block up. It really is a tiny amount of ink - you won't notice unless the printer is left unused for years.
Re: "can do 50 full drive writes a day for five years"
Treat warranties from new companies with more than a pinch of salt. They have nothing to lose by lying, sorry, being over-confident.
If the product doesn't wear out prematurely they get to stay in business and their happy customers come back for more. If they get more warranty returns than they can afford three or five years hence, they file for bankruptcy. Either way everyone has had a living for two or three years (and the fat cats at the top might well retire for life, if their living was a seven-figure salary).
BTW isn't SLC good for a million cycles these days?
Just as long as there isn't anywhere that water or air can get trapped until it's exerting nearly an atmosphere of pressure, followed by a tiny pop or crack that's nevertheless quite fatal.
The best places to dry out wet electronics are an airing cupboard, or a machine room with air-con. In both cases, leave it for a good few days.
Rule Zero: get the battery out of the device as soon as possible after it gets wet.
Water won't do much harm to an un-energised device even if it's wet for days. Electrolysis, on the other hand, can corrode it to death within minutes, sometimes less.
Re: You arent mentioning...
That's if you do a full install. (And I don't have experience of the low-end models to know if there's any alternative.) For the high-end Officejets, you can choose which bits you install, and I agree it's best to go minimalist.
Funny, I'd say cheap laser printers, and most especially cheap colour laser printers, are for mugs.
My reasons for saying this are the high-end HP Officejet K550, 5400 and 8000 printers. I don't yet have enough data to say whether the latest HP Officejet 8100 continues the fine tradition of printing fast, well, for tens of thousands of pages, and at a price per page lower than a cheap mono laser, let alone colour.
NB the running cost of all printers in this survey. An extra 2p/page on just 5000 pages is a hundred quid. After that many pages you'd have been better off spending more on the printer and less on the ink. (HP OJ Pro 8600: about 120 quid.)
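The break-even arithmetic is worth a quick sketch. All the figures below are illustrative assumptions (a hypothetical cheap laser at 6p/page versus a pricier inkjet at 4p/page), not quoted prices:

```python
# Total cost of ownership sketch: purchase price plus consumables.
# All figures are illustrative assumptions, not quotes.
def total_cost(printer_price, cost_per_page, pages):
    return printer_price + cost_per_page * pages

pages = 5000
cheap_laser = total_cost(60.0, 0.06, pages)      # 60 quid printer, 6p/page
pricier_inkjet = total_cost(120.0, 0.04, pages)  # 120 quid printer, 4p/page
print(cheap_laser, pricier_inkjet)  # the 2p/page gap is worth 100 quid here
```

At 5000 pages the dearer printer is already 40 quid ahead, and the gap only widens from there.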
The manufacturer doesn't know if the motor will ignite *reliably* under those conditions. One successful test-fire won't say much about reliability. It might be a 10% lucky shot.
Methinks you need to fire at least four. Maybe the manufacturer will supply the motors for free if you return the data?
Re: Good for them
How many nuclear submarines are already dissolving at the bottom of the ocean? Some with a full complement of ICBM warheads? Quite a few that are known to have been lost, and probably rather more that still haven't been disclosed.
Actually there's no nuclear explosion risk, and probably very little radiation risk. There's next to no circulation between the deep ocean and the surface, and a helluva lot of water to dilute the radioactives in. I doubt that this Chinese project adds significantly to that risk, even should the worst happen. Hasn't the worst already happened at least twice, at Chernobyl and in Japan? With less actual harm than the normal operation of coal-fired power stations, even ignoring their CO2 output?
The oceans are salty because they contain most of the sodium that's been released from rock over three billion years of plate tectonics. They're naturally mildly radioactive, for the same reason with respect to Uranium.
LaTeX + LyX is vastly superior to any word processor. It's a WYSIWYM editor: What You See Is What You Mean. You get to see a rendition of the markup as you type and edit, but it's an approximation to the better rendition you'll get when you print it. You don't have to type markup language control sequences. It is vastly superior to Word, where inserting another paragraph (or even another word) on page 2 can wreak havoc with the layout of every subsequent page. The longer the document, the more mathematics, or the more inserts, the worse Word gets.
LyX uses LaTeX behind the scenes.
LyX is free and available for Linux and Windows alike. Try it.
Re: If only a quality, user friendly Linux distro was available...
Photoshop - GIMP
Acrobat reader - Evince. PDF creation - print to PDF via CUPS (or on Windows, PrimoPDF) from your content creation program of choice.
Office - OpenOffice or LibreOffice
The alternative products are similarly capable, and the ones I've mentioned above are all available to run on Windows as well, for zero cost, so you don't even have to go to Linux.
Oh, but the interface is different. So it's OK if Microsoft completely changes the interface from Office 2003 to 2007 and again for 2010, but you rule out any other software that's not bug-for-bug compatible with your favoured expensive product? If you were arguing the value of the UI you already know well I'd tend to agree, but when it's ripped out from under you at the next upgrade, why not just say no and go to Open-source alternatives?
I know I'm wasting my bytes, though. For some folks it's better the devil I know, than the angel I don't.
Do you know Acrobat can create pdf files that Acrobat Reader can't print, but which Evince has no difficulty with? (Both on the same Windows system).
Re: Some plusses:
You could always buy 8cm mini-CDR and even mini-DVD-RW disks. Most drives can hack them. Smaller capacity, of course.
Slower? Hardly. 16x DVD is 22 MB/s. Most USB thumb drives are slower, at least when writing. The Kingston DataTraveler HyperX 16GB does 16 MB/s write, 25 MB/s read, and that's a premium model. Cheap ones are often around 5 MB/s.
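To put numbers on that, here's a quick sketch of the time to write a full single-layer DVD's worth of data at the rates mentioned above (all illustrative figures):

```python
# Time to write ~4.4 GiB (a full single-layer DVD) at a sustained rate in MB/s.
def write_time_seconds(size_mb, rate_mb_per_s):
    return size_mb / rate_mb_per_s

dvd_size_mb = 4.4 * 1024  # single-layer DVD: ~4.7 GB decimal, ~4.4 GiB
print(write_time_seconds(dvd_size_mb, 22))  # 16x DVD burner: ~3.5 minutes
print(write_time_seconds(dvd_size_mb, 16))  # premium stick write: ~4.7 minutes
print(write_time_seconds(dvd_size_mb, 5))   # cheap stick write: ~15 minutes
```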
Thinner makes it intrinsically more fragile (wrt getting bent). Something I'm sure the manufacturers have thought long and hard about. Convince the punters to pay more for a less durable product. Yes. Oh YES.
If you want a random client to upgrade you to demigod, also carry around a USB to SATA adapter and a stand-alone Linux distro with ddrescue on it. Then when you hear about someone who has lost data on a disk drive that's making clicking noises, ddrescue it. It doesn't always work, some drives die too quickly, but you sure gain a believer when it does work!
DVD-W still has a niche
One writeable DVD costs less than 20p and holds nearly 5GB. Cheap enough to give away freely and/or stick in the post. 8GB USB sticks are at least twenty times more expensive.
Yes, you can send 4.6GB by network these days, and without pain on a LAN. However, at DSL upload speed? OK, it might complete overnight and it probably will beat the post, but that's not very convenient. Especially if there's more than one person you need to send a copy to.
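The overnight claim is easy to check. Assuming a 1 Mbit/s ADSL uplink (a typical figure, though yours may differ):

```python
# Hours to upload a file of size_gb gigabytes over an uplink in Mbit/s.
# The 1 Mbit/s figure is an assumption, typical of ADSL of the era.
def upload_hours(size_gb, uplink_mbit_per_s):
    bits = size_gb * 1e9 * 8
    return bits / (uplink_mbit_per_s * 1e6) / 3600

print(round(upload_hours(4.6, 1.0), 1))  # ~10.2 hours: overnight, just about
```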
Despite this, I don't object to computers lacking a DVDRW drive. I've got a dinky little USB DVDRW drive that powers itself off a single USB socket. If you don't have access to one, there's something wrong with you or your employer.
Re: So precisely do we benefit from discovering higgs?
I'm sure that there were cavemen asking the same question about discovering fire (or rather how it could be moved from one place to another and kept going).
Re: Apples with Apples?
I don't believe tablets are replacing PCs. They're being added as well as PCs. PCs for work that requires serious inputting. Tablets for leisure (and some work) that is almost all output. Also because the tablet is a new device class, there's a huge sales surge going on at present. Just like there was once a huge sales surge for the now moribund netbook format. It lasted until everyone who wanted one had got one. In the fairly near future, everyone who wants an iPad will have got one. Microsoft will probably arrive in competition just as the market is saturated.
Re: "Lets face it, it is rather retro kernel design"
Monolithic kernels may be said to have passed the test of time ... at least if they're put together as well and as flexibly as Linux is. Point me at some other kernel architecture that works half as well. Yes, I'm aware of all the academic arguments in favour of microkernels. On paper, they are quite convincing, but I won't be convinced until I see one working well, across a range of workloads and system types, in the real world.
Personally I think Linux has a lot in common with microkernels. Its software architecture is well modularised. New subsystems are easily integrated and existing ones re-engineered. It's just that the binding is done at kernel build time, not at runtime. It's a bit like the C++ versus scripting-language argument. C++ is less easy to develop in, but more efficient. A monolithic kernel is likewise less easy to develop for, but more efficient in production. A kernel is somewhere that efficiency DOES matter.
I have a big problem with neophiles. They think that "old" automatically means bad, without any actual comparison of the relative merits of the old and new products. They don't like "tried and tested and nearly unbreakable". They are also happy to disregard the vast number of man-hours that are wasted when a company like Microsoft replaces (say) the XP UI with the Windows 7 UI, and the Office 2003 UI with the Office 2007 UI. Sure, it may be only a couple of hours of lost productivity per user, but multiply that by maybe a billion users. Personally I think it's much higher. There's no accounting for the cost of the mistakes that are made while someone is thinking about the bloody new interface rather than the work he's trying to accomplish within it. Somewhere out there, I'm sure that the change to Windows 7 has been the triggering event that destroyed marriages, killed companies, and caused deaths (by heart attack, probably). The right way to go is incremental improvement. Slip in the new features in a completely non-intrusive way, so that if you don't yet need the new stuff you never notice that it's arrived. That's what the Linux kernel has been doing very successfully for at least the last decade. (Unlike Gnome developers ... sorry!)
And almost as soon as we get used to Windows 7, Microsoft decides to Metro-ize us. That's a good neologism, by the way. To Metroize. To pull the rug out from underneath a billion users, in a misguided and doomed attempt to increase corporate revenue. To FUBAR by deliberation rather than by accident.
Re: War of the Worlds deserves a place in history
Cybermen don't qualify. They're not aliens, they're technologically-created zombies.
Daleks really should be on the list, though. Any Dr. Who saves Earth from the Daleks sequence beats "Independence Day" on every front, including plot intelligence and plot believability.
It's the everyday world that's complicated! The standard model is really quite simple, but obviously not complete. There may be an even more simple underlying theory that so far we have hardly glimpsed.
Lasers and some forgotten alternatives
We rely on Lasers for optical disk devices and for data-communications. The science of Lasers is definitely simple quantum physics. If someone had experimentally discovered a lasing medium in the 19th century, quantum theory would have had to follow along rapidly. As it was, Einstein got the theory right decades before anyone made a laser.
You can have fun imagining a future where computers still run on purely classical vacuum tube technology. (Yes, micron-scale vacuum tubes are possible, as is integrated circuitry containing millions of them!) Or, you could try having the Babylonians or Romans discover pneumatic computers (clock speeds of 100kHz, logic element size a few mm - Rolls-Royce did actually once build one to embed in the hot end of a jet engine). If Babbage had known about pneumatics, today's world would have been utterly different.
There is absolutely no way to determine if the universe is really real, or is just a perfect simulation of its physical laws and an initial state running on a computer within a universe with completely different physical laws. This is pretty much by definition. The perfect virtuality hypothesis also has zero predictive value, so we apply Occam's razor to it.
Note "Perfect". The most dangerous thing physicists could do is to find the bugs in an *imperfect* virtuality, and then tickle them. (There's a variant which says this has already happened many times over).
There's a scarier possibility, that it's our brains and sensoria that are being simulated by distant descendants of real beings much like ourselves. The simulation is running in their university department of pre-digital history. Sometime soon a grad student is going to realize that the simulation has progressed past the dawn of the information age, and is therefore pointless, so he'll stop the run.
(Ever had the feeling that your life has suffered a subtle continuity error, usually simultaneous with the desire never to drink so much again? Now you know why. Both the continuity error and the getting drunk. One's the bug, the other's the fix).
Re: Be careful what you wish for
Science advances in two ways. One is a prediction from a theory, later confirmed by experiment. The Higgs boson is in this class. The other is an observation of something not predicted by any theory, or which contradicts the generally accepted theory.
So assuming there are no more predictions that everyone wants to confirm, CERN should start looking for the unexpected and currently inexplicable. (I think there are also many more tentative theories making predictions that CERN will in due course test).
Eventually, if physics needs particles at higher energies than CERN can provide, a lot of new technologies will have to be developed. A usefully larger circular accelerator would be impossibly large and impossibly expensive. It'll have to be a linear accelerator with operational parameters way beyond anything we know how to build today.
The truth is that most jobs are boring, unless you can really enjoy the everyday detail. Here are some others. Car salesman. Estate agent. Hairdresser. Accountant. Garbage operative supervisor. Bus driver. Solicitor. Hotel manager. Starting to get the picture? Prefer any of those to IT? (the whole job, not just the over-inflated salary that some of them command).
The thing that's sick with society is celebrity culture, the whole idea that everyone should ape the glamorous, the rich and famous, the fashionable. Mostly what it breeds is dissatisfaction, unhappiness, low self-esteem, and a failure to realise that the reward of helping other people is not solely that it gets you paid at the end of the month.
I've found a job that lets me spend a good part of my day solving puzzles (something I enjoy). It could be a lot worse. Also it's my job ... not my life.
Re: So, basically a land hurricane?
More likely, a squall line http://en.wikipedia.org/wiki/Squall_line
Hurricanes cannot form over land. They are driven by hot moist air rising from an ocean surface. They also take several days to get going, so you get at least twelve hours notice that a hurricane is headed your way. Usually longer.
Squall lines give very little advance notice. I've heard tell of a transition from a hot summer day to roofs being blown off an hour later.
But how do you know?
How do you know that by overclocking your system, you haven't created conditions that cause, say 34387.00*1.01 to compute as 79231.48, that sum being the decimal representation of something that Intel knows is on the critical timing path of the FPU?
Next thing you know, all the grade 3 techs have been paid over twice their usual salary rather than the scheduled 1% pay rise, and the FD wants to see you NOW!
But even if you're just number-crunching a model that you know will unconditionally iterate to correctness (in the mathematical sense), you still can't be sure. Maybe the FP error was in the calculation of the residual error, and the iteration is terminated before the answer is right? Let's hope your Ph.D. doesn't depend on that result.
Or maybe it's not an FP error, it's in one of those rarely-used instructions that only OS kernels ever use (which may be where MS is coming from, though I have my doubts). Consequences: corrupt database? corrupt filesystem? deadlocked system? security compromise?
I won't overclock a CPU for work, period. (For fun, OK.) Intel knows the timing-critical logic paths in their billion-transistor chip. I'm sure that if a significant fraction of the dies tested OK at 5% faster on the critical paths, Intel would sell them specified for running 5% faster. Fact: they don't. Because Intel knows this chip doesn't work 100% reliably above that speed. Maybe you'll never crash into the invisible wall, but logically it must be there.
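This isn't how anyone validates silicon, of course, but a toy sketch shows the kind of redundant cross-check you'd need to catch a silent arithmetic fault, and why you'd rather not need it at all:

```python
import math

# Toy cross-check: compute the same reduction two independent ways and
# compare. On healthy hardware these agree to within rounding error; a
# transient hardware fault in one pass shows up as a gross mismatch.
def checked_sum(values):
    naive = 0.0
    for v in values:
        naive += v
    accurate = math.fsum(values)  # correctly-rounded summation
    if abs(naive - accurate) > 1e-6 * max(1.0, abs(accurate)):
        raise RuntimeError("arithmetic mismatch - suspect the hardware")
    return accurate

print(checked_sum([34387.00 * 1.01, -100.0, 250.0]))
```

Even this only doubles the work for one operation; protecting a whole workload that way is impractical, which is rather the point.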
Re: The real problem
Have you ever tried running Win 98 SE under VMware on state-of-the-art hardware?
The OS was / is crap, but it sure boots impressively fast!
OT - skyscrapers
You might be surprised to know that most tall buildings have only one support, right in the middle. Everything else is cantilevered off this core. These days, the core has to be designed to be proof against airliners colliding with it and large (I don't know how large) explosions.
Re: Bullpat @JDX
I don't believe that Intel or any other CPU manufacturer would knowingly ship CPUs where getting the right result from any particular operation was by design and testing only probable rather than certain.
Of course, there's a thermodynamically large set of states and they cannot test all of them. They do, however, have access to the CPU simulator, and the ability to probe the actual signals at the surface of the die to validate and calibrate it. They therefore know the speed-limiting transitions, and can design their tests to exercise those in particular. If they don't sell a faster version of a particular die, it is fairly likely that they *know* that for this chip at that speed, at the maximum operating temperature and worst in-spec power supply, there is at least one instruction sequence that is very likely to fail.
Overclocking a game is one thing. Overclocking a financial, scientific or engineering model is quite another. Don't. It's more important that the results are right and the system reliable, than getting an extra few percent of speed.
Re: Laptops more reliable than desktops?
A laptop has a built-in UPS (the battery and charger) rather than crashing if the mains supply glitches. A laptop often has a slower CPU than a desktop, and a slower cooler hard drive. These may tip the balance, depending on what exactly is being measured.
The desktops last longer, but that's often because the laptop's RAM can't be upgraded enough to make it worth keeping, or because it's too expensive or too much hard work to replace its keyboard after something gets spilled into it. Laptop displays are also harder to fix (desktop: throw away the monitor and plug in another one). And of course, desktops don't get dropped onto a hard surface nearly so often.
Re: And as for a Unix Server
I've seen that sort of reliability from desktop PCs crunching numbers. No unscheduled downtime other than those caused by the electricity supply, up until the day that it was decided that a newer system would make better use of the electricity. They weren't even required to be quite that reliable, they just were!
Running Linux, of course. And I'm sure that your IBM's disk subsystem was taking a much greater pounding.
Overclocked vs. Flat-out
An overclocked CPU is a CPU running outside its specification. It's been tested by the manufacturer (who knows the weakest spots w.r.t. timing) at a particular speed and may well have been found wanting at a higher speed. It's blindingly obvious that an overclocked CPU may not be working 100% correctly, and can only be recommended to someone who cares neither about correct results nor about reliability. A gamer, maybe.
Flat-out, on the other hand, should not reduce reliability. With modern CPUs there is a feedback loop to slow down the CPU when the chip temperature limit is reached. I work in an environment where desktop PCs are crunching numbers 24x7 most days of the year, and our desktop systems don't seem noticeably unreliable. By far the commonest failure is a PSU fail, followed by a hard disk fail. Failed CPUs are as rare as hen's teeth and failed MoBos only slightly commoner. In the old (Athlon) days when a CPU didn't slow down and could actually overheat until the heat crashed it, failed CPUs were also as rare as hen's teeth. I'd vacuum the heatsink, replace the fan, and the system would happily reboot and last as long as any other. Too high a temperature slows down the logic gates in the CPU, until it's the equivalent of a CPU that's overclocked one notch too far, and crashes.
Oh yes, and always run memtest overnight on a machine that's randomly unreliable. Low-incidence errors in RAM will do that. It's why servers (and serious engineering workstations) have ECC RAM. If memtest crashes rather than reporting errors, suspect your power supply first (you may or may not see the problem with a DVM).
Re: Its a kernel bug
Don't know the details here, but it's not impossible. Kernel documented to do X, actually does Y which is subtly different. Java is the only widespread app which does something noticeably bad as a consequence. Everything else "just works" much the same under X or Y.
Re: Not a sys admin but...
The greater FAIL was whoever connected and configured it in the first place. With the right jumper on the D25 (or D9) connectors, unplugging the connector would have been treated the same as a modem hangup, and that should have terminated the logged-in session if the software config had security in mind.
A PC wasn't working, looked like PSU fault. Took it back to my office to check and repair. Plugged in. Pressed power button. Very loud bang. Small amount of smoke. Lights went out. Oops.
Outside office, also no lights. OOPS.
100 yards down corridor, still no lights, lots of people peering out of their offices asking WTF. OOOOOPS.
Somehow, the PSU fault hadn't taken out the 13A fuse in the plug (yes, it should have been a 5A fuse, but it did have the mandatory safety-tested passed-by sticker). Not the 30A breaker on the circuit either. Nor the 100A breaker above that. No, when the electricians finally located the fault, it was a 180A fuse in a box high up on a wall that had probably last been looked at when it was installed in the 1920s.
It took a lot of phoning to source such a monster fuse and they paid for a motorcycle messenger to bring it down to London from Leeds. As far as I know it's still there. Too much effort to schedule replacing it with something modern before the next time (in the 2060s?).
The PC was fine after a PSU transplant.
You don't need complicated things to make a FUBAR. The simple things are also out to get you.
I could also tell you about the exploding substation and the need to do a tap-dance to avoid getting burned by globules of molten copper pouring out under the door, but that's got no IT angle at all.
I was called in to fix a workstation in an old Victorian basement bit of the site. Quad 140W Opteron thingy. Expected to have a certain amount of fun sourcing a beefy enough power supply. But the lights were on and fans were whirring.
When I took the cover off, a very strange burned organic smell assaulted my nostrils. A few seconds later I found a dead mouse with its head wedged between the fan blades and one of the heatsinks. I hope the poor wee thing's neck was broken in an instant. I fear otherwise.
After the mouse was removed and the CPU allowed to cool down, it rebooted without a hitch. The mystery remains: how did a mouse get into the case? There was no hole anywhere near big enough in the metalwork that I could see.
We once purchased a server with Windows Server pre-installed. By the time it was delivered plans had changed and it was reformatted to run Linux. Three months later it broke down. Looked like a simple failed PSU to me, but it was still on warranty, so we called for an engineer.
Some hours later he told us it was OK again and left at a trot. We were surprised that he hadn't left it powered up, and dispatched someone to the machine room to boot it. But it was booted ...
and once again running Windows! The muppet thought GRUB was a hardware error, so he reformatted the disk array and reinstalled Windows. Thank heaven for backups, and that it wasn't desperately mission-critical.
You don't have to outsource to India to get muppets.
My first thought.
My second thought was whether things are any better elsewhere?
My third thought is whether, after a few months, they'll have learned something from the experience and done at least some of what's necessary to make sure the lightning strikes somewhere else next time.
I suspect it's a multi-level FUBAR. Someone made a small not very serious error. Someone else got the patch-up for that wrong, and made the hole bigger until a chunk of masonry fell into it. And then someone carried on digging even though he *really* should have stopped, and brought the entire building down. "When you're in a hole, stop digging" is good advice, but these guys seem not to have known a hole when they saw one.
The person who really needs to be shot isn't any of the sods on the ground. It's the person who decided it was OK to get rid of all the experienced staff in the first place. Preferably also everyone upwards from him to the CEO, since it was mission-critical, and to encourage the others. Before we get to find out how much worse it might have been, by experiencing it.
Spot on. It's nothing to do with offshore staff per se. It's to do with replacing long-term staff with proven experience, by cheaper staff with no experience. Staff who quite possibly lied on their CV to get a job, or paid someone else to sit their exam.
It could be worse. I wonder if they're offshoring the control rooms for nuke power stations yet?
Re: Investment in the backbone?
Fired is inadequate.
When an engineer wilfully neglects to design to the accepted standards of his profession and people are killed by the collapse of the resulting structure, he's likely to find himself facing manslaughter charges.
The manager responsible for this almighty F***-up ought to be personally liable for the losses. All of them. Bankruptcy is the least that should happen to him. Jail would be better.
Many Roman bridges and even buildings are still standing after two millennia of use including one of total neglect. This probably has something to do with the Roman approach to quality control. The architect was required to stand under the arches as the scaffolding was removed.
Re: iSeries? @Spartacus
More accurately the equivalent of having your car serviced by a work experience sociology student who's there only because his benefit will be cut if he isn't, rather than an engineer of twenty years' experience who loves cars (who isn't there because the garage "let him go" to save a few pennies in the short term).
And I've just realized that there may be an explanation for the biggest wrong number in physics, that discrepancy of the order of 10^113 (give or take a few) between the observed energy density of the vacuum and the values predicted by all Theories of Everything to date.
The universe is the most successful Ponzi scheme of all time, but no-one has rumbled it yet.
Oh dear ... I predict the End of Everything starts about n
Also Google "Administratium", which doubtless has a Quantum Bogodynamic explanation awaiting discovery
as in Northern Crock?
The broken bank.
Google "Quantum Bogodynamics"
there's also a joke paper from CERN out there about Quantum Indeterminacy applied to banking. One observation is that it's impossible to know the precise ownership and value of anything at the same time. The value is certain at the time an entity is sold, but the ownership is highly indeterminate. On the other hand if it's been in the family for generations the ownership is near-certain, but no-one has much idea what it's worth.