2142 posts • joined 10 Jun 2009
You mean, like Firefox does?
The principle is caveat emptor. Check out the credentials of the person or org offering the plug-in. Find some satisfied users. It's no different to any other software. Indeed, the principle was well-known to the Romans (hence the latin phrase), long before software existed. In "Alice in Wonderland" it was a bottle labelled "Drink Me" which Alice unwisely took at face value.
Consider also "A fool and his money are soon parted", and "nothing is proof against an exceedingly great fool".
Blocking ads is such a morally grey area, given the Internet can't run for free, but the principle should encourage ad pushers to use less-annoying advertisements which people wouldn't bother to block in the first place.
No, it's not grey. It's just you exercising your freedom of choice. If others doing the same causes certain organisations to cease to be profitable, they'll have to find another business model or cease trading. That's commercial life. Newspapers, other than freesheets, have now almost all gone subscription-only for their online editions, which is fine by me. I'd pay, if I needed access to their news to any significant extent. (I pay for Linux Weekly News).
Google may be planning to take away that particular freedom of choice from Chrome users. If I'd ever left Firefox, I'd be returning soon in response to this news.
Has anyone ever considered an ad-blocker that classified adverts, and allowed them through if they met the user's criteria for minimal annoyance? Or the same done manually (better - maybe?), funded by the responsible advertisers who don't want to be blocked only because of the irresponsible advertisers? Or an advert-server which guarantees no adverts that don't pass minimal-annoyance criteria?
Probably not, because most www users don't block ads and probably don't know that they can.
Re: Can someone explain the science
It was a stony meteorite rather than a nickel-iron one. Stone is brittle and usually has internal weaknesses. So when it was subjected to thousands of G, and when its exterior was abruptly made very hot, it broke up and then (with a massively increased surface area) "exploded" (meaning lots of pieces in close proximity deposited most of their kinetic energy into the same smallish volume of air).
An iron meteorite of the same mass would probably have held together and made a crater on the surface, if it were big enough not to be completely burned up in transit. (Iron burns; most stone doesn't, because it's already as oxidized as it can get.)
An interesting statistic
The predictable human death rate from meteor impact is at least 60 people per annum.
Surprised? Doubtful? That's because I'm using a conservative average, but running it back over hundreds of millions of years.
Let's assume just one Chicxulub-scale impact every 100 million years. It would kill most if not all of the human race if it happened today. 6 billion deaths / 100 million years = 60 per annum.
Aren't statistics wonderful?
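The back-of-envelope arithmetic above, as a sketch (figures as assumed in the comment: 6 billion people, one Chicxulub-scale impact per 100 million years):

```python
# Expected annual deaths from a once-per-100-million-year extinction event.
# Assumptions (from the comment above): the impact kills the whole human
# race, taken as 6 billion people.
population = 6_000_000_000
impact_interval_years = 100_000_000

deaths_per_annum = population / impact_interval_years
print(deaths_per_annum)  # 60.0
```

The trick, of course, is that averaging a catastrophic one-off over deep time produces a steady-sounding annual figure.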
Re: Instant climate change
The cause of the Permian-Triassic extinction is not widely believed to have been an impact event. It was coincident with the eruption of the Siberian Traps, probably the most colossal volcanic eruption since life evolved. It could hardly have caused less than a global ecological catastrophe, given the gases released into the atmosphere.
Of course it's possible that an impact event provided the final straw for a seriously damaged ecosystem. This is the probable fate of the dinosaurs in the more recent mass extinction. The Chicxulub impact occurred at the same time as the eruption of the Deccan Traps, another massive volcanic outpouring, though considerably smaller than the Siberian Traps.
Finally, at least one sort of catastrophic event exists that would leave no direct geological trace at a remove of hundreds of My: exposure of the Earth to a gamma-ray burst in a "nearby" galaxy. This would ionise some fraction of the N2 and O2 molecules over half the planet's atmosphere, followed by recombination into nitrogen oxides. The ozone layer would be almost instantly gone, and decades of nitric acid-laden rain would follow. Land-based life would suffer worst, as almost all plant life would be destroyed. You can of course posit any quantity of NOx creation depending on the gamma-ray flux.
Re: Instant climate change
Geologists have something of a track record of being right!
Back in Victorian days, they became quite certain that the earth was billions of years old, by measuring sedimentary rock strata thicknesses and present-day deposition rates. Physicists, however, were equally certain that the Sun could not be more than tens of millions of years old, because no chemical reaction could fuel it for any longer. The Geologists insisted that if chemistry couldn't, something else must....
...and in due course, nuclear fusion was discovered.
Re: "pointless subject like French"
Music and programming (and pure mathematics) have far more in common than people who aren't at all musical can ever realize. I've never known a university maths department that doesn't have some truly gifted amateur musicians, and a random collection of upper-quartile programmers won't be far behind. Something like they employ the same parts of our brains to ends that are superficially very different, and deep down not at all so.
I've occasionally thought that a musical score is the machine-code, and learning to perform that music amounts to reverse-compiling it inside one's head, until one has ascended back to the high-level abstractions more like those that the composer started from inside his head.
Re: Kids not programming? Here's why...
Once again: Python is a free download, works the same on Windows or Linux or an RPi. So it's not availability of a simple interpreted language at fault (one FAR better structured than BASIC!).
Either the kids don't want to drink, or there are no teachers to lead them to the water.
Re: Big languages with big libraries
The trouble is that most of the easy codes have already been written, and most of the need for good algorithms has gone away with GHz CPUs, if you aren't processing large volumes of data.
You can still have fun coding better algorithms in science and other disciplines, but you tend to need a physics, chemistry, engineering, bioscience, geography or geology degree (to name a few!) to understand the problems before you can start addressing them with code. In fact I might advise a keen coder not to study Comp Sci at all.
Re: So fix it!
Rubbish. Python interpreter:
>>> for i in range(1, 10):
...     print(i)
An infinite loop is of course >>> while True: but because today's systems are so fast, it's not such a useful thing to demonstrate these days - you can't see anything happening except a screenful of identical lines appearing.
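If you do want a loop the kids can actually watch running, one trick is to slow it down and number the iterations; a minimal sketch:

```python
import itertools
import time

# Slow the loop right down and number each line, so something visibly
# happens and the output isn't just identical lines.
for n in itertools.count(1):
    print(f"iteration {n}")
    time.sleep(0.5)
    if n >= 3:   # drop this guard for a genuinely infinite loop (Ctrl-C stops it)
        break
```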
Re: Old Manuals
But seriously, CompSci students can't write programs? Could be that language compilers are not so readily available, or are not so cheap these days.
No. Python is available for free on Windows and Linux systems alike. http://www.python.org/download . Python is a perfect first language: well-structured, lots of libraries, and interpreted.
Most people are no longer interested. Sad but true. I don't feel it's just programming; rather, that the whole state school system is broken, and now exists to crush all inquisitiveness and initiative out of kids so that they can become drooling compliant consumers. Exams no longer require thought or understanding, just regurgitation of memorized texts.
In the USA the universal photo-ID is a driving license, to the extent that some (most? all?) states issue non-drivers' driving licenses!
Re: Print-it yourself ID works.
I think in the case of opening a Building Society account, the account details on the possibly faked bank statement will be checked with the bank. If they deny that a person of that name has an account with that number at that address, the new account will be locked and your money imprisoned until you explain yourself (probably to the police - I think such a misrepresentation is fraud).
Of course you could fake someone else's ID and operate the account until the first statement is sent to the person whose ID you have purloined. Maybe you can find a person who has gone away (say to another country) without closing an account of little value, at an address where mail to former residents just gets thrown away instead of returned to sender. So the checks aren't 100%. 90% will suffice. Organised crime will be deterred to the extent of recruiting a poorly-paid army of real-ID low-lifes and small fake businesses to launder its dirty cash. Which in turn lets the authorities subject illegal money to some degree of taxation.
It's the database, dammit!
I have no objection even to being forced to carry photo-id at all times and produce it on demand to reasonable persons of authority in reasonable circumstances (which is the law in several European states). I'd get a photographic driving license if the government didn't charge me for trading in my non-photographic one (and again every ten years thereafter). I have a passport (which I don't carry in the UK only because it's too big) and an employer-issued photographic staff-ID card (which I do).
What I object strongly to is the centralized database of everything which the former Labour government planned to introduce on the back of its ID cards, to log and control every aspect of our lives, and to make a start on constructing a UK-sized panopticon prison camp. Just because we can, doesn't mean we should, and nowhere more so than in taking away the right to any sort of privacy at all (except, probably, for a privileged few rulers).
So keep the recordings of ID completely decentralized, and forgotten after a reasonable time. That's a time-tested way of making life somewhat harder for the criminals, that really doesn't harm the honest very much.
BTW provided you lie and say you are British, you *can* check into a hotel as Mr. and Mrs Smith. You can even do it legally. In the UK, there is no law saying you cannot call yourself whatever you want, provided it is not with criminal intent. Sleeping with someone else's spouse may be immoral, but is not a crime. It's also an example of a freedom that would be all but gone the moment a national ID database went online in every hotel.
I think "lightbulb" is the modern incarnation of "toaster".
A more sensibly net-connected gadget is your fridge. It could be registered with the National Grid as a device that can be commanded "off" at a moment of critically high demand. A fridge will stay cold enough for at least half an hour without power. You'd get the electricity the fridge uses at a discount. At a future date it might also be able to deliver an inventory of its contents to your phone while you are in Tesco.
Or your central heating system, which you would command on with your mobile phone maybe half an hour before you arrive home, so home is warmed up. More efficient than having a timer set for your usual home time, that you forget to cancel on the evenings when you'll be home late.
But if light bulbs could predict their likely failure within the next fortnight, that could be useful. A sane implementation would be to alert your house-controller, which you'd have programmed to mail you or text you alerts if you are that way inclined, or just to quietly add a new lightbulb to next week's shopping-reminder list. (Just like printer alerts at work, if you think about it).
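As a sketch of that design (every name here is hypothetical - this is just the shape of the idea, not any real house-controller API):

```python
# Devices report predicted failures to a single house controller, which
# quietly batches them onto a shopping list instead of nagging you.
# A real controller might also email or text, if you're that way inclined.
from dataclasses import dataclass, field

@dataclass
class HouseController:
    shopping_list: list = field(default_factory=list)

    def handle_alert(self, device: str, message: str) -> None:
        # Quiet default: just remember it for the weekly shop.
        self.shopping_list.append(f"{device}: {message}")

hc = HouseController()
hc.handle_alert("landing lightbulb", "predicted failure within 14 days")
print(hc.shopping_list)
```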
One controller per household?
Regardless of whether it's cheaper this way, there should be one and usually only one internet-connected master controller of thingies per household. The thingies should speak a restricted low-bandwidth protocol down a severely bandwidth-limited pipe. The mains wiring is the obvious conduit ... but give them a 100kbps channel not a 100Mbps channel!
Cars are already moving this way, with the lights and other peripheral devices bus-connected.
There's still botnet potential, but probably less so than the already extant hordes of broadband routers provide. And there would be just one thing to turn off when it malfunctioned, upon which the lights and other devices ought to revert to "dumb" behaviour.
Re: Well, if and when
Both offences cover a wide range of wrongdoing.
About the best you could say in defence of a murderer is "he only killed one man, and that man had made him very angry". (If causing death was neither intended nor a reasonably foreseeable consequence of the criminal's actions, it's not murder).
What language did they use
Seeing as no-one has yet taken the bait ...
Java or C#. Definitely not free.
Re: analogy fail? (@ Frumious Bandersnatch)
Surely people also have a strong financial interest in keeping their systems secure? Loss of confidential information to third parties, destruction of information by hostile third parties: both have costs associated. Possibly high costs. Possibly fatally high costs for a business.
I don't know enough about the technology to judge the practicalities, but the concept is surely not a fail on motivational grounds.
Re: Distros too
There's already a long-term-stable kernel out there: the one that Red Hat maintains, and that CentOS and Scientific Linux clone for free. Security and other fixes from more recent Linux kernels are back-ported to 2.6.18 (for RHEL 5), which will continue to be maintained for five years after the RHEL 7 release date. Same pattern for RHEL 6 (2.6.32-based).
Last time I looked (a good while ago) the 2.4 kernel was also still healthy, on volunteer life-support.
There are pros and cons of feature-frozen bug-fixed kernels, and the Linux community can have its cake AND eat it. The biggest drawback is probably new CPU designs and a frozen kernel that doesn't fully support them, or doesn't support them at all.
Re: Biiiiiiig Changes
An entertaining alternative to dissolving things in Ferric Chloride, is to employ an angry person and supply them with a lump hammer and gas torch.
That's stage 1: break into small pieces.
Domestic and probably commercial data security is quite adequately served by just drilling a couple of holes in the top of the HDA and pouring a cola drink through one hole until it comes out of the other. (Just think what it does to your teeth! )
But a really secure site worries that even a small piece of a disk platter might be placed under an electron microscope and decoded bit by bit. Therefore, do not allow off site until its very atoms have been dissociated by dissolution in a vat of ferric chloride.
Buy HP stock?
We techies understand this better than most. If all goes according to plan, HP is a must-own investment.
Except that Memristor tech is about the only bright spot for what's otherwise an uninspiring behemoth of a company. What are the chances of HP coming to grief before it can capitalize on its Memristor know-how, or of some eejit in HP management selling the immature technology for about a thousandth of what it will be worth long-term? Six years is a long time on the stock market.
(In my dreams, HP never bought Compaq, never divested its instruments division, maybe did divest its printers division, and it's the old HP now developing Memristor tech).
Re: Nice for the enterprise but what about for toys?
You mean other than 3TB USB3 hard disks?
I doubt they'll be devices in the first instance. I think with Memristor tech, the primary MRAM will be tightly integrated with the CPU and NIC, built onto the motherboard if not into the CPU assembly for bandwidth reasons.
(And we're going to need better than gigabit networking to the desktop! )
predicting the outcome of 5 more years of bleeding edge technological development seems fanciful.
Ever heard of Moore's law? Not quite up with the Higgs Boson, but still one of the most startlingly correct predictions of future progress.
Once CMOS and VLSI fabrication were working, Moore's law became inevitable. It was based purely on the physical scaling laws. Prior to CMOS, engineers talked about the "smoking hairy golfball" problem: a CPU with the sort of performance we take for granted today, would have to be the shape of a golfball because of the speed of light, hairy with wires connecting to it, and smoking because there would be no way to get many kilowatts of heat out of it.
But CMOS scales at constant power per area of chip, so Moore's law worked until the minimum size of a channel became limited by the discrete size of atoms. We're pretty much at that limit now, with flash and Intel's FinFETs.
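The scaling can be put in rough numbers. A back-of-envelope sketch, assuming the commonly quoted doubling period of about two years and the Intel 4004 (roughly 2300 transistors, 1971) as a starting point - illustrative figures only:

```python
# Back-of-envelope Moore's law: transistor counts doubling roughly every
# two years, starting from the Intel 4004 (~2300 transistors, 1971).
start_year, start_transistors = 1971, 2300
doubling_period_years = 2

def moores_law(year):
    doublings = (year - start_year) / doubling_period_years
    return start_transistors * 2 ** doublings

print(f"{moores_law(2011):.2e}")  # of order a billion transistors
```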
Going back even further, you might want to seek out a lecture given by Richard Feynman: "There's Plenty of Room at the Bottom".
Re: Biiiiiiig Changes
I suspect it's possible to store enough power in a supercapacitor to allow for the wiping of (say) the 16GB of MRAM that's playing the role of DRAM, that wants to be wiped on power failure for security reasons. Certainly enough to secure-wipe a much smaller area dedicated to storing crypto-keys. OTOH I also wonder whether such a wipe can ever be proof against theft of the hardware and application of serious data-recovery hardware. (Really secure sites never let used hard disks off site. They smash them to small pieces, then dissolve the pieces in ferric chloride! )
Incidentally, if you worry about someone with physical access to the hardware: it's possible to recover a surprisingly large amount of information from DRAM minutes after power is removed. A data thief can usefully remove DIMMs from a system and plug them into a data-copier. (And then test a mere few billion possible bit-strings as crypto keys for stuff on the hard drive, with a significant percentage chance of success).
Re: "showed off a wafer"
Probably it's some sort of failure. During R&D the yields of working product are often low, to say nothing of the experimental designs that don't work at all.
Re: O RLY?
The difference is that this one is working in the labs and there aren't any obvious reasons why it can't go on to development and production. If we didn't already have flash and DRAM it would already be on the market. Because it's got to compete with established technologies, it'll have to stay in the labs until it's a demonstrably BETTER product for at least one value of better. During which time we'll all gain confidence that it doesn't suffer from any unexpected premature ageing processes.
Things could still get delayed. It's even possible that there's an iceberg out there. But my expectation is that this will be the biggest breakthrough since CMOS.
Re: Tablets in schools? Not for long.
I've long thought that the best approach for a school would be Gbit networking and thin clients, with a large multiuser server that actually does the work locked well away from the kids and non-IT staff.
Thin clients are cheap to replace. (All but free if you can use a Pi and recycle a keyboard / mouse / display from some other organ of your local government that's escaping Windows XP before next April). Thin clients also have the advantage that there's no financial and little other incentive for kids to steal them.
There's less reason to customize a PC as systems become more and more integrated (and Intel's on-board graphics become sufficiently capable for a greater range of tasks). This plays into the hands of Dell and suchlike who have economies of scale, and against companies like RM who build exactly what you want.
Not sure how you define a UK system builder, but rackservers.com is UK-based and also sells custom desktop PCs.
Re: What happened to...
Oh, we know the nearby stars that will go supernova within a cosmic eyeblink. Betelgeuse (the nearest) will go sometime in the next 100 kyears or so. Other giants further away look even more unstable.
But a human lifetime is a tiny fraction of a cosmic eyeblink, and the exact timing of a supernova is probably not predictable to an accuracy of years (not until it passes the point of no return). The giant bloated star gets less and less stable, contracts, finds there's still something left to fuse, gets hotter, expands, wobbles on the edge like that for hundreds of kyears. Betelgeuse has contracted 15% since we've been able to observe its size (about a century). Maybe, just maybe, it really has run out of fuel this time!
(Betelgeuse is just far enough away that it won't do serious damage to the Earth when it blows. Only just far enough. Halve the distance and there would be Consequences. Halve it again and we'd be staring at a near-future extinction-level event)
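The "halve the distance" arithmetic is just the inverse-square law for received flux; as a sketch:

```python
# Received flux from a supernova scales as the inverse square of distance.
def relative_flux(distance_fraction):
    """Flux relative to the full distance, for a fraction of that distance."""
    return 1.0 / distance_fraction ** 2

print(relative_flux(0.5))   # 4.0  -- halve the distance, four times the dose
print(relative_flux(0.25))  # 16.0 -- halve it again
```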
(And spare a thought that if there is a $DEITY out there, maybe It is working really hard at making sure that there are no gamma-ray bursters popping in our direction in any "nearby" galaxies. If that is the case I'll thank It should I ever meet it, and I quite understand why It doesn't have any time for our smaller concerns! )
The galaxy is reasonably transparent in infra-red, it's just visible observations that would be blocked. (And of course it's almost completely transparent to neutrinos and gravity waves ... read the article! )
And if you really want to boggle, google hypernova or pair-instability supernova. These are rarer events. It's hypothesized that a pair-instability supernova is such an efficient converter of many suns' worth of mass into energy that it wouldn't leave any black-hole remnant.
Re: we don't test for smell
10 Scared skunk
11 Thioacetone monomer [ http://pipeline.corante.com/archives/2009/06/11/things_i_wont_work_with_thioacetone.php ] (A must-read. Would you believe that less smells worse? )
Re: I hope this means their quality won't drop even further...
It may be too early to say SSDs are more reliable. That they less often die a week after purchase is true, but that just reflects no moving parts. The other side of the coin is that an SSD may be far more likely than a hard disk to go from AOK to brick "just like that" with no warning of any sort. I'll have a better idea 3 years hence, because ....
We're now specifying SSDs for desktop systems where the user stores nothing locally. However, for storing multiple Terabytes, there's no alternative to using an HD (or an array of HDs).
Re: Quality drop
That has NEVER happened to me with a Maxtor or a WD.
Wouldn't have bothered to say "batch problem" yet again if you hadn't mentioned that defunct manufacturer Maxtor. Theirs were the only drives that I am convinced were truly defective by design, not just one batch. I suffered multiple failures on multiple sizes and models of drive purchased over a 2-3 year spread and decided never to buy another of their drives if I had any choice. That their finances collapsed and they were bought up for a pittance shortly afterwards confirmed my opinion of them.
And now I find someone who thinks Maxtor was wonderful. Well I never.
Re: I hope this means their quality won't drop even further...
Funny: I've seen exactly the same comment about Seagate drives. And HGST and Toshiba drives.
What's really going on is bad luck and (especially) bad batch problems. EVERY manufacturer will occasionally and inevitably ship a batch of product containing one of a batch of faulty components. If you buy a batch of drives, or a batch of PCs containing drives from the same batch, then if you get two or more premature failures in the batch, you should assume that the whole lot are likely going the same way. In particular, complain in writing during the warranty period, requesting that all the implicated drives are swapped. The vendor will almost certainly refuse, but should your fears be realized you can later prove that the merchandise was not of saleable quality, and that you pointed this out BEFORE the warranty expiry but were given the brush-off.
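A quick sanity check of why two early failures point at a bad batch rather than bad luck: under an assumed, ordinary per-drive annual failure rate of about 2% (an illustrative figure, not a quoted statistic), two or more failures in a batch of ten within the first year is improbable by chance alone. A sketch using the binomial distribution:

```python
from math import comb

# Probability of at least k failures out of n independent drives,
# each failing with probability p in the period.
def p_at_least(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Two or more early failures in a batch of ten, at an assumed 2% rate:
print(f"{p_at_least(2, 10, 0.02):.3f}")  # roughly 0.016
```

So at under 2% likelihood, a second failure is decent evidence that the whole batch shares a fault.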
No. It can't sound better. The codecs used in DAB introduce non-harmonic distortion into an audio signal. For music, this is ghastly. Far worse than hiss and crackle, far worse than mere harmonic distortion.
why don't they just plan to retire dab in the long term and replace it with internet streaming?
Coverage and bandwidth. Do you think 4G will ever be available on unclassified roads in deepest Dorset or the Grampians, five miles from the nearest hamlet? (And even if road coverage were to reach 100%, what about farmers and walkers miles off any road?) Do you think there will be enough bandwidth to support any encoding that doesn't introduce so much non-harmonic distortion as to turn music from a source of joy into a source of pain?
Re: DAB Bashing
Perhaps someone will come up with a converter box that rebroadcasts a single channel on an FM frequency, over an area of a few tens of square metres.
But it'll sound even more crap than a pure DAB radio. And it'll do nothing to address DAB's other weaknesses. It won't get you a signal that doesn't keep dropping out on the move (especially in rural parts), and it won't give you the battery life of an FM radio if you want to listen somewhere that a mains or automotive electricity supply is not available.
Re: Fine without the possessive
Well, what would normally be meant is "the users for whose support I have primary responsibility", as opposed to some other group of users. In exactly the same way, a manager is likely to say "my staff", meaning the part of the organisation for which he has primary managerial responsibility.
Why does it annoy you so much?
Re: Great expectations...
But why use the PI to do it?
Because - see "androgynous cupboard" below - it's cheap enough that if you break it, you don't need to cry. A new one won't break the bank. That's particularly important if you are interested in tinkering with hardware.
But even if "all" you have done is accidentally nuked the hard disk of a PC while trying to repartition and install Linux alongside Windows, there is lots of work to get it back to how it was and maybe lots of data you will never get back. Lots of expense also, if you didn't appreciate the importance of making some factory restore DVDs on the day you unpacked it (assuming you did get to unpack it yourself). If it wasn't strictly speaking your very own PC, but a shared resource, people will be blaming you. Not a nice place to be, especially not for a kid.
Re: Mind your language
Personally I'd say that teaching any language that needs a compiler as a first language is misguided and verging on cruel. Any scripting language is better, simply because there's so much less of a learning curve to climb. No edit-compile-run infrastructure. No debugging of address / overwrite errors. If you're unsure of something, just try it in immediate mode at the command line and see what happens. Python is particularly good because it is also a very well-structured language (cf. BBC BASIC or, gag, Perl).
AFTER a kid has mastered Python is the time to explain that compilers generate code that runs maybe three times faster if you have a problem for which that actually matters, and what you have to give up in order to gain that speed. As computers get faster and open-source Python libraries of other people's numerical codes expand, such problems get rarer.
If a kid can't "get it" even in Python, a career in something other than programming beckons!
Re: About the cost.
You also need a case, a decent display with HDMI input, a power supply, some SD cards, a USB hub, a keyboard, a mouse, a network cable long enough to get to the ADSL modem, and a fair amount of desk space.
Don't need a case (or can improvise with cardboard or grown-out-of Lego). Keyboard and mouse can probably be scrounged for free or liberated from a council tip. Display maybe likewise, or jack it into any half-modern TV. SD card, you can probably scrounge an old small one from anyone who does photography, or source it for pennies on eBay. Desk? No, floor or lap will do if nothing else available. Next to the router, if needs must and you really can't get hold of a long enough network cable.
The rest of the argument is a little bit stronger. Low-level programming is always going to be a minority interest. But now it's no longer restricted to a minority of a minority - wannabe hardware hackers no longer need parents who can buy a PC and unusual interfaces, and another one, two or more when your kid's attempts at homebrew electronics let the magic smoke out.
If you have a kid with hacker nature, the kid will take to it like a duck to water and you won't have to explain anything much, just answer the questions and encourage him. If your kid doesn't have hacker nature (probably 19 out of 20 kids), trying to teach, explain to or encourage her will just put her off programming for life.
Re: How friggin awesome is evolution!
It can't be evolution - if the poison kills you, then you can't breed in a protection against it, therefore this is 100% proof that evolution does not exist.
I should think that how this evolves is something like this.
To start with, the mice attacked scorpions the same way mongooses attack cobras. Very carefully. They're faster and smarter than their prey, so mostly they get to eat it rather than die. The less nimble ones get dead more often, and the species gets nimbler. Note that all hot-blooded species have a fundamental advantage over cold-blooded ones in the morning, before the cold-blooded ones have a chance to re-warm their bodies from the overnight chill.
Sometimes when battling or eating the prey, a small amount of venom gets into their bloodstream. Enough to harm but be survivable. Genetic variability means that for the higher doses, some survive and some don't. Gradually, genetic tolerance of the venom gets bred in. When a sting ceases to be lethal 100% of the time, risk-taking mice gain an evolutionary advantage over the original extremely risk-averse mice.
And so by slow and gradual evolution, you arrive at mice that laugh at scorpion venom.
I'd guess that the final stage is that mice which feel less pain when stung have a slight advantage over ones which do. Pain is distracting, so they more often get caught by larger predators if they suffer pain? That would create an evolutionary bias in favour of "hard" mice.
One other thing: the ability of an individual to acquire tolerance of venoms is near-universal in mammals, including humans. It's the standard immune system response. The immune activity degrades a foreign protein before it can fatally degrade the host's proteins, and is then primed to mount the same response faster and better next time. Immune responses are passed (or primed) from mother to child via breast-milk. It's an area we don't yet fully understand. There may be epigenetic factors at work, and Lamarckian evolution may not be completely discredited when it comes to inherited immune response. Just a thought.
One way to think parallel
One way to think in parallel is a spreadsheet!
Behind the scenes, a spreadsheet keeps a list (graph) of which cells depend on which other cells. Provided the dependencies are processed in the right order, you can recalculate each cell in parallel with many others. LibreOffice has the use of GPGPUs for calculation as a future goal.
Of course, most actual spreadsheets are not large enough to keep more than a single core significantly busy, but one might "code" for parallel execution using a program design methodology that looks a lot like a spreadsheet and its underlying, automatically generated, dependency graph.
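The dependency-graph idea can be sketched with Python's standard-library topological sorter (the cell names and dependency map here are purely illustrative):

```python
# A toy "spreadsheet" dependency graph: each cell lists the cells it reads.
# Walking the graph in topological order shows where the parallelism lives:
# every cell in the same ready-batch could be recalculated concurrently.
from graphlib import TopologicalSorter

deps = {               # cell -> cells it depends on
    "A1": [],
    "A2": [],
    "B1": ["A1", "A2"],
    "B2": ["A1"],
    "C1": ["B1", "B2"],
}

ts = TopologicalSorter(deps)
ts.prepare()
while ts.is_active():
    batch = list(ts.get_ready())   # these cells are independent of each other
    print("recalculate in parallel:", sorted(batch))
    ts.done(*batch)
```

A1 and A2 come out together, then B1 and B2, then C1 - exactly the batches a parallel recalculation engine could farm out to separate cores.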
More generally, the problem is software. At present we do parallel coding mostly using sequential languages and the spawning of multiple execution threads. Other approaches are needed, with the computer doing the parallelisation, not the human programmer.
The irony is that nature worked it out a very long time ago. A brain is a parallel processing system par excellence, 10^11 processors operating asynchronously at something like 20Hz. But it's possible for an evolved system to be beyond its own understanding, and so far even insect brains seem to be beyond our ken.
And Embedded deep in VMS
Somewhere in the system timer interrupt code, back in the days when you got the source of the entire VMS operating system on Microfiche with every new release ...
; Does anybody really know what time it is
; Does anybody really care
; If so I can't imagine why
; We've all got time enough to die
(which is of course a quote of a lyric by, IIRC, Chicago).
Re: Old school assembly language programmer
Hardly. Old-school assembler newbie who didn't know what a macro or a pseudo-op was.
And the macro (or the assembler itself) would generate the sequence of XORs if that was the most efficient way to do it on your CPU. If the hardware vendor hadn't written it for you, any self-respecting assembler programmer would have written it for himself.
I once saw an enormous macro to implement
SETCONST register, value [,scratch_register]
which avoided loading a constant from RAM for about a thousand commonly and less commonly used constants on an early instance of what would later become known as a RISC architecture.
Dead vendor squatting?
How about finding a vendor ID for a vendor that long ago went out of business, and "squatting" that with openly allocated product IDs that the defunct vendor never used (and in light of its corporate death, never will).
Not perfect, but maybe the right way to force the issue?
the US "is not monitoring and will not monitor"
Probably literally true and 100% false at the same time!
I.e., they've sub-contracted it to some outfit that is non-US. GCHQ? Some wholly-owned tax-haven-based subsidiary of a Maryland holding company about which it is very difficult to obtain further ownership information (i.e. 100% indirectly owned by the NSA)?
Re: Reality distortion fields
Well, Microsoft started the trend of taking the superb (XP UI) and crappifying it (Vista/Win7, Win 8). Recent Apple OSX releases haven't been quite up to the standard of previous ones, so maybe they *are* following?