Re: Ban 'em
So ... it's an Israeli company that has wilfully compromised the cyber-security of the state of Israel? (and also its most powerful ally, to say nothing of the rest of the world). Do they jail such people, or do they just shoot them?
More to the point, why can't we be offered an 80GB SLC drive for the price of a 240GB TLC (consumer grade) drive? Because for many applications, speed and durability are needed, and another 160GB is not.
I wonder if memristor tech will be equally fraught during its first few iterations?
Or Comedia (Comedy, Farce)
Or Komodo (Dragon, big lizard with lethally septic teeth, that bites you and then waits for you to rot to death).
You are not being paranoid enough.
Add, anyone using technology licensed from Komodia, openly or covertly.
Add, anyone using the same technique, without having licensed it from Komodia, and without having disclosed what they are actually up to.
The real lesson is that SSL is really, truly, deeply flawed, and that it's a case of "broken by design" rather than "broken by accident".
The thing is, a version number should mean something. It should give the user an inkling of what they'll be getting. Personally, I'm a big fan of x.y.z (foo) where x is a major revision or rewrite, y is a minor revision, z is a bug fix and foo is the build number. I don't care how big the numbers get. I'm a big boy. I can count quite high.
Thing is, who decides what is major? This scheme may work for an application, but for a kernel? One for which much of the code is optional, and may not even be compiled-in (or even compile-able) on the system you are using?
Take btrfs becoming production-ready. That's pretty major, if you want to use some of the advanced features like near-infinite snapshots or de-duping. On the other hand if you are doing quantum chemistry on some massively parallel network of wafer-scale integrated CPUs, btrfs might be of little interest. Something obscure to do with locking in massively parallel environments, on the other hand ....
Personally, I'd have reserved 4.0 for "world domination", or at least for the day that Microsoft abandons its own NT-derived kernel and goes over to an open-source kernel derived from Linux.
Still, it's just a number.
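One nice property of x.y.z is that ordering is mechanical: compare the components numerically, not as strings. A minimal sketch (hypothetical helper, Python):

```python
# Order x.y.z version strings numerically, not lexically, so that
# 3.10.0 sorts after 3.9.1 and "how big the numbers get" never matters.
def version_key(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

releases = ["3.9.1", "4.0.0", "2.6.39", "3.10.0"]
releases.sort(key=version_key)
print(releases)   # ['2.6.39', '3.9.1', '3.10.0', '4.0.0']
```

A plain string sort would put "3.10.0" before "3.9.1", which is exactly why the components need to be compared as numbers.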
Better ideas would be for Microsoft to make it a condition of sale to OEMs, making it a breach of T&Cs to preinstall any software that changes the root certificates which Microsoft distributes. Or even better, to make it a breach of T&Cs to preinstall any software at all, other than that explicitly requested by the purchaser. Or to insist that every system comes with a DVD that will reinstall to a Microsoft-only configuration, so every user can do what well-informed corporates routinely do: nuke and reinstall from trusted media on receipt.
Failing which, governments should legislate against preinstalled software that makes privileged changes to an operating system, or that is otherwise non-trivial to uninstall completely.
Wonder if there's any chance of a class action against Microsoft, for not taking any steps to pre-emptively avoid this disaster?
Yes, the "Windows tax" rankles with me too, but a heck of a lot less than the implications of this particular bit of brain-dead slimeware.
Superfish, founded in 2006, is a small company based in Palo Alto, California
Of course, the folks at Superfish will likely just get a wrist slap for this while individual white hat hackers often get jail time
On the other hand, they still have the death penalty for corporations, even for quite small infringements. One can reasonably hope that pretty soon, once the class actions get started, the first quote above will have to be modified to read
... was a small company based in Palo Alto, California
The other intrusive thought I keep having, is did any part of the Cthulhuesque entity that is the US government have anything to do with this, and if so, why?
There are people who will buy them because they can, and then insist that they can see/hear the difference.
You'll see the difference. On a 1024-pixel-wide display of an image containing a very small feature a quarter of a pixel on a side, that feature will at best make the single pixel containing it one sixteenth brighter or darker ... which you won't (can't) notice. The human eye is far more responsive to sharp step changes in brightness (edges) than it is to slight variation.
On a 4k screen it'll be a bright or dark or different-colour "spark" pixel which you can notice.
If it's not obvious what the pixel represents (there will be a line of them for a line feature), you'll zoom in on your model to see it better. But if you can't see it at all, you won't know there's any reason to zoom.
If you went through part of your childhood with slight uncorrected short sight, you'll remember the sudden impact of reality when you put on your first pair of glasses. Leaves! Raindrops! Stars! A really high-resolution screen displaying really high-resolution imagery will be similar.
It's the sort of thing architects in particular will love.
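The arithmetic above is easy to check. A quick sketch, assuming the feature fully darkens (or brightens) the area it covers:

```python
# A feature a quarter of a pixel on each side covers 1/16 of the
# pixel's area, so at best it shifts that pixel's brightness by 1/16.
feature_side = 0.25                     # in pixels, on the 1024-wide display
area_fraction = feature_side ** 2
print(area_fraction)                    # 0.0625, one sixteenth

# Quadruple the linear resolution (1024 -> 4096): the same feature now
# fills a whole pixel, a full-contrast "spark" you can actually see.
scale = 4
print((feature_side * scale) ** 2)      # 1.0
```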
It's a complete unknown what happens when life-forms with different operating systems come into contact. All known life is based on DNA or RNA with 3-base codons, a small common set of useful amino-acids, and there is a large amount of other commonality in biochemical operation across most of our life.
So when we meet ET with mutual good intentions, our bacteria and theirs will decide the issue. Possibilities from optimistic to pessimistic are (1) our bacteria can't eat them and vice versa, (2) our bacteria eat ETs but theirs can't eat us, (3) vice versa, and (4) mutual complete destruction. (There's also (5): our sort of life is near-universal, because its evolution is heavily favoured by the laws of physics and chemistry over any other possibility).
I suspect the worst case is most likely. There are bacteria that can eat just about anything that is capable of yielding energy when it is dismantled, and our defences against being eaten are highly specific to the "operating system" that all Terran life shares.
Actually switch-mode PSUs run a lot faster than 20kHz these days. The main reason is that the higher the frequency, the more power can be transferred through a smaller mass of (ferrite) transformer core, and then smoothed back to DC using smaller capacitors. The upper limit is approximately where the extra power lost in the power transistors while they are changing state starts to exceed any economic benefit of making the power supply less massive.
A long time ago I repaired what must have been pretty much the first ever switched-mode power supply (a 20A bench power supply using OC42 - Germanium! - power transistors). It switched, very audibly, at a few hundred Hz.
Incidentally, the output of a typical switched mode power supply is very poor for analogue audio use. Digital circuitry such as a computer doesn't care about tens of millivolts of ripple on its power, and just because the oscillator runs at a MHz doesn't mean the output isn't modulated at lower frequencies by (for example) AC line noise. It is.
You fight skin effect by making a cable that's as much skin as possible. Lots of strands of very fine mutually insulated wire bundled together in parallel, rather than a thick solid core wire.
You can buy loudspeaker cables like this, although I doubt anyone could hear any difference in a proper double-blind test. Where the problem is moving large wattages at a low-ish radio frequency, this approach does actually work (ie, your cable merely gets warm, rather than melting). Co-ax cable works better at higher RF frequency and by the time you arrive at microwaves, you do better with no wire at all.
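The "lots of strands" approach works because skin depth falls as one over the square root of frequency. A quick check with the standard formula for copper (textbook constants; a sketch, not a cable design tool):

```python
import math

RHO_CU = 1.68e-8           # resistivity of copper, ohm*m
MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth_m(freq_hz: float) -> float:
    """Depth at which current density falls to 1/e of its surface
    value, for copper: delta = sqrt(2*rho / (omega*mu))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * RHO_CU / (omega * MU_0))

# About half a millimetre at the top of the audio band, comparable to
# the radius of ordinary speaker wire: the audio effect is marginal.
print(f"{skin_depth_m(20e3) * 1e3:.2f} mm at 20 kHz")
# At low RF it is tens of microns: strand the conductor or go coaxial.
print(f"{skin_depth_m(1e6) * 1e6:.0f} um at 1 MHz")
```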
We used to be able to work out how big the changes were from the numbers
That may work OK for an app. It won't work for a general-purpose operating system. The thing that is most likely to be immediately visible to a non-technical user is a change in the user API causing existing programs to break (which is something that Linux tries very, very hard to avoid). The next most likely is a new bug in a facility that you are using, but that's hardly something that they wanted to ship!
Apart from this, who decides what is a big change? A complete re-working of the code for massively parallel SMP systems may be scarcely visible to a person with a single 4-core CPU (and even less visible to someone working with a single-core embedded peabrain). A new filesystem ( for example Btrfs) may be of huge interest to some, and of no relevance whatsoever to others that aren't intending to use it. And so on.
The switch from Linux 2.x to Linux 3.x was supposedly arbitrary, but did in fact coincide with a major architectural change that the kernel developers had been working towards for the best part of a decade. What do you mean, you didn't notice the final demise of the Big Kernel Lock? Well, actually, you weren't supposed to. Its removal was a success. Cause for celebration by kernel developers (and maybe, the reason for the big version number change), but a big yawn for everyone else.
So, is there any reason for Linux kernel going from 3.x to 4.0 other than (maybe) the next release after 3.99? Well, just maybe ...
1.x ... a developer / enthusiast system
2.x ... production-ready, large scale SMP handicapped by the Big Kernel Lock
3.x ... Big Kernel Lock finally gone, scales from embedded peabrains up to huge datacenters.
What next? I'm hoping for
4.x ... Microsoft abandons its own OS kernel, adopts Linux.
I suppose there are a few environments where that's OK, but I hope that this feature can be disabled.
Imagine what happens if a baby or child gets hold of it. Or even a cat. Or if a piece of grit gets into one of the buttons.
If the PIN is eight or more digits, there's little practical reason to self-destruct. Chances of successfully entering enough random keys at one per second are too small to matter.
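The odds bear that out. A back-of-envelope sketch, assuming a uniformly random 8-digit PIN and one guess per second:

```python
# Odds of brute-forcing a uniformly random 8-digit PIN at one
# guess per second (the attacker never sleeps).
digits = 8
keyspace = 10 ** digits                  # 100 million PINs

guesses_per_day = 24 * 60 * 60           # one guess per second
p_per_day = guesses_per_day / keyspace
print(f"{p_per_day:.4%} per day")        # under a tenth of a percent

# Expected time to search half the keyspace, in years:
years = keyspace / 2 / guesses_per_day / 365
print(f"{years:.1f} years")              # about a year and a half
```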
In the cosmic scheme, these are very very very very faint.
Unlike GRBs (gamma-ray bursts). One of those in our galactic neighbourhood could all but sterilize our entire galaxy. We're assuming these are natural phenomena, but are we sure?
And then there's whatever mystery created the "Oh My God" particle (3x10^20 eV, or fifty Joules!). Whatever made it must have been within our galaxy, because its half-life before interaction with a cosmic background microwave photon precludes any extra-galactic origin.
Then again, we don't see Mars, or any other planet flipping its axis of rotation all around
"Flipping" would take many hundreds of thousands of years, which in evolutionary terms is fast.
Uranus is pretty much half-flipped right now.
He's also contributed at least two more scenarios:
"Saturn's Children" / "Neptune's Brood". Our successors are AIs which we created as our slaves. We then go extinct (as slave-owning societies always have done in the past, on a non-global scale). They're out there, colonizing the galaxy and trying to reincarnate homo sapiens (from DNA codes, with a degree of success), for quasi-religious reasons!
Accelerando, in which human beings are supplanted by evolved digital corporations which no longer need to preserve their human customers. These denizens of "Economics 2.0" turn the entire solar system into computronium (solar-powered computing substrate) and don't travel, because they always seek the most bandwidth-rich environment, ie nearest to Sol.
The first pair of books are amusing and less implausible than most interstellar SF. Accelerando will haunt your imagination. Both recommended.
We won't ever do interstellar travel as biological human beings. The speed of light and the vulnerability of mammalian life to interstellar radiation guarantee this.
However, AI or human uploads into Silicon might not be so constrained. They can be radiation-hardened, and can slow down their clock-rate to make the subjective speed of light seem faster by orders of magnitude.
It's also possible that other forms of bio-life might evolve with a slower and less radiation-sensitive chemistry. Some trees live 3000+ years. There's a fungus in the USA that's at least 100,000 years old (also the largest, most massive life-form yet discovered on Earth). Perhaps elsewhere, there are intelligences that live for many My, for whom a 30ky interstellar journey wouldn't seem impossible.
But back to the Fermi paradox - where are they? (Just possibly: out in our Oort cloud, living slowly and quietly. Or here on Terra: we call them fungi, they think too slowly for us to consider them sentient. When they get around to noticing us, they might decide we're a plague and do something about it ...).
Furthermore, they would probably switch to digital equally quickly and then start encrypting everything routinely, for commercial reasons if not for anything else. And a well encrypted transmission looks pretty much like random noise.
That's any well-coded signal, ie coded to make efficient use of available bandwidth. The main difference between encryption and "mere" efficient coding, is whether you publish the decoding algorithm and key(s), or not!
The orbit of any planet in a solar system with more than one planet is chaotic. (Mathematical fact, mathematical definition of chaotic). Given infinite time, all but one planet will inevitably be ejected into interstellar space (or less likely swallowed by its sun or collided with another planet).
Fortunately for us, "enough time" for our solar system greatly exceeds the lifetime of Sol. Also the future orbit of Earth can be predicted to remain much as it is today for the next 100My at least, given the accuracy of the best astronomical observations of the rest of the planets in our system and inverse-square gravitation.
However, in a solar system with a Jupiter-mass planet in a very eccentric orbit, smaller planets would not remain in the Goldilocks zone for the (assumed) billions of years it takes for advanced life to evolve.
A moon stabilizes the axis of rotation of a planet. Without one, the axis of rotation is unstable, and sooner or later will pass through the plane of the planet's orbit (on a timescale of under a million years). That's too fast for any but mono-cellular life to adapt, through evolution, to a planet that "suddenly" no longer has a day-night cycle. (ie, one where the whole planetary surface is like our poles: half a year of night, then half a year of day).
Don't know how large a moon is needed, but no moon at all is no good at all.
Can Kepler identify how many planets have moons?
Super-advanced aliens are using comms tech so advanced that even though their signals (and space stations / ships) are all around us, we can't detect them
Super-advanced? You mean the average modern USAlian, I think.
What would be detectable from many light-years out is radio broadcasts using 20th century modulation (AM, FM, SSB etc.), and radar-illumination transmitters. We're already moving away from these. I anticipate that by 2100 broadcast radio will be extinct. Civilian radar may have gone the same way (replaced by GPS and active location transmission by planes to ground control through an evolved internet). That leaves defence radar, and maybe military stealth technology will have rendered that obsolete as well. (Also taking an only slightly longer view of things, either world peace will render defence radar obsolete, or world war will render advanced civilisation obsolete).
Cellphones, wifi etc. are (or rather, will be) undetectable from many light-years out. An efficiently coded signal is almost indistinguishable from noise, absent knowledge of the coding. Also the radio power per channel is at most two watts (usually more like two milliwatts) rather than the megawatts which Radios Moscow and America used to blast out. As the cells get smaller, so do the wattages.
Assuming technology develops along similar lines elsewhere, the era of accidental long-range interstellar signalling probably lasts for about a century whether its civilisation survives for aeons or not. Which is as good a reason as any why we haven't spotted (another) one yet.
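Inverse-square arithmetic makes the power comparison concrete. A sketch assuming idealised isotropic radiators and the 2mW handset figure from above (real antennas have gain, so this is only illustrative):

```python
import math

LIGHT_YEAR_M = 9.461e15  # metres per light-year

def flux_w_per_m2(tx_power_w: float, distance_m: float) -> float:
    """Power flux from an idealised isotropic radiator (inverse-square)."""
    return tx_power_w / (4 * math.pi * distance_m ** 2)

d = 10 * LIGHT_YEAR_M
broadcast = flux_w_per_m2(1e6, d)    # a megawatt broadcast transmitter
handset = flux_w_per_m2(2e-3, d)     # a 2 mW cellphone

# Distance cancels in the ratio: the broadcaster puts five hundred
# million times more power onto a distant antenna than the handset.
print(broadcast / handset)           # ~5e8
```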
Microsoft acted as if it didn't matter at all. And now they're saying it's not important because it's too obscure. Haven't they learned anything from Intel's experience (a long time ago) with the FDIV bug? Or for that matter, various auto companies' experiences of what happens when they ignore "bugs" in cars on grounds such as they'd be too expensive to fix?
"Almost completely secure" = Insecure.
Historically there's a clear divide between spying ("everybody does it") and sabotage.
Spying involves only small and very careful changes to a target system to compromise its security while leaving its primary function unaffected. Thereafter, access is read-only. Anything that causes the target's primary function to be damaged draws attention to the spying, thereby defeating its purpose, as well as crossing the line into sabotage.
Sabotage uses compromised security to damage or destroy a compromised system's primary function. It's as much a hostile act as using explosive devices on another state's territory. Unlike spying, it is not tacitly accepted as something that everybody does, other than during a war.
On this basis, what the USA (claims it) did to NK's systems is normal peacetime behaviour for a nation state. What NK (allegedly) did to Sony is not.
 Although it occurs to me that the USA and NK are not actually at peace ... the Korean war ended in a truce but no peace treaty has ever been signed.
I expect that great fun will ensue the next time someone's airbags go off "for no reason" and one of these dongles is present. The victim's lawyer will sue a car company but disclose that an insurance dongle was plugged in. The car company will countersue the insurer (with heavyweight lawyers). The lawyers will get rich. The victim will probably get some compensation. I expect (or rather hope) that it's the insurer's no-security dongle that gets the blame.
I am seeing more and more reasons for driving around in an old car (pre-CANBUS).
It's not a battery problem so much as an infrastructure problem.
A 200 mile range becomes OTT and 100 miles is all one would really need, if electrical charging facilities become ubiquitous. (Ie, guaranteed at every parking bay in every supermarket, mall, workplace, city street, home or visitor attraction).
At the moment, things are like they probably were in the early days of IC motors, when finding somewhere that sold gasoline could not have been taken for granted. (And at 20? 10? 5? mpg, I don't imagine that the range of a mark one car was much to write home about either).
Chicken, meet egg.
Now we have white LEDs, it's impossible to turn the lights off on most new cars!
And the Government are probably already considering it
If you carry a mobile phone you are already doing everything you fear. They know whose phone it is, where it is, where it's been. It's quite likely they can use it to bug you. Even if you try to conceal your ownership of it, they can still cross-reference it to everyone you ever called with it and find out who you are with a minimum of further interrogation of their records.
A transponder could be far more anonymous than a phone (and cheapest possible would not be personalized).
Yes, one needs camera input as well to deal with things like fallen rocks and pedestrians that don't come with transponders. (I rather anticipate a future with self-driving cars when sane pedestrians and cyclists *will* carry transponders, possibly integrated into their mobile phones).
But consider optical illusions. Despite several hundred million years' evolutionary honing, there are still situations where either you cannot quickly work out what you are seeing, or you totally mistake what you are seeing. (The latter largely accounts for the "invisibility" of pedal cyclists to car drivers. "Think bike" has limited effect on the lower levels of our vision processing. I've seen two cyclists collide because "invisibility" applies equally to the cyclists themselves.)
Do we really think that a computer vision system will be better than the human vision system after a mere few years' development? I doubt it.
My desktop is a dual-core gen 4 Celeron and is fast enough for everything I need it for. Some in family have commented that it's "very fast" -- that's because it has an SSD not an HD. CPU was chosen for fanless operation, for which purpose these new 15W parts represent a big improvement.
Secondly, what sort of person locks themselves IN the house? Very dangerous if you have a fire.
Anyone who has surprised a burglar at 4am. (Lucky for me he wasn't the violent type, just shoved me out of the way and ran).
Since then I have always locked the five-lever mortice lock at night. For fire safety I leave the key in the lock (it's attached to the door by a chain to prevent somebody pushing it out from the other side of the keyhole and retrieving it). Also I suspect if the fire alarm goes off, my best move is to exit pdq via the bedroom window, not to open the bedroom door at all.
Sports injuries are by definition, "accidents"
Hardly. The joint damage caused by traumas (Football, Rugby ...) or repetitive strain (Running, Tennis ...) is an entirely predictable consequence of the nature of the sports.
And probably very expensive to the NHS, since the consequences are likely to be early onset of arthritis needing rest-of-life treatment, but not an earlier death.
Nevertheless, I'd defend both the principle of equity, and the right to play sports (along with the right to overeat, to not play sports, to inhale tobacco smoke in private, etc. etc.)
Wonder if that's true.
There's a similar UK tale about the bored traffic cops who pointed a speed gun at an RAF fighter doing low-level attack training. The radar gun stopped working. Permanently. Later they were advised, over a pint, "don't do that".
"Well, hypothetically, if the pilot had left the electronic countermeasures switched on, it would retaliate with a targeted electromagnetic pulse milliseconds later ..."
"... so that's why our speed gun failed? ..."
"... and next to that switch there's another one that enables the non-electronic countermeasures. Homing missiles ..."
"Time for another round?"
I'll assume that this is intended as a sarcastic mick-take of a typical loon's ranting, rather than being the real thing.
... power that was thousands of times greater than the piffling force of gravity
Not so much comparing apples and oranges, as apples and furious green ideas.
BTW if you learn how to do a meaningful comparison between the electromagnetic force and the gravitational force, you will discover that the electromagnetic force is of the order of ten to the power forty times stronger, rather than mere thousands. That is one of the most surprising facts about the universe we live in. Nobody has the faintest idea why, other than it being quite clear that if it weren't so, we wouldn't be here.
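The ten-to-the-forty figure is easy to reproduce with rounded CODATA constants; for an electron-proton pair the separation cancels out, because both forces follow an inverse-square law:

```python
# Ratio of electrostatic to gravitational attraction between an
# electron and a proton. Both forces are inverse-square, so the
# separation cancels out of the ratio.
K_COULOMB = 8.988e9      # Coulomb constant, N*m^2/C^2
G = 6.674e-11            # gravitational constant, N*m^2/kg^2
E_CHARGE = 1.602e-19     # elementary charge, C
M_ELECTRON = 9.109e-31   # kg
M_PROTON = 1.673e-27     # kg

ratio = (K_COULOMB * E_CHARGE ** 2) / (G * M_ELECTRON * M_PROTON)
print(f"{ratio:.1e}")    # ~2e39, i.e. of the order of 10^40
```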
Karbonn Sparkle UK (Amazon) price as linked is £130, which is more like $200 than $100. At that price why would anyone buy one of these?
Feast your eyes on some vacuum-tube portable radios here: http://www.antiqueradio.org/tubeportables.htm
They're a bit larger than a modern DAB radio, and when loaded with LT and HT batteries, considerably heavier. But in essence ... pre-transistor "transistor" radios.
And eighty-something years later, kids are going to school hungry.
Unless you're going to successfully argue that the statistics in the article are wrong, that is because their parents have a poor set of priorities, and are spending their income on something other than properly feeding their children.
Throwing more money at such parents is not the answer. Indeed, it may be part of the problem (for example, if a parent has an addiction). No, I do not know what the answer is.
Pure speculation on my part - the original vendor lock-in?
In other words, was there a time when the company that supplied the rifles also supplied the ammo? If that was the case, it would not have been in their interests to make the ammo the same size as a competitor's ammo. And once the funny size was widely used, it would tend to self-perpetuate.
It isn't just rifles. Another example is railway track gauges. And what about the size of what is now a standard (UK) housebrick?
Edit - I've just realized there's also a battlefield reason for funny sizes. You don't want the opposition using your own ammo against you, if they capture some! So each army using its own size that's different to any potential enemy army at the time that the rifle bore is adopted, makes perfect sense. Even better if the size is sufficiently similar that some stupid soldier tries using captured ammo and blows himself up with it.
I must admit that "Unknown" is a little bit worrying
Seems the natural answer for a baby AI (and possibly even for a mature Culture Mind). AFAIK no true AIs yet exist, but it makes sense to have this answer available for when they do.
You're missing something.
Say you used the same password for your bank and for some other e-commerce website. (Yes, you shouldn't, but many do!). Say the other site gets compromised and a list of names, addresses and passwords finds its way to the crims.
With password-only banking, they'll likely hack into your account.
With two-factor authentication using an app on your mobile, they'd also have to steal your phone before they can try to hack in. Which they can't do unless they are in your vicinity. If the hackability of random accounts from the stolen info is 5%, they'd have to steal twenty customers' phones to get into one bank account. I doubt that's feasible.
The other way around, my bank has sent me a credit-card sized gizmo which I have to use to generate one-time authorization codes before I can transfer money. (There's an app alternative, which I don't yet feel any need for). If someone breaks in to my home and steals it, they won't be able to steal my money because they won't know my password.
It's a bit of added hassle but on balance I'd prefer it if all banks did this.
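The twenty-phones arithmetic is just the reciprocal of the assumed hit rate (the 5% is the assumption made above, not a measured figure):

```python
# If leaked passwords work against 5% of accounts, then with 2FA an
# attacker must also steal the matching phone for each attempt, so
# cracking one account costs 1 / 0.05 phone thefts on average.
hit_rate = 0.05                     # assumed, not measured
phones_per_account = 1 / hit_rate
print(phones_per_account)           # 20.0
```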
From the article
"The electron transfer process involves the creation of chemicals called radical pairs, and these had been put forward as a mechanism by which weak magnetic fields might interact with cells – but no dice"
That is science. There is a mechanism whereby magnetic fields interact with the chemistry of free radicals inside a living organism, and that might have been harmful. It's now been investigated and moved from "doesn't seem likely" to "studied, no effects observed".
Here's a (probably over-simplified) outline of quantum computing.
Quantum indeterminacy or "weirdness" means that a particle or small system can exist in two (or more) states at once, until you observe it; that "collapses the wavefunction". Repeat the experiment for a two-state system, and it'll be like tossing a coin: heads half the time, tails the other half of the time. That's a quantum bit or qubit. Doesn't sound very useful, does it?
What a quantum computer does is to provide a quantum register (N qubits) and the means for performing mathematical operations on the contents of that register without observing it. This is equivalent to performing those operations on every possible N-bit number at once, from 0 to 2^N-1.
And then you observe it, and get just one out of the possible results.
However, consider a sequence of operations that will return one of a given number's prime factors. With 4 qubits, this has been done. Perform the sequence of operations that will return a prime factor of 15, then observe the quantum register, and you will get 5 or 3 with probability 1/2 each. All other possible 4-bit numbers have a probability of zero. They are not prime factors of 15 so they won't ever be observed.
With 32 quantum bits, it's looking rather interesting(*). With 4096 quantum bits, PKI is dead. For a billion quantum bits or a mole (~6x10^23) of quantum bits, you are performing magic(**). I do not believe that quantum computing can work for large numbers of qubits. Prediction: the upper limit of how many quantum bits one can work on, will tell us something very interesting about quantum mechanics, physics, and the structure of the universe. Of course, it's possible that the universe really is stranger than I can imagine and there is no upper limit....
Cryptography as we know it may or may not survive the experiment.
(*) AFAIK nobody has yet made even a 32-bit quantum computer. But if they had, would they be telling us?
(**) This is akin to finding a fast algorithm for solving NP-complete problems. (***)
(***) which, assuming you tell anyone else, is akin to signing your own death warrant. If you tell only a small number of people, some intelligence agency will take extreme measures to make it their secret. If you manage to spam it far enough and wide enough ... you probably just accelerate the rate at which a strongly Godlike AI bootstraps itself, and takes over the universe. Which given a universe containing billions of galaxies like ours, would almost certainly have happened already were it possible at all. So I predict that it isn't.
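A toy statevector sketch (plain Python, not a real quantum backend) illustrates both halves of the outline above: N qubits in uniform superposition stand for all 2^N values at once, and observation collapses them to a single sample.

```python
import random

def hadamard_all(n_qubits: int) -> list[float]:
    """Amplitudes after a Hadamard on every qubit of |00...0>: a
    uniform superposition over all 2**n basis states at once."""
    dim = 2 ** n_qubits
    return [(1 / dim) ** 0.5] * dim

def measure(amplitudes: list[float]) -> int:
    """'Collapse the wavefunction': sample one basis state with
    probability equal to its amplitude squared."""
    probs = [a * a for a in amplitudes]
    return random.choices(range(len(probs)), weights=probs)[0]

state = hadamard_all(4)       # all 16 four-bit numbers, amplitude 1/4
assert abs(sum(a * a for a in state) - 1.0) < 1e-12  # normalised
print(measure(state))         # one value in 0..15, chosen at random
```

The trick a real quantum algorithm (like Shor's) performs, and this sketch cannot, is interfering the amplitudes so that only the interesting answers survive to be observed.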
Much better to have two NASs with one disk each. One as the backup of the other. That way you may be able to find a strategy that also protects you against theft, controller failure, flood, fire, children, animals, ...
Actually since some form of off-site backup is best, one NAS with one disk and two same-size USB hard drives may be even better. Regularly: backup NAS to USB hard drive, take the hard drive to the off-site location, return with the other hard drive for the next off-site backup. Off-site equals some relative's or friend's house.
For the inexperienced who ask why two USB disks ... this way, there is no time at which both the NAS and its only backup could get stolen, fried or drowned together by one event. One backup disk is *always* off-site.
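The rotation can be stated as a one-line rule: alternate disks on successive backups, so the disk you are about to overwrite is never the only surviving copy. A trivial sketch (hypothetical disk labels):

```python
# Alternate two off-site disks so that one full backup is always
# off-site while the other is at home being refreshed.
def disk_due(backup_number: int) -> str:
    """Which USB disk to bring home, rewrite, and carry back off-site."""
    return "disk-A" if backup_number % 2 == 0 else "disk-B"

schedule = [disk_due(n) for n in range(4)]
print(schedule)   # ['disk-A', 'disk-B', 'disk-A', 'disk-B']
```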
So to take an example that's environmental reality for some in the USA at present:
You're in a robot car. It runs into a blizzard - whited-out local conditions. It decides to stop (or to proceed at walking pace until it bogs down) because it can't see. And a week later, you are found - frozen to death, or asphyxiated.
A human would "chance it" driving at a much higher speed than was completely safe to the nearest habitation, because stopping had clearly become even less safe than ordinarily dangerous driving.
One really does need to apply the sort of probability theory that allows for unknowns, not binary logic.
The program should obtain a random number and then proceed by probabilities.
For example, if there are three people in one car and one in the other and death for all is certain if collision is allowed, the car with one passenger should be sacrificed three times out of four, and the other one time out of four. Extra facts might be allowed to bias the probabilities but my own sense of ethics says that all the involuntary participants in the scenario should be given a nonzero chance of survival.
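That scheme is a weighted random choice in which a car's chance of being sacrificed is proportional to the number of people in the other car. A sketch (purely illustrative):

```python
import random

def choose_sacrifice(occupants: list[int]) -> int:
    """Pick the car to sacrifice. A car's weight is the number of
    people in the *other* car(s), so fuller cars are spared more
    often, yet every car keeps a nonzero chance of survival."""
    total = sum(occupants)
    weights = [total - n for n in occupants]
    return random.choices(range(len(occupants)), weights=weights)[0]

# Three people in car 0, one person in car 1: car 1 should be
# sacrificed about three times out of four.
random.seed(42)
counts = [0, 0]
for _ in range(10_000):
    counts[choose_sacrifice([3, 1])] += 1
print(counts)   # roughly [2500, 7500]
```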
Eris is the goddess of disorder. The Devil would be advocating allowing a guaranteed fatal crash to take place with a probability of 100%, on the basis that that is the most ethical thing to do. Worst outcome AND promulgating a false morality.
Incidentally I once made a major error of motoring judgement. I know that I had decided in a flash that if a collision with another car was inevitable, I would take my chances with high-speed off-road driving because the situation was all my fault. Luck was with me that day, there was no car coming the other way.
Kryder’s Law isn’t a smooth curve but a superposition of S-curves representing each new storage technology
Today's 5TB disks are still recognisably the same as the 1MB 16-inch platters of the early 1970s: data written onto a more or less homogeneous magnetic surface by a read/write head "flying" on a cushion of air.
Yes, the design of the head in particular has gone through a series of new technologies, but nothing like as radical as what happened to electronics during that period (bipolar to MOS to CMOS, SSI to MSI to LSI to VLSI, exotics like copper interconnects, hafnium gates and FinFETs).
The breakdown in Kryder's law is for the same reason as the breakdown in Moore's law. Magnetic disks have hit the physical limits just as microelectronics has. In both cases the physical limits boil down to the fixed size and indivisibility of individual atoms. In one case, magnetic domains cannot get any shorter. In the other, transistor gates cannot get any thinner.
I wonder if we will ever see BPM / HAMR disks for sale. Solid-state storage developing from 2D to 3D structures may be the real breakthrough. I don't think there's any near-term physical limit on how many bits of Flash memory can be stacked on the Z axis, whereas BPM and HAMR are one-time gains after which the same limits reassert themselves.
As a developer you should then be aware that it's pretty much impossible to release 100% bug-free code, especially when you're talking about something the size of Windows.
True, but ...
There is such a thing as coding with security in mind. A long time ago Microsoft hired the chief architect of the VMS operating system away from Digital, with the brief to write them a secure kernel to replace the DOS-based Windows line. The result was Windows NT, which by release 3.51 was the most secure system Microsoft ever had, possibly second only to VMS in terms of excellence.
Being secure meant that graphics performance sucked compared to the consumer Windows line (where there was basically no security at all). This was a completely inevitable result of securely managing the system's memory on the hardware of the day. So what did Microsoft do? It took this kernel that had been engineered for security, and blew holes in it in order to make the graphics run faster. Enter NT 4.0: broken by design, on orders from the top. Then 2000 (further security compromises), then XP (even more). Around XP SP2 they claimed to realise that security mattered and started trying to patch the holes that they had deliberately created in a once-secure design. The result was an unmaintainable kluge.
So they re-wrote it again. Enter Vista ....
You may say that was all a long time ago and you'd be right, except that you'd also be asserting that a system that was deliberately broken security-wise can then be patched back to secure by the people who broke its design.
The evidence all suggests that Microsoft simply does not understand security at all.
And if you think Linux et al are any different you're very much mistaken
Different culture. Open-source applications are of variable quality. Some are excellent, some less so.
The Linux kernel is engineered with security in mind and is overseen by Linus. He is very smart, he does not suffer fools gladly, and most importantly he has no marketing department to tell him what he has to compromise (i.e., break) tomorrow, because some touchy-feely focus group of non-technical users thinks it would be a good idea to let it display pink elephants galloping faster.
More generally the Linux ecosystem learns from its mistakes. Things in active development get better. If there is a disagreement one project may fork into two, which then compete until either one branch runs out of supporters, or (occasionally) until both branches have found different niches in the open-source ecology. It's a very similar process to natural evolution. In both cases good designs prosper, poor designs die out.
how difficult the battery actually is to replace
That is key. If it's something anyone handy with jewellers' screwdrivers can do, provided they aren't bothered about voiding any warranty on the device, I wouldn't be too bothered. If, on the other hand, the battery is glued in so that its replacement is completely impossible, I wouldn't buy it. Likewise if getting inside it requires special tools, or if reassembly requires one to have four thumbs.
I can understand manufacturers being worried about bad publicity and lawsuits caused by third-party substandard batteries. Requiring anyone replacing the battery to employ a screwdriver and to break a warranty seal isn't unreasonable. (If they're really smart the phone would sense this and transmit "warranty seal broken" back to base, just like printers lock "3rd-party ink cartridge used" into their firmware to void the warranty, if you use third-party cartridges).
Hotels have license to charge a guest's card for abnormal expenses that the guest causes. Things like fouling the bed or trashing the room. It'll be in the terms and conditions, and it is probably sanctioned by common law in any case.
I hope it is proved that this is stretching that license well past breaking point.
I've stayed in £35 places that offered clean linen, a not-uncomfortable bed, an absence of bed-bugs, functional furniture and equipment, and some breakfast that was edible. Sure, you shouldn't expect more than that, but often all you want is a place to sleep.