Re: They already thought about that (Why bother?)
And made it *illegal* to have a party called "None of the Above".
I immediately thought of a lady in a black and white costume standing as "Nun of the above", which ought to be legal ....
I cannot understand why any importance should be attached to votes cajoled out of people who couldn't give a damn. Surely by not voting, you declare that you'll be equally (un)happy with whomsoever is returned by the folks who *can* be bothered to vote? (Folks who, hopefully, will have bothered to inform themselves about the policies supported by the various candidates, and given who they vote for more than one second's consideration).
I'd also much rather that postal votes were once again restricted to those who declare that they will be outside the constituency on polling day, or who can reasonably be excused from walking to a polling station on medical grounds. Postal votes are otherwise far too easy to obtain and use fraudulently.
I'd even support introducing the purple-thumb technology used in "less developed" countries to prevent repeat voting using forged or stolen credentials.
How hard can it be to write a block device driver that talks to a disk device by Ethernet address and Seagate protocol instead of iSCSI? Once that is done, you "just" build your filesystem on top of a bunch or megabunch of them. Someone else shows up with a different protocol, write another block device driver for it.
"Just" because pushing the scale of your (presumably distributed, massively multi-master) filesystem past anything that's been done before is likely to show up some latent issues in the filesystem. However, that will apply equally whether your block devices are these things, old-school iSCSI devices, or (say) huge boards covered with Memristor-tech.
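To make the "how hard can it be" concrete, here's a back-of-envelope sketch of the block-to-object mapping such a driver would need — in userspace Python rather than kernel C, with a made-up in-memory `KVStore` class standing in for the drive's real key-value-over-Ethernet protocol (framing, auth, etc. not modelled):

```python
OBJECT_SIZE = 1 << 20  # map the flat block device onto 1 MiB objects

class KVStore:
    """In-memory stand-in for a key-value Ethernet drive."""
    def __init__(self):
        self._objects = {}

    def get(self, key):
        # Unwritten objects read back as zeros, like a fresh disk
        return self._objects.get(key, bytes(OBJECT_SIZE))

    def put(self, key, value):
        self._objects[key] = value

class BlockDevice:
    """Present a flat byte-addressed device on top of the KV store."""
    def __init__(self, store):
        self.store = store

    def _key(self, index):
        return "blk-%016x" % index

    def read(self, offset, length):
        out = bytearray()
        while length:
            idx, off = divmod(offset, OBJECT_SIZE)
            chunk = min(length, OBJECT_SIZE - off)
            obj = self.store.get(self._key(idx))
            out += obj[off:off + chunk]
            offset += chunk
            length -= chunk
        return bytes(out)

    def write(self, offset, data):
        while data:
            idx, off = divmod(offset, OBJECT_SIZE)
            chunk = min(len(data), OBJECT_SIZE - off)
            obj = bytearray(self.store.get(self._key(idx)))
            obj[off:off + chunk] = data[:chunk]   # read-modify-write
            self.store.put(self._key(idx), bytes(obj))
            offset += chunk
            data = data[chunk:]

dev = BlockDevice(KVStore())
dev.write(OBJECT_SIZE - 4, b"spansboundary")  # crosses an object boundary
print(dev.read(OBJECT_SIZE - 4, 13))          # b'spansboundary'
```

Swapping in a different vendor's protocol really would just mean replacing the `KVStore` back-end; the mapping layer stays the same.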
Off-topic ... but there are dozens. Most are described as "Live CDs" or "Live DVDs" and will boot off a CD or DVD, but most also describe how to copy or install them onto a memory stick instead.
Fedora has a liveusb-creator app that you can run on Windoze to create a bootable Fedora Workstation USB stick.
Linux booted off a CD or DVD is surprisingly usable. An application may be slow to start the first time (while the DVD drive spins up and seeks slowly) but will usually then stay cached in RAM for repeated use.
If the hardware doesn't work perfectly, there's (usually) nothing that the operating system can do about it, other than (sometimes) detecting the problem and refusing to boot further with a hopefully informative message.
I hope that the test for this problem is added to memtest86 and similar RAM testers in the near future.
Then it gets interesting. I'm assuming this is a RAM module fault, not a generic motherboard / chipset fault. If so, I wonder whether it will show that some? all? notebook manufacturers are shipping the cheapest, crappiest DRAM that they can lay their hands on, along with the preinstalled malware. Now, how will they handle the hopefully large number of people who will run a memory diagnostic and return their faulty systems to be repaired?
Alternatively, will it show that all makes of DRAM are equally crap, because the industry has been pushing the performance envelope too far and fast without properly testing the worst edge and corner cases? (Crucial/Micron, Samsung, here's hoping you will gain the right sort of publicity out of this).
The world really does need ECC to be available for professional grade notebook and desktop systems. Intel, by all means charge a few dollars more for ECC-supporting chipsets, but it was really stupid to decide that nobody using a notebook or desktop system needed ECC.
Killer cost omitted. It requires that you support the weight of the thing out in front of you while you use it. To say nothing of the weight of both your arms. Ergonomic FAIL. (BTW, it's probably illegal for a business to ask its staff to use such a device, under the EU VDU regulations).
If you doubt me on this, try holding an ordinary keyboard in the illustrated pose for a few minutes, and then consider what several more hours would be like.
All radiation is damaging to biological cells
The sun emits vast amounts of radiation, and not all of it is harmless. But if it went away, how long do you think we'd last? No sunlight (radiation) = no life.
As for traces of RF and Microwaves ... I'd suggest you quit using any equipment that allows you to post messages on the internet, and retire to "enjoy" a 15th century crofter's lifestyle in the Outer Hebrides. Or is ISIL more your style?
I suspect that you can perform a full write of 1 TB internally by sending a lot less than 1 TB of data to the SSD down the bus. These things are internally structured into pages much larger than a classical disk drive sector. Writing (say) one sector out of twenty at random probably forces at least one full write of the solid-state storage medium.
It's similar to doing small writes to a shingled HDD, but without the speed penalty.
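The arithmetic behind that suspicion, with illustrative page and sector sizes (assumed for the sketch, not taken from any particular drive):

```python
# Illustrative write-amplification arithmetic.
SECTOR = 512           # bytes sent per host write
PAGE = 16 * 1024       # flash is written a whole page at a time
sectors_per_page = PAGE // SECTOR   # 32

# Worst case: each random sector write lands in a different page,
# so every 512-byte host write rewrites a full 16 KiB page internally.
write_amplification = PAGE / SECTOR
print(write_amplification)  # 32.0

# So touching only 1/32 of a 1 TB drive's sectors, scattered at random,
# can rewrite roughly the whole medium:
host_bytes = 1e12 / sectors_per_page      # ~31 GB sent over the bus
internal_bytes = host_bytes * write_amplification
print(internal_bytes)                     # 1e12 -- a full internal write
```

Real controllers buffer and coalesce writes, so the true amplification sits somewhere between 1 and this worst case.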
But the cost of SSD continues to go down and the density up. At some point spinning rust will die.
Beware of extrapolating. Maybe you are right, but you are assuming it will be possible to get several terabytes onto just a few chips, in the way that at present they can get a quarter-terabyte onto just a few chips. Since they're already close to the physical limit of how small a flash memory cell can be made, and also close to the limit of how closely such cells can be packed on a chip, you have to go to a new technology. 3D flash may be the answer, but does it scale from a few tens of layers (as currently shipping) up to many hundreds of layers? (That's what's needed to reach 6 TB from 250 GB at constant price -- and that's making the optimistic assumption that adding 3D layers increases production cost at a much slower rate than linear.)
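Rough numbers for that layer-count extrapolation (the 32-layer starting point is an assumption, standing in for "a few tens of layers"):

```python
current_capacity = 250e9   # bytes on a few chips today (from the post)
target_capacity = 6e12     # the 6 TB being hoped for, same price point
current_layers = 32        # assumed: "a few tens of layers"

scale = target_capacity / current_capacity   # capacity multiple needed
layers_needed = current_layers * scale       # if extra capacity comes
print(scale)                                 # 24.0   purely from layers
print(layers_needed)                         # 768.0
```

Several hundred layers from a ~32-layer starting point is exactly the "many hundreds" jump the post is sceptical about.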
Meanwhile, I suspect that 5 TB disk drives will continue to fall in price until they reach the same price point that every other size of disk drive previously shipped has reached. That's about £30 for desktop-grade and about £70 for server-grade. The higher prices on larger drives are manufacturers recouping their R&D costs while the market will support a premium price.
Meanwhile, when HAMR goes into production, "spinning rust" will scale to tens of TB per drive. Spinning rust is nowhere near its physical density limit in the radial direction; it's just that you can't address it radially for writing with a purely magnetic head.
Surely the analogy is to software: a one-off kluge, versus properly re-usable code. This is properly re-usable hardware.
As we all know, the one-off kluge has its place, if it's really one-off, and if the cost of writing it to be general and re-usable is significant extra programming hours. Here, the cost of doing the hardware right is, er, about £40. An absolute steal, I'd say. Provided it works (see review).
It may also save you (say) £30 every time you accidentally fry a tiny little interface card costing £3 rather than a whole RPi costing £33. Kids (even more so than adults) learn by making mistakes; you'd best be able to afford them doing so.
So ... it's an Israeli company that has wilfully compromised the cyber-security of the state of Israel? (and also its most powerful ally, to say nothing of the rest of the world). Do they jail such people, or do they just shoot them?
More to the point, why can't we be offered an 80 GB SLC drive for the price of a 240 GB TLC (consumer grade) drive? Because for many applications, speed and durability are needed, and another 160 GB is not.
I wonder if Memristor tech will be equally fraught during its first few iterations?
Or Comedia (Comedy, Farce)
Or Komodo (Dragon, big lizard with lethally septic teeth, that bites you and then waits for you to rot to death).
You are not being paranoid enough.
Add, anyone using technology licensed from Komodia, openly or covertly.
Add, anyone using the same technique, without having licensed it from Komodia, and without having disclosed what they are actually up to.
The real lesson is that SSL is really, truly, deeply flawed, and that it's a case of "broken by design" rather than "broken by accident".
The thing is, a version number should mean something. It should give the user an inkling of what they'll be getting. Personally, I'm a big fan of x.y.z (foo) where x is a major revision or rewrite, y is a minor revision, z is a bug fix and foo is the build number. I don't care how big the numbers get. I'm a big boy. I can count quite high.
Thing is, who decides what is major? This scheme may work for an application, but for a kernel? One for which much of the code is optional, and may not even be compiled-in (or even compile-able) on the system you are using?
Take btrfs becoming production-ready. That's pretty major, if you want to use some of the advanced features like near-infinite snapshots or de-duping. On the other hand if you are doing quantum chemistry on some massively parallel network of wafer-scale integrated CPUs, btrfs might be of little interest. Something obscure to do with locking in massively parallel environments, on the other hand ....
Personally, I'd have reserved 4.0 for "world domination", or at least for the day that Microsoft abandons its own NT-derived kernel and goes over to an open-source kernel derived from Linux.
Still, it's just a number.
A better idea would be for Microsoft to make it a condition of sale to OEMs that preinstalling any software which changes the root certificates that Microsoft distributes is a breach of T&Cs. Or even better, to make it a breach of T&Cs to preinstall any software at all, other than that explicitly requested by the purchaser. Or to insist that every system comes with a DVD that will reinstall to a Microsoft-only configuration, so every user can do what well-informed corporates routinely do: nuke and reinstall from trusted media on receipt.
Failing which, governments should legislate against preinstalled software that makes privileged changes to an operating system or that is otherwise non-trivial to cleanly un-install.
Wonder if there's any chance of a class action against Microsoft, for not taking any steps to pre-emptively avoid this disaster?
Yes, the "Windows tax" rankles with me too, but a heck of a lot less than the implications of this particular bit of brain-dead slimeware.
Superfish, founded in 2006, is a small company based in Palo Alto, California
Of course, the folks at Superfish will likely just get a wrist slap for this while individual white hat hackers often get jail time
On the other hand, they still have the death penalty for corporations, even for quite small infringements. One can reasonably hope that pretty soon, once the class actions get started, the first quote above will have to be modified to read
... was a small company based in Palo Alto, California
The other intrusive thought I keep having, is did any part of the Cthuluesque entity that is the US government have anything to do with this, and if so, why?
There are people who will buy them because they can, and then insist that they can see/hear the difference.
You'll see the difference. On a 1024-pixel-wide display of an image containing a very small feature, a quarter of a pixel on a side, that feature will at best make the single pixel containing it one sixteenth brighter or darker ... which you won't (can't) notice. The human eye is far more responsive to sharp step changes in brightness (edges) than it is to slight variation.
On a 4k screen it'll be a bright or dark or different-colour "spark" pixel which you can notice.
If it's not obvious what the pixel represents (there will be a line of them for a line feature), you'll zoom in on your model to see it better. But if you can't see it at all, you won't know there's any reason to zoom.
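The pixel arithmetic above, as a quick sanity check:

```python
# A feature a quarter of a pixel on a side covers 1/16 of that
# pixel's area, so it shifts the pixel's brightness by ~1/16 at best.
feature_side = 0.25                 # fraction of one pixel's side
pixel_fraction = feature_side ** 2  # fraction of the pixel's area
print(pixel_fraction)               # 0.0625

# Quadruple the linear resolution (roughly 1024-wide -> 4k) and the
# same feature now fills a whole pixel's worth of area:
pixel_fraction_4k = (feature_side * 4) ** 2
print(pixel_fraction_4k)            # 1.0
```

Which is why the feature goes from an invisible 6% brightness nudge to a visible "spark" pixel.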
If you went through part of your childhood with slight uncorrected short sight, you'll remember the sudden impact of reality when you put on your first pair of glasses. Leaves! Raindrops! Stars! A really high-resolution screen displaying really high-resolution imagery will be similar.
It's the sort of thing architects in particular will love.
It's a complete unknown what happens when life-forms with different operating systems come into contact. All known life is based on DNA or RNA with 3-base codons and a small common set of useful amino-acids, and there is a large amount of other commonality in biochemical operation across most of our life.
So when we meet ET with mutual good intentions, our bacteria and theirs will decide the issue. Possibilities from optimistic to pessimistic are (1) our bacteria can't eat them and vice versa, (2) our bacteria eat ETs but theirs can't eat us, (3) vice versa, and (4) mutual complete destruction. (There's also (5): our sort of life is near-universal, because its evolution is heavily favoured by the laws of physics and chemistry over any other possibility).
I suspect the worst case is most likely. There are bacteria that can eat just about anything that is capable of yielding energy when it is dismantled, and our defences against being eaten are highly specific to the "operating system" that all Terran life shares.
Actually, switch-mode PSUs run a lot faster than 20kHz these days. The main reason is that the higher the frequency, the more power can be transferred through a smaller mass of (ferrite) transformer core, and then smoothed back to DC using smaller capacitors. The upper limit is approximately where the extra power lost in the power transistors while they are changing state starts to exceed any economic benefit of making the power supply less massive.
A long time ago I repaired what must have been pretty much the first ever switched-mode power supply (a 20A bench power supply using OC42 - Germanium! - power transistors). It switched, very audibly, at a few hundred Hz.
Incidentally, the output of a typical switched mode power supply is very poor for analogue audio use. Digital circuitry such as a computer doesn't care about tens of millivolts of ripple on its power, and just because the oscillator runs at a MHz doesn't mean it can't be (and is) modulated at lower frequencies by (for example) AC line noise.
You fight skin effect by making a cable that's as much skin as possible. Lots of strands of very fine mutually insulated wire bundled together in parallel, rather than a thick solid core wire.
You can buy loudspeaker cables like this, although I doubt anyone could hear any difference in a proper double-blind test. Where the problem is moving large wattages at a low-ish radio frequency, this approach does actually work (ie, your cable merely gets warm, rather than melting). Co-ax cable works better at higher RF frequencies, and by the time you arrive at microwaves, you do better with no wire at all.
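The standard skin-depth formula makes the point numerically (the formula and copper's resistivity are textbook values, not from the post):

```python
import math

# Skin depth in a good conductor: delta = sqrt(rho / (pi * f * mu0))
rho = 1.68e-8             # resistivity of copper, ohm-metres
mu0 = 4 * math.pi * 1e-7  # permeability of free space

def skin_depth(f_hz):
    """Depth (metres) at which current density falls to 1/e."""
    return math.sqrt(rho / (math.pi * f_hz * mu0))

print(skin_depth(50))    # mains: ~9.2 mm -- ordinary cable is "all skin"
print(skin_depth(20e3))  # top of audio band: ~0.46 mm
print(skin_depth(1e6))   # 1 MHz: ~0.065 mm -- hence fine-stranded wire
```

At audio frequencies the skin depth is comparable to the radius of ordinary speaker wire, which is why the exotic multi-strand cables make no audible difference; at RF it's fractions of a millimetre, which is where the construction genuinely matters.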
We used to be able to work out how big the changes were from the numbers
That may work OK for an app. It won't work for a general-purpose operating system. The thing that is most likely to be immediately visible to a non-technical user is a change in the user API causing existing programs to break (which is something that Linux tries very, very hard to avoid). The next most likely is a new bug in a facility that you are using, but that's hardly something that they wanted to ship!
Apart from this, who decides what is a big change? A complete re-working of the code for massively parallel SMP systems may be scarcely visible to a person with a single 4-core CPU (and even less visible to someone working with a single-core embedded peabrain). A new filesystem (for example Btrfs) may be of huge interest to some, and of no relevance whatsoever to others that aren't intending to use it. And so on.
The switch from Linux 2.x to Linux 3.x was supposedly arbitrary, but did in fact coincide with a major architectural change that the kernel developers had been working towards for the best part of a decade. What do you mean, you didn't notice the final demise of the Big Kernel Lock? Well, actually, you weren't supposed to. Its removal was a success. Cause for celebration by kernel developers (and maybe, the reason for the big version number change), but a big yawn for everyone else.
So, is there any reason for Linux kernel going from 3.x to 4.0 other than (maybe) the next release after 3.99? Well, just maybe ...
1.x ... a developer / enthusiast system
2.x ... production-ready, large-scale SMP handicapped by the Big Kernel Lock
3.x ... Big Kernel Lock finally gone, scales from embedded peabrains up to huge datacenters.
What next? I'm hoping for
4.x ... Microsoft abandons its own OS kernel, adopts Linux.
I suppose there are a few environments where that's OK, but I hope that this feature can be disabled.
Imagine what happens if a baby or child gets hold of it. Or even a cat. Or if a piece of grit gets into one of the buttons.
If the PIN is eight or more digits, there's little practical reason to self-destruct. Chances of successfully entering enough random keys at one per second are too small to matter.
In the cosmic scheme, these are very very very very faint.
Unlike GRBs (gamma-ray bursts). One of those in our galactic neighbourhood could all but sterilize our entire galaxy. We're assuming these are natural phenomena, but are we sure?
And then there's whatever mystery created the "Oh My God" particle (3x10^20 eV, or fifty Joules!). Whatever made it must have been within our galaxy, because its half-life before interaction with a cosmic background microwave photon precludes any extra-galactic origin.
Then again, we don't see Mars, or any other planet flipping its axis of rotation all around
"Flipping" would take many hundreds of thousands of years, which in evolutionary terms is fast.
Uranus is pretty much half-flipped right now.
He's also contributed at least two more scenarios:
"Saturn's Children" / "Neptune's Brood". Our successors are AIs which we created as our slaves. We then go extinct (as slave-owning societies always have done in the past, on a non-global scale). They're out there, colonizing the galaxy and trying to reincarnate homo sapiens (from DNA codes, with a degree of success), for quasi-religious reasons!
Accelerando, in which human beings are supplanted by evolved digital corporations which no longer need to preserve their human customers. These denizens of "Economics 2.0" turn the entire solar system into computronium (solar-powered computing substrate) and don't travel, because they always seek the most bandwidth-rich environment, ie nearest to Sol.
The first pair of books are amusing and less implausible than most interstellar SF. Accelerando will haunt your imagination. Both recommended.
We won't ever do interstellar travel as biological human beings. The speed of light and the vulnerability of mammalian life to interstellar radiation guarantee this.
However, AI or human uploads into Silicon might not be so constrained. They can be radiation-hardened, and can slow down their clock-rate to make the subjective speed of light seem faster by orders of magnitude.
It's also possible that other forms of bio-life might evolve with a slower and less radiation-sensitive chemistry. Some trees live 3000+ years. There's a fungus in the USA that's at least 100,000 years old (also the largest, most massive life-form yet discovered on Earth). Perhaps elsewhere, there are intelligences that live for many My, for whom a 30ky interstellar journey wouldn't seem impossible.
But back to the Fermi paradox - where are they? (Just possibly: out in our Oort cloud, living slowly and quietly. Or here on Terra: we call them fungi, they think too slowly for us to consider them sentient. When they get around to noticing us, they might decide we're a plague and do something about it ...).
Furthermore, they would probably switch to digital equally quickly and then start encrypting everything routinely, for commercial reasons if not for anything else. And a well encrypted transmission looks pretty much like random noise.
That's any well-coded signal, ie coded to make efficient use of available bandwidth. The main difference between encryption and "mere" efficient coding, is whether you publish the decoding algorithm and key(s), or not!
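A quick way to see this is to measure byte-level entropy: plain text is far from random, while efficiently coded (here, merely compressed) or random data heads towards the 8-bits-per-byte ceiling of noise:

```python
import math
import os
import zlib
from collections import Counter

def entropy_bits_per_byte(data):
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

text = b"the quick brown fox jumps over the lazy dog " * 200

print(entropy_bits_per_byte(text))                 # low: plain English
print(entropy_bits_per_byte(zlib.compress(text)))  # higher: efficient coding
print(entropy_bits_per_byte(os.urandom(8800)))     # ~8: looks like noise
```

Compression is a crude stand-in for efficient channel coding, but the trend is the point: the better coded (or encrypted) a signal, the less structure there is for an eavesdropper light-years away to latch onto.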
The orbit of any planet in a solar system with more than one planet is chaotic. (Mathematical fact, mathematical definition of chaotic). Given infinite time, all but one planet will inevitably be ejected into interstellar space (or less likely swallowed by its sun or collided with another planet).
Fortunately for us, "enough time" for our solar system greatly exceeds the lifetime of Sol. Also the future orbit of Earth can be predicted to remain much as it is today for the next 100My at least, given the accuracy of the best astronomical observations of the rest of the planets in our system and inverse-square gravitation.
However, in a solar system with a Jupiter-mass planet in a very eccentric orbit, smaller planets would not remain in the Goldilocks zone for the (assumed) billions of years it takes for advanced life to evolve.
A moon stabilizes the axis of rotation of a planet. Without one, the axis of rotation is unstable, and sooner or later will pass through the plane of the planet's orbit (on a timescale of under a million years). That's too fast for anything but mono-cellular life to evolve to cope with a planet that "suddenly" no longer has a day-night cycle (ie, one where the whole planetary surface is like our poles: half a year of night, then half a year of day).
Don't know how large a moon is needed, but no moon at all is no good at all.
Can Kepler identify how many planets have moons?
Super-advanced aliens are using comms tech so advanced that even though their signals (and space stations / ships) are all around us, we can't detect them
Super-advanced? You mean the average modern USAlian, I think.
What would be detectable from many light-years out is radio broadcasts using 20th century modulation (AM, FM, SSB etc.), and radar-illumination transmitters. We're already moving away from these. I anticipate that by 2100 broadcast radio will be extinct. Civilian radar may have gone the same way (replaced by GPS and active location transmission by planes to ground control through an evolved internet). That leaves defence radar, and maybe military stealth technology will have rendered that obsolete as well. (Also taking an only slightly longer view of things, either world peace will render defence radar obsolete, or world war will render advanced civilisation obsolete).
Cellphones, wifi etc. are (or rather, will be) undetectable from many light-years out. An efficiently coded signal is almost indistinguishable from noise, absent knowledge of the coding. Also the radio power per channel is at most two watts (usually more like two milliwatts) rather than the megawatts which Radios Moscow and America used to blast out. As the cells get smaller, so do the wattages.
Assuming technology develops along similar lines elsewhere, the era of accidental long-range interstellar signalling probably lasts for about a century, whether the civilisation survives for aeons or not. Which is as good a reason as any why we haven't spotted (another) one yet.
Microsoft acted as if it didn't matter at all. And now they're saying it's not important because it's too obscure. Haven't they learned anything from Intel's experience (a long time ago) with the FDIV bug? Or for that matter, various auto companies' experiences of what happens when they ignore "bugs" in cars on grounds such as they'd be too expensive to fix?
"Almost completely secure" = Insecure.
Historically there's a clear divide between spying ("everybody does it") and sabotage.
Spying involves only small and very careful changes to a target system to compromise its security while leaving its primary function unaffected. Thereafter, access is read-only. Anything that causes the target's primary function to be damaged draws attention to the spying, thereby defeating its purpose, as well as crossing the line into sabotage.
Sabotage uses compromised security to damage or destroy a compromised system's primary function. It's as much a hostile act as using explosive devices on another state's territory. Unlike spying, it is not tacitly accepted as something that everybody does, other than during a war.
On this basis, what the USA (claims it) did to NK's systems is normal peacetime behaviour for a nation state. What NK (allegedly) did to Sony is not.
 Although it occurs to me that the USA and NK are not actually at peace ... the Korean war ended in a truce but no peace treaty has ever been signed.
I expect that great fun will ensue the next time someone's airbags go off "for no reason" and one of these dongles is present. The victim's lawyer will sue a car company but disclose that an insurance dongle was plugged in. The car company will countersue the insurer (with heavyweight lawyers). The lawyers will get rich. The victim will probably get some compensation. I expect (or rather hope) that it's the insurer's no-security dongle that gets the blame.
I am seeing more and more reasons for driving around in an old car (pre-CANBUS).
It's not a battery problem so much as an infrastructure problem.
A 200 mile range becomes OTT and 100 miles is all one would really need, if electrical charging facilities become ubiquitous. (Ie, guaranteed at every parking bay in every supermarket, mall, workplace, city street, home or visitor attraction).
At the moment, things are like they probably were in the early days of IC motors, when finding somewhere that sold gasoline could not have been taken for granted. (And at 20? 10? 5? mpg, I don't imagine that the range of a mark one car was much to write home about either).
Chicken, meet egg.
Now we have white LEDs, it's impossible to turn the lights off on most new cars!
And the Government are probably already considering it
If you carry a mobile phone you are already doing everything you fear. They know whose phone it is, where it is, where it's been. It's quite likely they can use it to bug you. Even if you try to conceal your ownership of it, they can still cross-reference it to everyone you ever called with it and find out who you are with a minimum of further interrogation of their records.
A transponder could be far more anonymous than a phone (and cheapest possible would not be personalized).
Yes, one needs camera input as well to deal with things like fallen rocks and pedestrians that don't come with transponders. (I rather anticipate a future with self-driving cars when sane pedestrians and cyclists *will* carry transponders, possibly integrated into their mobile phones).
But consider optical illusions. Despite several hundred million years' evolutionary honing, there are still situations where either you cannot quickly work out what you are seeing, or you totally mistake what you are seeing. (The latter largely accounts for the "invisibility" of pedal cyclists to car drivers. "Think bike" has limited effect on the lower levels of our vision processing. I've seen two cyclists collide because "invisibility" applies equally to the cyclists themselves.)
Do we really think that a computer vision system will be better than the human vision system after a mere few years' development? I doubt it.
My desktop is a dual-core gen 4 Celeron and is fast enough for everything I need it for. Some in family have commented that it's "very fast" -- that's because it has an SSD not an HD. CPU was chosen for fanless operation, for which purpose these new 15W parts represent a big improvement.
Secondly, what sort of person locks themselves IN the house? Very dangerous if you have a fire.
Anyone who has surprised a burglar at 4am. (Lucky for me he wasn't the violent type, just shoved me out of the way and ran).
Since then I have always locked the five-lever mortice lock at night. For fire safety I leave the key in the lock (it's attached to the door by a chain to prevent somebody pushing it out from the other side of the keyhole and retrieving it). Also I suspect if the fire alarm goes off, my best move is to exit pdq via the bedroom window, not to open the bedroom door at all.
Sports injuries are by definition, "accidents"
Hardly. The joint damage caused by traumas (Football, Rugby ...) or repetitive strain (Running, Tennis ...) is an entirely predictable consequence of the nature of the sports.
And probably very expensive to the NHS, since the consequences are likely to be early onset of arthritis needing rest-of-life treatment, but not an earlier death.
Nevertheless, I'd defend both the principle of equity, and the right to play sports (along with the right to overeat, to not play sports, to inhale tobacco smoke in private, etc. etc.)
Wonder if that's true.
There's a similar UK tale about the bored traffic cops who pointed a speed gun at an RAF fighter doing low-level attack training. The radar gun stopped working. Permanently. Later they were advised, over a pint, "don't do that".
"Well, hypothetically, if the pilot had left the electronic countermeasures switched on, it would retaliate with a targeted electromagnetic pulse milliseconds later ..."
"... so that's why our speed gun failed? ..."
"... and next to that switch there's another one that enables the non-electronic countermeasures. Homing missiles ..."
"Time for another round?"
I'll assume that this is intended as a sarcastic mickey-take of a typical loon's ranting, rather than being the real thing.
... power that was thousands of times greater than the piffling force of gravity
Not so much comparing apples and oranges, as apples and furious green ideas.
BTW if you learn how to do a meaningful comparison between the electromagnetic force and the gravitational force, you will discover that the electromagnetic force is of the order of ten to the power forty times stronger, rather than mere thousands. That is one of the most surprising facts about the universe we live in. Nobody has the faintest idea why, other than it being quite clear that if it weren't so, we wouldn't be here.
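The comparison is easy to check for an electron-proton pair, since both forces fall off as 1/r^2 and the distance cancels in the ratio (constants below are standard values; the answer lands near that power of forty):

```python
# Coulomb attraction vs Newtonian gravity between an electron and a proton.
k  = 8.988e9      # Coulomb constant, N m^2 / C^2
G  = 6.674e-11    # gravitational constant, N m^2 / kg^2
e  = 1.602e-19    # elementary charge, C
me = 9.109e-31    # electron mass, kg
mp = 1.673e-27    # proton mass, kg

# F_coulomb / F_gravity = (k e^2 / r^2) / (G me mp / r^2); r cancels.
ratio = (k * e ** 2) / (G * me * mp)
print(ratio)      # ~2.3e39
```

Pick two protons instead and you get nearer 10^36; the exact power depends on which particles you compare, but "tens of orders of magnitude" is the robust conclusion.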
Karbonn Sparkle UK (Amazon) price as linked is £130, which is more like $200 than $100. At that price why would anyone buy one of these?
Feast your eyes on some vacuum-tube portable radios here: http://www.antiqueradio.org/tubeportables.htm
They're a bit larger than a modern DAB radio, and when loaded with LT and HT batteries, considerably heavier. But in essence ... pre-transistor "transistor" radios.
And eighty-something years later, kids are going to school hungry.
Unless you're going to successfully argue that the statistics in the article are wrong, that is because their parents have a poor set of priorities and are spending their income on something other than properly feeding their children.
Throwing more money at such parents is not the answer. Indeed, it may be part of the problem (for example, if a parent has an addiction). No, I do not know what the answer is.
Pure speculation on my part - the original vendor lock-in?
In other words, was there a time when the company that supplied the rifles also supplied the ammo? If that was the case, it would not have been in their interests to make the ammo the same size as a competitor's ammo. And once the funny size was widely used, it would tend to self-perpetuate.
It isn't just rifles. Another example is railway track gauges. And what about the size of what is now a standard (UK) housebrick?
Edit - I've just realized there's also a battlefield reason for funny sizes. You don't want the opposition using your own ammo against you, if they capture some! So it makes perfect sense for each army to adopt its own size, different from any potential enemy's at the time the rifle bore is chosen. Even better if the size is sufficiently similar that some stupid soldier tries using captured ammo and blows himself up with it.
I must admit that "Unknown" is a little bit worrying
Seems the natural answer for a baby AI (and possibly even for a mature Culture Mind). AFAIK no true AIs yet exist, but it makes sense to have this answer available for when they do.
You're missing something.
Say you used the same password for your bank and for some other e-commerce website. (Yes, you shouldn't, but many do!). Say the other site gets compromised and a list of names, addresses and passwords finds its way to the crims.
With password-only banking, they'll likely hack into your account.
With two-factor authentication using an app on your mobile, they'd also have to steal your phone before they can try to hack in. Which they can't do unless they are in your vicinity. If the hackability of random accounts from the stolen info is 5%, they'd have to steal twenty customers' phones to get into one bank account. I doubt that's feasible.
The other way around, my bank has sent me a credit-card sized gizmo which I have to use to generate one-time authorization codes before I can transfer money. (There's an app alternative, which I don't yet feel any need for). If someone breaks in to my home and steals it, they won't be able to steal my money because they won't know my password.
It's a bit of added hassle but on balance I'd prefer it if all banks did this.