Claim that the monkey never made an informed decision to allow PETA et al to represent him.
I can see one very useful application for this in (true) RAID controllers and generically as a persistent write cache for slower SSD and spinning disks. These applications already exist using different technological means to achieve their goals, and I guess that this new form of RAM will streamline a lot of disparate uses and cause some interesting unifications.
Examples of what I'm talking about:
* "real" RAID controllers have battery-backed write cache that can ensure data consistency by flushing the cache after the system power failure is sorted out
* likewise, many (or at least some) SSDs use a faster form of flash as a write cache
* similarly, hybrid SSD/HDD drives use the faster flash as a write cache (and maybe read cache, too)
* using ext4, you can create an external journal that's stored on an SSD (or similar fast, persistent storage) to get around 2x to 3x better write performance (and still maintain quick crash recovery)
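For the curious, the ext4 external-journal setup mentioned above boils down to two commands. Here's a hedged Python sketch; the device names `/dev/fast0` and `/dev/slow0` are made-up placeholders, this needs root and real block devices to actually do anything, and the `run` parameter is only there so the commands can be inspected without running them:

```python
# Hypothetical sketch of creating an external ext4 journal on a fast
# device; /dev/fast0 and /dev/slow0 are made-up placeholder names.
import subprocess

def make_external_journal(journal_dev, data_dev, run=subprocess.run):
    # Format the fast device as a standalone ext4 journal device...
    run(["mke2fs", "-O", "journal_dev", journal_dev], check=True)
    # ...then build the filesystem on the slow device, pointing its
    # journal at the fast one.
    run(["mkfs.ext4", "-J", f"device={journal_dev}", data_dev], check=True)
```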
Basically anything that includes any sort of write-ahead log (including databases) should get a very nice boost in write speeds, at least for bursty writes. You'll always be limited by the write speed of the slowest device in the chain if you're doing sustained writes, but with a large enough cache many workloads will never fill it up.
I hope that this ends up being just another resource (like RAM and SSDs) that operating systems will be able to arbitrate the use of, rather than being closed off within specific bits of hardware like disks and such.
Imagine, if you will, instant communication of any amount of data over any distance.
Adnim was the first in here to contradict your view that it opens up the idea of instant communications, but didn't explain why. I'll give it a shot.
First, there's the concept of "spooky action at a distance" whereby measuring one half of an entangled pair instantaneously(*) causes the other half to settle on a value. This all happens in the quantum realm, where the pair exists in a "superposition" of states before measurement and measurement "causes"(**) it to collapse into a single state.
Where it gets tricky is trying to use this for communication. If Alice and Bob have some method of producing a stream of entangled pairs and transporting both halves, you might think that some information can be encoded in each pair that the other party can read. In fact, that's not the case: the value of a measurement at either end has no intrinsic informational content at all. The problem is that if you try to incorporate some information into the pair, then that counts as a measurement and the system as a whole collapses into a classical, non-quantum state.
But that doesn't mean that quantum effects can't be used as part of a communication scheme. The idea here is that Alice and Bob both set up similar apparatus for measuring some property of the entangled pair. Usually entangled photons are used, and the measuring apparatus involves using polarising filters that can be rotated. When the photon is measured at the receiving end, this measurement has no intrinsic information content since the receiver has no way of knowing how the sender set up their polarising filter. It's only by sharing their setup and results after the experiment that Bob can know what Alice was trying to encode, and that communication has to be purely classical and so is limited by the speed of light. Each trial like this has a 50% chance of not providing any information at all due to the random alignment of Bob's filter, so they need about 2n trials to send n bits. Also, 2n bits in total need to be sent along the classical channel (one per trial, to compare filter settings).
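This isn't real quantum mechanics, but the 50% bookkeeping above can be sketched with a classical Monte Carlo: Bob's randomly aligned filter matches Alice's about half the time, so roughly twice as many trials as bits are needed (basis names here are just illustrative labels):

```python
# Classical Monte Carlo sketch of the basis-comparison bookkeeping:
# only trials where Bob happened to pick the same filter alignment as
# Alice carry a usable bit, which is about half of them.
import random

random.seed(42)  # fixed seed so the run is repeatable

trials = 10_000
useful = 0
for _ in range(trials):
    alice_basis = random.choice(("horizontal", "diagonal"))
    bob_basis = random.choice(("horizontal", "diagonal"))
    if alice_basis == bob_basis:  # kept only after classical comparison
        useful += 1

print(useful / trials)  # ~0.5: about half the trials are usable
```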
The benefit of "quantum" information transfer isn't that it's faster than light, but that it's impossible to eavesdrop without collapsing the state of the system before Bob gets to read the state of it. Plus, it's a validation that quantum effects are actually real.
* or nearly instantaneous, as far as we can measure, but definitely faster than the speed of light.
** "causes" is in quotes because this isn't classical causality. We can only strictly describe how measurement and collapse are related with a weaker "where this, also that" construct. This also opens up huge questions about how classical physics relates to the quantum world and what, in fact, the nature of things really is---the various Many Worlds interpretations arise because of this fundamental gap in our understanding of how quantum states collapse.
Rise up, unemployed Arizonans, McParadise is here!
You may laugh now, but if Bill Hicks is right and California slides into the ocean (like the mystics and statistics say it will), Arizona will have a huge amount of prime seafront property to get rich off.
I was wondering what the big deal was about nitrous oxide. I thought it was only really harmful to dentists (who partake too much).
Of course, I misread. N2O != NOx obviously. I am silly.
(Oh, and kids: just say N2O drugs)
Oooh. Only 0.2 commas. I think my budget can stretch that far.
[Lawrence of Arabia] is just so bloody long
And wide, too. A 2.20:1 aspect ratio in its original 70mm form, to be exact. A bitch to watch on an old 1.33 ratio screen, sans doute.
re "in shed"---huh? I just noticed this. Did you just steal my joke from last week (a pun on Slough) and take it as your own? For shame!
Aaaaand still with "SexyCyborg"'s bid over the Internet. Going once ...
The jellyfish, slightly perplexed
Jellyfish don't have brains, though. Just saying...
BOFH Moss was sure he'd sorted out the minor spontaneous combustion issue, but just to be sure he roped in one of the beancounters to activate it while he monitored from the safety of his Skype link. He lingered expectantly at the back just in case there was another "golf" incident.
So as you see, the echolocation system emits a burst from here and a 3D reconstruction is mapped onto the user's breast via a network of electrodes.
Intel's "Bra-Z-Air" also includes heating elements.
Just over 50 years after the original "Wonderbra", Intel pitches in with "I wonder what the hell they were thinking" concept.
Thanks to the magic of voice recognition, it opens with a simple "aBra cadaBra".
hackers defeat "chastity" mode in 5, 4, 3 ...
Smartphone-based bra unlock set to ~~completely eliminate~~ defer the awkwardness of groping around in the dark not having a fucking clue what they're supposed to be doing.
"Pinky, are you pondering what I'm pondering?"
I think so, Brain, but where are we going to find a duck and a hose at this hour?
I, for one, approve of new methods of getting women into technology.
Fully supports POKE and PEEK on mammary addresses
So that's what those SQUID things from those William Gibson novels look like.
The smart bra includes extra dangly bits for your dangly bits.
Don't tase me, bra!
It also does some sort of smart/IoT type thing. For no apparent reason.
New bra allows Red Dwarf fans to say "well twist my nipple nuts" without looking like a right tit.
I was NOT looking at your breasts. I was just checking out your cool smart bra.
Intel to outsource manufacturing to S3 (Silicon Support Services)
Smart bras today, teledildonics tomorrow.
We all know that fashion designers have really weird ideas---but wrapping tables in brown paper? WTF?
Hacker, Tailor, Solderer Mai
go low bra for a better chance to win
Sounding like you couldn't be bothered to look at the language ...
I generally like your rants, Michael, but seriously, what's with the ad hominem? The OP is clearly making a general point about new languages (not this one) and makes no bones about the fact that he's possibly being cynical and a curmudgeon (something you yourself mentioned in your OP).
So have an upvote for your original post, and a thumbs down for that one :(
barely more legible than INTERCAL? (and never as polite)
But of course INTERCAL will throw a fit if you're too polite, too.
Was called in to a factory to look at a PC doing CNC duties (probably not directly) for cutting up various wooden sheets to size. The PC was in the same area as all the cutting equipment and it was apparently being controlled by software stored on a floppy disk. The whole thing apparently had no IP rating. Needless to say, I had to tell them I couldn't fix it.
It's been a while since I've seen one of those.
The thing is, that functionality is trivially implemented by adding like two scripts to your system. Let's call them 'await' and 'provide' for the sake of illustration. The 'await' script blocks until some other part of the system calls 'provide' after setting up the matching service. If you want you can have a third script that does static analysis of the boot scripts to make sure that every 'await' has a matching 'provide' and that there aren't any dependency loops (or potential race conditions, perhaps). You could easily also put such dependency information in a comment section (like upstart, I think) so that analysis is easier and quicker.
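A minimal sketch of that 'await'/'provide' pair, as one illustration of the idea (the flag-file directory, polling approach and function names are my own assumptions, not any real init system's API):

```python
# Sketch of the 'await'/'provide' idea: 'provide' drops a flag file for
# a named service, 'await_service' blocks until that flag appears.
import os
import tempfile
import threading
import time

READY_DIR = tempfile.mkdtemp(prefix="svc-ready-")

def provide(service):
    """Mark a named service as available by dropping a flag file."""
    open(os.path.join(READY_DIR, service), "w").close()

def await_service(service, poll=0.05):
    """Block until the named service has been provided."""
    path = os.path.join(READY_DIR, service)
    while not os.path.exists(path):
        time.sleep(poll)

# Example: a background task provides 'network' after a short delay;
# await_service returns once the flag file shows up.
threading.Timer(0.2, provide, args=("network",)).start()
await_service("network")
print("network is up")
```

A real implementation would use something event-driven rather than polling, and the static-analysis pass mentioned above would just read the declared await/provide names out of each boot script.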
The problem is that systemd wants to take over your entire system and the supposed killer feature of faster boot times has become basically irrelevant to most users (thanks to suspend/hibernate and fast SSDs).
I know this is a bit OT (we're quite some time from the 5th of November, for one), but the word "bonfire" was originally "bone-fire". It might seem a bit of a pointless factoid except for (a) burning Guy Fawkes in effigy and (b) modern Irish still uses "tine chnámh" (literally "bone fire").
Anyway, I guess I'm just chiming in to support your regurgitation of obsolete/archaic/obscure words. You never know, it might end up giving them a resurgence in use.
Why ever not? I'm thinking that in the worst case you just manually set the time, do the signing and then reboot? The fact that you say that existing signed code will continue to work suggests that any safeguards around expired keys are on the signing end, and surely it's possible to get around any restrictions?
But then, great bastards steal (while those merely "good" borrow). Plus, the comedy is weak in the ESR.
only, mine's broke down ...
NAT makes for better privacy. The use of IPv6 without any NAT is likely to make each device in your site uniquely identifiable by its global address.
Sorry, but that's probably the #1 myth about ipv6. If you use SLAAC then the global address for a single host will change over time. See, for example, this page, which says (emphasis added):
IPv6 provides both a stateful and a stateless address configuration functionality. Stateful address configuration is similar to the existing DHCP functionality in IPv4. IPv6 also supports Stateless Address Auto Configuration (SLAAC). In this mode, nodes can automatically configure their network configuration by generating a local IP address, locating neighbors on the same local segment, locating a default router, and even generating a globally routable address using the prefix supplied by the router through ICMP messages. All of this occurs without any user interaction. Another interesting note is that IPv6 provides the ability to easily renumber these global addresses via the routers on the network instead of configuring the hosts individually. Securing these interactions is definitely something to consider when deploying IPv6.
Do you have to configure a /64 as a routed subnet?
Are you sure you can't be more granular than that?
That link you gave was too long for me to read (quickly), but from what I understand:
* You could use a smaller subnet, but it's definitely not recommended. The problem is that ipv6 lets you do some neat automatic configuration at the "single user end LAN" router, but only if the address space it's managing is a /64. If your LAN space is smaller than that, then the Stateless Address Auto Configuration (SLAAC) mechanism won't work. Basically, you will want to use SLAAC even though technically you don't have to.
* ipv6 routing tables aren't significantly different from ipv4. You can still, for example, put in arbitrary static routes, but it's not the "ipv6 way".
* Edit: just to add another explanatory note, ipv6's natural subnet size is /64, while /56 is defined as the "minimal end site assignment". So (to keep things really simple, ignoring any special address spaces carved out of the global address space) there are up to 2**56 different "end sites", each of which can have 2**(64-56) = 256 subnets, each of which can have up to 2**(128-64) individual hosts.
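That arithmetic can be sanity-checked with Python's stdlib ipaddress module (2001:db8::/56 is the IPv6 documentation prefix, standing in here for one "end site"):

```python
# Sanity check of the /56-site arithmetic using the stdlib.
import ipaddress

site = ipaddress.ip_network("2001:db8::/56")
subnets = list(site.subnets(new_prefix=64))

print(len(subnets))                       # 256 /64 subnets per /56 site
print(subnets[0].num_addresses == 2**64)  # each /64 holds 2**64 addresses
```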
it's a really poor show when you've already run out by 9:30am on a Monday, too :)
The gin, yes, but running out of grenadine? It's non-alcoholic, isn't it?
Also, minor quibble... all the OpenWRT releases are named after cocktails. The splash screen (motd) when you log in has always given the recipe on any release I've ever used.
"I didn't really understand it, but it solved my issue, so I used it."
Sums up my entire career as a "developer" :(
There's a name for that ... https://en.wikipedia.org/wiki/Cargo_cult_programming
1. Did you get caught; and
2. Is there some sort of statute of limitations?
I found the whole field very interesting for a while. Not so much the basic idea of a virus (which is trivial) but more the ingenuity that some authors had in finding novel places to stash their code in memory, evade detection (like some viruses that would hook DOS or the BIOS interrupts to show infected files in their original, uninfected forms if resident) and especially polymorphic viruses (especially the Dark Avenger Mutation Engine).
I never used a BBS. I tended to use Usenet (VIRUS-L? All the 40Hex, 2600 and so on were also available) and a few key resources (Ralf Brown's Interrupt List, Patricia Hoffman's VSUM and IIRC, "The Programmer's PC Sourcebook/Handbook" by Thom Hogan). Was never part of any "hacker" scene. More of an academic interest with me. Kind of a strange hobby for a teen/twenty-something, but still, I learned an awful lot about PCs, the BIOS, DOS and x86 assembly from it.
They really were simpler times. Most viruses were no more than stupid and ill-advised pranks. Even PCs were kind of more like a novelty than a serious tool. When serious money started being involved (PCs becoming mission-critical and the Internet becoming a conduit for commerce and banking) the scammers and crooks took over. That was the end of the fun/innocence.
That brings me back. I used to use it with the MH mail client and exmh (which I think integrated with fetchmail). Despite exmh being written in Tcl/Tk, it was as nice to use as any "full fat" mail client I've used since.
The problem I eventually ran into back then was scalability. With the possibility of tens of thousands of emails, each with their own file, the mail directory could get really slow as the dir had to be rescanned for each sub-command. Mind you, that was in the days before the ext? filesystems had optimisations (automatic indexing or something) for huge directories like that. Even with the drawbacks, the maildir format still beat the alternative of a bunch of huge Inbox.bz2 files that needed to be decompressed twice when you were searching for something (once to find out which inbox file it was in, with no tools apart from bzless) followed by a second decompress when you issued the command needed to extract the particular mail you want.
Of course, if I'd foreseen the need to index mailboxes before archiving I could totally have used something like glimpse on them instead of torturing myself with slow searches.
Nowadays, of course, all that seems like an anachronism when Google or Microsoft will happily index everything automatically. That's good, of course, but at what price?