based on the example
They're really touting the benefits of a national ID database rather than the conduit. We should totally get ourselves one of those.
I think he's over-egging things for sure. I mean there's no mention of shifting [gears], hugging curves, burning rubber, [lug] nuts popping, [cam] shaft action, driving stick, sucking [diesel], the point of no return, or even throbbing pistons. I'm sure that if "salacious" was the goal, he could have done a much better job.
Donuts Carbon Nanotubes ... Is there anything they can't do?
Eadon has spoken the truth. You may downvote me now.
Anyone who speaks of himself in the third person deserves eternities of karmic hell (or at least lots of downvotes).
(ooh... see how I cleverly avoided that trap myself ^_^)
I assume it's fast enough to write data to the non-volatile part before the power dies away completely.
That's not a good assumption. Power failure while writing to an SSD can trash even data that wasn't being written at the time, because wear-levelling algorithms can effectively move random blocks around whenever you make a write. See "write amplification" on Wikipedia for a pretty good description.
You'd be caching the most-used data,
Alternatively/additionally, you'd probably find it useful to hold indexes in RAM, and implement some sort of ageing/caching algorithm that keeps new and frequently-used data in flash and the rest out on spinning disks. If you use a log-based structure for the flash storage and periodically rewrite out to disk (perhaps redundantly, depending on whether new indexing constraints are required) then you can optimise both reads and writes across all storage layers. Something like SILT or log-structured merge trees, but with spinning disks as the final storage layer, optimised to reduce fragmentation and extra seeks.
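Purely as a sketch of that ageing/caching idea (the class name, tier sizes and the simple LRU policy are all made up for illustration; SILT and real LSM trees are far more sophisticated), the hot/cold promotion logic boils down to something like:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a size-limited LRU 'flash' tier in front of
    an unbounded 'disk' tier. Purely illustrative."""

    def __init__(self, flash_capacity=4):
        self.flash = OrderedDict()   # hot tier: new and recently-used data
        self.disk = {}               # cold tier: everything aged out
        self.flash_capacity = flash_capacity

    def put(self, key, value):
        self.flash[key] = value
        self.flash.move_to_end(key)           # new data stays hot
        while len(self.flash) > self.flash_capacity:
            old_key, old_val = self.flash.popitem(last=False)
            self.disk[old_key] = old_val      # age out to spinning disk

    def get(self, key):
        if key in self.flash:
            self.flash.move_to_end(key)       # touch: keep it hot
            return self.flash[key]
        value = self.disk.pop(key)            # a cold read...
        self.put(key, value)                  # ...promotes back to flash
        return value
```

The point is only that "new and frequently-used in flash, the rest on disk" is a small amount of policy code; the hard engineering is in the log-structured layout and index compaction underneath it.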
New chip design would be needed anyway
I see lots of interesting comments here, your own being particularly interesting. So anyway, this is a response to quite a few of those posts...
I think that if we're going to see more of this sort of thing (storage that blurs the boundaries between RAM, flash and disk storage as well as the ability to completely power off components when not in use) then we're going to need a fundamentally different architecture to take advantage of it. This goes beyond just new chip design (where even today cores can be started up and shut down at a whim) and into having some sort of "power arbitration" bus, with the entire system backed up with a small, finite battery. For the instant-on/instant-off scenarios using flash as hibernate/sleep storage, you need to be able to guarantee that it's going to be able to finish writing the OS state data in case of loss of mains power. For the scenario of being able to, eg, keep power routed to the GPU while it's doing some computation task, but shutting down other non-essential stuff (but probably keeping, say, Ethernet alive to enable a kind of wake-on-lan feature) you probably want to be able to budget how much you can do while on internal battery power and also have the ability to suspend gracefully when you're approaching its limit. Not trivial stuff at all.
Of course, it's very unusual these days for us to have battery power built onto the motherboard (as opposed to being in an external UPS). If these devices/ideas become commonplace, though, we're sure to see many innovations in power management overall. I shudder to think of all the new failure cases when we stick in a new device (be it faulty or malicious) in machines in future, though...
Tesla - you do not win at PR by starting an argument with the media.
Or to paraphrase: "Never argue with someone who buys ink by the barrel". It's called Greener's Law, apparently, though I'd always thought it was a Mark Twain coinage...
Yeah, but the chance of winning the lottery is significantly worse than 1 million to one (and still they played...), and yet every other week you hear about someone winning it! Time to panic!!!
Damn! My pandigestory interlude just evacuated my nose. You owe me a new keyboard, sir!
The old ones are the good ones!
I'm so glad that the article wasn't about a really clever bunch of pygmies. Thank Heaven for small mercies, I say.
The pair of them. Cos maybe now they'll reconfirm Pluto's off-again, on-again status on the list of planets. Well OK, Orpheus and Hades it is then...
Interestingly, I read an article a while back about the US military working on building microbots that could be scattered over a battlefield to be used for gathering images and sussing the lay of the land. The software and radios that they had were capable of self-configuring into an ad-hoc mesh network, so that part of it should be easy to sort out, even if a significant fraction of the machines don't survive the landing or fail in some other way.
As Helena points out, though, these things aren't really of any use as roving devices. There's a limit to how small you can make remotely-controlled bots while still giving them useful locomotion and other practical sensors and actuators.
Still, I think the microbot idea could be pretty useful for future missions as a means of getting an initial idea of local terrain and even providing telemetry data for later, more fully-featured rover landings. The thought of sending an Internet to Mars is pretty cool too, especially if it can self-organise and do a kind of terrain "interferometry" (a fancy word for building a map from multiple viewpoints) locally instead of having to pipe everything back to Earth first. Think about it... Martian Internet! What's not to like about that?
I was a bit confused by this at first until I realised "unpredictable" was used in the sense of "No one could have predicted, in the first years of the twenty-first century, ..." Hooray for word-sense disambiguation!
Apple are on the down-slope. Samsung are on the up-slope
So you mean that it's plain sailing for Apple and that it's going to be tough going for Samsung? I'd have thought the opposite....
That's just a misinterpretation through censorship. What God actually told Adam and Eve was to F*** Off.
I always thought "go forth and Multiply" was more like a vague and inscrutable (as is His wont) warning against Adders.
Gotta love the unintended (I guess) hilarity of seeing the "Illicit phone rings in Sri Lankan inmate's back crack" article cheek by jowl with the "BYOD is a PITA" one ... or are the Reg editors having a bit of fun today?
Wall Street responded by pushing the social networking firm's shares to $150, significantly up from their IPO price of $45. By contrast, Facebook's shares still languish at around two-thirds of their IPO price, and those (un)lucky enough to buy into Groupon and Zynga have seen their holdings reduced to a fraction of their initial value.
I'll have you know that 2/3rds is also a fraction! Then again, so is 150/45, but I don't want to be too pedantic...
Stop splitting the site sections to look like different websites, for fuck sake.
It's worse than that. Even though we can all still click on the comments link to see the entire thread the way we've been used to, there are at least a few bad knock-on effects I predict will be the result of the new system:
1. We'll get many more first-post click whores who are more interested in just getting their words underneath the article than engaging in a conversation (ie, what the comments section is). It doesn't really matter how inane the first poster is, the fact that they're first means they have an advantage when it comes to click whoring.
2. Even those posts that are genuinely interesting and get lots of votes probably won't make very much sense in isolation since, again, it needs the full conversation as context (at least unless people change their posting styles to incorporate quotes so people know what the immediate context is)
3. We'll get lots of stupid/redundant replies in the comments section based on people attacking/defending comments that they read in the main article page without checking whether it's already been done to death in the main comments page (again with the idea of a "conversation"... get the picture?)
If the reg must have "promoted" comments (or "highly rated" as it's called now), you should either copy what Ars does and let the editors pick and choose what comments are promoted OR you add a new button to the current roster of thumbs up/thumbs down to indicate that a comment is both worthy AND front-page material (I suggest a thumbs up icon in front of a star). I'd hope that people would realise the point of the new icon is to flag posts that are particularly insightful and self-contained enough to act as a companion to the story, but who knows... you'd really have to try it to see how it works. At least it couldn't be worse than the new system.
I, for one, welcome our new comment overlords, etc...
Moderately strong tea, milk in afterwards
Actually, it depends. I prefer loose leaf tea to tea bags (*), but I drink more bagged tea due to the convenience. Anyway, if you make a proper brew(**) you need to scald the pot, put in the leaves and then put in the boiling water. If you've faffed around for too long between starting and pouring in the water, boil it up again then put it in the pot. It needs to be boiling(***). Then put it on a hot stove for about 4-5 minutes. For this type of tea, you absolutely need to put the milk in the cups first, otherwise you scald the milk. You might not believe this, but do a blind taste test and I think you'll be able to tell the difference.
For bags, you also need boiling hot water to begin with (and you may also wish to scald the cup first so it stays hotter, but it's not necessary), but from that point on you just leave it to brew by itself for a couple of minutes. Personally, I give it a stir (usually by grabbing the bag with my fingers and swirling it around, but you can be fancy and use a spoon) and remove the bag before adding the milk, but the other variations of this aren't wrong. The only thing I'd insist on is if you have to use a sweetener, then it has to be honey. Even then, sweetener is really only something you want after some kind of shock or a day's hard labour, in which case it's acceptable :)
* Barry's Tea is de rigueur; it's a blend, but mainly based on Assam (also called Breakfast Tea by many)
** Actually there are many "proper brews", but I'm talking about black (fermented) leaf tea here. That's not to say that things like green/gunpowder/matcha tea (which don't take kindly to boiling water at all), Oolong or even (horror of horrors) mugi cha (which actually isn't a "tea" at all) aren't all worthy beverages in their own right.
*** Incidentally, this is why it's hard to make a decent cup of black leaf tea at altitude since the boiling point is reduced. Green (unfermented) tea is much better there.
Until, after many months, you are forced to leave half a dishwashing tablet in the mug overnight to remove the build-up of tea scale which has reduced the volumetric capacity of said mug to the point of unusability.
Rinse the cup out in water, so that there's a dribble of water in it. Pour in some table salt and rub it over the tea stains. No need for a storm (or chemical warfare) in a teacup.
Re: Don't you know, the PI itself is the loss leader
Actually, according to a recent interview, Eben Upton said that everyone in the supply chain is making a profit. I assume that the distributors also take a small/tiny cut. Granted, like you said, they are using the Pi to entice you to buy items they're making more profit on, but technically it's not a loss leader if they don't make losses on the Pis.
Playing with a web server on my home connection isn't the greatest idea in the world
Learn how to set up a Demilitarised Zone (DMZ) on your network. Simply put, you make a separate subnet for your web server and use IP filtering rules (at your router) to allow machines outside that subnet to access it, but block all outgoing traffic (apart from responding to already-established connections initiated from other hosts). It can be as simple as three iptables rules: one default rule drops all forwarded traffic, one allows NEW connections to be forwarded to the DMZ box and a third allows packets that are ESTABLISHED or RELATED to be forwarded from the DMZ box. In practice, you'll probably want to do something more complicated, like doing NAT masquerading and port-forwarding at the router (so that all your machines appear to be at the same IP address and so that traffic coming from the Internet on port 80 is forwarded to the DMZ machine, respectively) so I can't give you the exact iptables commands or other firewall rules here.
Likewise, if you need to allow the DMZ machine to access certain services inside your network (that you can't or don't want to store on the DMZ machine) then you need to add more rules to allow it to make those connections. You'll want to lock down that service so that the DMZ machine can only do the bare minimum with it that it needs to operate without leaving a big hole in your security. Or better yet, migrate a minimal version of the service to the DMZ box itself or another machine on the DMZ subnet. There's always a trade-off between security (risk of the machine getting hacked) and utility (eg, you'd really like to be able to access your IMAP server) with any machine connected to the net, but a DMZ is a nice way, up to a point, to get the best of both worlds.
So basically, look up setting a DMZ for your particular router and learn about how to set up firewall rules in general.
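For the sake of illustration only, here's roughly what those three forwarding rules (plus the NAT extras mentioned above) might look like with iptables. The interface names (eth0 for the Internet side, eth1 for the DMZ) and the DMZ server's address (192.168.2.10) are assumptions; substitute your own:

```shell
# Default rule: forward nothing unless explicitly allowed.
iptables -P FORWARD DROP

# Allow NEW inbound connections to reach the DMZ web server...
iptables -A FORWARD -i eth0 -o eth1 -d 192.168.2.10 -p tcp --dport 80 \
         -m state --state NEW -j ACCEPT

# ...and allow ESTABLISHED/RELATED traffic to flow in both directions.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# The typical extras: masquerade outbound traffic so everything appears
# to come from one address, and port-forward inbound :80 to the DMZ box.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.2.10:80
```

Again, that's a sketch of the shape of the ruleset, not something to paste in verbatim; your router's firewall syntax and interface layout will differ.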
Other than that, your distro should have packaged the web server to be pretty secure already, such as running it as a user with restricted rights (nobody in Unix-based systems) and maybe it also gives you the option of running in a chroot jail too.
You'd be better off burning 3 billion pounds in a park and throwing a party where tickets are a tenner to watch 3 billion pounds go up in smoke.
But where are you going to get 3,000 KLFs?
Goldbach's Conjecture is only for even numbers.
Oops. I misread the OP, then. I guess that's what I get for reading these articles first thing after waking up...
So... first odd number that's not a semi-prime or a power of a prime? I think I may need some coffee ... and a calculator ...
run Android now?
when I admit that, yes, it was all just a big hoax all along. The cat is out of the bag.
Applytes are quite "special", though.
I used to have an Apple, but it fell in the shitter. I used to feel special. Nowadays I'm just another applostate with an Android phone ... and loving it!
Samsung make fridges like many other people, make TVs like many others, make everything like many others - even phones. There is nothing special about their stuff - it's ok but functional and you buy a Samsung today and you may buy a Samsung whatever next time but no real compelling reason.
They are happy selling a box today and banking the cash but it's not really long term - they sell an Android phone and pass you on to Google for future revenue.
According to the recent article here Samsung started "in 1938 as a company selling dried fish and vegetables, and moved into electronics in the late 1960s". OK, maybe the dried fish and vegetables were only "functional" and there was no real compelling reason to buy Samsung dried fish and vegetables the next time.
But consider that they're now one of the top companies in Korea (if not the top, judging by the fact that their top man is the richest guy in the country). Do you really think that the sort of business minds that brought the company from such lowly beginnings doesn't have long-term aspirations? Do you really think that they don't care about, eg, their Galaxy range, and that they'll happily "pass you on to Google for future revenue"?
Could you imagine Bill Gates or Steve Jobs having that attitude? Is it even conceivable that Samsung won't do all it can to keep and expand its customer base?
Finally, one non-rhetorical question: is it possible that Apple pays people to pollute discussions like this with drivel like yours? Absolutely.
Ditto. If only they could have fitted "squabbling" in there too (to go with squillionaire) it would have been perfect.
Well that's just fines.
Sorry to reply to my own post, but it just occurred to me that they could try for a charge of "contributory infringement" if they can prove that the user was uploading to a torrent swarm as well as downloading. In regular language I suppose that means that the torrent user is helping other torrent users to copy something illicitly. Makes a lot more sense than the argument I've seen in some cases that each pirated copy is responsible for some crazy number of lost sales due to the uploading part. I could never get my head around how they could even claim that with a straight face. Mathematically, if that were true, we'd have an infinite number of illicit copies for every one that was paid for.
I still don't get how two songs can generate three strikes, though.
poor taste in music ... just about anything by The Beastie Boys?
Hrrrmph! I resemble that insinuation! You should listen to "The In Sound from Way Out." You might be pleasantly surprised.
Hmm... it still doesn't compute. Aren't the warnings of the "cease and desist" variety? If so, how can she have received three warnings for downloading two songs? Did she like one of them so much she tried to download it again even in the face of the first two warning letters?
Also, on a different point, even if the RIAA (or equivalent) had rootkits on everyone's computers, would it even be possible for them to make the argument of uploading stick? I mean, technically, yes, anyone who's connected to a torrent will upload to some degree, but aren't most users (ie not the long-term seeders) just in it to download stuff? I really don't know how this separate uploading argument is supposed to work if regular users are just helping other torrent users to download.
If you had 50 marbles, numbered 1 to 50, there would be a 10% chance of selecting a specific desired number with any 5 random selections from a set of 50. So 43% is only four times better than random guessing. Does the software know what the valid 50 numbers are, and pick the closest match? If so, the results are not impressive.
Whoa there... the number 50 is the size of their test sample, and nothing to do with the number of possible PINs, so your probability calculation is meaningless. In other words, their program is being asked to guess what the PIN is, and not "guess which one of these 50 known patterns/PINs we've given you".
The way you should look at it is that each random PIN guess (having no accelerometer hints) would be right 1/10,000 of the time (ie, 0.0001). If they can guess the PIN 43% of the time with 5 guesses, then their success rate per guess is 0.43 / 5 or 0.086. So in fact their ability to guess a PIN is actually 0.086 / 0.0001 = 860 times better than chance, not four times better!
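The arithmetic is easy to sanity-check (a quick sketch using only the figures already quoted above: 10,000 possible four-digit PINs, a 43% success rate within 5 guesses):

```python
# Back-of-the-envelope check of the numbers above.
total_pins = 10_000                 # four-digit PINs: 0000..9999
random_per_guess = 1 / total_pins   # chance of one blind guess being right

reported_success = 0.43             # 43% within 5 accelerometer-aided guesses
per_guess = reported_success / 5    # crude per-guess success rate

advantage = per_guess / random_per_guess
print(advantage)                    # 860x better than chance, not 4x
```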
To be fair, I tend to refer any large secretive insular company as a Zibatsu.....
Maybe you should switch to calling them "zaibatsu". Just a suggestion...
What exactly is the need for this? You know one of the Japanese words for foreigner ... "offensive" foreign word..
Pah. "Gaijin" is only offensive if it's used in a way that's meant to be offensive. That hardly applies here. I couldn't care less if someone calls me a 外人 or a 外国人. Or a Paddy or Mick, for that matter. You should save your ire for someone who's being deliberately offensive to you.
So I'm not so hopeful that it'll all fit in memory...
There's been a trend in research systems at least towards looking at using RAM to store index information while delegating actual data storage to (flash) disks. FAWN-DS (Fast Array of Wimpy Nodes Data Store), for example, reduces the amount of RAM used by each index entry to 6 bytes, while SILT (Small Index, Large Table) achieves even more compression of the index data (somewhere between 1.5 and 2.5 bytes per index entry, IIRC). It also helps that these systems are designed from the ground up to work well with flash storage and avoid the write amplification problem (where a single write requires several physical writes due to the need to rewrite entire memory blocks when a single page changes). I'm not sure how many of these design features are implemented in today's commercial-grade systems (like Hadoop's file system) but I'd wager that there are more similarities than differences.
If you add to this the fact that clustering your storage nodes is relatively easy using consistent hashing (or a DHT) to spread the storage across many nodes/controllers each with their own RAM and local storage, then I think that such a future is actually quite practical today. A lot more practical than you think.
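To make the consistent hashing bit concrete, here's a toy ring (the node names, vnode count and MD5-based hash are arbitrary choices for illustration; real systems layer replication and failure handling on top):

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes. Illustrative only."""

    def __init__(self, nodes, vnodes=64):
        # Each physical node gets `vnodes` points on the ring, which
        # smooths out the load distribution between nodes.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

    def node_for(self, key):
        # Walk clockwise to the first vnode at or after the key's hash,
        # wrapping around to the start of the ring if necessary.
        idx = bisect_right(self.hashes, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]
```

The payoff is the incremental-rebalancing property: removing (or adding) one node only remaps the keys that node owned, roughly 1/N of the total, instead of reshuffling everything the way `hash(key) % N` would.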
"You can tell everybody to get out of their trucks (TCP) and get on motorbikes (the new thing) congestion isn't as bad, no need for the traffic lights at every junction, and no expensive lanes being added."
Or you can have everyone get out of their car/truck/bike and move forward one vehicle. Hows about that? Guaranteed progress even in the face of deadlock....
I think other posters have already covered that the paper's authors are only claiming improvements over high-latency links (and fairly lossy ones at that). The fundamental problem with TCP over such links is that each TCP packet needs to be acknowledged, so (sliding ack windows notwithstanding) transmission speed is fundamentally limited by how fast and reliably the acks can be sent back.
Some guys at Microsoft tested out another scheme for improving transfer speeds over high-latency links (though they assumed low packet loss) a few years ago. It worked by the receiver sending ACKs to some number of packets it hadn't actually received yet, thus fooling TCP's flow control mechanism into avoiding its normal exponential backoff algorithm. That trick obviously only works over reliable data links with very little packet loss. I don't have a link to that paper, but ISTR that it was covered here on The Register. No doubt it also got its share of comments along the lines that you're making here (ie, >100% efficiency).
With UDP you can just keep sending data. Provided the far end sends back relevant acknowledgments you might be able to get away with only a few re-sends.
Alternatively, you can use forward error correction to eliminate the need for a lot of "back traffic" (or "packets traveling in the wrong direction which often hamper UDP communications" as the article states). Have a look at the udpcast project for an example of that. It's designed for multicast, where the problem with ack storms is much more severe, but it seems that with a little tuning it should also be pretty efficient to use it for point-to-point transmissions too. There are also, IIRC, a couple of competing RFCs for implementing reliable delivery over UDP channels, and they include flow control algorithms as a means of congestion avoidance (similar to what's described in the article).
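As a toy illustration of the FEC idea (this is a single-loss XOR parity scheme, far simpler than what udpcast actually implements): the sender transmits one parity packet alongside N data packets, and the receiver can rebuild any one lost packet without asking for a retransmission:

```python
from functools import reduce

def xor_parity(packets):
    # XOR all the (equal-length) packets together, byte by byte.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_missing(survivors, parity):
    # XORing the surviving packets with the parity cancels them out,
    # leaving exactly the one packet that was lost.
    return xor_parity(list(survivors) + [parity])
```

One extra packet per group buys tolerance of one loss per group with zero back traffic; heavier-duty codes (Reed-Solomon and friends) generalise this to multiple losses at the cost of more redundancy.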
I think the most interesting thing about this paper seems to be how they convert everything to use their new UDP protocols. It seems like a good approach given that it's much simpler to implement congestion avoidance and flow control if everything is based on the same underlying transmission protocol. It does sound a bit drastic, though.
re: Can we train them to get urban pigeons?
I think you might want hawks for that. Providing the cats don't kill them.
Though surely it's a better word to describe the subject of the photo rather than those who took it? The author might want to look up "galaxy" in the dictionary...