I'm sure I speak for all quantum physicists
when I admit that, yes, it was all just a big hoax all along. The cat is out of the bag.
Applytes are quite "special", though.
I used to have an Apple, but it fell in the shitter. I used to feel special. Nowadays I'm just another applostate with an Android phone ... and loving it!
Samsung make fridges like many other people, make TVs like many others, make everything like many others - even phones. There is nothing special about their stuff - it's OK and functional, but you buy a Samsung today and you may buy a Samsung whatever next time with no real compelling reason.
They are happy selling a box today and banking the cash but it's not really long term - they sell an Android phone and pass you on to Google for future revenue.
According to the recent article here Samsung started "in 1938 as a company selling dried fish and vegetables, and moved into electronics in the late 1960s". OK, maybe the dried fish and vegetables were only "functional" and there was no real compelling reason to buy Samsung dried fish and vegetables the next time.
But consider that they're now one of the top companies in Korea (if not the top, judging by the fact that their top man is the richest guy in the country). Do you really think that the sort of business minds that brought the company from such lowly beginnings don't have long-term aspirations? Do you really think that they don't care about, eg, their Galaxy range, and that they'll happily "pass you on to Google for future revenue"?
Could you imagine Bill Gates or Steve Jobs having that attitude? Is it even conceivable that Samsung won't do all it can to keep and expand its customer base?
Finally, one non-rhetorical question: is it possible that Apple pays people to pollute discussions like this with drivel like yours? Absolutely.
Ditto. If only they could have fitted "squabbling" in there too (to go with squillionaire) it would have been perfect.
Well that's just fines.
Sorry to reply to my own post, but it just occurred to me that they could try for a charge of "contributory infringement" if they can prove that the user was uploading to a torrent swarm as well as downloading. In regular language I suppose that means that the torrent user is helping other torrent users to copy something illicitly. Makes a lot more sense than the argument I've seen in some cases that each pirated copy is responsible for some crazy number of lost sales due to the uploading part. I could never get my head around how they could even claim that with a straight face. Mathematically, if that were true, we'd have an infinite number of illicit copies for every one that was paid for.
I still don't get how two songs can generate three strikes, though.
poor taste in music ... just about anything by The Beastie Boys?
Hrrrmph! I resemble that insinuation! You should listen to "The In Sound from Way Out." You might be pleasantly surprised.
Hmm... it still doesn't compute. Aren't the warnings of the "cease and desist" variety? If so, how can she have received three warnings for downloading two songs? Did she like one of them so much she tried to download it again even in the face of the first two warning letters?
Also, on a different point, even if the RIAA (or equivalent) had rootkits on everyone's computers, would it even be possible for them to make the argument of uploading stick? I mean, technically, yes, anyone who's connected to a torrent will upload to some degree, but aren't most users (ie not the long-term seeders) just in it to download stuff? I really don't know how this separate uploading argument is supposed to work if regular users are just helping other torrent users to download.
If you had 50 marbles, numbered 1 to 50, there would be a 10% chance of selecting a specific desired number with any 5 random selections from a set of 50. So 43% is only four times better than random guessing. Does the software know what the valid 50 numbers are, and pick the closest match? If so, the results are not impressive.
Whoa there... the number 50 is the size of their test sample, and has nothing to do with the number of possible PINs, so your probability calculation is meaningless. In other words, their program is being asked to guess what the PIN is, not to "guess which one of these 50 known patterns/PINs we've given you".
The way you should look at it is that each random PIN guess (having no accelerometer hints) would be right 1/10,000 of the time (ie, 0.0001). If they can guess the PIN 43% of the time with 5 guesses, then their success rate per guess is 0.43 / 5 or 0.086. So in fact their ability to guess a PIN is actually 0.086 / 0.0001 = 860 times better than chance, not four times better!
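For anyone who wants to check that arithmetic, here's the back-of-the-envelope version (assuming 4-digit PINs and taking the reported 43%-in-5-guesses figure at face value):

```python
# Sanity check of the "860 times better than chance" claim.
# Assumptions: 4-digit PINs (10,000 possibilities), 5 guesses allowed,
# and the paper's reported 43% success rate.

pin_space = 10_000
guesses = 5
reported_success = 0.43

blind_per_guess = 1 / pin_space                   # 0.0001 per random guess
reported_per_guess = reported_success / guesses   # 0.086 per guess

improvement = reported_per_guess / blind_per_guess
print(improvement)  # 860.0
```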
To be fair, I tend to refer to any large, secretive, insular company as a Zibatsu...
Maybe you should switch to calling them "zaibatsu". Just a suggestion...
What exactly is the need for this? You know one of the Japanese words for foreigner ... "offensive" foreign word..
Pah. "Gaijin" is only offensive if it's used in a way that's meant to be offensive. That hardly applies here. I couldn't care less if someone calls me a 外人 or a 外国人. Or a Paddy or Mick, for that matter. You should save your ire for someone who's being deliberately offensive to you.
So I'm not so hopeful that it'll all fit in memory...
There's been a trend in research systems, at least, towards using RAM to store index information while delegating actual data storage to (flash) disks. FAWN-DS (Fast Array of Wimpy Nodes Data Store), for example, reduces the amount of RAM used by each index entry to 6 bytes, while SILT (Small Index, Large Table) achieves even more compression of that index data (somewhere between 1.5 and 2.5 bytes per index entry, iirc). It also helps that these systems are designed from the ground up to work well with flash storage and avoid the write amplification problem (where a single write requires several physical writes due to the need to rewrite entire memory blocks when a single page changes). I'm not sure how many of these design features are implemented in today's commercial-grade systems (like Hadoop's file system) but I'd wager that there are more similarities than differences.
If you add to this the fact that clustering your storage nodes is relatively easy using consistent hashing (or a DHT) to spread the storage across many nodes/controllers each with their own RAM and local storage, then I think that such a future is actually quite practical today. A lot more practical than you think.
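For anyone curious what that looks like in practice, here's a minimal consistent-hash ring sketch (the node names and virtual-node count below are made up for illustration):

```python
# Toy consistent hashing: keys map onto a ring of (virtual) node positions,
# so adding/removing a node only remaps a fraction of the keys.
import bisect
import hashlib

def _h(key: str) -> int:
    """Stable 64-bit hash of a string key."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Each physical node gets `vnodes` positions on the ring for balance.
        self._ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def node_for(self, key: str) -> str:
        # First ring position clockwise from the key's hash owns the key.
        idx = bisect.bisect(self._keys, _h(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["store-a", "store-b", "store-c"])
print(ring.node_for("/data/blob-42"))  # always the same node for this key
```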
"You can tell everybody to get out of their trucks (TCP) and get on motorbikes (the new thing) congestion isn't as bad, no need for the traffic lights at every junction, and no expensive lanes being added."
Or you can have everyone get out of their car/truck/bike and move forward one vehicle. Hows about that? Guaranteed progress even in the face of deadlock....
I think other posters have already covered that the paper's authors are only claiming improvements over high-latency links (and fairly lossy ones at that). The fundamental problem with TCP over such links is that each TCP packet needs to be acknowledged, so (the sliding window for acks notwithstanding) transmission speed is fundamentally limited by how fast and reliably the acks can be sent back.
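To put numbers on that: with a fixed receive window, the sender can only have one window's worth of unacknowledged data in flight per round trip, so throughput tops out at window/RTT no matter how fat the pipe is. A quick illustration (the window size and RTT here are just classic example figures, not from the paper):

```python
# Bandwidth-delay limit: throughput <= window / RTT.
# Example: a traditional 64 KiB TCP window over a 500 ms satellite-grade RTT.

window_bytes = 64 * 1024
rtt_seconds = 0.5

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(max_throughput_bps / 1e6)  # ~1.05 Mbit/s, even on a gigabit link
```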
Some guys at Microsoft tested out another scheme for improving transfer speeds over high-latency links (though they assumed low packet loss) a few years ago. It worked by the receiver sending ACKs to some number of packets it hadn't actually received yet, thus fooling TCP's flow control mechanism into avoiding its normal exponential backoff algorithm. That trick obviously only works over reliable data links with very little packet loss. I don't have a link to that paper, but ISTR that it was covered here on The Register. No doubt it also got its share of comments along the lines that you're making here (ie, >100% efficiency).
With UDP you can just keep sending data. Provided the far end sends back relevant acknowledgments you might be able to get away with only a few re-sends.
Alternatively, you can use forward error correction to eliminate the need for a lot of "back traffic" (or "packets traveling in the wrong direction which often hamper UDP communications" as the article states). Have a look at the udpcast project for an example of that. It's designed for multicast, where the problem with ack storms is much more severe, but it seems that with a little tuning it should also be pretty efficient for point-to-point transmissions. There are also, IIRC, a couple of competing RFCs for implementing reliable delivery over UDP channels, and they include flow control algorithms as a means of congestion avoidance (similar to what's described in the article).
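For anyone who hasn't met forward error correction before, the simplest possible version of the idea is an XOR parity packet: send one extra packet per group, and the receiver can rebuild any single lost packet without sending anything back. A toy sketch (real schemes like Reed-Solomon tolerate far more loss than this):

```python
# Toy FEC: one XOR parity packet per group of equal-length data packets.
# The receiver can reconstruct any ONE missing packet from the rest + parity.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(packets):
    return reduce(xor_bytes, packets)

packets = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
p = parity(packets)

# Suppose packet 2 (b"CCCC") is lost in transit; recover it locally:
survivors = packets[:2] + packets[3:]
recovered = reduce(xor_bytes, survivors + [p])
print(recovered)  # b'CCCC' -- no retransmission needed
```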
I think the most interesting thing about this paper is how they convert everything to use their new UDP protocols. It seems like a good approach, given that it's much simpler to implement congestion avoidance and flow control if everything is based on the same underlying transmission protocol. It does sound a bit drastic, though.
re: Can we train them to get urban pigeons?
I think you might want hawks for that. Providing the cats don't kill them.
Though surely it's a better word to describe the subject of the photo rather than those who took it? The author might want to look up "galaxy" in the dictionary...
Shouldn't this SIMD thing just work by now instead of needing lots of twiddling?
It's not just SIMD. Although the article doesn't state it explicitly, each of the cores models a small area of space and has to communicate various outputs to neighbouring small areas of space. The clue is in the line "The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication." Amdahl's Law puts a brake on how well any real-world computation like this will scale up when run on a parallel (or SIMD) architecture, due to the need for components to interconnect and transfer data between each other (such as propagating global force/pressure vectors after each local computation per simulation time quantum). In this case, I'm sure a lot of the time they spent "ironing out the wrinkles" went on getting those inter-core messaging parts of the simulation humming. But there are other potential bottlenecks that need to be looked at to prevent stalls/starvation too (ie, the "computation, memory and communication" above). There's definitely no single "point and shoot" solution to parallel programming.
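Amdahl's Law itself is a one-liner, and plugging in even generous numbers shows why the inter-core communication matters so much (the 95% figure below is purely illustrative, not from the article):

```python
# Amdahl's Law: if a fraction p of the work parallelises and the rest
# (communication, synchronisation) stays serial, n cores give at most:
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the simulation running in parallel, the serial 5% caps
# the speedup near 1/(1-p) = 20x no matter how many cores you throw at it.
for n in (16, 256, 4096):
    print(n, round(amdahl_speedup(0.95, n), 1))
```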
Thanks, Daniel B. It's nice to have that validated, even down to my guess that private and public keys don't store the same data. The downvotes I got are unimportant compared to that. Now if Lee Dowling had said that p and q were stored with the private key then I'd happily have conceded the point to him. Maybe he knew that and it's what he was trying to get at, but it's not what came across. I'll give him the benefit of the doubt and say we're all right. Except the downvoters. You still suck.
No, they are not equivalent. Maths modulo high primes is the entire security BECAUSE it's not a mechanism that you can just reverse like that, and the private key is not "just a prime".
In general, the private key contains both the public key and some other large prime, whereas the public key is only a large prime (to get a simple analogy). The public key is actually derivable from the private key (that's how you MAKE a public key!) but NOT vice-versa (or PKE encryption would be useless). The private key contains extra, private information that should not be revealed and is not in any way derivable from the public key within a reasonable length of time.
OK, so I just happened to have Schneier's Applied Cryptography (2nd edition) on my desk as I read your post. I looked it up and confirmed that what I said earlier is correct. On page 467 it covers generating the public and private keys (which are multiplicative inverses of each other mod (p-1)(q-1)). Then it talks about encrypting and decrypting and finishes by saying "The message could just as easily have been encrypted with d and decrypted with e; the choice is arbitrary". That's exactly (and only) what I said in my post. I think you may need to brush up on how RSA encryption works. In particular, you can't derive the public key from the private key, or vice versa. Not without knowing the factorisation of pq, anyway. Nor does the private key contain "extra, private information".
Compound encryption schemes (involving RSA and something else) are a different matter, as you pointed out. But then, I never actually claimed that you could swap keys there and still have everything work.
isn't ((m ** private) ** public) mod pq = ((m ** public) ** private) mod pq? Maybe some of these users at least have just decided to swap public and private keys for each other? Maybe it's their secret "twist"...
OK, I'm not really serious. Most likely ssh (or pgp or whatever you use to generate keys and do the crypto) stores public keys in a different format to the private key so they're not interchangeable. But at least with the underlying RSA bit, calling one key public and the other private is just a matter of which one you actually reveal...
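If anyone wants to convince themselves that the exponents really are interchangeable in textbook RSA, it only takes a few lines (toy primes only, obviously):

```python
# Textbook RSA with tiny primes, showing the e/d symmetry claimed above.
p, q = 61, 53
n = p * q                    # 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17
d = pow(e, -1, phi)          # 2753: multiplicative inverse of e mod phi
                             # (3-argument modular inverse needs Python 3.8+)
m = 42
assert pow(pow(m, e, n), d, n) == m   # encrypt with e, decrypt with d...
assert pow(pow(m, d, n), e, n) == m   # ...or the other way round
print("exponents are interchangeable")
```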
Will they now use some of their record profits to give the workers better conditions and rates of pay and stamp out child labour?
They're turning kids into slaves
Just to make cheaper sneakers
But what's the real cost?
'Cause the sneakers don't seem that much cheaper
Why are we still paying so much for sneakers
When you got them made by little slave kids
What are your overheads?
(Think about it)
I said to myself, "self, you should really buy some Samsung shares."
Unfortunately, even though I talk to myself (quite a bit), I rarely listen to what I have to say.
I have it on good authority that it came up with "BOOBIES"
those extra horizontal pixels rarely go to waste.
I like widescreen too, for the reasons already pointed out. I still find myself wishing for more vertical resolution, though. Tabbed browsing in the webotron and gnome-terminal (and virtual desktops, if you want to count those) already make great use of screen real estate, but it would be nice to be able to see more lines of code in emacs or Eclipse or the like. Come to think of it, I guess there's always C-x 3 in emacs if I want to look at a buffer in 2-up mode, providing I don't mind scrolling manually in each window. I must remember to try that next time.
Yup... definitely in that camp myself
♫ ... whiplash girl-child in the dark ... ♬
That "crisis" is just another word for "opportunity". If the problem is really that bad, I'm sure the book vendors will start a new "100 books to read before you die" campaign.
Except he left out the bits about going to the bottom of the sea for bunk-beds. And the hooker. Can't forget the hooker. Almost as vital as ECT.
Hmm... you seem to have forgotten "and I think I like it"
Though judging by your last line, perhaps not.
Just break the encrypted files down into small enough chunks and you'll find dupes
If it were that easy, you could just break it down into 1-bit chunks. But that obviously requires a bigger index than the original file collection. (Q.E.D. by Reductio ad Absurdum). Random data (such as the output of a good encryption algorithm) by definition are not compressible.
It doesn't seem likely, does it? There's one type of encryption (homomorphic encryption) that in theory could work, but in practice it won't. I won't bore you with details of that.
The solution I would use would be to set up the front-end of the storage system to use an all-or-nothing transform (AONT) on the files, break them up into blocks and then distribute those blocks in a random order, with a single encrypted "key" being the locations and order of those blocks. So long as nobody can break into the front-end computer (or instruct it to divulge how to reconstruct a given file) then the storage is secure. Since the AONT should produce the same blocks for the same input file, you can do block-level dedup on the actual storage servers. I'd then encrypt the access key, add some validation info and send it back to the user before deleting it.
Of course, in this scheme, you (as a user) can't trust the server not to keep the access key or to make a copy before it's encrypted, and so on.
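For the curious, here's roughly what I have in mind, sketched with SHA-256 standing in for a proper block cipher and the AONT key derived deterministically from the file hash, so identical files produce identical (dedup-able) blocks. This is an illustration of the idea, not vetted crypto:

```python
# Sketch of a deterministic (convergent) all-or-nothing transform:
# every output block is needed to recover the key, and identical inputs
# yield identical blocks, which is what makes block-level dedup work.
import hashlib

BLOCK = 32

def _prf(key: bytes, i: int) -> bytes:
    """Keyed per-block keystream (SHA-256 standing in for a block cipher)."""
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def aont_package(data: bytes) -> list:
    data += b"\x00" * (-len(data) % BLOCK)        # zero-pad to block size
    k = hashlib.sha256(data).digest()             # convergent "random" key
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    out = [bytes(a ^ b for a, b in zip(m, _prf(k, i)))
           for i, m in enumerate(blocks)]
    # Final block hides the key: without EVERY block you can't recover k.
    mask = hashlib.sha256(b"".join(out)).digest()
    out.append(bytes(a ^ b for a, b in zip(k, mask)))
    return out

def aont_unpackage(blocks: list) -> bytes:
    *body, last = blocks
    mask = hashlib.sha256(b"".join(body)).digest()
    k = bytes(a ^ b for a, b in zip(last, mask))
    return b"".join(bytes(a ^ b for a, b in zip(c, _prf(k, i)))
                    for i, c in enumerate(body))

blocks = aont_package(b"some file contents")
assert aont_unpackage(blocks).rstrip(b"\x00") == b"some file contents"
```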
Gah! Enough with the downvotes. I get it. I know, it's something I'll have to take up with my the- rapist.
They really called those things rapey-scans? (that's how I'd pronounce it, anyhow).
It can't win and it can't break even. Unfortunately, neither can it break out of the game.
I got an Odroid-x and the hidden Fedex charges put it to > £120.
Similar story for my X2, but I did what the hardkernel website suggested and called my local customs office before placing the order. They told me about the extra "customs clearance" charge that Fedex adds in. I guess that hardkernel could have done a better job of pointing out the surcharge that Fedex puts on, but I can't fault them on their advice about contacting customs. I still went ahead with the order once I knew about the extra costs. Well worth it, I reckon.
As for Atom vs ARM systems, I actually did a bit of window shopping before ordering the X2. To be honest I couldn't actually find any Atom systems that were as good or as cheap. The one thing that the bare-bones Atom systems did have going for them was standard (mini) ATX and SATA ports for upgradability. They're still quite expensive compared to the ARM boards. Also, buying a cheap 2nd hand system was out for me because I was looking for something with low power usage and you don't get that with older Intel/AMD/Atom stuff.
I guess that in the next year we'll start seeing more ARM SoCs with SATA, USB 3 and gigabit ethernet, since there's definitely a market for them. Until then I can definitely live with flash/USB2 and 100Mbit ethernet.
An mk808 is less than half the price, runs a newer version of android and is WAY more powerful.
I'd never heard of it, but the link you gave puts the mk808 at $58.99, which isn't "less than half the price" of either the Rock ($79) or Paper ($99).
Now the ODROID-U2, on the other hand, costs $89 (*), has a quad core clockable to 2GHz (base 1.7GHz) and 2GB of RAM. It also runs Jelly Bean and Linaro Ubuntu (no accelerated X yet, though it's expected in a few weeks). Its sibling product, the ODROID-X2, is very similar, except that it costs $135 and has a whole lot more ports.
(*) as with a lot of these boards, power supply, cabling, flash drive and shipping aren't included. A full U2 ends up costing about $150 (including a hefty $40 shipping fee from Korea), plus local customs clearance and VAT, which brings it to something closer to $190. Definitely pricey compared to a Pi (which comes to around €72 all told), but the U2/X2 are at least 12 times more powerful in my tests (thanks to 4x cores, 2x speed, and the step up from ARMv6 to ARMv7). So while the Pi definitely wins out on price per system, the ODROIDs definitely win out on performance per price IMO.
I think the word you were looking for is 'depilation'.
The way I misread that ("delepidoteration"), I thought he was talking about getting rid of butterflies.
Although I don't have any direct experience with this, it seems that a lot of databases aren't well suited to flash storage because they're not optimised for that medium. The problem lies in the way that inserts and updates often have to make several updates to the on-disk indexes (B-trees or whatever), and each random write requires a read-blank-rewrite cycle on an entire disk block. Reading from the database, on the other hand, is something that does suit flash well, since seeks are effectively free.
I don't have a link to any recent papers to hand, but if you search for "log structured database flash" you should turn up a few. The main advantage of log-structured databases is that inserts and updates only have to write once to the disk (with periodic rewrites for garbage collection to coalesce partially empty blocks). Thus you get very good write speed and since you're not going through as many of the read-blank-rewrite cycles that are typical of B-tree style indexes you should be able to extend the life of the disk by three times or more. Have a search for Fast Array of Wimpy Nodes (FAWN), too. It's slightly old, but it gives a good demonstration of the kind of speedups that log-structured databases + flash storage can achieve.
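The core trick is small enough to sketch. Here's a toy append-only store in the spirit of those papers: every put is one sequential append (the write pattern flash likes) and an in-RAM dict maps keys to file offsets. The record format is made up for illustration, and there's no garbage collection:

```python
# Minimal log-structured key-value store: sequential appends only,
# with an in-memory index of key -> (offset, length) into the log file.
import os
import struct
import tempfile

class LogKV:
    def __init__(self, path):
        self.f = open(path, "a+b")
        self.index = {}                      # key -> (offset, length), in RAM
        self._rebuild()

    def _rebuild(self):
        """Replay the log from the start; the newest record for a key wins."""
        self.f.seek(0)
        while (header := self.f.read(8)):
            klen, vlen = struct.unpack(">II", header)
            key = self.f.read(klen)
            off = self.f.tell()
            self.f.seek(vlen, 1)             # skip over the value
            self.index[key] = (off, vlen)

    def put(self, key: bytes, value: bytes):
        self.f.seek(0, os.SEEK_END)          # writes are always appends
        self.f.write(struct.pack(">II", len(key), len(value)) + key)
        off = self.f.tell()
        self.f.write(value)
        self.index[key] = (off, len(value))  # updates just repoint the index

    def get(self, key: bytes) -> bytes:
        off, vlen = self.index[key]
        self.f.seek(off)
        return self.f.read(vlen)

path = tempfile.mkstemp(suffix=".log")[1]
db = LogKV(path)
db.put(b"user:1", b"alice")
db.put(b"user:1", b"bob")                    # update = one more append
print(db.get(b"user:1"))                     # b'bob'
```

Updates never rewrite in place, so there's no read-blank-rewrite cycle per insert; the cost is moved into the occasional compaction pass that real systems run in the background.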
If he's normally researching lead, what qualifies him to talk about carbon?
For his next breakthrough ... a [carbon nanotube] Zeppelin.
re: Ah. Passers-by asking stupid questions of people who are taking pictures in the street.
I disagree... I actually got quite a warm feeling on reading that part of the article. Nice to see that a passer-by would take the time to see if he was OK. And let's face it, taking photos by spinning around probably does look a bit crazy if you don't know what's going on.
Do it on the cheap:
Is it really the third year in a row that I've had to vote for Ubuntu as the worst product of the year?
(I say this not as a Linux hater, but as a fan, fwiw)
So we just have to survive another two years after that until the real end of the (Unix) epoch kills us all then?
There Is More Than One Way To Do It (encoding "nothing", of course)
re: Copying another company to make 8Billion.......I feel proud for them!!
As it says in the article headline: bite me.