"well-positioned to re-capture market share"
It may (?) be true that it's "well-positioned" but coming from behind and trying to recapture lost market share is hardly a good position, is it?
It hardly constitutes a declaration that Bitcoin is a bona-fide currency. All that seems to have happened here is that the judge saw through a particularly transparent defence that was based only on a legal technicality.
As the article said, if it looks like a duck and quacks like a duck, then it's probably a duck. Therefore: not a valid defence. Or, as Zapp Brannigan from Futurama might say, "That dog won't hunt, Monsignor".
Wow. Big up/down-vote ratio for that. I hold the opposite view, that "if (...) {\n" is better. For two reasons...
1. Vertical space is precious when editing, especially given most people (OK, I generalise) are using 1920x1080 monitors. Putting the brace on a separate line means one less line of code visible on screen without scrolling for each if/do/while/whatever. That means more scrolling and more getting lost, especially if your only way of matching braces is to keep track of how far you think the matching one should be from the left of the screen. Simply put, better use of vertical space = improved readability.
2. It's no harder to trace back up the screen vertically to a statement than a brace. You could even use tabs and have your editor display them visually so that it's easier no matter which style you decide on.
Oh, and
3. Your editor probably has something like emacs's blink-paren command (or mode) to show you where the matching brace is anyway so it probably makes any religious argument one way or the other moot.
Whereas in Perl, you have to put braces around the if-true part and the if-false part, regardless of whether they're just single statements or not. I quite like that since there's none of the fiddling around adding and removing braces (and potentially errors) when you change the number of statements in the if-true/if-false parts. Of course, it's also nice that perl gives you the 'statement if condition' and 'statement unless condition' syntax (without braces) too so that more than makes up for the enforced use of braces in the more traditional form.
Even in C, it's probably a good idea to use if (...) { ... } [else { ... }] even when you don't need to. Use it without braces and occasionally you'll come across a macro that expands to several statements, probably leading to very puzzling and hard-to-debug program behaviour... And as I mentioned, adding/removing braces based on the number of statements is tedious and error-prone. Your essential logic hasn't changed (just the number of statements), so why should the syntax need updating?
IOW, thumbs up for mandatory/orthogonal use of braces!
I don't know ... isn't single sign-on backed by two-factor authentication? AIUI (and I haven't used this on Ubuntu services) with something like OpenID you put in your login request at one service and then go to another page (on your authenticating server) to OK that request. Barring some sort of browser flaw that lets a rogue site access the master details on the authentication page (which probably means you're owned anyway, so even individual passwords wouldn't be safe), I can't see how it's a big problem.
Of course, I'm only talking about single sign-on for sites that aren't that important; naturally, you wouldn't want SSO protecting anything of real value.
has to look up the internet to find out how to make a "pressure cooker bomb"? Surely it can't be any more complicated than (a) make a big bomb, (b) put it in the pressure cooker with some bits of metal, and (c) close the pressure cooker.
Admittedly, I've never done this or looked it up, but I fail to see how it's any different (mutatis mutandis) from a pipe bomb. Isn't the name totally suggestive of the recipe for making it to anyone with two brain cells to rub together? That being so, does knowing the name of the device then constitute an offence for "possessing knowledge likely to be of use to a terrorist"? (Yes, that's actually a real crime where I live!)
A while back I made a list of the sorts of things you could get with around €250--275. It included:
* a PS3 with free game
* a Nexus 7 (which has since been upgraded slightly)
* an ARM chromebook
* An eMMC-based ODROID-X2 (with plenty of change to spare)
* 2x microSD-based ODROID-U2s
* 4x Raspberry Pi Model B (or 3x plus a network/USB hub)
* 5x Model A Pis (a rough guess, though adding wireless cards might push me over budget)
* Various combinations of {Pi, Arduino} and {gertboard, Pi Face, Slice of Pi, Adafruit, Arduino modules} and {basic electronics kit and tools}
Since then I see that the Parallella boards are available for pre-order, so I could add:
* 2x Parallella boards (with 16-core coprocessor and FPGA)
For what it appears to be (a hobby or "gadget" item), the price is just crazy. The only things it has that the others above don't are PCI Express and SATA (the PS3 and Parallella have gigabit Ethernet and the Chromebook has 802.11n, so fast networking isn't unique to this board). Is that enough to warrant paying twice (or more) the price of most of the other things I listed? I seriously doubt it...
I notice that all the gadgets I mentioned (bar the PS3) happen to be ARM-based, so perhaps that shows a bias on my part. On the other hand, it shows the range of products that Intel is competing against in this segment of the market--let's call it the "gadget" segment. As such, this new board would be at the bottom of my list, even assuming it made the list at all.
Heh, that was my first thought on reading the article, though being a cynic, I figured that it'd just do ASL (American Sign Language).
The second app that I thought about was virtual puppeteering. To be honest, though, that was also the first use I could think of for the gyroscope/accelerometer in all modern smartphones. So far, though, nobody has filled that important niche. Disappointing.
I guess I'll just go back home to watch 'Being John Malkovich' again (or maybe Team America: World Police)
you can't polish a turd
Really? Have you tried? I know you can definitely polish mud to make it nice and shiny. Though maybe you're right: pure shite might have to be dumped on the compost heap to rot for a while.
> A solution looking for a problem.
It does seem like it. At least the 4k part, anyway. Some Reg links:
"4K video may wow vidiots, but content creators see pitfalls"
"The future of cinema and TV: It’s game over for the hi-res hype"
The gist: higher res is not a panacea.
Thanks. I'd been watching for developments in this product and I think this shows that they are definitely on the right track.
I was very interested to read in the article that the Epiphany cores have "a mere 35 instructions". I'd never read that before, so I went and found a link to the architectural reference document. Quite surprised to see that the cores don't have any division instruction (integer or float). But then, I guess ARM has been getting along quite well with only spotty support for hardware division instructions, and I'm sure that working around this restriction is the bread-and-butter work of the sort of people who write gcc or llvm (both of which seem to be on board with supporting Parallella).
Despite having zero use (at the moment) for a cluster like this, I'm seriously tempted to put in an order. Even without a hardware division (or inverse) instruction, I'm sure there are still lots of interesting applications that would run well on this. The clustering side of things looks very interesting, too, given the huge interconnect bandwidth and memory architecture.
Microsoft have to choose between Surface RT becoming a cheap Linux box without Office or landfill RT.
Well the first is not going to happen. MS was never going to let you install another OS on this thing. Just why they thought selling a locked-down ARM tablet with no software ecosystem to speak of (having "Office" hardly counts, given the licensing terms and the fact that it's restricted in other ways) was going to work is a mystery to me. Just who was it supposed to appeal to? Perhaps they made all those silly ads first and the various department heads got so carried away with how cool it seemed (to them) that they just had to go and build the damned thing.
There could have been a third option, and that would have been to announce a new cross-platform layer in Windows 8 and guarantee that all apps developed within the framework would work seamlessly across both ARM and x86 systems (and call it "Windows 8 Anywhere" or even use "Windows One" as an umbrella term to indicate the stuff will run on any of the MS/W8 platforms, including the new Xbox). Technically, the three main options for doing it would be (a) machine code translation like qemu (which the ARM/RT platform isn't up to doing well enough), (b) fat binaries that compile to both target platforms (like Apple did when it migrated between hardware platforms, twice), and (c) compile everything into a platform-agnostic bytecode that can be JIT-compiled into native code on the target platform at near-native speeds (eg, like Dalvik on Android). A consequence of this would have been no backwards compatibility on the RT platform, but if MS was really serious about it, they could totally have pushed everyone to adopt this "Windows One" (or whatever you want to call it) approach as part and parcel of taking the Windows 8 pill.
Unfortunately, as we can see from history (eg, .NET, Silverlight), even (or should that be "especially"?) a behemoth like MS finds it very hard to do portability/interoperability. And anyway, even though it often pays lip service to these goals, in reality that's not what it wants. Rather, it wants to lock you in to its own proprietary solutions while spreading FUD about patents and whatnot to actively prevent interoperable implementations (which is why, for example, Mono on Linux is seen as such a bad idea by so many people). Besides the technical challenges, for this to be a success would require a large amount of bravery on the part of everyone from the team tasked with developing Windows 8 all the way up to Ballmer. I simply think that there's no way they'd have the stomach to bet the farm so heavily on this sort of "Windows One" concept and risk making Windows 8 even more hated than it already is. The evidence for that is there: just look at the split personality that the Windows 8 desktop has as the prime example.
So I think I'll have to agree that landfill is probably the most likely final destination for most of these machines. In a few years' time, my guess is that the App store will go away as the machine is quietly end-of-lifed, so RT won't even be much use as a museum piece.
/my €0.02
re: sudo echo $'The Register on Pi-Lite' > /dev/ttyAMA0 won't work
For those that don't know, when you mix redirection with sudo, it's your (non-root) shell that tries to do the redirection (>) part and if you don't have write access to the target the whole command will fail.
You need to use this idiom instead:
echo 'something' | sudo tee target
Rewrite the original line and it becomes:
echo $'The Register on Pi-Lite' | sudo tee /dev/ttyAMA0
Also, about the $ in that line: $'...' is bash's ANSI-C quoting, which lets you embed escapes like \n in the string. Since there are no backslash escapes here, it behaves the same as plain single quotes.
Eh, that's the one they want to keep higher. The bit in ellipsis (cost/performance/watt) is, as you say, something they want to keep lower. So minimise Watts/GFlop and €$£/GFlop/Watt.
Personally, I'd love to see some of this stuff making its way out of the data centre and becoming something that someone could buy as a desktop/workstation replacement. The low-end ARM-based systems (Pi, ODROID and so on) are all severely lacking when it comes to I/O bandwidth and interconnection options. I'd love to see more of this on-die high-speed networking stuff make it into consumer products, preferably with similar buses/interconnects for accessing GPUs using something standard like OpenCL. I know that's unrealistic given that the desktop market is tanking and nobody wants to risk ARM in that kind of system right now, but if such personal mini clusters of ARM machines were available, I'd jump ship from x86-based systems in a heartbeat. I guess there's always Parallella, but it remains to be seen when that will become readily available, how easy it will be to program for, how much software support there will be for it and so on.
As I said, it's all a bit of a dream scenario, but at least it's good to see the ARM platform developing into something that you can do some serious processing on. Give it a few years and I reckon I might just get my wish.
The review didn't mention if it has this, so I assume not. A pity really, since without it I imagine that fiddling around with the port cover would get annoying pretty quickly. Waterproof may be nice for some, but messing around with port covers could turn this into a negative for many others.
I'm not sure exactly what you're trying to point out as being wrong, apart from what I said about rewriting full pages. To be honest, as I started to write I had a different idea about what the article's author was asking us, and by the end of it I figured he was asking about something slightly different, which I answered with my last paragraph of "gibberish".
The essential idea I was trying to get across at the start was that with flash-based systems you need different strategies for updating data on disk than with traditional block-based storage. You can't just update a structure like a B-Tree or a directory entry in situ because of the penalty that flash memory as a medium imposes on you. I don't disagree with what you say that we don't use a naive approach for updating a single block in this case---you're totally right to say that instead we group updates and write them all in a single page. But this has implications for filesystem integrity. If you can't mark the original data as obsolete and you can't just erase that whole page, then how do you (a) know which copy is the correct one, and (b) how do you handle problems like loss of power while writing the update? That's why I mentioned timestamps and periodic compaction. That's all I can really say on that because I'm not really sure where I went wrong in explaining it.
Maybe it's the last two paragraphs, but the last one paragraph is, I think, the key point I was trying to make. Up to that I was trying to explain the problems with error recovery at the flash level (which implements its own log-structured storage system at the firmware level, as you say), but what I think this Kaminario system is describing is more like FAWN-KV and SILT. Those approaches use relatively large in-memory indexes to find data values on flash, and store all the data (including indexes) in a log-structured storage system on flash. FAWN-KV, in particular, looks a lot like the diagram, which shows each block spread across multiple nodes. The way this is usually done (and is done in FAWN-KV) is to use consistent hashing to spread the data across several nodes/silos. FAWN-KV also includes replication, so that a single hash key is stored to more than one node/silo. That's the essential point I was trying to make regarding node failure and recovery from it: FAWN-KV can recover from this quickly in the short term because an alternate node/silo is there to provide a backup copy of the data, although repartitioning the hash scheme (with associated costs of moving the actual data across nodes) will be necessary in the longer term if a node is really dead.
The SILT paper has a section on extending their scheme to include crash tolerance/crash recovery, which, again, I think is what our author here was really trying to get his head around.
HTH.
We can't see why this has anything to do with sustaining a level of performance during a system failure, but maybe Reg readers can.
Have a look at the wikipedia page for "write amplification" to get an idea of what the problem with traditional uses of flash storage is. In a nutshell, writers and updaters of the memory tend to treat it as normal random-access memory. However, flash can only be erased in large units (erase blocks, each made up of many smaller pages), so if you've just changed one block, the controller has to read in all the still-valid data around it, update it and write the whole lot back out to a fresh, already-erased place on the disk. If the power fails in the middle of all of this then it can be tricky to figure out exactly which blocks are now good. Worse, since the R/W pattern tends to be random, other files can be sharing the same erase block, so any corruption will not necessarily be limited to just the file (or chunk of a database table, etc.) that was being updated at the time.
With log-structured databases, you just imagine the whole disk to be like a circular list. In the simplest case, you just push stuff on the end of it and if there's a power failure, you just rescan the whole list from start to finish and delete any uncommitted writes. Of course, it's more complicated than that since O(n) traversal just to find some bit of info on the disk isn't practical, so most log-structured dbs will have some sort of compaction and indexing threads going in the background. Also, updates are generally timestamped so that later writes in the list override previous values. They'll also generally keep as much of the indexes in RAM as possible so that (notwithstanding initial delay when reading this in from the flash at startup) it's efficient to find the data you're looking for (and writes/updates generally simply involve writing to the head of the circular list, so it's O(1)).
A quick search for log-structured databases and file systems throws up examples such as the Log-Structured Merge Tree (LSM-Tree), Riak's Bitcask, LogBase, FAWN-KV and SILT (Small Index, Large Table). Any of the technical papers describing those will most likely explain why log-structured is the way to go with flash-based storage. Maybe my explanation above is enough, though... but definitely read the wiki page on write amplification and things should make a lot more sense.
Oh, just one other point... your actual question was to do with performance after a failure. Chances are they use something like FAWN-KV or SILT: some redundancy is built in, so that there will be backup "silos" for storing the data (much like RAID replication). Using a distributed hash table (DHT) lets all the silos effectively share a common key space, and if one of them goes down, then collectively they can switch over to the alternates, while in the background they'll repartition the DHT space to account for genuine hardware failures (as opposed to transient errors). You'd have to delve into some of the papers on the above systems and (if they exist) the ones describing Kaminario's implementation in particular, but I'd guess that's what they're doing and what they mean by "sustaining performance during system failure".
Sending him to prison for 20 years is only dealing with the symptoms of rampant drug abuse in the US. It solves nothing, and in fact often ends up making things much worse. Drug addiction and the war on drugs is a closed, vicious cycle. Until society starts dealing with the causes and implementing proper political, educational and treatment policies, there's really no light at the end of this tunnel.
I don't know if your calling for a 20-year sentence is because of some fucked-up absolutist sense of morality or because you're some sort of sadist who enjoys piling suffering on top of suffering. The two are probably not mutually exclusive. I do know that for as long as the current system continues in the same stupid, vicious cycle, we'll always have people like you offering up these gems of "wisdom".
How sad.
Well, you have to consider that gravity doesn't get cancelled out just because you're under water. Don't forget that you need to add up the total weight of the atmosphere *and* the liquid water above you. Some forms of life on Earth can tolerate extremes of pressure, but who knows if life could actually have started under such conditions.
Also, to be totally pedantic, just saying the planet is 10 times more massive isn't the whole picture. We also need to know what the planet's radius is. If it's large enough, the surface might have a tolerable gravitational force.
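To put rough numbers on that (idealised, ignoring how radius actually scales with mass for rocky planets): surface gravity goes as mass over radius squared, so

```latex
g = \frac{GM}{R^2}
\quad\Rightarrow\quad
\frac{g_p}{g_\oplus} = \frac{M_p/M_\oplus}{\left(R_p/R_\oplus\right)^2}
```

so a hypothetical planet of 10 Earth masses with twice Earth's radius pulls at about 2.5g, while at R_p of roughly sqrt(10), i.e. about 3.16 Earth radii, the surface gravity would match Earth's exactly.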
I already read this article in my crystal ball yesterday.
Careful with that crystal ball! Allow me to dredge up a link to an old Reg article: "Crystal ball torches woman's flat". The sub-head there was "didn't see that coming", and to answer a previous commenter here: no, "That one never gets old".
He (as a self confessed fake) beat the two "genuine" psychics.
Reminds me of the story from quite a few years ago that pitted Microsoft's technical helpline against some "psychic" hotline to fix some Windows-related problems. The result was that they were (surprisingly) roughly on a par with each other in their ability to fix the problems.
The point? I guess that anecdotal evidence is fun, but of little use otherwise.
I already invented one of these years ago, though I never built a prototype. It's a pretty obvious design, though, and I'm sure it's been "invented" many times before.
To be honest, laziness played a large part in my not building the thing, although lack of experience in electronics was also a big factor. I'd wanted to incorporate two features that, after a bit of thinking, I realised would require learning a non-trivial amount of electronic circuit design (and sourcing the necessary components) to implement, so I never went any further than the imagining stage.
First, I wanted to use a standard radio controller, but I wanted the ball to have a network of receivers (at least 4 arranged in a tetrahedron, but an inscribed cube, other platonic solids or buckyballs would work too). My thinking was that as each of the (directional) antennae would be at a different orientation to the incoming radio signal, each of them would be receiving the signal at a different strength, so it should be possible to triangulate, roughly, where the RC signal was coming from. The point of this was so that if I pushed the RC stick towards the ball, it would travel away from me, and vice-versa. All motion would basically be relative to a line between the centres of the controller and the ball. That seemed to make most sense in the absence of some kind of external positioning system (like GPS, but finer-grained). It would mean you'd have to know roughly where the ball is in relation to your position if you want to steer it meaningfully, though.
The other tricky bit would have been coordinating the movement of the weights within the ball. It's (fairly) trivial to shift the weights(*) in the right sequence if you want the thing to move in a distinct set of "steps" (with it settling down to a new centre of gravity before applying the next movement), but if you want it to act more like a ball and move smoothly you need to factor in all the moments of inertia in three dimensions as well as the characteristics of the motors that move your weights in and out relative to the centre of the ball (how fast and how accurately they can move, where they are at any given moment, and even the lag between sending the movement command and being able to act on it). If you want variable speed control you need to be able to measure the current moments of inertia (using accelerometers that weren't that cheap or readily available at the time) and adjust how far you shift the weights when you're already going fast (like an ice skater moving their arms in and out to adjust speed when spinning). Mathematically, it's fairly complicated, but doable.

Unfortunately, as I said, I lacked the skill in electronics to translate the maths into a proper control circuit. The various feedbacks among the inertial sensors, the current and projected centre of gravity, and the sensor array used to triangulate where the user is make it all very complicated, particularly if the thing is moving at high speed. Depending on the size of the ball, you might have gone through a significant fraction of a complete revolution by the time the circuit has figured out what it should do next, by which time that calculation is completely wrong for the current state. Quality, high-speed sensors are a big part of overcoming that problem, but at the time I didn't have access to them. Nowadays, I guess a mobile phone has most of the sensors needed for this kind of thing, though it still needs something better than GPS for telemetry.
So, anyway, I've got to tip my hat to these guys. I'm not sure how they've implemented their telemetry or whether they've cracked the problem of controlling the ball at high speeds, but it's really nice to see that someone has had a proper go at implementing this kind of thing.
*a note on weights: another alternative form of locomotion would be to have various pistons spread around the surface of the sphere. By pushing them out and retracting them in the right order (and with the right amount of force) it should be quite easy to get the thing to move quite quickly. It does sound like a fairly industrial-level implementation, though, since you'd need more pistons than you would internal weights. There's also the problem of legs breaking off or getting stuck on/in things that doesn't happen if the ball is self-contained and only contains weights and motors. On the plus side, if you have pressure (weight) sensors on the ends of the legs, that's a pretty useful sensor to have for detecting not only which side is up, but also for collision detection.
The other form of locomotion that I considered, and I'm quite proud of, is to use two-way memory metal to construct the shell of the bot. The idea is that in one state each of the wires forms a strut of a platonic solid (eg, a dodecahedron), while in the other state the overall shape of the ball is deformed at the bottom and it tips over onto the next face. By alternately lengthening and shortening struts in the right order, it should be possible to get it to roll in whatever direction we want. The beauty of this sort of bot (provided it could actually be built, and I haven't overlooked some crucial problem) is that the shell (and a few sensors, power sources and other electronics distributed evenly around the shell) essentially is the robot. Taken to the extreme, it should also be possible to use the memory metal itself as the communications network medium, so there'd be no unsightly wires or internal circuits even visible. If someone wanted to try this, they could build it as a set of nodes (vertices) containing the processing parts, and then the user could assemble it simply by training the metal struts and building up the dodecahedron out of nodes and memory metal struts. That's one variation of this idea that I would really love to see someone implement!
Wow... chill. Nobody's saying that entanglement allows instantaneous information transfer. The way this works is (roughly) that we send some photons using a recorded polarisation, the receiver sets up some polarising filters at random and then both parties communicate the results using standard (non-FTL) communication channels. The quantum magic happens because an eavesdropper can't know both the initial polarisation and the polarisation setting at the detector and any attempt to "copy" the photon in flight has at least a 50/50 chance of getting the polarisation wrong (assuming only two polarisation settings), thus alerting Alice and Bob to Eve's presence.
The whole system (from emitter to detector) is the quantum entanglement "experiment", so it's easy to see how an eavesdropper will prematurely collapse the probability waveform, ruining the values that Bob sees. But again, even though in normal operation, without an eavesdropper, the collapse is instantaneous with the measurement at the detector end, Alice and Bob still need to communicate their results before they actually know what that measurement means, so there's no FTL information transfer... so it's actually not Hokum.