Re: Missing the point, I think...
10^9 core MPP systems are now viable. At that scale we can drop the Turing Machine model,
Not really. We're still stuck with the Turing model in an abstract sense and the Von Neumann model in more practical terms. We just have to adapt them to be more aware of multi-core and multi-processor systems. And in fact, we pretty much did that years ago, and there hasn't been any great paradigm shift.
and progress to an Object Machine, where every element of data is an active processing element
It sounds like you're talking about agent-based programming. Again, it hasn't caught on, except in writing botnets and perhaps back-ends for massively-multiplayer online games.
generating random sequences of logic and see if they do anything useful
And just how do you decide what's "useful"? Or as Robert Pirsig put it in Zen and the Art of Motorcycle Maintenance, "And what is good, Phaedrus, And what is not good—Need we ask anyone to tell us these things?" You'd probably enjoy reading that since it's really about philosophy, not hard computer science.
because all software will be written by software(*)
Of course. And the Singularity will arrive and bathing in Unicorn Milk will keep us young forever.
do not just binary digital processing but higher-base hardware processing ... feed a base-3 digital processor a compatible pair of base-2 & base-3 instructions
Hmm... Are you really amanfrommars in disguise? If so I claim my £5.
But seriously, do you even know what Turing-complete means? In particular, a Turing machine can be re-expressed in terms of Gödel numbers, which in turn can be mapped onto the set of natural numbers. Crucially, all practical number bases are isomorphic to each other, so binary, ternary or base 10 (or balanced ternary or whatever) all have the same expressive power, and there's no theoretical reason to favour one over the other.

It only comes down to issues of practicality. For most purposes binary is good enough, and it's only if you want to represent certain numbers with a finite number of digits that you might want to consider other bases (the string representing 1/10 is infinitely long in binary, for example, while it's just "0.1" in decimal or binary-coded decimal).

And in case you're wondering, going from the natural numbers to the reals doesn't magically grant your computer new powers either: the naturals are perfectly sufficient for "universal" computation, so, eg, a phinary-based computer can't do anything more than a binary one can, except be a pain to build and program. Another book recommendation for you: you might like Gödel, Escher, Bach: An Eternal Golden Braid...
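Just to make the 1/10 point concrete, here's a two-liner (Python, purely because it's handy; any language with both binary floats and a decimal type would show the same thing):

```python
# 1/10 has no finite binary expansion, so a binary float can only ever
# approximate it, while a decimal representation stores it exactly.
from decimal import Decimal

print(f"{0.1:.20f}")   # -> 0.10000000000000000555 (nearest binary float)
print(Decimal("0.1"))  # -> 0.1 (exact in base 10)
```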
(*) Actually, there is one kind of "program that writes programs" that can benefit from having massive numbers of cores to work with, though I mean "program" in the kind of mathematical sense that Turing did, rather than the way you think of it (eg, a word-processing package). I'm thinking of something like Turbo Codes, which are effectively bit-level programs that tell a receiving computer how to reconstruct some embedded data even if some of the bits are dropped or corrupted in transit.
Another, similar type of application is data compression, since you can treat the compressed data as a "program" that tells the decoder how to unpack the message. I think that that's the most interesting possible application in this realm: given enough computing power, we should be able to try out many different ways of compressing some given data and output a compressed string and a decompressor. Obviously, this still isn't going to be able to magically compress incompressible data and it's quite impractical as a replacement for general-purpose compression schemes like gzip, bzip2 and so on (since there is an infinite--or worse, transfinite--number of "languages" to consider, and the best compression ratio possible is sensitive to the choice of language) but it still could be quite useful for discovering good compression schemes for certain types of data. See Kolmogorov Complexity for background details.
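As a toy illustration of that search idea (my own sketch in Python, with a small fixed set of "languages" standing in for the infinite space you'd really be searching):

```python
import bz2, lzma, zlib

def shortest_description(data: bytes):
    """Try several codecs ("languages") and keep the shortest output."""
    candidates = {
        "zlib": zlib.compress(data, 9),
        "bz2": bz2.compress(data, 9),
        "lzma": lzma.compress(data),
    }
    # The compressed string is the "program"; the codec name tells the
    # receiver which decoder will reconstruct the original message.
    return min(candidates.items(), key=lambda kv: len(kv[1]))

codec, blob = shortest_description(b"the quick brown fox " * 500)
print(codec, len(blob))
```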
I just don't get where you get the idea that you can't backup ARM servers, or even why it's a stumbling block to deployment. If you want them, there are plenty of backup solutions you can compile from source, or you can use the venerable rsync if you don't have any special requirements like snapshotting a filesystem so that it's in a consistent state during the backup (though I understand that LVM can do this).
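In case it's useful, the snapshot-then-rsync approach might look something like this (volume, mount point and host names here are all made up; adjust for your own setup):

```sh
# Snapshot the volume so the filesystem stays consistent during the copy.
lvcreate --size 1G --snapshot --name backup-snap /dev/vg0/root

# Mount the snapshot read-only and push its contents to a backup host.
mount -o ro /dev/vg0/backup-snap /mnt/snap
rsync -a /mnt/snap/ backup-host:/backups/node1/

# The snapshot only needs to live as long as the backup run.
umount /mnt/snap
lvremove -f /dev/vg0/backup-snap
```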
The second point is to consider whether you really need backups in the first place. I think you may be misunderstanding the use case of (most?) ARM server deployments. You're probably more used to thinking of having a variety of servers each doing different things, or running a number of VMs, perhaps? I see the use case of ARM servers more in terms of grid or cluster computing. Looked at in that way, there's probably nothing on any of the nodes that you'll actually want to back up explicitly. The system image (or a large chunk of it, anyway) will probably reside on an NFS server and will be shared among several nodes. If you're using them for "OLTP" type applications, then your database is definitely going to be distributed, with replication of data across several nodes. The upshot of both of these points is that if something goes wrong with one of the nodes, it's not important: you just replace it or reimage it. If your database is already distributed and replicated across nodes, it can survive some number of failures like this, so again, there should be no need to backup individual nodes. You will want to make sure that you've got some way of backing up your entire database, but that's a whole different kettle of fish, and nothing to do with what you say is the problem here.
shut off internet access for my toaster?
But then how am I supposed to read my morning emails?
Re: That Ergosphere is gonna whip the wheels off Hawking's chair!
what exactly is "rotating at speed X" here?
The accretion disk. The article says that "the outer edges of the NGC 1365 black hole are spinning at 84 percent of light-speed or more."
Infalling matter follows the rules of relativity: in relatively flat spacetime it can't, by definition, travel at c or more, while in extremely curved spacetime (like that around a black hole) it's red-shifted to such a degree that it will disappear from our view before it ever appears to approach or exceed c (even before it hits the event horizon). It's just cosmic censorship in action.
RE: Somehow I can easily imagine someone believes that, in spite of the fact that he missed and none of them died.
Yes, more of a "Small Meteor Hits Russia: Not Many^H^H Nobody Dead".
Re: There is a reason for Software Smugness
RE: Remember your assertion: "Thinking about better algorithms is never a bad idea."
While you should probably "never say never", I'm siding more with the original poster. There's a balancing act in deciding how much time you can spend on finding a better solution, and Knuth's "premature optimisation is the root of all evil" is worth bearing in mind, but there's still often a good case to be made for looking for a fairly efficient algorithm right from the start.
I don't think that anyone is saying that we should go to excess in looking for the best solution, but for what we're talking about here (processing big data sets), we should really be aware of how expensive and time-consuming each possible solution might be. It's the mindset that's important: do you just write the most basic SQL query, or do you take care to minimise expensive join operations, or defer them so that they operate over a reduced data set? Also, experienced coders will of course realise that there's no point in blindly trying to optimise every single aspect of the code. They'll use a profiler (or equivalent) to identify where their efforts stand to reap the most benefit.
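To labour the profiling point with a trivial Python sketch (the function is a made-up stand-in for real work; run it as a script):

```python
import cProfile

def summarise(rows):
    # stand-in for the real query/aggregation work
    return sorted(set(rows))

# Measure first, optimise second: the report shows where time actually goes.
cProfile.run("summarise(list(range(100_000)) * 2)", sort="cumulative")
```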
Speaking of programmer effort, I think that in many cases it can be a false economy to use inefficient algorithms. If your algorithm is bad enough, you can end up spending more time waiting for results while coding and testing the thing (on real data, as opposed to just a small amount of test data) than you would have if you'd just thought about the problem a bit more at the outset. Granted, you can multitask and do other stuff while you're waiting, but too many context switches and your productivity will suffer. Plus, what happens when you finally realise (or have to be told) that the solution isn't good enough? Most often, you have to go back to the drawing board and do what you should have done in the first place: implement a half-way decent algorithm.
Re: A modest proposal...
re: I thought that we had a system for tracking and watching dangerous objects in space. Why didn't that system warn us?
Why not? Probably because their budget isn't big enough. I thought I read that it was $5.4 million, but I'm not 100% sure of that figure. It's certainly only a few million (3--6) spent on the problem. Less than what a typical Hollywood disaster movie costs, anyway.
Hello? I'M ON THE TRAIN!
Be that as it may, it doesn't stop us from building up a fair charge on carpets and the like. I'm guessing that the plant itself is acting as an insulator and it's the bees rubbing against the pollen/stamens that sets up the differential. Kind of like a mini Van de Graaff generator.
wireless signals and electrical pollution
Sigh. Static electrical charge is not the same thing as wireless (radio) propagation. The article is about the bees' use of the former.
Diamond Rio PMP300
First commercially successful flash-based MP3 player. Pre-dated the iPod by about 3 years and was widely regarded as the inspiration for the iPod in the first place. Just saying...
Re: Also: MS Office For Linux (kernel)
If they don't bring Office to Android, somebody else is going to eat their lunch on the tablets
It still doesn't seem likely. They may have got some of the way there with the cut-down version of Office for Surface RT, but it is exactly that: cut-down. I've read lots of (unsubstantiated) comments here that the Office code base is such a mess of x86 assembly (for things like macro support) that it's unlikely to be ported any time soon. Also, I read here on the Register that there seem to be political problems within Microsoft as to whether they should even develop and release an ARM version (or a Surface RT version, to be precise) of Outlook. If that's to be believed, then there's probably a considerable faction within MS that would never accede to releasing a Linux (or Android) version of any of their desktop tools. It would completely go against the whole philosophy of maintaining customers by locking them into the Windows ecosystem. And even if they do go down that route, it may, as you say, already be too late.
On a slightly unrelated note, I think that there is definitely a niche there for a "good enough" (which incidentally is a phrase you used to hear at MS to describe their development/release philosophy) office suite. I've been thinking for quite a while now that a suite that had the 80% of features that most people actually need and use could easily capture a significant chunk of the market for "office"/productivity software. People are fed up with massive, bloated systems with tons of arcane features that they'll never use. By paring it down and providing good interoperability between components and across platforms, it should be "good enough" to satisfy all but the most hardcore/insane of users. In keeping with the 80% of functionality idea, I'd suggest calling it "Pareto" (if such a thing doesn't already exist). So long as developers were ruthless about not implementing features just for the sake of it, I think it could go a long way.
This is just my opinion, though. Personally, I've not used Word in many years and I have no need for it unless someone demands a document in that format. If I need something professional looking, I'll plump for LaTeX every time (edited in emacs, naturally :), or just use XML and CSS if I want to mess with layouts and fancy stuff. In either case, I prefer to concentrate on the content rather than formatting (which gets done at the end and is abstracted away from the actual content). This seems to be the opposite of the way that most Word users (and developers) work--style over substance, you might say.
Re: The laws of physics will be different in the encroaching bubble.
Don't get me wrong... it was a fine attempt at making a joke, and I'm all for that, but the part of me that holds maths in such amazement(*) just flat out refuses to even consider Pi being some other value, even in an alternative universe. It literally just doesn't compute. A universe where e**(pi*i) isn't -1 is as unimaginable as one in which effects precede their causes or don't have causes at all, or where entropy doesn't grind everything down. Besides, even "non-Euclidean" geometries (eg, geometries without the parallel postulate) still use and need Pi. If you take a plane journey through three points on the globe, the triangle you trace out has >180 degrees, so it's non-Euclidean, but if you go up a dimension, from the 2-D Cartesian representation to the Earth as a 3-D sphere, everything still works and still revolves around Pi...
Re: Oh, well...
re: it could be that this is the adequate explaination for the thing with the moon
Been reading 1Q84 then?
Re: The laws of physics will be different in the encroaching bubble.
re: How quaintly Euclidean...
I see what you're trying to do, but any of these "alternative universe" theories are rooted in maths. Even if they exist, there will be no universe where 2+2 = 4.1 or where Pi isn't both a constant and an irrational number. The fundamental rules describing the geometry of alternative universes have to be the same as ours according to all the theories. The most likely scenario is that physical constants like the ratios of fundamental forces, the binding energies needed for chemical bonds, or decay rates could be subtly different, though it's vaguely possible (in a mathematical sense) that if a particular string theory happens to describe the Whole Sort of General Mishmash that is the multiverse, and the alternative universe has slightly different parameters, then we might actually be able to see extra dimensions there on a macroscopic scale. That would probably be the weirdest possibility. Even so, the metric spaces of our universe would also apply there, so Euclidean distance would still apply on some scales while a Minkowski space metric (which still requires Pi!) would be more natural on others.
I see that a previous poster got a downvote for suggesting alternative lead-based lifeforms. You'd have to tweak the fundamental physical constants by a massive amount before that would even be a remote possibility. Before you'd even managed to get there, you'd find that the stars had gone out due to not being able to self-sustain their fusion reaction. Then we'd have a lot more to worry about than alien invaders. Something like Ice-9 would be a lot more plausible than Pb-based life.
Kind of reminds me of ...
the previous Python-related copyright snafu involving Twisted and (val)grind. Guido and friends seem to be on similarly strong grounds here.
Not sure what the outcome of this poll will be. Will it be some sort of frankentea, one bearing the hallmarks of being designed by a committea (sorry) or perhaps it'll just reflect tea à la mode (or is it "à la mean"? I can never remember which is which).
I think he's over-egging things for sure. I mean there's no mention of shifting [gears], hugging curves, burning rubber, [lug] nuts popping, [cam] shaft action, driving stick, sucking [diesel], the point of no return, or even throbbing pistons. I'm sure that if "salacious" was the goal, he could have done a much better job.
Was never much of a fan of [Dec] Wars. [Vax] Trek was more my thing.
Donuts Carbon Nanotubes ... Is there anything they can't do?
Re: Who can fix the surface pro? No one, it SUCKS FROM TOP TO BOTTOM
Eadon has spoken the truth. You may downvote me now.
Anyone who speaks of himself in the third person deserves eternities of karmic hell (or at least lots of downvotes).
(ooh... see how I cleverly avoided that trap myself ^_^)
I assume its fast enough to write data to the non-volatile part before the power dies away completely.
That's not a good assumption. Power failure while writing to an SSD can trash even data that wasn't being written at the time, thanks to wear-levelling algorithms effectively moving random blocks around whenever you make a write. See "write amplification" on Wikipedia for a pretty good description.
Re: >Holding big databases in memory
You'd be caching the most-used data,
Alternatively/additionally, you'd probably find it useful to hold indexes in RAM, and implement some sort of ageing/caching algorithm that keeps new and frequently-used data in flash and the rest out on spinning disks. If you use a log-based structure for the flash storage and periodically rewrite out to disk (perhaps redundantly, depending on whether new indexing constraints are required) then you can optimise both reads and writes across all storage layers. Something like SILT or log-structured merge trees, but with spinning disks as the final storage layer, optimised to reduce fragmentation and extra seeks.
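Something along these lines, to sketch the idea (Python, with in-memory lists standing in for the flash and disk layers; all the names are mine, not SILT's):

```python
class TieredStore:
    """Toy model: index in RAM, fresh writes in an append-only flash log,
    cold data periodically merged out to disk in big sequential runs."""

    def __init__(self, flash_limit=1024):
        self.index = {}       # RAM: key -> ("flash" or "disk", offset)
        self.flash_log = []   # append-only log on flash
        self.disk = []        # final storage layer, written sequentially
        self.flash_limit = flash_limit

    def put(self, key, value):
        self.flash_log.append((key, value))  # sequential flash write
        self.index[key] = ("flash", len(self.flash_log) - 1)
        if len(self.flash_log) >= self.flash_limit:
            self._merge_to_disk()

    def get(self, key):
        tier, off = self.index[key]          # RAM index lookup first
        store = self.flash_log if tier == "flash" else self.disk
        return store[off][1]

    def _merge_to_disk(self):
        # One sequential rewrite, last-write-wins, minimal seeking.
        for key, value in dict(self.flash_log).items():
            self.disk.append((key, value))
            self.index[key] = ("disk", len(self.disk) - 1)
        self.flash_log.clear()
```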
Re: Thanks for the memories
New chip design would be needed anyway
I see lots of interesting comments here, your own especially so. So anyway, this is a response to quite a few of those posts...
I think that if we're going to see more of this sort of thing (storage that blurs the boundaries between RAM, flash and disk storage, as well as the ability to completely power off components when not in use) then we're going to need a fundamentally different architecture to take advantage of it. This goes beyond just new chip design (where even today cores can be started up and shut down at a whim) and into having some sort of "power arbitration" bus, with the entire system backed by a small, finite battery.

For the instant-on/instant-off scenarios using flash as hibernate/sleep storage, you need to be able to guarantee that it's going to be able to finish writing the OS state data in case of loss of mains power.

For the scenario of being able to, eg, keep power routed to the GPU while it's doing some computation task, but shutting down other non-essential stuff (though probably keeping, say, Ethernet alive to enable a kind of wake-on-LAN feature), you probably want to be able to budget how much you can do while on internal battery power, and also have the ability to suspend gracefully when you're approaching its limit. Not trivial stuff at all.
Of course, it's very unusual these days for us to have battery power built onto the motherboard (as opposed to in an external UPS). If these devices/ideas become commonplace, though, we're sure to see many innovations in power management overall. I shudder to think of all the new failure cases when we stick a new device (be it faulty or malicious) into future machines, though...
Re: The difference...
Tesla - you do not win at PR by starting an argument with the media.
Or to paraphrase: "Never argue with someone who buys ink by the barrel". It's called Greener's Law, apparently, though I'd always thought it was a Mark Twain coinage...
Re: The chances of anything coming from mars.....
Yeah, but the chance of winning the lottery is significantly worse than 1 million to one (and still they played...), and yet every other week you hear about someone winning it! Time to panic!!!
Re: I just like to offer....
Damn! My pandigestory interlude just evacuated my nose. You owe me a new keyboard, sir!
Re: Cunning Linguist
The old ones are the good ones!
I'm so glad that the article wasn't about a really clever bunch of pygmies. Thank Heaven for small mercies, I say.
The pair of them. Cos maybe now they'll reconfirm Pluto's off-again, on-again status on the list of planets. Well OK, Orpheus and Hades it is then...
re: swarms of microbots
Interestingly, I read an article a while back about the US military working on building microbots that could be scattered over a battlefield to be used for gathering images and sussing the lay of the land. The software and radios that they had were capable of self-configuring into an ad-hoc mesh network, so that part of it should be easy to sort out, even if a significant fraction of the machines don't survive the landing or fail in some other way.
As Helena points out, though, these things aren't really of any use as roving devices. There's a limit to how small you can make remotely-controlled bots while still giving them useful locomotion and other practical sensors and actuators.
Still, I think the microbot idea could still be pretty useful for future missions as a means of getting an initial idea of local terrain and even provide telemetry data for later, more fully-featured rover landings. The thought of sending an Internet to Mars is pretty cool too, especially if it can self-organise and do a kind of terrain "interferometry" (a fancy word for building a map from multiple viewpoints) locally instead of having to pipe everything back to Earth first. Think about it... Martian Internet! What's not to like about that?
Re: All empires eventuall fail...
Apple are on the down-slope. Samsung are on the up-slope
So you mean that it's plain sailing for Apple and that it's going to be tough going for Samsung? I'd have thought the opposite....
"the unpredictable rocks on Mars"
I was a bit confused by this at first until I realised "unpredictable" was used in the sense of "No one could have predicted, in the first years of the twenty-first century, ..." Hooray for word-sense disambiguation!
Early entry for website of the year 2013.
Looks good to me, too. I like the "Lost Consonants" feel to the whole thing (judging it by the pic in the article, anyway). That's not a bad thing at all.
Re: re: go forth and multiply
Thats just a mis-interpretation through censorship. What God actually told Adam and Eve was to F*** Off.
I always thought "go forth and Multiply" was more like a vague and inscrutable (as is His wont) warning against Adders.
Gotta love the unintended (I guess) hilarity of seeing the "Illicit phone rings in Sri Lankan inmate's back crack" article cheek by jowl with the "BYOD is a PITA" one ... or are the Reg editors having a bit of fun today?
Wall Street responded by pushing the social networking firms shares to $150, significantly up from their IPO price of $45. By contrast, Facebook's shares still languish at around two-thirds of their IPO price, and those (un)lucky enough to buy into Groupon and Zynga have seen their holdings reduced to a fraction of their initial value.
I'll have you know that 2/3rds is also a fraction! Then again, so is 150/45, but I don't want to be too pedantic...
Re: Rubbish comments system
Stop splitting the site sections to look like different websites, for fuck sake.
It's worse than that. Even though we can all still click on the comments link to see the entire thread the way we've been used to, there are at least a few bad knock-on effects I predict will be the result of the new system:
1. We'll get many more first-post click whores who are more interested in just getting their words underneath the article than engaging in a conversation (ie, what the comments section is). It doesn't really matter how inane the first poster is, the fact that they're first means they have an advantage when it comes to click whoring.
2. Even those posts that are genuinely interesting and get lots of votes probably won't make very much sense in isolation since, again, it needs the full conversation as context (at least unless people change their posting styles to incorporate quotes so people know what the immediate context is)
3. We'll get lots of stupid/redundant replies in the comments section based on people attacking/defending comments that they read in the main article page without checking whether it's already been done to death in the main comments page (again with the idea of a "conversation"... get the picture?)
If the Reg must have "promoted" comments (or "highly rated" as it's called now), you should either copy what Ars does and let the editors pick and choose which comments are promoted, or add a new button to the current roster of thumbs up/thumbs down to indicate that a comment is both worthy AND front-page material (I suggest a thumbs-up icon in front of a star). I'd hope that people would realise the point of the new icon is to flag posts that are particularly insightful and self-contained enough to act as a companion to the story, but who knows... you'd really have to try it to see how it works. At least it couldn't be worse than the new system.
I, for one, welcome our new comment overlords, etc...
Moderately strong tea, milk in afterwards
Actually, it depends. I prefer loose leaf tea to tea bags(*), but I drink more bagged tea due to the convenience. Anyway, if you make a proper brew(**) you need to scald the pot, put in the leaves and then pour in the boiling water. If you've faffed around for too long between starting and pouring in the water, boil it up again before putting it in the pot. It needs to be boiling(***). Then put it on a hot stove for about 4-5 minutes. For this type of tea, you absolutely need to put the milk in the cups first, otherwise you scald the milk. You might not believe this, but do a blind taste test and I think you'll be able to tell the difference.
For bags, you also need boiling-hot water to begin with (and you may also wish to scald the cup first so it stays hotter, but it's not necessary), but from that point on you just leave it to brew by itself for a couple of minutes. Personally, I give it a stir (usually by grabbing the bag with my fingers and swirling it around, but you can be fancy and use a spoon) and remove the bag before adding the milk, but the other variations of this aren't wrong. The only thing I'd insist on is that if you have to use a sweetener, then it has to be honey. Even then, sweetener is really only something you want after some kind of shock or a day's hard labour, in which case it's acceptable :)
* Barry's Tea is de rigueur; it's a blend, but mainly based on Assam (also called Breakfast Tea by many)
** Actually there are many "proper brews", but I'm talking about black (fermented) leaf tea here. That's not to say that things like green/gunpowder/matcha tea (which don't take kindly to boiling water at all), Oolong or even (horror of horrors) mugi cha (which actually isn't a "tea" at all) aren't all worthy beverages in their own right.
*** Incidentally, this is why it's hard to make a decent cup of black leaf tea at altitude since the boiling point is reduced. Green (unfermented) tea is much better there.
Re: It's not a proper mug of tea unless it's a double bagger
Until, after many months, you are forced to leave half a dish washing tablet in the mug overnight to remove the build up of tea scale which has reduced the volumetric capacity of said mug to the point of unusability
Rinse the cup out in water, so that there's a dribble of water in it. Pour in some table salt and rub it over the tea stains. No need for a storm (or chemical warfare) in a teacup.
Re: Eye Spy with My Raspberry Pi
Re: Don't you know, the PI itself is the loss leader
Actually, in a recent interview Eben Upton said that everyone in the supply chain is making a profit. I assume that the distributors also take a small/tiny cut. Granted, like you said, they are using the Pi to entice you to buy items they're making more profit on, but technically it's not a loss leader if they don't make losses on the Pis.
Playing with a web server on my home connection isn't the greatest idea in the world
Learn how to set up a Demilitarised Zone (DMZ) on your network. Simply put, you make a separate subnet for your web server and use IP filtering rules (at your router) to allow machines outside that subnet to access it, but block all outgoing traffic (apart from responding to already-established connections initiated from other hosts). It can be as simple as three iptables rules: one default rule drops all forwarded traffic, one allows NEW connections to be forwarded to the DMZ box and a third allows packets that are ESTABLISHED or RELATED to be forwarded from the DMZ box. In practice, you'll probably want to do something more complicated, like doing NAT masquerading and port-forwarding at the router (so that all your machines appear to be at the same IP address and so that traffic coming from the Internet on port 80 is forwarded to the DMZ machine, respectively) so I can't give you the exact iptables commands or other firewall rules here.
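The bare skeleton of those three rules does look something like the following, though -- with 192.168.2.10 standing in for your DMZ box and everything else stripped out, so treat it as a sketch rather than a drop-in config:

```sh
iptables -P FORWARD DROP                           # default: drop all forwarded traffic
iptables -A FORWARD -d 192.168.2.10 -p tcp --dport 80 \
    -m state --state NEW -j ACCEPT                 # allow NEW connections in to the DMZ box
iptables -A FORWARD -m state --state ESTABLISHED,RELATED \
    -j ACCEPT                                      # pass packets of established connections (both directions)
```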
Likewise, if you need to allow the DMZ machine to access certain services inside your network (that you can't or don't want to store on the DMZ machine) then you need to add more rules to allow it to make those connections. You'll want to lock down that service so that the DMZ machine can only do the bare minimum with it that it needs to operate without leaving a big hole in your security. Or better yet, migrate a minimal version of the service to the DMZ box itself or another machine on the DMZ subnet. There's always a trade-off between security (risk of the machine getting hacked) and utility (eg, you'd really like to be able to access your IMAP server) with any machine connected to the net, but a DMZ is a nice way, up to a point, to get the best of both worlds.
So basically, look up how to set up a DMZ for your particular router and learn about how to set up firewall rules in general.
Other than that, your distro should have packaged the web server to be pretty secure already, such as running it as a user with restricted rights (nobody in Unix-based systems) and maybe it also gives you the option of running in a chroot jail too.
Re: All true
You'd be better off burning 3 billion pounds in a park and throwing a party where tickets are a tenner to watch 3 billion pounds to go up in smoke.
But where are you going to get 3,000 KLFs?