> Turnover is vanity - profit is sanity.
>> Profit is opinion - cash is fact
Candy is Dandy, but Liquor is Quicker!
Fags do suppress the appetite, so it could be an aid to staving off the hunger
You'd be better off getting some garlic and rubbing a bit in your mouth. Apparently it's good at staving off the hunger.
Some other thoughts...
Various people have mentioned potatoes and rice, which is a great idea. You do need to make sure you're getting some protein, though. Dried beans, lentils and split peas are the best value, along with TVP (textured vegetable protein). Oils and fats will probably be your most expensive outlay.
Someone else mentioned foraging, but it's not practical if you're living in the city or don't know what you're looking for out in the country. It also tends to be seasonal, but if you know what you're looking for you can get plenty of fruit, maybe mushrooms (requires knowledge and caution!), and definitely some plants like wild garlic and even dandelion or nettle that are easily identified and easy to find.
In the city, foraging is pretty hard. You could follow a squirrel back to its lair and steal his nuts, I suppose. Much easier is to find a supermarket where they're offering free samples of stuff. You could steal a copy of "Steal this Book" and get some ideas for other ways to get free stuff, or invite some friends around for some "stone soup" (you provide the stone).
Surviving on £1 a day sounds very hard, unless you "cheat" by relying on getting free stuff (like sugar and ketchup packs and butter pats from restaurants). As an awareness-raising exercise, though, I'd have to applaud it. Good luck with it!
It doesn't say he publicly stated it
I am "Chessmaster Hex", and you may claim your £5.
Upvote on the "ignore user" button... I would actually *PAY* not to have to see Eadon's ramblings
I'm sure it would be simple enough to implement using Greasemonkey. I'd happily investigate for a fiver...
Oi you lazy photons... quit your lollygagging!
As soon as they start decelerating at the other end, they'll get a column of exhaust catching up, and smashing into them!
No. First, you have to understand inertial frames of reference. If I'm on the roof of a train and I fire a gun in the forward direction, the bullet will have the same apparent velocity (to me, and ignoring air resistance) as a bullet fired in the "backwards" direction. Despite being in motion, the relative velocities still work out the same as if we decide that (or it's actually the case that) the train is fixed in space. This is our "inertial frame of reference".

Second, you need to take into account Newton's third law: "for every action there is an equal and opposite reaction", which is the principle behind any "reaction drive", of which this is an example. Again, consider the plasma exhaust coming out of our "gun". It has a mass that is a tiny fraction of the mass of the ship, but a large acceleration. Since F = ma (force = mass x acceleration) and due to Newton's third law, our ship will have a balancing (reaction) force propelling it in the opposite direction to the plasma. Since the ship's mass is so many times larger than the projectile's, the resulting deceleration will be much less than that experienced by the projectile.
So in summary, (1) the projectile will always accelerate away from the ship, regardless of which direction we're going, and (2) catching up with the exhaust assumes you're going forward-backward-forward for some reason, rather than forward-backward, and even then, the chance of hitting the exhaust over such vast distances is crazily small. Also, (3) detonating the pellet and turning it into plasma means that after a short time there won't be anything except a diffuse gas for anything (including other ships in the vicinity) to collide with anyway.
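If anyone wants to see the sizes involved, here's a back-of-the-envelope momentum balance in Python. The masses and exhaust speed are numbers I've made up purely for illustration, not anything from the article:

```python
# Non-relativistic momentum balance for a single expelled pellet.
# All figures are assumed, purely to show the scale of the effect.
ship_mass = 1.0e6       # kg (assumed)
pellet_mass = 1.0e-3    # kg (assumed)
exhaust_speed = 1.0e5   # m/s relative to the ship (assumed)

# Conservation of momentum: pellet_mass * exhaust_speed = ship_mass * delta_v
delta_v_ship = (pellet_mass / ship_mass) * exhaust_speed
print(delta_v_ship)     # 0.0001 m/s, tiny next to the pellet's 100 km/s
```

The ratio of the ship's velocity change to the pellet's is just the inverse of the mass ratio, whichever direction you happen to be pointing the thing.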
> If the universe is infinite then everything happens infinitely often.
Ah, but you say that now (and at this time) ...
The idiots keep trying to defund and shutdown the Hubble Telescope.
Have a downvote for "defund". If I had another to give, you'd have one for "shutdown" (as a verb) too.
... or it didn't happen.
Speaking of form factor, the first thing I noticed about this is that the power switch is on the top. That's either a design flaw, or perhaps a tacit admission that the heat output of these is such that stacking multiple units would tend to cause overheating. IF these things were cheap enough to use in a small cluster (and if it weren't crippled by only having USB 2) then I'd have thought being stackable would be a virtual necessity. I note that neither the Mac Mini nor the Chrome boxes have power switches on the top. I don't know if they have heat dissipation problems if stacked, but at least the power button placement doesn't stop you from trying it out.
While logically, a refusal to answer a question would imply guilt ...
No, it doesn't: not logically or legally. Previous posters have already covered what valid inferences (note: "to infer" rather than "imply") can be drawn from refusal to comment in criminal vs civil suits, so I've nothing to add there.

In terms of logic, though, it's obvious that no firm inference can or should be made from such a refusal to comment. Various logical possibilities exist: hiding one's own guilt; protecting another guilty party; avoiding incriminating oneself for a different crime than the one being examined; believing that the question need not be answered (for whatever reason); avoiding revealing something embarrassing though not illegal (having an affair, being the subject of blackmail or whatever); simply not understanding the question or being incompetent; or believing that the question is unfair and best not answered ("so have you stopped beating up your wife?"). Logically speaking, a refusal to speak doesn't lend any weight to any of these (or other) possibilities being correct.
Apart from that, most of what you said was fair enough. It's just a pet peeve of mine when people talk about logic in a clearly illogical way. That, and mixing up "imply" and "infer"...
No need to mangle other songs when there are plenty of cat-related titles already...
Year of the Cat
What's new Pussycat?
The Lion Sleeps Tonight
Don't Go (by the Hothouse Flowers, thanks to "black cat lying in the shadow of a gatepost" verse)
Cool for Cats
Anything by the Pussycat Dolls, Cat Stevens, "Catatonia", Felix da Housecat or Bass Kittens. Also songs from "Cats" (the musical), The Lion King, and probably many more...
Germany already produced films of cats in the 1970s
And Japan had its famous autocatography 『吾輩は猫である』 ("I am a Cat") back in 1906. This fascination has obviously been going on for quite a while, so it's unsurprising that modern humans are still in thrall to the cat.
I read Whorf and I found his analysis of language and its relationship to the thought process and why, to be very unsatisfactory indeed.
True, most of what he proposed has been discredited. Still, he makes a mean "Gagh".
Not in media studies, I'll wager.
I'd hope they'd teach that "irregardless" is not a word, even there, though...
My first thought exactly, though I see it as more of a crowd-sourcing thing than an algorithmic thing: a "network" provides data on where these dynamic (or pre-existing) product placements are, and your player filters them out and replaces them with "Acme" or something nondescript.
[Beer icon] of course some things are better left un-messed with. I'm off to rewatch "Ice Cold in Alex" ... Worth waiting for.
but ray-tracing is THE easiest thing to convert to parallel as each pixel is independent.
It may be the easiest to make work in parallel, but it won't be the most efficient, since you need to access colour info from all over the scene. Memory will be the bottleneck, in other words, not computing power. Rendering fractals, on the other hand, would be an application where pixels are truly independent.
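For the sake of illustration, here's a minimal escape-time Mandelbrot sketch in Python. Each pixel's value depends only on its own coordinates, so unlike a ray tracer there's no scene or texture data that every worker needs to reach into:

```python
# Escape-time fractal: every pixel is a pure function of its own (cx, cy),
# so pixels can be computed in any order, or in parallel, with no shared state.
def mandelbrot_pixel(cx, cy, max_iter=100):
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i          # escaped after i iterations
    return max_iter           # treated as "inside" the set

# A tiny 20x10 render over roughly the interesting region of the plane.
image = [[mandelbrot_pixel(-2.5 + x * 0.15, -1.0 + y * 0.2) for x in range(20)]
         for y in range(10)]
```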
And then there was also that Bulwer-Lytton competition runner-up from 2010 (Detective category), which is what immediately sprang to my mind:
As Holmes, who had a nose for danger, quietly fingered the bloody knife and eyed the various body parts strewn along the dark, deserted highway, he placed his ear to the ground and, with his heart in his throat, silently mouthed to his companion, "Arm yourself, Watson, there is an evil hand a foot ahead."
I think it even got read out on Countdown. Make of that what you will :)
As I was reading this (chuckling along the way) I glanced down to see how much more there was to read, finding only a few more inches to go with no "next page" link. I was worried that the ending was going to be unsatisfactory given the scant few lines remaining. However, I was not disappointed! Top episode--Chin-chin!
where you get to hunt for various butterflies and cut and paste bits together to make something that looks like a rare breed?
Oh wait... Nelson Mandela... sorry wrong "stuck on an island prison" trope...
(ok, sorry for being so crass and flippant about a great man's suffering... at least I didn't mention the Nissan Main Dealer joke, errr.....)
Wouldn't the Samsung ARM Chromebook be a better thing to compare it with? OK, maybe not, considering that it doesn't have touch, but it's more comparable in other ways, IMO, most notably in the sense of being a laptop replacement/adjunct (with keyboard) and having a more "niche" OS (if you accept that Win 8 RT is different from "proper" Win8). Does the Reg have any figures for sales of these ARM Chromebooks for comparison?
Can this reactor design burn thorium fuel too?
It doesn't sound too dissimilar to other Thorium-based molten salt reactors I've read about (including the fail-safe "plug" that melts and has the salts draining away into several sub-critically sized reservoirs), so I'm guessing yes. As I understand it, though, the fuel cycle for Thorium would have to include elements outside the actual reactor, for chemical separation of various waste (or "poison") isotopes that would get in the way of a self-sustaining reaction, and possibly other similar steps (for maintaining other ratios of elements). Someone here once pointed out that the chemical separation process is pretty nasty (dangerous) based on the need to use (iirc) fluorine. Apart from that, in a Thorium reactor, the main "fuel" is actually Uranium, which is bred from the Thorium by neutron capture, so there shouldn't be that much difference in the reactor design.
I'd built a crude mouse and circuitry to hook it up to the RS-232 interface. Then it took me weeks to write the mouse driver interface, saving it to the Microdrive. Alas, the tape failed...
Flappity Floppity Flip
The mouse on the Mobius strip
The strip revolved, the mouse dissolved
in a chronodimensional skip!
(*) well, not really; I just wanted to post this rhyme.
With such tiny tapes I can only imagine that alignment was a real pain.
At least they had the good sense to only store one track on the tape rather than go with the idea of basing it on 8-track (or similar) recording format. If alignment with just one head is a problem, imagine how bad it would have been with multiple tracks/heads.
you had to use pokes to do anything remotely interesting with the sound and graphics instead of high level calls like Sinclair BASIC
But on the other hand, the manual that came with the C64 was pretty good and included lists of addresses to peek/poke for changing colours, using sprites and making music/sounds with the SID chip. There really wasn't a need for extraneous syntactic sugar within the BASIC interpreter when the peek/poke addresses were documented. The more complete "Commodore 64 Programmer's Reference Guide" even included schematics for the C64 itself along with a wealth of other technical info such as for accessing bank-switched RAM, a full memory map and even tables listing the frequencies in Hertz of standard musical notes and trig identities. It also had a pretty decent introduction to writing assembly on the 6510.
That document probably ranks as being the best technical manual for any computer I've ever used, even to this day. They just don't write manuals like that any more, unfortunately. I've still got two copies of it floating around :)
The movie studio responsible just ripped off Michael Marshall Smith's "Spares". The book is quite excellent; the film, not so much.
Butterfly Effect? You may as well include The Time Traveller's Wife...
Or "The Jacket".
Sharewere (yes, I think it's part wolf) from the early '90s. Such a generic name(*) that Google has problems dredging up references. At least it did most of the stuff I'd expect from an emacs-like editor, which is really what we're talking about here, no?
* at least it's not as bad as "List", which was the premier more/less replacement of those times.
Yeah, I know, I should be ashamed of such a dreadful pun, but given the OP's perfect setup I couldn't not use it.
Fine.. I'll leave menhir then.
Has been around for a while and will generate 3d models from regular photographs. Obviously, laser scanning is going to be much better for precision work and cutting down on the amount of post-processing work (less noise and higher resolution), but I doubt that photos can be totally replaced (within reasonable cost limits) when it comes to surface "texture" mapping (by "texture", I mean in the sense of a colour map rather than an actual texture, obviously).
While it's nice to see this new project, I think it's unnecessarily restrictive. Sure, there are plenty of applications where you just want to scan in a 3d object, so having a controlled shot (such as with a fixed camera and turntable, possibly with a set background for calibration) makes sense there. In fact, these kinds of object scanners have been around for many years. But they can't handle lots of real world scanning tasks that would also be nice, eg, scanning room interiors and larger objects that can't physically fit in the control frame like furniture, vehicles, etc. Being able to track location as you enter an object's interior would also be pretty useful (think of the opening tracking shot in, IIRC, Vertigo, for example--the one where the camera tracks through a sign and into a building).
I think that latter kind of scanning (of larger and enclosing objects) is much more interesting from the point of view of developing new virtual reality and augmented reality applications. It's akin to the shift from still photography to films, with the ability to move around in space and time. Think of robots that can locate obstacles (or goal objects) in a 3d space, or terrain/object mapping based on aerial video recordings, inferring an object's motion relative to other scene elements, or even just as a quick and easy way to knock up quick scenes for first-person shooters (eg, Runtfest map for Quake 3) or 3rd-person interactive puzzle games (modern versions of the old Monkey Island style of game). Digitising small objects is all well and good, but it's really more of a time saver than a game changer, IMO.
Loved playing ... Asteroids (and later Thrust on C64), Moon Patrol (2 player version), Double Dragon (backwards elbow strike FTW), Bubble Bobble (relaxing, but great powerups), Ghosts and Goblins (hard!), Rampage (smashing and eating) and Outrun (bike racing games were great too).
In another really fun game that I came across in later years (probably in Thailand, or somewhere in East Asia at any rate) you had to control a flying balloon by cycling and steering with a set of handlebars. No idea what it was called.
You speak like one previously burned :)
Unfortunately, yes. On multiple occasions, I'm sad to say. You'd have thought I'd learn me lesson after the first time. Alas, no.
As a result I can't get it into its native mode of 1280 x 1024
Sounds like it's probably a problem with the monitor's EDID information being screwed up or the monitor itself reporting invalid data, although there's always a chance that some xorg update broke something. The latter problem is a lot less common these days, and actually I think that the developers deserve a lot of respect for the work they've put into auto-detection of graphics cards and monitors. These days 99% (or a high percentage, anyway) of users won't need to edit (or even create) the xorg.conf file. Such an advance from the early days when you basically had to manually put in modelines on pretty much any system you were working on, probably followed by using xvidtune to deal with overscan and image centring ...
Anyway, for your problem, you might want to look at the xrandr command to see what X thinks the available modes are, and bypass any of the layers of gunk that unity/compiz puts on top of things. It might not be the solution, but you never know... it might help. There should also be a command to dump the monitor's EDID information, though I'm not sure if it's available in Ubuntu without compiling it from source yourself.
Or a combination of the ps and kill commands, if you're a command
The pidof command is quite useful too, if you know what you want to kill (or check whether it's running). Internally, it's the same command as killall, which, unlike the Solaris version, gives you command help when run without any arguments instead of killing every single process that it can...
Microsoft could chew through 1,401GB of data in the [Daytona mode] test in 29 seconds and in Indy mode it could do 1,470GB in 59.4 seconds.
I take it there's a mistake here somewhere, if after tuning the performance drops by ~50%? (1,401GB in 29 seconds is roughly 48GB/s, while 1,470GB in 59.4 seconds is only about 25GB/s.)
But do any of them display the right time?
Even a stopped clock tells the right time twice a day...
Personally I always wear 3 watches for true fault detection and recovery. And that's just for a walk down to the local chemist. A trip to Mars would be mind-bogglingly big compared to that. I don't think even my 3 digital watches would be a good enough idea for that.
that £498 phone bill iv just paid was in vein then ??
Only if you injected it. With the "iv" you talked about, no doubt. I've heard of people going to desperate lengths to get high (Zappa smoking a high school diploma being one), but you've officially taken the biscuit.
10^9 core MPP systems are now viable. At that scale we can drop the Turing Machine model,
Not really. We're still stuck with the Turing model in an abstract sense and the Von Neumann model in more practical terms. We just have to adapt them to be more aware of multi-core and multi-processor systems. And in fact, we pretty much did that years ago, and there hasn't been any great paradigm shift.
and progress to an Object Machine, where every element of data is an active processing element
It sounds like you're talking about agent-based programming. Again, it hasn't caught on, except in writing botnets and perhaps back-ends for massively-multiplayer online games.
generating random sequences of logic and see if they do anything useful
And just how do you decide what's "useful"? Or as Robert Pirsig put it in Zen and the Art of Motorcycle Maintenance, "And what is good, Phaedrus, And what is not good—Need we ask anyone to tell us these things?" You'd probably enjoy reading that, since it's really about philosophy, not hard computer science.
because all software will be written by software(*)
Of course. And the Singularity will arrive and bathing in Unicorn Milk will keep us young forever.
do not just binary digital processing but higher-base hardware processing ... feed a base-3 digital processor a compatible pair of base-2 & base-3 instructions
Hmm... Are you really amanfrommars in disguise? If so I claim my £5.
But seriously, do you even know what Turing-complete means? In particular, a Turing machine can be re-expressed in terms of Gödel numbers, which in turn can be mapped onto the set of natural numbers. Crucially, all practical number bases are isomorphic to each other, so binary, ternary or base 10 (or balanced ternary or whatever) all have the same expressive power, and there's no theoretical reason to favour one over the other. It only comes down to issues of practicality. For most purposes binary is good enough, and it's only if you want to represent certain numbers with a finite number of digits that you might want to consider other bases (the string to represent 1/10 is infinitely long in binary, for example, while it's just "0.1" in decimal or binary coded decimal).

And in case you're wondering, going from the natural numbers to the reals doesn't magically grant your computer new powers either: the naturals are perfectly sufficient for "universal" computation, so, eg, a phinary-based computer can't do anything more than a binary one can, except be a pain to build and program. Another book recommendation for you: you might like Gödel, Escher, Bach: An Eternal Golden Braid...
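Just to make the 1/10 point concrete, here's a quick Python illustration (this is only about representation in different bases, nothing to do with the base-3 hardware claim):

```python
from decimal import Decimal

# 1/10 has no finite binary expansion, so the nearest IEEE-754 double is only
# an approximation; Decimal() exposes the exact value that actually got stored.
print(Decimal(0.1))    # a long string of digits, not exactly one tenth
print(Decimal("0.1"))  # in a decimal representation, exactly 0.1
```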
(*) Actually, there is one kind of "program that writes programs" that can benefit from having a massive number of cores to work with, though I mean "program" in the kind of mathematical sense that Turing did, rather than the way you think of it (eg, a word-processing package). I'm thinking of something like Turbo Codes, which are effectively bit-level programs that tell a receiving computer how to reconstruct some embedded data even if some of the bits are dropped or corrupted in transit.
Another, similar type of application is data compression, since you can treat the compressed data as a "program" that tells the decoder how to unpack the message. I think that that's the most interesting possible application in this realm: given enough computing power, we should be able to try out many different ways of compressing some given data and output a compressed string and a decompressor. Obviously, this still isn't going to be able to magically compress incompressible data and it's quite impractical as a replacement for general-purpose compression schemes like gzip, bzip and so on (since there is an infinite--or worse, transfinite--number of "languages" to consider, and the best compression ratio possible is sensitive to the choice of language) but it still could be quite useful for discovering good compression schemes for certain types of data. See Kolmogorov Complexity for background details.
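As a toy illustration of "compressed data as a program" (a completely made-up example, nothing like a real compressor), consider how much shorter a description of some highly regular data can be than the data itself:

```python
# The "compressed" form is a tiny program (here, a Python expression) that the
# decoder evaluates to regenerate the original data: the Kolmogorov-complexity
# view of compression.
data = "ab" * 100_000            # 200,000 characters of literal data
description = '"ab" * 100_000'   # a 14-character "program" describing it
assert eval(description) == data
print(len(data), len(description))   # 200000 14
```

A real scheme obviously wouldn't use eval, but the principle is the same: the decoder runs the description to get the data back, and the interesting question is how short a description you can find.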
I just don't get where you get the idea that you can't back up ARM servers, or even why it's a stumbling block to deployment. If you want them, there are plenty of backup solutions you can compile from source, or you can use the venerable rsync if you don't have any special requirements like snapshotting a filesystem so that it's in a consistent state during the backup (though I understand that LVM can do this).
The second point is to consider whether you really need backups in the first place. I think you may be misunderstanding the use case of (most?) ARM server deployments. You're probably more used to thinking of having a variety of servers each doing different things, or running a number of VMs, perhaps? I see the use case of ARM servers more in terms of grid or cluster computing. Looked at in that way, there's probably nothing on any of the nodes that you'll actually want to back up explicitly. The system image (or a large chunk of it, anyway) will probably reside on an NFS server and will be shared among several nodes. If you're using them for "OLTP" type applications, then your database is definitely going to be distributed, with replication of data across several nodes.

The upshot of both of these points is that if something goes wrong with one of the nodes, it's not important: you just replace it or reimage it. If your database is already distributed and replicated across nodes, it can survive some number of failures like this, so again, there should be no need to back up individual nodes. You will want to make sure that you've got some way of backing up your entire database, but that's a whole different kettle of fish, and nothing to do with what you say is the problem here.
But then how am I supposed to read my morning emails?
what exactly is "rotating at speed X" here?
The accretion disk. The article says that "the outer edges of the NGC 1365 black hole are spinning at 84 percent of light-speed or more."
Infalling matter follows the rules of relativity, so that in a relatively flat spacetime, by definition it can't travel at c or more, while in a degenerately curved spacetime (like falling into a black hole) it's red-shifted to such a degree that it will disappear from our relative view before it even appears to approach or exceed c (even before it hits the event horizon). It's just cosmic censorship in action.
RE: Somehow I can easily imagine someone believes that, in spite of the fact that he missed and none of them died.
Yes, more of a "Small Meteor Hits Russia: Not Many^H^H Nobody Dead".
re: I thought that we had a system for tracking and watching dangerous objects in space. Why didn't that system warn us?
Why not? Probably because their budget isn't big enough. I thought I read that it was $5.4 million, but I'm not 100% sure of that figure. It's certainly only a few million (3--6) spent on the problem. Less than what a typical Hollywood disaster movie costs, anyway.
RE: Remember your assertion: "Thinking about better algorithms is never a bad idea."
While you should probably "never say never", I'm siding more with the original poster. Although there is often a balancing act involved in how much time you can spend on finding a better solution and bearing Knuth's "premature optimisation is the root of all evil" quotation in mind, there's still often a good case to be made for looking for a fairly efficient algorithm right from the start.
I don't think that anyone is saying that we should go to excess in looking for the best solution, but for what we're talking about here (processing big data sets), we should really be aware of how expensive and time consuming each possible solution might be. It's the mindset that's important: do you just write the most basic SQL query, or do you take care to minimise expensive join operations or defer them to be operated over a reduced data set, for example? Also, experienced coders will of course realise that there's no point in blindly trying to optimise every single aspect of the code. They'll use a profiler (or equivalent) to identify where their efforts stand to reap the most benefit.
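To illustrate the "reduce before you join" mindset in plain Python (made-up data; the same idea applies to pushing predicates into a subquery before an SQL join):

```python
# Hypothetical data: which orders came from UK customers?
customers = [{"id": i, "country": "UK" if i % 100 == 0 else "FR"}
             for i in range(10_000)]
orders = [{"id": i, "customer_id": i % 10_000} for i in range(50_000)]

def join_then_filter():
    # Naive: build the full join first (50,000 pairs), then throw most of it away.
    by_id = {c["id"]: c for c in customers}
    joined = [(o, by_id[o["customer_id"]]) for o in orders]
    return [o for o, c in joined if c["country"] == "UK"]

def filter_then_join():
    # Filter first: only ~100 customer ids survive, so the "join" touches far less.
    uk_ids = {c["id"] for c in customers if c["country"] == "UK"}
    return [o for o in orders if o["customer_id"] in uk_ids]

assert len(join_then_filter()) == len(filter_then_join())
```

Same answer either way, but the second version only ever considers the rows that matter, which is the mindset I'm talking about rather than micro-optimisation.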
Speaking of programmer effort, I think that in many cases it can be a false economy to use inefficient algorithms. If your algorithm is bad enough, you can end up spending more time waiting for results when you're coding and testing the thing (on real data, as opposed to just a small amount of test data) than you would if you'd just thought about the problem a bit more from the outset. Granted, you can multitask and do other stuff while you're waiting, but it's not ideal to have too many context switches or your productivity will suffer. Plus, what happens when you finally realise (or have to be told) that the solution isn't good enough? Most often, you have to go back to the drawing board and do what you should have done in the first place: implement a half-way decent algorithm.