A Cross Tick!
When enterprise solutions hawk onanism-uncovering lifetime data.
Always less legal under standard ethics. Save that eejit gangs
and narco organisations generally reveal aught per habitual Yank
But it's saddening that in this world where so much is ripped off, processed and repeated ad nauseum that another original is now lost to us.
Sounds kind of like Bowie's music, apart from the "repeated ad nauseam" part. Before the downvotes, I mean that in the sense that "good artists copy, great artists steal". Bowie was famous for "stealing" all kinds of musical influences (and non-musical, like Brion Gysin's "cut-up" technique) and making something unique and new out of it.
A case in point: it struck me only a few months ago that Bowie had actually done a drum n' bass (-inspired) album: Earthling. In retrospect it should have been obvious, but despite many listens I'd never pigeon-holed it into any particular style or genre---it was just pure Bowie.
Definitely a great artist, with an amazing legacy. RIP.
Before someone insults me ...
Not at all. Gets up my nose, too. Phrasal verbs need to be verb + space + preposition, not these franken-verbs. I will not "setup" your computer or tell you how to "login" (or any of numerous other abominations that "computer" folk seem to think are OK).
Not a hope
Oh well, was a thought anyway. I guess I'll have to retake Bondesque Villainy 101.
Let's say that you don't actually want to make a self-contained bomb, but do want the right type of explosion. Wouldn't it be easier to rig up some cannons (or rail-guns, but ignore that) containing non-critical fissile material, point them all at a target (which may include a second-stage mechanism intended to achieve fusion) and then synchronise all the shells to fire at once. It should produce the same effects as an equivalent bomb (and will probably be easier to rig than precisely-shaped charges) but probably a lot easier to set up.
Of course, the easier way to fake this would be to set up your lab near a fault line, then wait for an earthquake of sufficient magnitude and claim that you caused it, after the fact.
What does that icon do again? --->
Getting this 1/2 size shrink
<pedant>halving the side of a square means one quarter of the area, not a half</pedant>
three-um to eight-um. Who, ah, um, needs Latin?
"... it doesn't look like the (?) of design, either."
Joe Bauers> [...]
"nicked" from The Life of Brian, I'd guess.
On another note, I wonder if there are any overflow bugs lurking there? Anyone think that an 8-bit unsigned value is enough to hold the "years_left" field?
Sure, if you hack the lottery you might get a million here or a million there, but that's peanuts compared to the potential payoff from fixing an election.
edit: didn't notice m7s's similar post when I wrote the above...
Thanks, El Reg. Explaining right now to the missus how the many-worlds interpretation of quantum mechanics works. Specifically, how virtual particles (ie, receipts that purport to show a parallel me booked for a liaison in a certain hotel) can spontaneously (and with no intent on my part) be created and impinge on our classical universe.
Fingers crossed ...
(rating technological level of space-faring civilisations mainly based on available energy)
I always cringe when I hear IT people talk about moving "fast"
Agreed. As that guy who recently won an MMA title fight said, "timing beats speed, precision beats power."
of that new Thinkpad smell? Or do they not make them like that any more?
Ying tong yiddle I po!
Without seeing the actual paper, with formulas and such, it's impossible to refute the article. I think that I can make some educated guesses, though.
There's a lot of pseudo-science and wishful thinking around so-called "organic" farming, but one thing that does seem to be backed up by actual science is the idea of "Biointensive" farming. One of the major planks of that is the ratios of different crops, eg:
If this works (and let's say for the sake of argument that it does) then there are two things that immediately come to mind.
First, lettuce and such things aren't a good thing to be focusing on in a comparison. They essentially don't provide any calories, and so are a very inefficient use of land. We still need them, but they shouldn't be seen as the major part of a diet.
Second, all that livestock needs to be fed. It may be that animals are better at converting raw materials into meat. If that were the case then it might be more efficient to feed the crops to livestock and thus convert them into meat for us to eat. I'm not sure that case can be made, though: I'm pretty sure that eating corn-fed pig is less efficient overall than eating the corn yourself.

The other thing about passing crops through animals with the intention of eating them is that various livestock can eat things that are either indigestible(1) or unpalatable to us (or just unfashionable; it can be a cultural thing where we consider certain perfectly good foods as being only fit for animals, eg brown rice in Japan, maize and other "fodder" crops). If you look at the sorts of things that pigs or geese will eat, it strikes me that this (eg, geese converting slugs, among other things, into meat and eggs) is a more convincing argument for being a carnivore than any argument about how efficiently animals can convert the same raw foodstuffs into meat.
Where I'm going with this, I strongly suspect that the comparison the paper makes is between an inefficient human diet based mainly on low-calorie, high-effort stuff like lettuces and a much more efficient one used for raising animals. In fact, I'm nearly willing to bet that the kind of integrated farming system that the paper has in mind for raising livestock is probably going to follow the ratios I mentioned above. So I suspect that it's really an apples/oranges comparison: basically assuming that humans and pigs have different dietary needs (which we don't, really) then using a really bad food production model for our diets and a really good one for the livestock.
So basically, if we had access to the paper and could do a proper apples-to-apples comparison, we'd probably find that it supports the ideas of a vegetarian diet (and probably biointensive farming) rather than the opposite.
PS 1: I'm deliberately glossing over livestock that eats only grasses since we're mainly talking about pigs; sometimes land is only fit for grazing, though
PS 2: I'm not a veggie or a hippy
Infecting all (eg, Linux, Mac, BSD) machines would be impressive. Accessing available Windows network shares, not so much.
MS handling of the situation has been bad though.
Maybe, maybe not. They wouldn't be the first to discontinue a free storage service. "Ubuntu One" went away last year.
Does your child answer "yes" when asked "do you like daddy or chips?"
some scientists and researchers have questioned whether quantum computing really exists
Do they keep peeking into the box when the program is running?
maybe he's a time traveller
Hmmm.. Should we throw the name "John Titor" into the hat, too, then?
Why do I get the feeling we are back in the 1930's?
Maybe it's the mood of isolationism everyone's so keen on these days?
Oddly enough, I already know that:
a) Al Gore didn't invent the Internet,
b) he didn't claim to, and
c) manbearpig is the single largest threat that this, or any, country faces
I'm just pointing out that asking Bill Gates to do something about this is as farcical as asking Al Gore or Stephen Hawking (and all the other elders of the Internet that live in Big Ben).
He's obviously asking the wrong guy, innit? He should be talking to Al Gore, surely.
Of course, I expect Al Gore will ask for a little quid pro quo, most likely re urgently-needed action on manbearpig.
Actually, it's even easier. If it's a question of "how many?" then the correct qualifier is "fewer" (eg, "how many out of ten?" -> "fewer than 1 in ten"), while if it's a question of "how much?", then use "less" (eg, "less than 5l", "less than 5%", "less than ideal", "less than full employment", etc.).
Does it really still use 'fewer'?
Yes, it does.
In "Fewer than three in ten science and engineering jobs", "fewer than" is essentially qualifying "three jobs". Jobs come in unit quanta (are countable things) so you use "fewer" instead of "less". To show this, and that it's not qualifying the fraction, you can rephrase as: "fewer than three science and engineering jobs in ten". The meaning of the re-ordered sentence is exactly the same.
If we were qualifying an actual fraction or percentage (or weight, volume and so on) like "three tenths" or "30%", you would use less than: "Of all science and engineering jobs, less than 30% are filled ..." or "less than 30% of all jobs ..."
Consider for a moment how very easy it was for me to get a whole bunch of Atheists really, really upset in this thread. Why? Because I had the temerity to tell them what Atheism is, when they had their own interpretations of what Atheism is that did not match what I was saying.
If you're not an atheist, Trevor, then going around to actual atheists and telling them what they really believe is sure to get up their noses.
Here's the first thing that google turns up when I enter "atheism as a belief" (with my emphasis added):
Atheism is usually defined incorrectly as a belief system. Atheism is not a disbelief in gods or a denial of gods; it is a lack of belief in gods. Older dictionaries define atheism as "a belief that there is no God."
I am an atheist and I agree with that definition. I do not have a belief that there is no God, much less a belief system. Arguing that an atheist believes in no-God is as irrelevant as a goldfish espousing a belief in the water he's living in. It is a supremely unrewarding line of thinking and thus we can, and do, safely discard it.
PS, as someone else mentioned, there's a difference between a-theist and anti-theist. The hint is there in the prefix: a- means "without", whereas anti- means "against". I am without the baggage of belief in a god (ie, I call myself an atheist), and also without the baggage of having to believe that there is none (I am not an anti-theist).
I seem to be going against the trend of commenters here. I think it's fair enough if the natives don't want building there. Would you want to see some giant construction on top of Mt. Fuji? Or would you show a bit of sensitivity?
Can you elaborate?
Quick answer: https://xkcd.com/327/
Slightly longer answer: in the first example, the programmer creates the query string as the concatenation (or interpolation) of the query (s)he intended to do, plus user data. There's nothing stopping the user from supplying data that turns the query from being "intended_statement" to "intended_statement; malicious_statement". This is called an SQL injection attack, by the way.
With prepare, there are checks to ensure that user data can't morph the intended statement into some other arbitrary SQL command; most easily done by escaping any metacharacters like ', ", ; and so on.
How Bobby Tables would work in real life... First, the code would have to select the student name and store it in a variable. Then, there would be some other query that's intended to, eg, show some details of that student. It could look like:
select * from Students where name = '$name'
Without input sanitisation, this becomes:
select * from Students where name = '$name' ; DROP TABLE Students; -- '
(-- introduces a comment in SQL, so the parser won't complain about the extra trailing apostrophe)
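To make that concrete, here's a minimal sketch using Python's sqlite3 (the $name/prepare snippets above look like Perl DBI, but the idea is identical; the table and toy data are my own illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Students (name TEXT)")
conn.execute("INSERT INTO Students VALUES ('Alice')")

# Attacker-supplied "name", straight out of xkcd 327.
evil = "Robert'; DROP TABLE Students; --"

# Unsafe: interpolating user data into the statement text. The
# resulting string now contains a second, destructive statement.
unsafe = f"SELECT * FROM Students WHERE name = '{evil}'"
print(unsafe)

# Safe: a prepared/parameterised query. The driver binds `evil` as a
# value, so it is matched literally and can never be parsed as SQL.
rows = conn.execute("SELECT * FROM Students WHERE name = ?", (evil,)).fetchall()
print(rows)  # no such student, and the Students table survives
```

The ?-placeholder form is exactly what prepare gives you: the statement is parsed once, and user data only ever travels as bound values.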
I suggested a live CD so that you don't have to install anything. Maybe some packages need to be downloaded (into RAM), so it might take longer, but I'm sure it's all in the READMEs.
As for doing it on a server: don't! Stick a USB key into your laptop or whatever spare PC is to hand, boot from it, do the stuff needed to generate the cert and then copy it onto the secure server. It's not like you're going to be running this stuff (or any other configuration experiments) on a live production server, is it?
Those of us required to use Windoze appear to be out of luck
Cygwin might work, I guess, but easiest is probably to download a small live Linux distro and run the script in there. I don't suppose the script will produce configs for IIS or whatever you're running though. Still, you should be able to manually install the cert.
Anyway, some sort of live Linux distro (like Knoppix, especially) is a good tool to have handy even in an all-Windows shop. Using it to reset a forgotten admin password or removing a corrupt page file are a couple of applications that come to mind.
"Can I be honest with you Mary?" the PFY asks.
Only if you can be Frank.
And definitely not to the tune of Scatman.
Bar bitch you ate?
Why should I feel smug? Because I "won" because you decided to rage-quit? Please ...
Well then, you're going to have to start doing your own research. This is the last time I'm going to spoon-feed you.
I have presented my research.
For a given hash & block size, there are a finite number of blocks that will cause collisions in a given hash. By removing some of that finite set, we have fewer potentials for collision. It is that simple
Yes, but your argument was about git, not fixed-size blocks. I have pointed out that we are not dealing with finite sets there. Thus, your counting argument is fallacious.
It is clear that collisions are a problem in the general case
And equally clearly (actually, more so), I gave you the equation for quantifying the collision rate and outlined a simple proof that the error rate can be made arbitrarily small for practical input parameters.
I don't know why you have such a problem with understanding this.
I also don't know what your hang-up is with this "but in the general case" line of argument. We agree on the pigeonhole principle (hash collisions must exist) and I think we can both agree that the analysis of the birthday paradox is apropos. That is the general case, and I'm confident that I've analysed it correctly and that it vindicates my argument. Of course, I left some small amount of work for you to do to verify that what I said is correct, but that's simple high-school maths.
If you do decide to argue further, please make it clear whether you're arguing about git or block-level hashing. And don't try to bring a fallacious argument from one (ie, git) across to bolster your argument (such as it is) in the other. Thank you.
OK, so I'm going back on my promise to not write any more, but ...
1. Does git's normal input type lead to fewer collisions?
Your line of reasoning about the structure of git's input leading to fewer collisions is pure conjecture. There is no "ipso facto" about your conclusions. You say that subspacing the input space discards potential collisions, but you neglect to consider that it also removes many non-colliding inputs, too. For your argument to work you'd have to explain why subspacing preferentially removes more colliding inputs, proportionately speaking.
In fact, the inputs to hash algorithms are (in the class of) infinite strings, so the normal rules of logic when dealing with finite numbers simply don't apply. Half of an infinite space is still an infinite space. In those terms, it's hard to see how your counting argument can hold any water on this point.
2. Is assuming hashes are collision-free a "reasonable" assumption in a block-centric app?
I believe that it is, but you have to plug the block size and digest size into the formulas for the birthday paradox so that you get an acceptable collision rate for a given capacity/occupancy (ie, how many blocks you intend to store).
A simple analysis should be able to convince you that doubling the number of possible hash buckets (ie, adding one more bit to the digest) will more than halve the collision rate for practical occupancy levels (which is a minuscule fraction of the hash space). This takes it out of the realm of being a question of pure mathematics and turns it into an applied maths/engineering question. And that's why I'm saying that it's an entirely reasonable assumption to make if you pick the right hash size for the problem at hand.
The actual formula to determine the probability that there is no collision, with a hash of h bits and an occupancy level o is:
(2^h P o) / 2^(h·o)
where x P y is the permutation function, x! / (x−y)!.
I submit, without proof, that as h increases (with o fixed), the P term will increase faster than twice the rate of increase in the other term, assuming that o is sufficiently small. (in fact, it should increase much faster). Thus we can make the collision probability (1 minus the above) arbitrarily small by making a linear increase in the number of hash bits.
Basing the assumption that a well-chosen digest algorithm will be (practically speaking) collision-free on the above is, I am sure, completely justified.
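For the record, here's a quick numerical sketch of that calculation in Python, using the standard birthday-bound approximation 1 − exp(−o(o−1)/2N) in place of the exact permutation ratio (which is infeasible to evaluate directly for realistic h and o); the function name and sample numbers are mine:

```python
import math

def collision_probability(n_buckets: int, occupancy: int) -> float:
    """Birthday-bound approximation: probability of at least one
    collision when `occupancy` random items land in `n_buckets`
    equally likely buckets. Accurate while occupancy << n_buckets.
    expm1 keeps precision when the exponent is tiny."""
    o = occupancy
    return -math.expm1(-o * (o - 1) / (2 * n_buckets))

# Classic sanity check: 23 people, 365 birthdays -> roughly 50%.
print(collision_probability(365, 23))

# 2^60 blocks under a 160-bit vs 161-bit digest: one extra hash bit
# roughly halves the (already minuscule) collision probability.
p160 = collision_probability(2**160, 2**60)
p161 = collision_probability(2**161, 2**60)
print(p160, p161, p160 / p161)
```

Plug in your own digest size and expected block count to decide what "vanishingly small enough" means for your application.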
The analogy did put me in mind of dictionary-based compression (including LZW), as well as Merkle trees (for the sync problem).
The idea of re-using dictionaries sounds good, but you need good algorithms and data structures to make it tractable. You can't just blindly try compressing all blocks against all dictionaries to find the best compression ratio. That would take forever. I think that some clever tree-based/recursive construction (probably along with Bloom filters or maybe Rice-Golomb encoding) and some dynamic programming could make this work.
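To make the "don't blindly try every dictionary" idea concrete, here's a minimal Bloom filter sketch (the class, parameters, and the one-filter-per-dictionary scheme are all my own illustration, not anything from a real de-dupe product): you'd keep a filter of each dictionary's contents and only attempt real compression against dictionaries the filter says might overlap with a block.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array.
    False positives are possible; false negatives are not."""

    def __init__(self, m_bits: int = 1 << 16, k: int = 4):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _probes(self, item: bytes):
        # Derive k indices from one SHA-256 digest (double-hashing trick).
        d = hashlib.sha256(item).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1  # odd step, coprime with m
        for i in range(self.k):
            yield (h1 + i * h2) % self.m

    def add(self, item: bytes):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(item))

# One filter per dictionary: membership tests are cheap and compact,
# so you can rule out most dictionaries without touching them.
bf = BloomFilter()
bf.add(b"the quick brown")
print(b"the quick brown" in bf)  # True
print(b"never added" in bf)      # almost certainly False
```

The occasional false positive just costs you a wasted compression attempt, which is exactly the trade-off you want here.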
OK, this is my final post on the question, Vic. I promise.
I understand better now the point you're making about "entropy". You're saying that, eg, if I hashed all the natural numbers, represented in ascii, in a range (let's say they're zero-padded up to some minimum block length to make a fair comparison) that there will be fewer collisions than if the blocks all contained random 8-bit data. I think that a well-designed hash won't have significantly different collision rates over the two kinds of data for a given capacity range (ie, the number of messages to be stored).
And I'm saying that, in the general case, [that the risk is "vanishingly small enough"] is simply not true
Mathematically speaking, you're absolutely right. Any hash can and will have collisions as proved by the counting argument/pigeonhole principle (or birthday paradox). All I'm saying is that in practice assuming that the same hash implies the same contents can be a reasonable engineering assumption. As evidenced by how git does it.
The assumptions that git makes on the actual hash size (160 bits) and expected number of commits (and, if I were to accept it, a factor to account for "entropy") aren't going to hold for a massive block-based de-dupe system, but you can plug the numbers into some formulas to find the expected collision rate and choose your digest size to make the risk "low enough" that it's not worth worrying about (eg, a mean time to collision of 50 years, if that's what you want).
I don't know what you're trying to say, there, Vic. By your logic, git having less entropy in the source should imply less entropy in the output hashes, which would in turn imply more collisions.
Anyway, entropy is irrelevant since the point of a good (cryptographic) message digest is to decorrelate patterns in the input from those in the output hash. Entropy measures over the set of output hashes should be the same regardless of input (again, for a "good" hash algorithm).
I'm just making the point that while you can't get away from the pigeonhole principle, if you have vastly more holes than pigeons, you can put some sort of bound on what's an acceptable probability of collision and design your application with the assumption that this risk is "vanishingly small enough".
It's all a trade-off, like Trevor explained in the article and in his post above.
That is not reasonable.
The git tool uses large digests (sha-1) to identify all commits. This is a 160-bit hash and apparently you'd need 2^80 random messages to get a 50% chance of a collision.
I recall reading that git hasn't any safeguards or double-checks to see if two commits hash to the same value. So Linus was just trusting the maths and taking a calculated risk. For his purposes, at least, the chance of collision was so vanishingly small that it's a risk he was willing to take (for the sake of simplifying the code, one assumes).
I get what you're saying about the pigeonhole principle, but it's not always unreasonable to assume that collisions won't happen.
From what I've read, these guys used no encryption at all.
Is "bandwagonesque" something other than a word to describe the minds of politicians (or a Teenage Fanclub album, obviously)?
It's "Lego", not "Legos".