Re: '2 Factor- This is proper security'
"Proper Security? Face it 2-factor is anything but, we need 3-Factor asap!"
You're still thinking too small. We need aleph-null factor security! The crims will never get in!*
* by definition
"If it's such a fustercluck, though, why isn't anyone LEAVING?"
Most of us with half a brain saw this coming as a result of George III (aka G. W. Bush) and left while the getting was good.
The US has been going down the tubes since the Nixon campaign (first to really target white supremacists as a base). Reagan accelerated the decline (destroying union power, tax cuts for the rich, tax hikes for the poor), as did the Bushes (more tax cuts for the rich and more tax hikes for the poor), and now so-called "president" Trump has pushed the pedal to the metal...
(To be fair, Clinton was also a problem, allowing the gutting of welfare under his watch.)
BTW: I'm old enough to remember Nixon, and voted against Reagan twice because I could see how bad for the country he was. If you weren't there, and can't see how what they did harmed the country, you aren't qualified to wax poetic about their brilliance.
"...has been vilified for activities that are not only legal, but also widely accepted as a standard component of online advertising in both the political and commercial arenas,"
Yep. Yet another proof that legal != ethical.
The vilification is entirely justified when the "victim" is, in fact, a villain.
Let's see where this logic gets us...
People who bought Pintos should have known that they had flaws in the gas tank protection. Therefore, when getting into a rear-end accident, the resulting fireball was entirely their fault, and Ford had no blame.
Sorry, I don't buy it.
Unethical data creation / protection / use is still unethical, even if the provider of the initial data is an idiot. It can be argued that the company is not 100% to blame, but not that it is 0% to blame.
Plus, this ignores those of us who *were* aware of possible consequences, created accounts because we basically *had* to (I am a university professor; my students don't communicate via email, they do it via fb), gave it as little information as possible, and *still* had our data weaponized.
We really need to stop the neo-liberal "companies don't do anything unethical, since you knowingly agreed to the terms of service and information flow is symmetric, instantaneous, and cost-free, and consumers are entirely rational" mindset in its tracks. The very existence of advertising companies denies the premises.
It's open source. Rip the fucking code out and build. Problem fixed.
To make it a little easier long term, save the change as a patch, apply to future releases.
May have to be tweaked occasionally, but this is one of the nice things about open source.
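The save-it-as-a-patch workflow is a one-liner each way. Here's a minimal self-contained sketch (the filenames and the `show_ads` setting are hypothetical stand-ins for whatever change you rip out):

```shell
# Make the change once, then capture it as a unified diff:
printf 'show_ads = true\n'  > config.orig   # upstream version
printf 'show_ads = false\n' > config.new    # your edited version
diff -u config.orig config.new > remove-ads.patch || true  # diff exits 1 when files differ

# On each new release, re-apply the saved patch to a fresh copy:
cp config.orig config.txt
patch config.txt < remove-ads.patch
cat config.txt   # now contains your change
```

On a real source tree you'd use `git diff` / `git apply` (or `patch -p1` from the tree root); when upstream moves the code, the apply fails loudly and you tweak the patch by hand, which is the occasional maintenance mentioned above.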
That said, I may switch to a fork myself. We'll see. Certainly not planning to watch any stupid ads.
= CTRL + Z was a control key to suspend the currently running process to the background in the C
= shell (csh) in the late 1970s BSD kernel, while CTRL + Z being used for undo was first done at
= Xerox PARC not Apple.
You know, I think you may be onto something here. Kieran may have been a little sharper than we realized. It is actually a ^Z in this sense; because the FCC obviously did not follow correct procedure, this idiocy is going to be stopped by the courts so fast it will make Pai's head spin. Plus, when the dems take congress back in 2018, it'll be fixed in legislation if it is still being fought over (which it probably will be). So it's just backgrounded for a bit, and the fg is coming soon. :-)
= And that is one of the more enlightening web sites. Are you making this up?
I'm a professor in a business school (ex-computer scientist for 20 years prior). It is a term that is used in economic circles. It is not well known outside those circles.
The very short version is that in a capitalist system, capital is equipment or tools. Not money. Money is just a short form token that allows you to convert one type of capital into another, or exchange capital for labor, or vice versa. It's an accounting trick, and nothing else.
The current American system has turned that on its head, giving actual primary value to the money itself, not to the capital it theoretically represents.
Money doesn't do jack shit. Capital does. And yes, it does make a difference. You do know, for instance, that companies get nothing from the stock market, right?
If that's news to you, then I suggest you do some reading up on real economics. And not just micro.
(Oh, and since your ability to do actual research seems to be wikipedia, go look at "financialism" and "finance capitalism" there.)
The "law of supply and demand" only applies in microeconomics, and only in a perfect world that does not, and can not, exist. It requires:
* Perfect competition
* Commodity (i.e., non-distinguishable) goods
* Perfect information flow
* No asymmetric information
* No transaction costs
* Perfect mobility
* An infinite number of suppliers
* An infinite number of consumers
Break any one of these conditions, and it is an approximation, not a "law". Break a couple more, and you have complete market failure.
Raise it to the macroeconomic level (country wide or world wide), and it's completely useless and does not describe reality at all.
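To see just how toy-like the "law" is, here's a minimal sketch (all parameters hypothetical): with idealized linear curves the model yields one tidy equilibrium, but only because every assumption in the list above is baked in.

```python
def equilibrium(a, b, c, d):
    """Demand Qd = a - b*P, supply Qs = c + d*P; solve Qd == Qs.

    Returns the equilibrium price and quantity. Meaningful only under
    the perfect-market assumptions listed above (infinite identical
    actors, perfect information, zero transaction costs, ...).
    """
    p = (a - c) / (b + d)   # price where supply meets demand
    q = a - b * p           # quantity traded at that price
    return p, q

print(equilibrium(a=100, b=2, c=10, d=1))  # (30.0, 40.0)
```

Break any assumption — say, one dominant supplier setting prices — and this closed form no longer describes what you actually observe.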
It is you who does not understand fundamental economics, whether capitalist or otherwise. (Oh, and a small hint: the US system hasn't been capitalist in a long, long time; since at least the 18th century, and quite possibly the 17th. It's actually financialist.)
Your analogy is wrong.
What the anti-net-neutrality folks want to do is charge you to enter the road AND to leave the road.
If I am downloading terabytes of stuff from netflix, sure, that implies a lot of data enters my ISP's network from netflix. But in order for me to RECEIVE those bytes, I will have had to pay my ISP to give me a big pipe.
So they have already been paid to carry those big trucks.
= TBH, women like you are the reason feminism gets such a bad name.
And men still don't get to decide how bad "grabbing an ass" is. The rest of your comments I'll ignore because you didn't understand the subtleties of the point that I was trying to make-- which is that Microsoft has a right and a duty to act based on suspicion. They are not a court of law. Just like Google was right to fire their jerk.
= I can definitely say that as an undersized 13-year-old girl, while I found having my butt & chest
= grabbed by the jocks as part of a bullying campaign was upsetting, it didn't freak me out half as
= much as their threats to rape me if they ever found me alone. (They weren't smart enough to
= threaten to do it if I told on them, or I might not have gotten the little asshats suspended.)
Okay, fair enough, I was being extreme, but to try to make a point-- men don't get to decide that "grabbing a little ass" isn't serious. It's the grabee that gets to decide. "Boys will be boys" is not acceptable, and is legally actionable.
The "proven guilty" standard is for criminal cases in criminal court with criminal penalties. It applies nowhere else. In civil courts, it's "preponderance of evidence", not "beyond a reasonable doubt". In a company, the threshold is even lower.
So yes, Microsoft could have, and should have, done more. Offering to transfer her to a different department is not sufficient, nor is it appropriate (unless offered to her as a possibility to choose from). The accused should have been transferred; he, after all, was the alleged wrongdoer, not she.
Doing so is in Microsoft's best interests-- whether there was a crime or not, she perceived one, and that would cause problems in the department. Moving someone was necessary simply from a productivity point of view. Absent convincing evidence, the transfer for the accused should not itself be punitive; it should be to another useful experience (since he was an intern), but which involved no further contact with the woman.
And, finally: it is always men saying that there are "degrees" and "grabbing an ass is not as bad as forcible rape". I doubt the difference is so apparent to the victim. They are both equally invasive, and equally wrong. Yes, I think we should charge people grabbing an ass with rape.
You are not the one who gets to define how bad the crime is. It's the victim that gets to decide.
= So if America is so bad, why not move elsewhere?
I did. As soon as I got my PhD, I left the US permanently. It's a hideous place. And, FYI, I was born in the US, lived and worked in the US for 50 years. I am now in France, and am happier than I have ever been in my life.
Until Macron ruins it, of course. Vote Melenchon!
= And .... One Day .... A Fruit Fly will be elected President of the United States based on it's superior looks and decision making skills ... Verily it will be Thus.
Too late, it already happened. Although it was its fruity-appearing looks and ability to appear crudely human rather than its decision-making skills that did it.
=It just so happens that you can't fire people over opinions.
Actually, in the US, in most states (though possibly not California), yes you can. You can also fire them for voting for the wrong candidate, being a Democrat, parting their hair on the wrong side of their head, or looking at you funny. See "employment at will" (typically a part of the doublespeak "right to work" laws).
I would like to thank the author for this article. This is why I am a researcher, and not a journalist. The summary of the social psychology research was excellent and readable, which my summaries are not; I do strive for excellence, but readability isn't as important in the research world. :-)
The facts are:
1. Yes, men and women have obvious biological differences.
2. Yes, there are measurable psychological differences between men and women.
That's it. Those are the only facts.
What the white male bros here are doing (and I say this as a white male, but not a member of the white male bros, to the extent that I can be) is assuming a causal link between statements 1 and 2. This causal link is DECIDEDLY NOT PROVEN. In fact, there is considerable evidence to support the idea that many, if not most, of the psychological differences between men and women are due to environment (socialization, external biases and restraints) rather than biology.
Whether the environmental differences should be tackled is a fair question, even if I think you're an asshole for answering "no". But that the psychological differences are inherently biological is NOT a question of your uninformed opinion.
Looking through the original article (yes, I DID RTFA), I note several things:
1. The personality differences ascribed to being female have a huge confound. The traits of agreeableness, awareness of feeling, interest in people, empathy, lack of assertiveness, gregariousness, and neuroticism also correspond to the traits expressed by any group of people who are raised in a position of low societal power. I cannot quickly find if this has been studied; if not, I may have my next research topic. :-)
2. Much of the author's argument rests on conclusions from evolutionary psychology. That field has some serious problems, notably falsifiability. Much of their theory is basically "Just So" stories. The research in that area is largely self-contained; there are not many other fields that reference evolutionary psychology in their own research, whereas social psychology research is used extensively through the social sciences, having been proved useful.
3. This is a classic, and I will grant you well-crafted, example of "argue the controversy". There is very little difference in the fundamental argument the author makes from those of creationists (or climate-change deniers) who argue that what they are arguing is science, and that evolution is wrong. They use a few carefully chosen facts, bound together by strongly implied (but not stated!) dubious causal links, to come to conclusions which they find morally desirable. As such, it is FUD propaganda, and should in fact be denounced. The author misuses the science and leads the reader to infer scientific backing for things which are simply not true.
I expect this will be my last post on this subject. It has fallen very much into the "Someone on the Internet is WRONG!" category. The assholes who wish to protect their privileged positions from the "others" are not going to listen, and the others are bored or annoyed. I must agree with one of the earliest replies: I strongly suspect an invasion of the alt-right, either by bots or simply their attention drawn not by the tech focus of El Reg but the political focus of these articles, based upon the suddenly larger number of upvotes for comments supporting the white male tech bros and downvotes for those opposing.
I used to be an asshole like Damore myself (and worked 20 years as one of the best programmers you'll ever find). I got better, learned my own biases, and moved into social psychology research with a focus on differences between IT people and management. I still have my own biases, but I am much more aware of them, and of the fact that success in the world is much more about luck, the color of your skin, what you have between your legs, and who you know, than your own innate abilities. I would love to live in a real meritocracy, but I also recognize that that is impossible; and that leveling the starting point (I.e. yes, discriminating against people who are members of the privileged classes-- which is mostly white male in the developed world) leads to better outcomes than allowing a fundamentally flawed fake meritocracy to fester.
I'm a white, male, anglo-saxon, ex-protestant, American, and am also a feminist, socialist bordering on marxist, supporter of affirmative action. If you find yourself offended by that, I will point out: I'm probably older than you, almost certainly more educated than you, probably more intelligent than you, probably have lived in more different places and dealt with more different people than you, almost certainly more widely read than you, and I'm not interested in hearing your drivel.
I fully expect to set a new Register record for downvotes.
You did have almost one correct statement in your reply. Let me fix it for you, so that you can at least have one completely true statement (I won't comment on your grammar, as that will guarantee that I make a grammatical error):
= TL;DR: I'm an asshole, who pointed out an asshole author on an asshole article being bias and
= poorly concluded.
TL;DR: I'm an asshole, who pointed out a *well-informed* author *who actually bothered to look carefully at social psychology research* on *a well-synthesized* article *summarizing that research* and concluded with *a lot of ignorant vitriol which I added above*.
N.B.: My comments refer, as do most of the original poster's, to the content of the article before talking about the asshole-outing. While I generally agree with the author there as well, that is obviously opinion.
= A handy graphic for understanding outrage at statements about average differences between groups:
A handy definition for those that think statistical differences between groups should inform general policy: https://en.wikipedia.org/wiki/Ecological_fallacy
(That said, the graph is in fact correct.)
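A quick stdlib-Python sketch of what that graphic is getting at (effect sizes hypothetical): even a "medium" average difference between two groups leaves the distributions mostly overlapping, so the group mean predicts almost nothing about an individual.

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_superiority(cohens_d):
    """P(random member of group A scores above random member of group B)
    when both are unit-variance normals whose means differ by cohens_d."""
    return phi(cohens_d / math.sqrt(2.0))

# d = 0.5 is conventionally a "medium" effect; the answer is barely
# better than a coin flip (~0.64).
for d in (0.0, 0.2, 0.5, 0.8):
    print(d, round(prob_superiority(d), 3))
```

Applying such a group-level statistic to an individual is exactly the ecological fallacy the link describes.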
*sigh* I should have put the PS into a different comment. I should have thought it was clear that my argument was the body of the reply-- which you ignored. The PS was not part of the argument, but was a completely separate and very subsidiary point. Let me spell it out for you:
1. The author of the post I replied to believes that people should be judged solely on ability.
2. This implies that either a. we live in a meritocratic world, in which his and others' judgment is completely unclouded by unconscious biases, or b. we do not but the author does not care (i.e., is an asshole).
3. If we live in a meritocratic world, this implies that the rewards that are obtained are obtained purely on merit.
4. I have attained one of those rewards, which fewer than 1% of the people in the world (and less than 2% in the developed world) have been awarded, for my expertise in social psychology.
5. I stated that statement 2.a. is false. This is supported by two possible arguments:
5.a. Since we live in a meritocratic world, I got my reward purely by merit, and thus have the necessary expertise to evaluate the truth value of statement 2.a., which I declare to be false (denying the premises of this specific argument).
5.b. If I did not get my reward solely on merit, then statement 3 is false. Since statement 3 is false, statement 2.a. is necessarily false, and statements 4 and 5 are irrelevant.
6. Thus, statement 2.a. is proven false. Therefore, we are left to conclude that statement 2.b. is true.
Is that clear enough? It is not argument from authority; while statement 5.a. does hinge on that, it is also required that the reader conclude that argument from authority is appropriate. Argument 5.b. shows that statement 2.a. is false regardless.
Since my comments were about a previous comment and the idea of meritocracy, I can only assume that you are using the usual tactic of the neo-fascists^Walt-right and trying to claim I said something I did not. Besides, my PhD in (effectively) applied social psychology is more relevant than a PhD in biology, even if the poster to whom I was responding had one.
And by the way, those links (not proper cites, but I felt links would be more appropriate in an online forum) were just a quick check. There were a few that claimed merit was stronger than socioeconomic, gender, or race markers, but they were all rebutted, and the cite count was considerably higher for those indicating merit was a poor determinant, which indicates consensus in the field that merit is largely a myth that supports the status quo.
Reading later comments, I see that the value of diversity is also questioned. I could provide a list of cites that argue against that, mostly in the business literature, but I won't bother, since I've probably bored you enough already. TL;DR: yes, diversity is good, no matter how diversity is measured. More diversity is better. While diversity of town, etc. is certainly good, those differences are typically not as large (at least in the US) as socioeconomic, race, gender, or cultural markers, and thus less valuable.
And no, firing an employee for sharing screeds detrimental to company performance and morale using company resources on company time is not "reducing diversity". Free speech doesn't mean I have to provide you the resources, nor does it mean that I can't say "you are an asshole" and fire your lily-white entitled ass.
= The only form of discrimination that is valid is discrimination by ability.
While a noble idea, the meritocracy is a myth. Social research has shown that it does not, and in all likelihood cannot, exist in reality. FWIW, Deming (one of the founders of the quality movement) concluded that "merit"-based pay was mythical and harmful to the organization.
Bootnote: social mobility (a measure of how meritocratic a society is) is lower in the United States than it is in any of the member states of the European Union, especially in the Scandinavian countries, but even including supposedly class-ridden Great Britain and France.
(TL;DR: while merit has some effect, gender, race, and other social class markers have at least comparable, and probably larger, effects on measured attainment.)
And, not an academic publication, but probably more readable, and cites much of the relevant research:
P.S. Yes, I have a Ph.D. in social science. Business Administration, in fact (IS specialty), and teach and do research in a business school. This is relevant because, if we do in fact live in a meritocracy, then I must be an expert in the field, with a better understanding than that of the layman. My pronouncement that there is no meritocracy thus has a higher confidence value. If, on the other hand, you feel that my opinion is of less value, then you implicitly deny that we live in a meritocracy. I.e., the conclusion inescapably falls to "there is no meritocracy". And, in fact, I am white anglo-saxon recovered protestant first-born male from an upper-middle class family, whose father was a sociologist. Which probably provides significant explanation for my having my Ph.D. beyond my innate ability.
Okay, fair point, I was a little sloppy in that. The set of orders of magnitude is countably infinite. I should have used multiples of aleph-1 instead.
The point is, it's a phase change. The deterministic model (ALL modern computing, except possibly quantum computers, and I remain to be convinced that they are truly non-deterministic) simply cannot be used to model the brain. It's like trying to apply Newton's laws of motion when you're in a relativistic frame. The model is inadequate and fails.
Beat me to it, and *exactly* my point. Gödel has a few pithy things to say about it, too. As, in fact, does Plato, albeit in a different context; see _Phædo_, in which (among other things) he points out that language is an insufficient construct to truly convey thoughts.
Along those lines, to the person who accused me of failing first year comp sci-- I was not making the full logical argument in CS terms. Because, frankly, it can't be made (again, see Gödel), at least without fully including Gödel's proof*S* (yes, there was more than one, and in some ways the subsequent proofs were more important). If you're only aware of the incompleteness theorem (as I suspect you are, and probably only in the _Gödel, Escher, Bach_ form, which is itself incomplete), I suggest a few remedial logic classes. Finally, if you do not understand the truly gigantic implications of the difference between deterministic and non-deterministic Turing machines, you are the one that needs some remedial CS. Hint: it is not merely computability. That is merely *one* of the implications, and one of the least interesting.
Now, the obligatory _ad hominem_ out of the way, the point is that analog computers are in fact infinitely better than digital, simply because of Cantor's proof. That there is a discrete quantum underlay might one day become an issue, but I suspect it would be overwhelmed at that point by the probabilistic (analog!) nature of the choice of those discrete states.
Further, don't confuse the map with the territory. Quantum mechanics is a *model*, it is not the *reality*. For instance, the fact that it currently requires us to treat EM radiation (e.g. light) as both a wave and a particle means that our map is not sufficiently accurate-- the radiation is neither a wave nor a particle, both of which are models, but something else that we don't actually understand very well. We can use the "particle" model or the "wave" model at certain times to predict certain behaviors, but that does not actually mean that light is changing between being a particle and being a wave. It's always light. Someday we might have a more accurate model of something that seems to behave as both models (and, in fact, maybe we do; I'm not a theoretical physicist, though I play one on the Internet), but for the time being switching maps when appropriate is sufficient.
It's kind of like relativity versus Newton's laws. Newton's laws are sufficient for predicting how your car behaves. They're not for predicting how a neutrino behaves. You choose the map that fits the resolution you're using. Even relativity is only a model-- at some point, *it* will almost certainly prove to not have enough fidelity, and need to be modified to better model the strange, strange thing we call reality.
How does this get back to AI? Well, we have a model of how the brain works. It's nice and all, and can do some fairly amazing things. But it's still only a model. Neuroscientists are still trying to figure out some of the grosser ways that the brain works; we're a long way from understanding it at more fundamental levels. The complexity is staggering; hundreds or thousands of different chemicals interacting, modified by a non-deterministic network of interconnection, all operating in an analog fashion. Hell, we don't even have a Newton's-laws-of-motion level of fidelity of model for the brain, much less relativistic models. How can we hope to replace the human brain when we are not even in the stone age of modeling it-- we're probably still some small animal scurrying around under the feet of T-Rex trying not to get squashed, in our understanding and modeling of the human brain. Hell, probably even of the *flatworm* brain.
So, no, we're nowhere near true machine intelligence, much less machine consciousness. And probably won't be for hundreds of years.
"We will have a true AI in 50 years" is a time-invariant statement. And yes, I've done AI stuff as well, and am now a social scientist, so I mix the two freely. We are at least five orders of magnitude from having the same computing power density as the human brain.
Actually, arguably, we are infinite orders of magnitude away from the same power. As the previous poster pointed out, the human brain is actually an exceedingly complex analog computer; thus it has infinite possible states, and since our computing power is not currently infinite, QED.
Also, the brain is not deterministic. So not only is it not digital, it's also a non-deterministic analog computer. It's going to be a long, long, long time before we get anywhere within shouting distance of its actual power. Unless P=NP, which is seeming less and less likely (and it never was very likely) as time goes on.
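The cardinality argument above can be stated precisely (as a sketch of the pure-math claim only; whether ideal analog states are physically realizable is a separate question):

```latex
% An n-bit digital machine has a finite state set:
\left|S_{\mathrm{digital}}\right| = 2^{n} < \aleph_0 .
% A single ideal analog quantity ranges over a real interval,
% which Cantor's diagonal argument shows is uncountable:
\left|S_{\mathrm{analog}}\right| \ge \left|[0,1]\right| = 2^{\aleph_0} > \aleph_0 .
```

So no finite, or even countably infinite, digital state set can be put in one-to-one correspondence with the states of an ideal analog device, which is the "infinite orders of magnitude" being claimed.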
This, absolutely and completely. Amazon, Netflix, etc. pay a ton for their connectivity to the net, proportional to the bytes they push. I pay a much smaller amount, proportional to the bytes I receive. I am paying for n packets per second, as are they. That's correct use-based pricing. What the non-neutrality folks want is to have a high-cost toll road where you pay to enter *and* to leave, and then stop maintenance on the public-supported road.
The truck analogy fails badly because Netflix's packets are no heavier than the packets from my mom-n-pop video site. Netflix has more *volume*, because their rate is faster, and they pay for that. But no individual packet is heavier.
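The per-packet point is simple arithmetic (rates hypothetical): every full-size packet is capped at the same MTU, so a big sender differs only in how many packets per second it pushes-- which is exactly what each end already pays its ISP for.

```python
MTU_BYTES = 1500  # typical Ethernet MTU: the max size of one packet

def packets_per_second(rate_mbps):
    """Full-size packets per second needed to sustain a given data rate."""
    return rate_mbps * 1_000_000 / 8 / MTU_BYTES

# One HD-ish video stream vs. a small site: same-size packets, different count.
print(round(packets_per_second(25)))  # ~2083 packets/s
print(round(packets_per_second(5)))   # ~417 packets/s
```

No "truck" here is heavier than any other; there are just more of them, at a rate both sender and receiver have already purchased.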
Feh. The whole thing is absurd. The goal of anti-net-neutrality is, as the previous poster said, simply double-dipping. A pox on all their houses.
That's all lovely. The problem is, that the American system is now so corrupt that any new regulation would probably be even worse than Title II. *IF* it could even be passed. Most American legislation is now, and has been for years, written by the corporations.
America is not a democracy (actually, it never has been; but it's no longer even the republic that it was supposed to be). It's an oligarchy, run by the very rich for the benefit of the very rich (there is, IIRC, a Harvard study that confirms this; maybe Yale).
The best chance the American public has for decent non-oligarchic control of the internet is Title II.
To pretend anything else is to assume that one lives in a fantasy world where legislation that actually protects consumers happens. While Title II is certainly not optimal, it is the best possible realistic choice.
(And yes, I am American, have been politically active most of my life, and decided to chuck it all as a lost cause when I turned 50 and moved to France. Permanently.)
At this point, the transition is not slow, painful, or even expensive. It's a classic case of the situation where once you have invested enough, the technology pays for itself. It won't even cost more than what's currently available. It truly is a win-win, just with a high activation threshold. We've crossed that threshold, and are beginning to slide down the good side. Murika will be left in the dust, and good riddance to bad apples.