Re: The royal WEEE ???
Yes, "we". As in "we who did the work represented by the paper that we are publishing".
I, and a few others with similar levels of knowledge, have been painstakingly attempting to explain this until exhaustion from day one. Our communal moniker aside, El Reg's community contains an amazing brain trust.
- Rule #1 of business is that the customer is king.
- General consumers don't understand security, and do not care unless they personally are inconvenienced.
- - General consumers actively punish companies that provide security at the cost of their convenience.
- - General consumers actively punish companies that provide more expensive solutions with no apparent benefit.
The outcome of the above is that anyone selling into the general consumer market is either going to be like Intel (selling vulnerable product) or Blackberry (driven out of the market).
From a technical standpoint, data leaking through the cache response times is core to the existence of a cache on the part. THIS DOES NOT DEPEND ON SPECULATIVE EXECUTION. Speculative execution permits rapid reading out of the data, but even without it, if I have access to a wall clock, I can tell if my data has been ejected from the cache or not. This is a data leak.
This leakage, however, is not subject to attacker control. Various strategies by defensive applications or the OS can prevent an attacker from deriving usable information this way.
Speculative execution, in and of itself, does not affect the situation. Speculative execution that bypasses memory protections, however, very much does.
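To make the cache-leak point concrete, here's a toy prime-and-probe simulation. It is purely illustrative (a hypothetical 8-set direct-mapped cache, all names mine): the attacker never sees the victim's data, only which of its own lines got evicted--which, on real hardware, it detects via the slow reload time from the wall clock. Note there is no speculation anywhere in it.

```python
# Toy model of a cache timing leak: a hypothetical direct-mapped cache
# with 8 sets. No speculative execution involved -- the attacker only
# learns which of its own lines were evicted.

class ToyCache:
    def __init__(self, sets=8):
        self.sets = sets
        self.lines = [None] * sets          # one line per set

    def access(self, addr, owner):
        s = addr % self.sets                # set index for this address
        hit = self.lines[s] == (addr, owner)
        self.lines[s] = (addr, owner)       # fill / evict on a miss
        return hit

cache = ToyCache()

# Attacker "primes": fills every set with its own data.
for a in range(8):
    cache.access(a, "attacker")

# Victim touches one address that depends on its secret (here, 5).
secret = 5
cache.access(8 + secret, "victim")          # maps to set 5, evicts attacker line

# Attacker "probes": a miss (a slow access on real hardware) reveals the set.
leaked = [s for s in range(8) if not cache.access(s, "attacker")]
print(leaked)                               # -> [5]
```

On real silicon the `hit`/miss distinction is a timing difference, which is exactly why having a wall clock is enough.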
So, what was the situation in the nineties? Speculative execution with memory protection bypass provided consumers with a substantial speed improvement. Yes, we all knew that there was a theoretic risk of exploit. We tried (not me personally, the industry) AND FAILED to realize that exploit. So the designs were shipped. And for more than twenty years, there was no publicly known exploit.
While I have strongly condemned Intel's response to the discovery, there is simply no honest way to condemn them for the decision that they made in the nineties to ship this design.
I will also point out that I have been aggressively throwing shade on these software "fixes" since they have been coming out. Memory protection bypassing in the hardware is not something software can fix. I said this a year ago. Its truth is obvious to anyone that has played around at that level.
Again, the potential fixes are as follows:
1) Turn off all caching.
2) Turn off all speculative fetching.
3) Replicate ALL caching at ALL levels so that cache ejections due to speculative fetching are recovered. (I have become more pessimistic about this over time for various reasons--but it roughly doubles cache sizes & adds a lot of logic. It also is not clear that this would defend against an indexed load being speculatively fetched from an address space controlled by the attacker--and I do not believe that gadgets of this sort are avoidable.)
4) Enforce memory protection during speculative execution.
5) Ban untrusted code.
Anyone who has done significant work designing or validating microprocessors understands just how bad options 1-4 are from a performance/watt standpoint. Which is why I've been talking about 5 for the last few months.
Dedicated machines running only trusted applications can safely ignore Spectre-class attacks. This will give them a HUGE performance/watt bonus over Spectre-secure machines. The market is going to bifurcate over this--and we should rejoice, because once it does, there is a chance, however small, that x86 will finally get the boot from the consumer space.
No, apparently, YOU were the one with Wikipedia open--and you misread it.
Yes, the #1 rule of crypto is you don't roll your own--but this applies to creating the primitives, not implementing them. As I've repeatedly mentioned here, however, if you're not using libraries, you better have a **** good reason.
And the prior poster mentioned libraries.
But secret sharing is actually one of the simpler primitives (and arguably less sensitive than encryption or hashing). I actually would not be too freaked out over an independent implementation.
If you want to share this secret among n people (with infinite storage) so that any three or more can know it, just translate the secret into a plane in R^3, and hand out random points on the plane.
Of course, we are finite beings, so you need to use finite fields instead of R--say, of order p^k. This in turn means that any two collaborators can "compromise" the secret to the point that only p^k possibilities remain. So choose p^k to be as high as you need for your application, and check that no three of the issued points end up on the same line.
If you want to require four to reveal, just step up the dimension of the affine plane (and "line") in question.
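For the curious, here's a rough sketch of the three-of-n scheme above, using a prime field GF(p) in place of R^3 and hiding the secret as the constant term of a random plane. Everything here (the names, the choice of prime, skipping the no-three-collinear check) is my own illustration, not production crypto.

```python
import random

# Sketch of the 3-of-n scheme: hide the secret as the constant term of a
# random plane z = a*x + b*y + secret (mod P), and hand out points on it.
# Any three non-collinear points determine the plane, hence the secret.

P = 2**31 - 1                       # a Mersenne prime; pick P as large as needed

def make_shares(secret, n):
    a, b = random.randrange(P), random.randrange(P)
    return [
        (x, y, (a * x + b * y + secret) % P)
        for x, y in ((random.randrange(1, P), random.randrange(1, P))
                     for _ in range(n))
    ]

def recover(s1, s2, s3):
    # Solve [x y 1][a b c]^T = z (mod P) for the three shares by
    # Gauss-Jordan elimination; c is the secret. (A real implementation
    # must also reject collinear shares, which make the system singular.)
    m = [[x % P, y % P, 1, z % P] for (x, y, z) in (s1, s2, s3)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if m[r][col] % P)
        m[col], m[piv] = m[piv], m[col]
        inv = pow(m[col][col], -1, P)           # modular inverse
        m[col] = [(v * inv) % P for v in m[col]]
        for r in range(3):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [(v - f * w) % P for v, w in zip(m[r], m[col])]
    return m[2][3]                              # the constant term = secret

random.seed(1)                                  # deterministic demo
shares = make_shares(12345, 5)
print(recover(shares[0], shares[1], shares[2]))  # -> 12345
```

Any two shares alone still leave P candidate planes, exactly the p^k ambiguity mentioned above.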
If you need to look up anything I just wrote, then use a ******* library.
The issue is application development. All of the usual issues apply here, but it's important to keep in mind that the secret is not known to the application until enough keys are provided--and the application properly combines them.
There is another angle to this. According to last summer's Party congress, everyone in China has the duty to steal tech. There is a non-trivial possibility that the engineer in question was facing reeducation for himself or his family if he did NOT attempt to steal the tech.
There is a huge false equivalency bouncing around here between the behavior of individual Americans in history versus the current state policy of China. Follow that line, and you blind yourself to real threats.
I much prefer the decrementing reference. Office362 it was for a while. Now we're at Office361....
As for the rest of m$ "issues", that's just "how we got to be so rich". Questionable to call it "news", really, except for the fact that it so regularly ruins the day of so many people.
All of these attacks can be prevented by the addition of suitable caches, which are flushed at instruction retirement. Unfortunately, the caches are the bulk of the area of the chips. Furthermore, the wires and logic to drive them are also large.
But I'm seriously thinking about figuring out how to turn Spectre mitigations off when I'm gaming. Unplug the network, turn off Spectre mitigation, and see how much speed I gain. Stuff is getting unplayable.
While I was there, someone posted a video about a female engineer being talked down to aggressively. She claimed that every female engineer at Google experienced this. When challenged, she then claimed that every female engineer at Google experienced it every day.
I called ******** on the first claim, let alone the second. I've not been at that many companies (AMD, IBM, and Google being the ones you will know), but everywhere that I've gone, there has been a strong culture of zero tolerance for harassment. (This includes the overnight retirement of a director at IBM.)
We live & die by our code, and harassed individuals don't produce their best code.
Of course, people who are scared to talk to each other, or to joke around, also don't produce their best code. So we carry on. I've seen a guy do a long-form virtual seppuku during standup. I've had a manager discuss defenestration. (The day after I was reported for ceremonially banging my head on a wall.) We are also sensitive to stepping over the line. On multiple occasions, I've seen people check themselves. I've been asked a few times if I was bothered.
We're all human. We all have our strengths, weaknesses, and foibles. Respect means backing off when requested. It means watching out for the possibility that you might have touched something unintentionally. It also very much means assuming that there was no ill intent until proven otherwise.
So I am in possession of proof that at least some of the women at Google are making ludicrous claims of harassment. As another mentioned, this makes things worse, potentially much more so, for those who actually are suffering from it.
Apparently, I'm being slow here. Suppose I want to run DNS over port 3306 on some server out there, known as sneakydomain.com. Regular DNS resolves sneakydomain.com. You blocking that? Then I contact sneakydomain.com on port 3306, which is already being used for MySQL activity. What firewall rule blocks "SELECT * FROM DNSTABLE WHERE NAME = 'HOSTIWANTTOGETTO.COM' ;" ?
I understand that network operators are going to be pulling their hair out over this sort of thing. I understand that malware operators (especially Big Social) are going to start abusing it. But this is going to take a lot more than the work that OpenRelay is doing to manage. Blacklisting is whack-a-mole. Whitelisting is a netsplit of the entire Internet.
Like all technologies, this can be used for good or ill. What I am trying to say is that it is technically unstoppable. Not a good day.
Except that the mere existence of DoH servers anywhere on the net means this. In fact, there is nothing stopping an application from implementing DNS over Telnet or any other port it wants. When you are running an app, you are trusting the app to do anything it is allowed to do.
If you are already paranoid, this is not going to help things. Sorry.
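To make the point concrete: a DNS query is just a short byte string (RFC 1035 wire format), and nothing in those bytes knows or cares what port or protocol carries them. A stdlib-only sketch (the hostname and query ID are arbitrary):

```python
import struct

def build_dns_query(hostname, qid=0x1234):
    # A bare-bones DNS A-record query in RFC 1035 wire format.
    # Header: id, flags (recursion desired), 1 question, 0 answers,
    # 0 authority records, 0 additional records.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("hostiwanttogetto.com")
print(len(packet))   # -> 38
```

You could send those 38 bytes over UDP 53, TCP 3306, a Telnet session, or an HTTPS body; a port-based firewall rule cannot tell the difference.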
Freedom House and the like need to monitor the effects of Big Social directly.
Just today, I had a comment censored for mentioning a news item that was relevant to the discussion.
Three or four years ago, execs at Google were bragging about how they tilted an election in a third-world country to halt a dictator-wannabe. There is nothing specific to their technology as it relates to that election, and indeed, these pages published research in August, 2016, that appears to demonstrate that they were attempting the same thing in the US.
Interfering with an election is interfering with an election, no matter if it is done by government agents, immunized pro-government groups, mobs, or major corporations. It all needs to be examined.
"There's an effort underway to support 'N+1 redundancy at the facility level'"
1) If you're not running N+1, then you've not learned the first lesson of SRE.
2) For the repos (not all the cruft GitHub has added to them), unless you have a hash collision, resolving "some writes here, some writes there" is trivial by the design of git.
3) For the local cruft, adding shadow git repos to handle their changes should be at worst simple. Again trivial to recover from split brain.
4) For the non-local cruft (i.e., webhooks), yes, there would be work to be done to deal with the fact that apparently there are people allowed near keyboards who think that IP is a reliable system for data transport.
What am I missing here?
Just how many tons of chemical weapons does it take to qualify as having WMDs? As I recall, we found eight tons at a single location.
And I've personally spoken with a service member who said their chem alarms triggered multiple times during operations in Iraq.
No, we did not capture the German-built mobile nuke labs. (Smart money was that they moved to Syria during the buildup.) If they actually had nuclear material in them. But WMD is not the same as "nukes", or we would not have used the broader term.
They were not mathematicians--they were in fact the 'programmers'. (Or the engineers were, I presume.) The 'programmer' was someone who broke a complex computation down for the 'computers'--who were generally high school educated (when that counted for something). My understanding is that the programs were executed multiple times and the (intermediate) results cross-checked.
It's a shame that he did not recognize the failure of this phrase and clarify it.
A "civilian" is someone who can rely on others for their protection.
Is there ANY commentard who has not conceded that the end users are their own worst enemies because of their failure to protect themselves?
If you know what he was saying, it is a truism. But if you don't, it's easy to dismiss it as military over-eagerness.
Now, if we could just convince management that security matters...
Nah, he's pissed. There were a couple of obvious comebacks, but the peals of laughter from the audience were not going to let him do it. He was screwing the world, everyone knew it, and this was a rare place and time that he could be made to pay, even if only a little bit.
That was the advice I got when entering the WinTel world almost 30 years ago. People were complaining about being beta testers then.
Yes, back in the day, Bill Gates was a crackerjack programmer. But when he took the role of CEO, he well and truly digested the cardinal rule of consumerism, "Nobody ever went broke underestimating the intelligence or taste of the American public."
At first, businesses loved Bill Gates because he screwed IBM. By the time they realized who else was on the list, it was too late.
Devs properly trained in TDD produce working code faster than without. You don't unit test constants, nor java setters & getters. You do write a test before you implement a branch in code execution.
For simple algorithms, yes, it is possible to create a golden dataset, and when the code passes it, it passes. Oh, wait. That's a different form of TDD!
But for complicated algorithms (and we are all guilty), state explosion makes this impossible. Worse, unless you have an advanced degree in mathematics, or a first-class undergraduate degree, you're going to miss things when you write tests before or after. (If you do, you are still going to miss things, but your training will keep you going back enough that your chance of committing bad code goes way, way down.)
1) If you are aware of the costs yourself, you are much more likely to avoid mistakes in the first place.
2) If you write the test before you write the code, you are MUCH more likely to write proper code.
3) If you write the test before you write the code, you are MUCH less likely to write code with needless functionality (which will break needlessly and inopportunely.)
Sorry if you haven't learned these things yet, but untested == broken. Far more time is lost sending code back & forth than would be taken by having the devs do TDD.
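A minimal illustration of points 2 and 3, with a hypothetical parse_port function of my own invention: the test is written first and pins down exactly the required behaviour--no more, no less.

```python
# Hypothetical test-first example: this test existed before parse_port()
# did, and every branch in the eventual code has a test demanding it.

def test_parse_port():
    assert parse_port("8080") == 8080
    assert parse_port("  443 ") == 443        # tolerate stray whitespace
    for bad in ("", "abc", "-1", "70000"):    # each rejection branch, up front
        try:
            parse_port(bad)
            assert False, f"expected ValueError for {bad!r}"
        except ValueError:
            pass

# Only then the implementation -- and nothing the test doesn't demand:
def parse_port(s):
    n = int(s.strip())                        # raises ValueError on junk
    if not 0 < n < 65536:
        raise ValueError(f"port out of range: {n}")
    return n

test_parse_port()
print("ok")
```

Writing the rejection cases first is what keeps the needless extra functionality (point 3) from ever appearing.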
The test team should be made up of senior devs whose minds are sufficiently warped to think about the nasty things that should not happen but do.
The problem is that those of us with a background in hardware design, manufacture, and/or maintenance all agree: this story is ********. What we're in a hurry to do is to explain this fact to the world so that the focus will go where it belongs: "Why did it come out, and why is Bloomberg still standing by it?" If you've read as thoroughly as you imply, you see that the general consensus is that the story was planted by a national security service. What is not clear is who and why.
Almost any reasonable explanation has really nasty implications for our freedom generally--not just privacy.
This is a common error for those who have never worked the process. Die size <<< chip size. Even ten years ago, chip size was basically the number of pins x the area required per pin. The die in the middle was routinely < 10% of the area. Often < 5%.
It is really, REALLY hard to tease cause & effect apart in these studies. I would argue impossible when only a single election is being studied.
For instance, while political scientists often count numbers in the 20% range as independent, political operatives know the real number is around 6%. (The difference comes because a lot of people are afraid to declare their allegiances, even anonymously.) Without digging into the study, I can be highly confident that the only serious persuasion going on was about turnout. As it almost always is.
Both sides had really bruising primaries, but their nature was quite different. The Sanders-Clinton fight was mostly about old guard vs new. The new in their case being mostly the recently brainwashed<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs> college educated. The Republican primary was about Trump upending the usual order within the party, and bringing a significant number of general election voters into the primary.
Like connects to like. So the college-educated Ds were already committed--to a candidate they could not vote for. Their "independent" friends saw little reason to get involved, and a bunch of ads on behalf of someone who they considered to be a thief was not going to do much good. The general election Rs were excited to "finally" have someone to shake things up. Their "independent" friends were at the tipping point to go to the polls.
Guess whose ads were more effective?
I could do a very similar analysis for 2008. (And recall that the Obama campaign was praised for hooking into FB's analytics (with FB help) to do the same sort of microtargetting.)
For those without a deep education in mathematics or physics, these concepts can be hard to explain. It does not help that some of the core terms can have multiple meanings.
If you don't mind the very "of the times" language and imagery, the book Flatland, by A. Square, is an extraordinary exposition of the subject of higher dimensions in the form of an engaging story. My explanation is going to take a very different tack, so if what I write does not help, you might look there.
As hinted, part of the problem is the need to be very clear about the terms in use. Let's start with "dimension". Consider a Cartesian plane. Such a thing is not real. Specifically, there is no corresponding physical entity to a Cartesian plane. Nevertheless, we are comfortable working with it to solve mathematical problems. As the problems get more interesting, we create more interesting mathematical objects to address them. Cartesian 3-dimensional space. Cartesian 4-dimensional space. Spherical spaces. Hyperbolic spaces. Whatever.
Most of the time, we find it useful to examine entities "from the outside". If we are considering the set of points x, y such that x * x + y * y = 1, we don't think of ourselves as inside the curve--indeed it would be really difficult to do so. For the points x, y, z such that x * x + y * y + z * z = 1, however, we often talk as if we can. Because there is a (very rough) approximation that forms our lives, namely the surface of the earth.
Suppose you met with someone from an isolated tribe. In their experience, the world is flat. You inform them that the earth is round, and perhaps there is a fortuitous eclipse that even allows you to show the Earth's shadow. They hear your explanation, see your demonstration, and laugh at you. Why? "Because surely everything on the other side falls off!" In their minds, "down" is not "whatever direction gravity pulls", but is a coordinate in their Cartesian understanding of the nature of the world.
This is key. Their mathematical model of the world is a Cartesian 3-space. In such a space, a finite world has to have an edge. And a center that one might reach by walking or sailing. Of course, their model is wrong.
This kind of thinking is hard to overcome. I read a philosophy paper written in 2010 which began, "We know with probability one that the universe is infinite." That's a whole lot of wrong for so few words. Nevertheless, the author, and whomever he consulted for the paper, all educated people, thought such a sentiment true.
Every direct observation we can make tells us that Space is at least roughly a Cartesian 3-space. Perhaps as a result of this, our brains are not wired to think about other possibilities. But our physicists tell us that it is almost certainly not the case. Recall that a circle (S1) is one-dimensional. But to look _at_ this one-dimensional thing, you think of it as sitting inside a two-dimensional Cartesian system--specifically you think of it as the surface of a two-dimensional ball. Likewise, the sphere (S2) is a two-dimensional thing, but we think of it as sitting in a three-dimensional Cartesian system--again as the surface of a three-dimensional ball. Physicists tell us that space is probably shaped like the surface of a four-dimensional ball (S3).
But our brains don't really go there. At all.
So for your question, "where is the center"? There is in fact an answer. Just as the center of S1 is not in S1, and the center of S2 is not in S2, the center of S3 is not in S3. If we model Space as sitting in some four-dimensional space, however, we can specify it. But never go there. So the obvious follow up question, "Is the center 'real'?" becomes a matter of philosophy.
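The pattern in the last few paragraphs can be stated in one line (standard notation, nothing beyond what is written above):

```latex
S^n = \{\, x \in \mathbb{R}^{n+1} : \|x\| = 1 \,\}, \qquad n = 1, 2, 3, \dots
```

The center is the origin, and \( \|0\| = 0 \neq 1 \), so the center of \( S^n \) is never a point of \( S^n \) itself--for the circle, the sphere, and for S3 alike.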
Unless further observations force us to consider forces operating on the Universe from the "outside" ...
Fascinating that the Googlers & ex-Googlers appear to be silent. Oh well. This is going to be long enough that there is no point in trying to be AC. I was there 2015-6.
Google culture encourages staff to "bring your whole self". And stay all day. Free three meals a day if you like, showers, gyms, and more. (One guy lived out of his car for two years until management found out.) This is particularly attractive to green grads who don't have a strong view of life after college. (And are not in a relationship outside work.) It also deliberately smears the line between work and not-work. In such an environment, "professionalism" can have implications that are very different from most places.
For example, consider Google's internal Imgur/Memegen. While I was there, the decision was made to officially support it. At least two FTOs. Few companies would allow such a thing. But there, it became one of the best sources of internal news in the company. To the point that at least one team copped to reading it to help debug an outage. So are these memes "official communications"? If so, when did it start?
When I started, there was an aggressive (how much depended on where) culture of enforcing screen locking by inventive embarrassment (posting embarrassing memes was fair game). Apparently, some thin-skinned director got busted this way at some point, and demanded an end to the practice. Shortly, a document came out prescribing what was and was not allowed. It was decidedly NSFW for many places. (A fact which was the source of several highly popular memes.)
When you have a culture which is (officially) consciously relaxed, you're going to have people regularly needing to be herded back inside the smeared lines. This looks like one of those events.
Of course, the entire thing gets really messy when you take a consciously relaxed culture & start demanding adherence to the liberal pieties. But that's a different post.
See, here's the thing. I'm a dev, not a DBA. I learned about .htaccess about 15 years ago for a project I was on at the time. OF COURSE, if I were to make a new project, I would re-read the docs. But in the back of my head, I already know about .htaccess. Do the current docs still have the warning about .htaccess going away? Are they prominent enough that my brain won't miss them?
Security is EVERYONE's job. If you do some ****** ** ******** like this, you've made my permanent **** list. The VERY least you can do is to check if the file is there, and refuse to continue if it's being ignored.
Asterisks because if Linux isn't permitted to call out radioactive waste for what it is, I'm certainly not.
This is EXACTLY the sort of business that should be on GCP or AWS. Properly configured, the worst a customer will see is a long response time. Even if they screw up & do a thundering herd, autoscaling will prevent actual outages. (And if they do a rolling deploy, they will realize the thundering herd LONG before it takes their systems down.) Straight up failure to apply basic SRE principles.
If it is DDOS, the route to mitigation is already quite well known. Again, straight up fail.
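For what it's worth, the standard thundering-herd defence alluded to above is exponential backoff with full jitter: every client retrying after an outage picks a random delay under a growing cap, so they don't all reconnect at once. A quick sketch (names and constants are mine):

```python
import random

# Exponential backoff with "full jitter": the retry delay for attempt n
# is drawn uniformly from [0, min(cap, base * 2**n)]. The randomness is
# what spreads the herd out; the doubling is what backs pressure off.

def backoff_delay(attempt, base=0.5, cap=60.0):
    return random.uniform(0, min(cap, base * 2 ** attempt))

random.seed(42)                                   # deterministic demo
delays = [round(backoff_delay(n), 2) for n in range(6)]
print(delays)                                     # six staggered, growing delays
```

Pair this with a rolling deploy and the herd shows itself long before it can take the backend down.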
Github was around before anyone was talking about "the cloud". They were (and are) a free git hosting service, where you can pay for upgraded services.
It is senseless to complain about availability on the free side. Pay for it or run your own. (A previous employer of mine was running private GitHub--they sell that.) And shame on you and fie on your business if you have paying customers and are relying on a free service in order to satisfy your contractual obligations.
If their paying customers are having a problem, then this is a much more serious matter. The reports suggest that they are using some bs "eventually consistent" model of data resilience. Best review those contracts.
You can probably spin a custom ARM chip suitable in a year. You cannot build a component validation team in a year. I don't know Apple's internal structure, but unless they already have competent verification and validation teams in place, this will fail badly.
As for a translation layer, remember that x86 chips already are translating into a risc-type internal instruction set. There might be less of a performance loss than we expect.
LOL. Maybe this is why everywhere I go, I'm considered a regexp expert.
State machines are NOT that hard. Certainly, we want to abstract them out most of the time, because we really don't want to think of a 32-bit register as >4 billion states. What's hard is when people fail to decouple the state machine from the rest of the code.
And, yes, goto is still considered harmful, so if some junior programmer, especially without the appropriate training, attempts one, he's likely to mess it up as badly as anything else that he's not prepared to handle.
For serious parsers, you might just want to look into these newfangled tools out there--they go by "lex" and "yacc".
I've never gone so far that I needed these tools, but then I'm a mathematician. I DO view processors as state machines, I just know when & how to abstract that detail away.
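To illustrate the decoupling I mean: the machine itself is pure data (a transition table), and the driver is a few lines that never need to change. A toy recogniser for optionally-signed integers (entirely my own example):

```python
# A table-driven finite state machine, decoupled from the driver code.
# The machine recognises strings like "-42": an optional sign, then digits.

TABLE = {
    ("start",  "sign"):  "signed",
    ("start",  "digit"): "number",
    ("signed", "digit"): "number",
    ("number", "digit"): "number",
}
ACCEPTING = {"number"}

def classify(ch):
    # Map each input character to a symbol class the table understands.
    if ch in "+-":
        return "sign"
    if ch.isdigit():
        return "digit"
    return "other"

def matches(text):
    state = "start"
    for ch in text:
        state = TABLE.get((state, classify(ch)))
        if state is None:          # no transition defined: reject
            return False
    return state in ACCEPTING

print([matches(s) for s in ["-42", "7", "+", "4x2", ""]])
# -> [True, True, False, False, False]
```

Changing the language means editing the table, not the driver--which is exactly the separation people fail to make when they tangle the state machine into the rest of the code.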