* Posts by patrickstar

643 publicly visible posts • joined 1 Dec 2015

Largest advertising company in the world still wincing after NotPetya punch

patrickstar

Re: Address the real software vulnerability

Fast-forward 15 years, after a lot of places have followed your advice and switched to MacOS:

"There should be no sympathy for companies, organizations, academia, governments or any other entity that continues to use Apple Mac Operating System (OS) Software that has been confirmed as the attack point for every Ransomeware and attack Vector in last several years.

Viruses and Ransomware do not affect Windows, Linux or ChromeOS computing endpoints, so the continued use of MacOS is a stupid and nonsensical decision in 2017, particularly for Business and governments."

It's a matter of market share, and thus attackers' desire to target it, not some inherent security deficiency that the other systems lack. Especially in the case of the current outbreak (NotPetya), blaming Windows is a sure sign of idiocy, since it largely spread via stupid admin practices and not some inherent Windows flaw.

In fact, there has been ransomware for both Linux and MacOS. It just isn't nearly as widespread, because both systems have a really low share of desktop users, especially in the kind of places that tend to make large headlines when hit by ransomware.

As for ChromeOS, if you want a really locked-down desktop environment with no chance of running applications introduced from outside, you can do that with Windows (or any other of those systems) instead of signing over your soul (and corporate secrets) to Google. If anything, THAT should be considered a nonsensical decision, particularly for business and government and anyone else that actually does real work on their computers.

PS. The large TJX credit card compromise a number of years ago mostly involved open WLANs and SQL injection... not exactly Windows specific attack vectors there either.
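
(For anyone wondering why SQL injection is OS-agnostic - a toy Python/SQLite sketch, with the table and data obviously made up:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, pan TEXT)")
conn.execute("INSERT INTO cards VALUES ('Alice', '4111111111111111')")

user_input = "' OR '1'='1"

# Vulnerable: user input pasted straight into the query string.
query = f"SELECT pan FROM cards WHERE holder = '{user_input}'"
print(conn.execute(query).fetchall())   # dumps every row

# Fixed: parameterised query - and note that none of this cares what OS it runs on.
print(conn.execute("SELECT pan FROM cards WHERE holder = ?",
                   (user_input,)).fetchall())   # returns nothing
```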

patrickstar

What does public clown have to do with outsourcing?

There are lots of places which even have their own physical datacenter but outsource large parts of operating the things in it.

If anything, moving to your typical public clown (like AWS) with their "lots of disposable unreliable servers" model would require a typical shop to start outsourcing, since their in-house team won't be able to deal with the huge increase in complexity that follows from that model. Plus you might actually have to toss out and replace a lot of existing, well-functioning, software.

For the "clown services" where individual servers are actually reliable, at most you've gained capacity scaling and not having to maintain the physical boxes. The latter (and to some extent the former) which you can gain in ways that don't involve using someone else's shared infrastructure with all the issues that arise from this.

In any case, this in no way saves you from needing someone (outsourced or not) to keep the things running on the servers.

Sysadmin bloodied by icicle that overheated airport data centre

patrickstar

Re: The unexpected perils of data centre migrations

Funny, all the BFUPSEs (Big F... UPS Explosions) I've encountered have been "unprecedented" according to the vendors...

Feelin' safe and snug on Linux while the Windows world burns? Stop that

patrickstar

Re: Advanced bullshit.

For desktops in general, the market share of Linux is very small.

Including home users.

Harder to measure of course, but the times I've looked at user-agent statistics of various big sites it's always been in the low single-digits at most.
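
Roughly the kind of tally I mean, as a toy Python sketch - the log file name, patterns and OS buckets are all made up, and a real analysis would also have to exclude Android (which reports itself as Linux):

```python
# Rough desktop-OS share from a web server access log (combined-style format
# assumed, with the User-Agent as the last quoted field).
import re
from collections import Counter

UA_OS_PATTERNS = [
    ("Windows", re.compile(r"Windows NT")),
    ("macOS",   re.compile(r"Macintosh")),
    ("Linux",   re.compile(r"X11.*Linux|Linux x86_64")),  # crude; Android UAs also say "Linux"
]

def classify(user_agent: str) -> str:
    for name, pattern in UA_OS_PATTERNS:
        if pattern.search(user_agent):
            return name
    return "Other/Mobile"

counts = Counter()
with open("access.log") as log:
    for line in log:
        fields = re.findall(r'"([^"]*)"', line)   # take the last quoted field as the UA
        if fields:
            counts[classify(fields[-1])] += 1

total = sum(counts.values()) or 1
for name, n in counts.most_common():
    print(f"{name:12s} {100.0 * n / total:5.1f}%")
```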

patrickstar

Re: Crickbait

Yet when Windows boxes are compromised because of crappy admining, the Linux crowd immediately and loudly proclaims it's the fault of Microsoft...

MH370 researchers refine their prediction of the place nobody looked

patrickstar

Re: The point is...

Not an accident - after AF447, someone at Inmarsat had a sudden flash of insight and decided to start logging more data about the comms.

How to pwn phones with shady replacement parts

patrickstar

Re: Come again?

"You'd still need context, though. Harder to get without access to the innards."

Context here would be a password/PIN entry screen, or what's being typed in general. If you say "randomize the positions of things on the PIN entry screen", then you have suddenly slowed down the user and thus made shoulder-surfing/secret recording of the entry a lot easier. Tradeoffs and all...
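
To make the tradeoff concrete - a toy Python sketch of a randomised PIN layout (purely illustrative):

```python
# A fixed 1-9/0 grid lets muscle memory do the typing; shuffling the digits
# defeats a dumb "which screen regions were touched" recording, but forces
# the user to visually hunt for every digit - which is exactly what makes
# shoulder-surfing or a hidden camera so much more effective.
import random

def randomised_pin_layout(rows: int = 4, cols: int = 3) -> list[list[str]]:
    keys = list("0123456789") + ["<", "OK"]
    random.shuffle(keys)
    return [keys[r * cols:(r + 1) * cols] for r in range(rows)]

for row in randomised_pin_layout():
    print(" ".join(f"[{k:>2}]" for k in row))
```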

And I don't sit around designing exotic iPhone bugs for a living, believe it or not. I'm sure that the people who actually do can come up with a myriad of other ways to haxxor you, given a day's unsupervised access to the phone, which don't involve a dodgy screen.

"Still need a way to EXfiltrate those conversations, and if the radio chips are also protected, then you'll need a total package. Might as well use a specialized bug in that instance."

The problems you encounter when making a small bug are the power supply and antenna. In a phone you have both - a miniature transmitter is not only readily available commercially but also trivial to build from parts.

"ATMs have to sit by their lonesome for days at a time. Who within a location actually pays attention to the PIN pads during normal operation?"

I can't find a public document with the whole standard (thank Jesus/Allah/Buddha/Kek I haven't had to deal with PCI standards in a good while), but the requirements are in the range of withstanding tampering for 10 hours or a budget of a couple tens of thousands USD. Solitary ATMs presumably have additional layers (as opposed to payment terminals or such) - the whole shell of the ATM itself, associated alarms, CCTV, etc.

"As for techs, that usually points to inside jobs, meaning they have access to key chips. Rogue techs could use side channels like hidden cameras, but again that's close to insider status to get them clandestinely in the machines and outside this context."

The EPP standards basically say that opening the thing (e.g. for service) should nuke the keys. They say very little about what's stopping someone from grabbing the keys as they are re-entered, because this is really difficult to do.

"That's why they've been working on this VERY hard for the last 20-30 years, coming up now with this chain of trust system for the 4K systems (as well as the consoles, which double as 4K players) based on what the phone makers have been doing"

Budget for copying a single movie: Small (price of movie for a home user or total sales for a commercial piracy operation)

Budget for pwning a single phone: Large (potentially millions)

It's even worse than that - stopping a phone from leaking data to a physical attacker would be like stopping someone from recording a movie by pointing a camera at the screen.

Plus, perhaps most importantly, 4K movies get pirated all the time - so either it's broken already (just not public), or there's no incentive to break it because they get out another way. Admittedly they're not as frequent on the torrent sites, it seems (I rarely watch movies and don't even own a 4K display so I don't keep track of the particulars), but this might just be due to lack of demand for the higher quality.

"(and some phone STILL haven't been rooted or custom-ROM'd at this point; ask xda)."

All of them can be and regularly are rooted... with a couple of million dollars worth of gear (scanning electron microscope, FIB workstation, high-freq logic analyzers, etc), knowledge and time/budget.

It's just meant to be unfeasible for the end user and lower-range attackers (and slow down higher-range attackers so they can't do it en masse).

If screens turned out to be a viable vector of pwnership and DRMish protection applied to them, that sort of budget would immediately start going towards breaking it.

Then the sort of attacker who would pwn your phone with a fake screen would ... pwn your phone with a more expensive fake screen.

So even if we don't consider all the other very viable (and far more likely) attacks that apply if you give someone a day of unmonitored fiddling with your phone, the most you have accomplished is shifting the attackers' budget bracket slightly upwards. I should remind you that a fake phone-pwning screen wouldn't exactly be cheap on the grey forensics/spook market in the first place - five or six digits, most likely.

patrickstar

Re: Come again?

You can presumably sniff things like EMI, or otherwise detect hand movements. Lots of possibilities here, with interesting precedent in what's been done against PIN pads.

Plus your phone has other secrets to protect than just its contents. Like everything being said in the same room as the phone - even when it's switched off - if it's bugged.

Regarding PIN pads, the VISA EPP standard is not meant to withstand a day or so of unsupervised access, which is exactly what handing your phone in for repair involves in a lot of cases.

Or to protect against a rogue service technician at all, at least in more ways than having the keys split across multiple persons (which doesn't do you any good if the thing comes back from service trojaned to the hilt).

The scenario for DVD/BluRay/etc is to protect the actual digital data, to prevent an exact (high-definition, high-quality) copy - not to keep the contents secret per se. Indeed, the whole point is to do a very lousy job of that so you can actually watch the movie.

Same with games - you are SUPPOSED to be able to play the game.

The scenario of a phone is to protect many different secrets from getting read out in any way, or intercepted in the first place.

Plus the value of making a copy of a single BluRay disc is substantially lower than the potential value of getting the contents, or simply bugging the environment, of a single phone.

If you hand something in for service and don't trust the service techs, consider it pwnd. This is almost a basic law of computing.

patrickstar

Re: And people complain about Apple discouraging third-party repair shops

And really, do people actually want unrepairable phones?

Today's smartphones can cost more than a decent desktop or even laptop computer. Do you really want them to be impossible to repair reasonably - or only repairable under the conditions and prices dictated by the manufacturer (if they're even interested in doing it at all)? Just to stop some attack scenario with dodgy parts that you'd expect in case of a nation-state level attacker and/or high-level industrial espionage, not someone out to empty random bank accounts or get ad clicks (taps?).

Just look at the uproar a number of years ago when Apple started using Pentalobe screws to discourage fiddling with the phone internals. And that's something that's trivial to defeat even on a shoe-string budget...

There could definitely be a market for phones that are essentially epoxy bricks riddled with tamper detection gizmos and severely paranoid hardware (TrustNo1, not even the screen), if there isn't already, but I doubt even the vast majority of security-conscious users would appreciate the tradeoff.

Such a phone would presumably be subject to similar security testing/certification to other tamper-protected devices (PIN pads for card transactions, for example... or good old-fashioned locks and safes), where you have a clear threat model - a certain amount of time and/or money needed to break it. Even though there certainly is some overlap in the technology employed, this is still very different from some ad-hoc DRM scheme on random components that a bunch of leet haxorz at the manufacturer came up with.

patrickstar

Re: Come again?

You simply need to add a small circuit board with a microphone (or other listening device - radio/EM fields/position/etc) on it. This is not stopped in any way whatsoever by any chain of trust.

Rather it's stopped by tamper protection and physical security, both of which are, by definition, not relevant if you just handed your phone to someone and expect him to switch out the screen.

There's no way to compare keeping secrets on a phone unreadable to preventing home users from pirating BluRay discs - there are simply no commonalities between the scenarios.

Regarding switching out the entire phone - sure, but it might be a tad suspicious if you hand in your old worn thing (probably dinged up from whatever broke the screen as well) and get back a brand new phone. Just sayin'.

patrickstar

Re: Come again?

You don't need to replace any hardware in a phone to pwn it. You might simply add a bug - this has been done since the early days of telephony.

Or you could replace the entire contents of the phone with something that just shows you a fake login screen and then errors out after entering the password/PIN code, sending it to the guy in possession of the real phone, if that's what you're after.

Etc.

Also, the threat models are radically different, but that's probably another discussion.

patrickstar

Re: Come again?

Again - my point is that you HAVE to consider the hardware trusted, not that someone can't actually compromise the hardware with physical access.

If someone has access to your phone to the point where they can change the screen, it's game over.

If you want to prevent that, you don't do it by putting some DRMish stuff in the screen to authenticate it (a la Apple and the fingerprint sensor). This is completely meaningless even if we assume there's no way to stick an evil screen in place considering that they have unrestricted access to literally everything.

To prevent this, you simply don't allow untrusted parties to have that sort of access to the phone in the first place.

It would be relevant if this was about connecting an external screen to a desktop computer, or perhaps some sort of Lego phone where replacing the screen does not involve taking it apart.

It's not relevant here.

patrickstar

This is, of course, not a new concept or fear.

In the case of a pure CPU backdoor at the mask level, it would be pretty easy to insert a backdoor that for example would allow a local attacker full kernel compromise. For example "if certain conditions are true, then bypass all page protection checks". This would be very desirable for a TLA looking to compromise phones - then all they would need would be a single clientside exploit in any app (of which there are plenty), instead of the usual chain of clientside exploit -> sandbox escape / local privilege escalation.
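
As a toy illustration of the shape of such a backdoor - no real CPU works like this Python sketch, and the "magic" value is made up:

```python
# Toy model only - not how any real MMU is implemented. It just illustrates
# the idea: a permission check that is silently skipped when some attacker-
# chosen "magic" condition holds, turning any local code execution into a
# kernel-level compromise.
MAGIC_TRIGGER = 0xDEADBEEF  # hypothetical value the backdoor watches for

def page_access_allowed(pte_flags: int, is_write: bool, is_user: bool,
                        scratch_register: int) -> bool:
    # The backdoor: if the trigger value shows up in some register,
    # every protection check passes.
    if scratch_register == MAGIC_TRIGGER:
        return True

    # Normal checks (simplified): user-mode accesses need the USER bit,
    # writes need the WRITABLE bit.
    USER, WRITABLE = 0x4, 0x2
    if is_user and not (pte_flags & USER):
        return False
    if is_write and not (pte_flags & WRITABLE):
        return False
    return True
```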

The same thing could be done with desktop systems of course, but somehow I imagine/hope it's more difficult to sneak a backdoor in at Intel than at some obscure SoC vendor...

patrickstar

Yes - good clarification.

I think the general cutoff between untrusted/trusted should be something along the lines of "if the screen is locked / the user is logged out, can this reasonably be used to bypass that?"

So USB sticks would be untrusted. The motherboard would be trusted even though you could theoretically hook up a logic analyzer and signal generator to it. Memory would be trusted as long as it remains in the box (you'd expect someone to be able to remove the memory and read it out so you'd scramble the contents, but not expect it to be under attacker control as long as it remains in place).
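
(A toy sketch of the memory-scrambling idea, purely to illustrate - real implementations live in the memory controller and use proper ciphers:)

```python
# Toy illustration of "scramble memory so pulling the DIMMs gains little".
import os

BOOT_KEY = os.urandom(16)   # per-boot key; lost when power is cut

def scramble(data: bytes, offset: int = 0) -> bytes:
    # XOR with a keystream derived from the key and address - NOT a real cipher.
    return bytes(b ^ BOOT_KEY[(offset + i) % len(BOOT_KEY)] for i, b in enumerate(data))

plaintext = b"page full of secrets"
in_ram = scramble(plaintext)          # what actually sits in the DIMM
assert scramble(in_ram) == plaintext  # CPU-side descramble on every access
```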

Other considerations might apply if you have an advanced threat model, but then the answer isn't to attempt to build a box where nothing trusts anything else or even itself, but rather to prevent someone from getting in the box in the first place (tamper detection and/or filling the entire thing with epoxy and/or applying physical security like locks and safes around it).

patrickstar

My point is that you essentially HAVE to consider the hardware trusted. If it's compromised, game over.

If an attacker can replace basic hardware components they have already won.

patrickstar

I totally fail to envision a scenario - any scenario at all - where the HARDWARE ITSELF wouldn't be considered trusted...

Bonkers call to boycott Raspberry Pi Foundation over 'gay agenda'

patrickstar

Where did they get this gay stuff from? Obviously the Pi foundation are actually pushing a neo-Nazi agenda since the rainbow is actually a hate symbol: https://pics.onsizzle.com/the-rainbow-flag-is-the-newest-hate-symbol-of-the-24412651.png

Look at Trump (Literally Hitler) displaying it. Are we really gonna let the Pi foundation get away with this brazen display of racist Nazi propaganda?

BOFH: Halon is not a rad new vape flavour

patrickstar

Re: Halon?

Most of the Halon scares are about accidental (or not-so-accidental...) discharges, not actual fires.

And if things really are burning, you are unlikely to be worse off from the Halon byproducts than what the combustion would have resulted in otherwise.

Hydrogen halide production isn't something that only occurs in a fire in the presence of Halon, you know*. Not to mention the others - I'm no expert on the toxicology of combustion gases, but carbon monoxide certainly comes to mind.

* Halon produces HF while many burning plastics would be mostly HCl. HF is more toxic (spill the liquid on you and the acid burns aren't your big problem - it gets absorbed into the bloodstream and poisons you... really does a job on bones as well unlike most acids), but HCl is the stronger acid. I'd think direct lung/airway damage would be relevant long before systemic toxicity when inhaling the thing?

Kaspersky repeats offer: America can see my source code

patrickstar

Re: An Education for the TLA's

Of course you are not going to arrive at the exact same implementation details. But it's not gonna be news to you (or anyone else writing AVs in the post-Dark Avenger Mutation Engine era, so early rather than mid-90s) that an emulator is needed in the first place if you're gonna do a full-blown AV. From there you also have to decide whether it will be a simple "big switch()" type emulator or some sort of binary translation.

But the source code of the emulator of one specific AV is pretty uninteresting for evasion purposes, plus any source has a short shelf-life since this is among the things frequently fiddled with in auto-updates. If an AV company encounters a sample that screws up emulation and there are no other usable signatures, they are gonna push out an update to the emulator (or the rules governing it). And that's the end of whatever smart evasion trick you found by reading the source.

No one actually sits around evading one specific AV and nothing else. And there's a lot of sharing of samples/signatures (as well as outright pilfering from rivals, but that's another story entirely), so once you have one detection, more are sure to follow.

At most the situation can arise where one AV keeps detecting something after the rest have been successfully evaded, but the chances of that particular AV being KAV of the specific version you have source code of is pretty slim.

Plus, the most important emulator trick is probably just spending enough cycles doing make-believe "work" so that the emulator gives up.

This is also an issue you will face universally when developing an emulator without ever having seen an existing AV before (and you'll also realize early on that one of the most important things you need to do is quickly determine how much time to spend emulating a specific file).
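
To make that concrete, a toy Python sketch of the "big switch()" shape with an instruction budget - the toy ISA and numbers are made up, but the give-up path at the end is the one the make-believe-work trick is aimed at:

```python
# Toy "big switch()" emulator with an instruction budget. Once the budget
# runs out the emulator gives up, which is exactly what a sample spinning
# on make-believe work is counting on.
NOP, ADD, HALT = 0, 1, 2

def emulate(code: list, budget: int = 10_000):
    acc, pc = 0, 0
    for _ in range(budget):
        if pc >= len(code):
            return "fell off end", acc
        op, *args = code[pc]
        pc += 1
        if op == NOP:
            continue
        elif op == ADD:
            acc += args[0]
        elif op == HALT:
            return "halted", acc
    return "budget exhausted", acc   # give up - caller falls back to other signatures

# A sample that just burns cycles before doing anything interesting:
stall = [(ADD, 1)] * 50_000 + [(HALT,)]
print(emulate(stall))   # ('budget exhausted', 10000)
```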

patrickstar

Re: An Education for the TLA's

I'm sure that just about everyone for whom it would be relevant has read the Kaspersky source that leaked a number of years ago, and/or reverse engineered any parts that would be interesting.

But really, there's not much to see in standard AV software. Basically, if you sit down and try to accomplish the same thing as they do, you'll realize there are only a few ways it can be done. At most there'll be some rootkit detection tricks and such, but they will almost by definition be useless since rootkit authors will have tested their stuff against the AVs and worked around it already.

patrickstar

They most likely run a lot of Windows at least, since that's what's used for pretty much all industrial control systems.

Tick-tick... boom: Germany gives social media giants 24 hours to tear down hate speech

patrickstar

Re: And is there an actual definition of hate speech attached to this bill?

Whether or not someone else perceives your "need" to say something should not be a prerequisite for being allowed to say it. That's part of the whole freedom of speech thing.

Just as whether or not someone else perceives your speech as offensive, factually wrong or just plain stupid should not be a reason to prohibit it.

Look - you should read some of the things that Germany considers "illegal hate speech".

Take the writings of Germar Rudolf for example. Germany has literally imprisoned him for years, banned his books, destroyed the books already in circulation (an old-fashioned state-sanctioned book burning!), and confiscated the proceeds from the sale.

Is his overall conclusion wrong? Most likely.

Is there any shred of "hate", incitement to violence, or anything except an attempt at civilized discourse anywhere in the banned writings? No.

Is he a Nazi, perhaps acting as part of some banned group with a violent agenda? No. His motive is essentially that the German genocide of Jews is being used to justify the post-war genocide of Germans.

(I'm not linking anything here but use a search engine located in a country with something actually resembling freedom of speech and you'll find his personal site)

Again - disagreeing with someone is not a valid justification for banning him from saying it. Even if it hurts someone's feelings, or a lot of people's feelings. Even if it's provably wrong, on the "Earth is flat" level of moronity. Even if literally everyone else in the whole world thinks there's no "need" to say it.

patrickstar

Re: And is there an actual definition of hate speech attached to this bill?

So, saying that it's your opinion that something didn't happen doesn't count as having an opinion about it if the subject is controversial enough. Got it.

patrickstar

Re: And is there an actual definition of hate speech attached to this bill?

In Germany, even questioning the established narrative of certain historical events (most famously the Holocaust) is illegal. People literally go to jail for years for this.

Restricting speech because the speaker has the wrong opinion on something clearly goes way beyond restricting direct incitements to violence or such.

Everything you need to know about the Petya, er, NotPetya nasty trashing PCs worldwide

patrickstar

It's somewhat weird that they didn't implement all functionality needed for ransomware. They must have known that sooner or later, probably sooner, someone would realize that it's not possible for anyone to decrypt the data. Wouldn't exactly be a lot of extra work at that point.

Did they actually intend for this to be discovered after the initial chaos?

patrickstar

In case someone has missed it (and I can't find a Reg article mentioning it, but that might just be me being a retarded starfish as usual) - apparently this "NotPetya" is not ransomware: https://securelist.com/expetrpetyanotpetya-is-a-wiper-not-ransomware/78902/

It simply isn't possible for anyone, including the attacker, to decrypt the data.

patrickstar

Re: WMI (and seriously - passwords in memory?)

PS. How can someone downvote a post containing nothing but statements of fact? If any of the facts are wrong I recommend that you post a reply explaining so instead, for the benefit of all.

patrickstar

Re: WMI (and seriously - passwords in memory?)

At least MIT and Heimdal Kerberos store the credentials in a file in /tmp...

In Windows, they are stored in the LSASS process. I don't know where you think they are stored or how accessible they are, but at the very least you need an administrative account with SeDebugPrivilege.

I don't have Kerberos on any of my Solaris boxes, but even if they are actually stored in kernel memory in the native Solaris implementation as opposed to a userspace process or file, none of these systems have a great track record of keeping attackers out of the kernel, especially when they have administrative privileges. And certainly none of them have a great track record of keeping attackers from gaining that.

That's why you have the whole Credential Guard thing - so that even if the kernel is compromised you can't read them out without also compromising the minimal virtual system holding the creds.

There is no difference between the ability of processes to read memory space on Linux, Solaris, Windows or any of the other systems, by the way. They all use the same basic VM/memory protection model.

And you can't hash the credentials and still have them usable as a cache. The whole point of a cache is to be able to re-use them. At most you could encrypt them with a key that's harder to access than the credentials themselves, which is basically what Credential Guard is doing (though a better solution would be an HSM/TPM enforcing rules for when they can be used).
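
A toy Python sketch of the difference, assuming the third-party 'cryptography' package is available - no real Windows or Kerberos APIs involved, everything here is made up for illustration:

```python
# A hash of a cached credential can't be turned back into something you can
# present for authentication; an encrypted cache can, provided you can reach
# the key - so the game becomes making the key harder to reach than the
# credential itself (roughly the Credential Guard approach).
import hashlib
from cryptography.fernet import Fernet

secret = b"s3cret-ticket-or-password"

# Hashing: fine for *verifying* a password you were just given,
# useless as a cache - there is no way back to 'secret'.
digest = hashlib.sha256(secret).hexdigest()

# Encrypting: the cache is re-usable, but only by whoever holds the key.
key = Fernet.generate_key()          # in the real design this lives somewhere harder to reach
cache_entry = Fernet(key).encrypt(secret)
assert Fernet(key).decrypt(cache_entry) == secret
```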

patrickstar

Re: It would be real funny if...

At least in the US, there's solid legal precedent that you have no right to restitution if the government doesn't deliver police services when you need them. I'm sure the same reasoning would be applied in this case.

See, there's a difference between actors exchanging money for goods/services voluntarily, and an actor unilaterally taking money by force with a vague promise of delivering.

patrickstar

Re: WMI (and seriously - passwords in memory?)

It's cached credentials, not the password store itself.

They have to be readable by definition - otherwise they couldn't be used to authenticate... That's why you have the whole Credential Guard thingie to prevent reading them out even if you compromise the normal OS.

Kerberos on *ix has the same problem, by the way.

patrickstar

Re: Bring Back

While having a more mixed environment certainly can help somewhat with security - especially against automated / non-targeted attacks like this - essentially all OSes have been (and continue to be) compromised regularly.

Back in the days of UNIX diversity it wouldn't even stop a script kiddie without any custom exploit development skills - see for example http://insecure.org/sploits.html as to what sort of resources were publicly available.

Microsoft: We'll beef up security in Windows 10 Creators Edition Fall Update

patrickstar

Re: Application Guard

Application Guard and AppArmor don't do the same thing.

In fact, what AppArmor does can be accomplished with the standard ACL system that has been in Windows since the very beginning.

patrickstar

Re: This might be the year MS became relevant

For your information, your default Linux installation is a lot weaker when it comes to exploit mitigation than a Windows 7 box with EMET...

Huge ransomware outbreak spreads in Ukraine and beyond

patrickstar

Funny to see the mandatory "stupid Windows is so insecure, use Linux!!!111" comment, considering that the article clearly states this wasn't relying on any Windows-specific vulnerability, but rather on compromising the auto-update servers of some company and then moving across the network thanks to bad admin practices. Both things would work equally well against Linux if the attackers wanted to target it instead.

Linus Torvalds slams 'pure garbage' from 'clowns' at Grsecurity

patrickstar

Re: Linus exhibits all the qualities of pure sociopath

OS X is based on NeXTSTEP, which existed well before Linux did. Not BSD.

Maybe the lack of Linux would have shifted the OS market in ways that would mean Apple had made another OS choice, but this sort of reasoning quickly gets more or less impossible... (and if they hadn't gotten Steve Jobs back on board by buying NeXT, who knows where they'd be today, etc)

Anyway, personally I rather fail to see what impact Linux has had that another OS wouldn't have had sooner or later, if Linux hadn't appeared when it did.

There was already a solid base of *ix hobbyists - just that it was restricted to people with access to UNIX workstations and shell accounts (and the occasional one playing with Minix at home). There wasn't exactly a lack of free (as in speech and beer) and open source software, or people eager to develop more of it.

The reason BSD and others are low in the popularity rankings is obviously because Linux took their place - not some inherent property of Linux itself that the others would be lacking today.

patrickstar

Re: Linus exhibits all the qualities of pure sociopath

Uhm, Linux becoming the go-to x86 *nix clone was basically just a historical accident.

If it hadn't gained traction when it had, the BSDs would probably be where it is today.

Or maybe GNU HURD would have taken off.

Or some now-dead system would now be dominant. Or some now-nonexistent one would have been born.

patrickstar

Re: Grumble

The proper comparison would be whether the STAFF of IBM, Intel, et al. are working for free or not...

Not even the people involved in the KSPP are working for free. How come the grsec people (spender, pipacs and whoever else is involved) are expected to work for free, while the KSPP guys aren't?

Why is there even a KSPP when they could have just funded grsec with much better results?

Answer: Linus doesn't give a fuck about security or see exploit mitigation and hardening as something that belongs in the kernel. He, along with certain others, has successfully alienated the people who actually have a damn solid track record of providing it.

Now that the pressure is on to actually do something about the sorry state of Linux kernel security, trying to mend things would mean him publicly admitting he was wrong ... which he's far too proud to do.

Besides, he doesn't actually care about security, only about appearing to do something about it, so the results don't really matter.

Enter the KSPP which consisted mostly of taking random parts from grsec without any deeper understanding of the issues. Considering that the people involved have a near-zero record of meaningful innovation in this field, I would suspect they are pretty much screwed now without public grsec patches. At most they can add a bunch of half-assed useless "features" for show, probably introducing more vulnerabilities in the process just as they have done before.

So, to save Linus' face, money is being spent on make-believe work and every Linux users security suffers.

patrickstar

Re: Ego Overload

The story is basically this:

Linus, and the other kernel maintainers, have been essentially ignoring security for years. Attempts to introduce security hardening into mainline have been met with indifference or outright hostility. On the rare occasion where something has ended up landing, it has been in a watered-down form with limited value, often demonstrating the committer's lack of deeper understanding of the issues involved.

Linus has even publicly stated that he doesn't view security bugs as any different from any others, with predictable results. Lots of security issues that actually do get fixed do so more or less silently with a non-descriptive commit, causing much joy for blackhats reading the changelogs and much pain for people trying to backport security fixes to old kernels.

During this whole period, grsec has basically been the only way to let untrusted code run on a Linux box without guaranteeing eventual compromise.

At some point, there was some sort of debacle with Wind River marketing (I'm weak on the details here - someone can fill in perhaps?) that pissed off spender to the point where he stopped making stable grsec patches public.

After a round of bad press, the mainline kernel guys launched the Kernel Self-Protection Project with major corporate backing. It turned out to consist of poor reimplementations (or even cut and paste without understanding the details) of features from grsec/PaX. The very same features that were previously rejected by Linus et al. for political/personal/religious reasons (or plain lack of understanding) mind you, and which were developed without anything comparable to the corporate backing of the mainline Linux kernel.

Absolutely no interest has been demonstrated in actually involving grsec in any of this, despite them having been doing this work with excellent results for many years on a shoe-string budget.

Not too surprisingly, this was the last straw and now no grsec patches are publicly available anymore.

And if you think spender/pipacs produce garbage, I'd suggest you start by turning off ASLR and DEP on all your systems. They invented those, after all... grsec/PaX had those literally years before any mainline Linux kernel eventually implemented them in half-assed watered-down ways.

patrickstar

SELinux has stopped how many kernel exploits, exactly? Of which there have been A LOT, regularly supplied with SELinux-disabling shellcode and all...

And in case you thought grsec was just about exploit mitigation, it does come with RBAC as well - a very powerful ACL system.

Australian govt promises to push Five Eyes nations to break encryption

patrickstar

Re: Ahmed the Terrifying Terrorist

You should study some information theory. Information can be added to images in a way that would survive all that.

Intel's Skylake and Kaby Lake CPUs have nasty hyper-threading bug

patrickstar

Re: Perhaps we've been unfairly blaming Windows..

Sorry to ask, but have you followed the proper well-established procedures for troubleshooting bugchecks? Probably not, since you can't even tell if this issue can be related or not.

I mean - you are aware that those letters and numbers on the blue screen are there for a reason, aren't you? (Though for serious debugging you will want a kernel dump as well...)

patrickstar

Re: Finally an explanation

You can load microcode after boot as well, from your favorite OS. Linux has a tool for it that runs on boot - I think it's included with Debian, but the actual microcode isn't since it's a non-free binary blob.

Windows has it as well and with some luck microcode updates arrive via Windows Update.

Doing it at the BIOS/UEFI level is mostly needed when a bug/missing feature prevents the system from booting at all.

patrickstar

Re: ugh

Well, on the other hand, how many people actually bother (or are able) to root-cause random crashes to the point where the microcode can be properly blamed?

Even if it's a perfectly repeatable crash most people would blame it on application/OS bugs and work around it.

I'd think that the only people who actually HAVE to hunt down bugs like this would be compiler authors and related fields.

Heaps of Windows 10 internal builds, private source code leak online

patrickstar

Re: Even pro-Microsoftie Thurrott...

Uhm, the kernel is the GOOD part of Windows. It's a masterpiece as far as kernels go. The coding style is a bit too militant for my taste, but it's certainly easier on the eyes than most of Linux.

Whatever your issues with Windows are, chances are the kernel isn't where they stem from.

patrickstar

Re: Will

I have read tons of MS source code and never seen a single GPLed source file in any of the closed source stuff...

The standard MS style also happens to be different enough from most *ix code that it'd be easy to spot if it was just a cut&paste job with license removed.

patrickstar

Re: So how'd they get it?

This sounds like the stuff they hand out to partners/important developers. So it probably came from one of those, not the internal MS network.

Microsoft PatchGuard flaw could let hackers plant rootkits on x64 Windows 10 boxen

patrickstar

Re: ticking time bomb... tick tick tick tick BOOM!

All these gotchas apply to Windows as well, of course... just that module signing is mandatory and you either have to stick to the documented ways of doing things in kernel mode or bypass PG/KPP.

patrickstar

Re: ticking time bomb... tick tick tick tick BOOM!

In a real life social engineering attack you would of course have the user run some application that does it, not instruct the user to run the actual commands. That's obviously the case for Windows as well - instead of manually adding a service to the registry and starting it you'd just hand him/her/it an application that does it.

patrickstar

Re: ticking time bomb... tick tick tick tick BOOM!

This is not a vulnerability.

And if you can get someone to LOAD A DRIVER, that person is pretty darn owned regardless of the OS.

For your information, your shiny little Linux box (or whatever) gets just as pwned if you get someone to do 'insmod evil.ko' as root.

As you don't seem to be advocating for something like iOS where the user is considered hostile, I really fail to see what point you are making.

Hell, since you consider a PatchGuard bypass a vulnerability, you should consider other OSes (most of them) that lack any sort of PatchGuard equivalent really, really, vulnerable.

patrickstar

PatchGuard bypasses are literally as old as PatchGuard itself. By definition, it can always be bypassed.

Its primary purpose is to stop e.g. AV vendors from hooking things they shouldn't, in ways they aren't competent to do safely. Both because it leads to complaints from end users ("durr windoze is so stupid look it just crashed again!!11") and also because it creates a whole lot of pain for MS, since they have to test for and work around this sort of stupidity each time they release a kernel update.

Its secondary purpose is to make things trickier for rootkit authors, since even if you bypass it at one point there's no guarantee the bypass will keep working in the next update.

It has been pretty successful at both these things, by the way. AV (and other driver) vendors no longer do the really stupid stuff (at least not any of the widely deployed ones), and rootkits largely avoid the PatchGuard-protected parts like the plague.
