* Posts by patrickstar

637 posts • joined 2 Nov 2015


Sci-Hub domains inactive following court order

patrickstar
Bronze badge

Re: I am leet hacker

That command line wouldn't even work on an actual Linux system with sudo, since the >> redirection will take place in the original shell as the current user and not the one started as root.

And to top that off, none of my Linux systems have sudo...
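To make the point concrete, a quick sketch (using a temp file rather than an actual root-owned one, and `sh -c` standing in for `sudo sh -c`):

```shell
# `sudo echo x >> /etc/file` fails because the invoking shell performs the
# >> append as the current user before sudo even starts. The usual fixes
# hand the redirection to a shell that itself runs with the privileges:
#   sudo sh -c 'echo x >> /etc/file'
#   echo x | sudo tee -a /etc/file > /dev/null
# Demonstrated here without sudo, with a child sh owning the redirect:
tmp=$(mktemp)
sh -c "echo appended >> $tmp"
cat "$tmp"        # prints: appended
rm -f "$tmp"
```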

1
0
patrickstar
Bronze badge

Re: re: I think the advantage is supposed to be ...

I have access to a major university library with subscriptions to basically everything that's on Sci-Hub. I still use Sci-Hub - it's much, much quicker and easier than getting articles from the publishers' sites.

12
0

Samba needs two patches, unless you're happy for SMB servers to dance for evildoers

patrickstar
Bronze badge

And I thought only the Windows SMB implementation had vulnerabilities?

0
2

Microsoft's memory randomization security defense is a little busted in Windows 8, 10

patrickstar
Bronze badge

Older versions of Windows are available on a lot of platforms as well.

Itanic, PPC, MIPS, Alpha, some archs I forgot...

1
1
patrickstar
Bronze badge

Re: yet ANOTHER reason

Win 10 is actually a major security improvement over 7...

And unexpected behavior in a non-standard ASLR configuration isn't exactly the end of the world or a huge security lapse. I suspect the reason this wasn't noticed earlier is simply that there isn't much software left that's not built with DYNBASE (ASLR opt-in)...

And the ONLY issue here is a user interface issue in Exploit Guard - nothing else.

0
1

Some 'security people are f*cking morons' says Linus Torvalds

patrickstar
Bronze badge

Most OSes don't even have an OOM killer, and yet they run just fine. It's an example of one of the many trade-offs that might very well be valid for some scenarios but reduce reliability in others.

0
0
patrickstar
Bronze badge

Re: Linus Torvalds is not a Security Expert

So, let me get this straight - you seriously think you are better off getting compromised than the system crashing? What if the attacker grabs all your data, wipes the disks, and then crashes the system anyways? The potential damage from an intrusion is unlimited - the potential damage from a crash is limited and manageable.

And why do you think there even is a concept of 'kernel panic', the BUG() call in the Linux kernel, etc? Sometimes the system simply can't continue.

What if something has corrupted random memory, for example? This could include disk buffers, so it ends up writing garbage to the disk.

Or what if it just enters one of the many possible weird twilight states where things just fail at random? Try troubleshooting that at 3 AM - I have, far too many times, and would certainly prefer a kernel panic any day.

Still think it's preferable to keep running? If so, are you by any chance utterly insane?

And I take it you haven't read the patch and understood what it does? The kernel would be very likely to crash shortly afterwards anyways if the type of bug it's meant to detect is triggered. If anything, having a deterministic crash with a known cause would make post-mortem debugging a lot easier and thus help avoid crashes in the future...

And Mom says I can't have big boy pants yet :-(

0
2
patrickstar
Bronze badge

Re: Aircraft Engine Example

If you think a CNC machine counts as "hard realtime", or that Linux is a "hard realtime OS", you obviously have no idea what the term means.

You can even do things that are 'harder' than controlling a typical CNC machine (the specifics differ depending on the exact hardware in the CNC, of course), like bitbanging various serial protocols, from a Linux driver/kernel module or even userland (see iopl(2) and the CLI/STI x86 instructions). This is not what constitutes a hard realtime OS.

0
1
patrickstar
Bronze badge

Probably. Cook is for all practical purposes an idiot when it comes to anything security related. I'd guess he just took the entire idea from Grsecurity and re-implemented it poorly without understanding the full implications... That's what he usually does when it comes to kernel hardening, at least.

0
0
patrickstar
Bronze badge

Re: Security has become a buzzword for non security groups.

Those computers are totally separate, although there have been some interesting attacks where you can travel from the media center to more interesting stuff over the CAN bus.

And with the possible exception of Tesla, no car runs Linux on the actual ECU. The ECU typically runs some custom OS - I think Bosch is the most common vendor.

1
0
patrickstar
Bronze badge

The Windows kernel is already very modular. Much more so than Linux in fact. It's basically designed as a microkernel (though everything runs in ring 0 for performance reasons).

Plus, the code of the kernel itself is a LOT cleaner than anything in the Linux kernel. (Note that this does not apply to things like Win32k and some of the drivers - they are pretty hairy.)

Maybe you should read both kernel sources and compare them before trying to be funny? Or at the very least Read The Fine Wikipedia Entry: https://en.wikipedia.org/wiki/Architecture_of_Windows_NT

0
2
patrickstar
Bronze badge

Re: " allowing 'buggy' processes to run"

The kernel would be pretty darn likely to crash anyway if a bug of the kind this patch targets were triggered. This just crashes it in a way that doesn't turn it into a security vulnerability (plus simplifies debugging, since it immediately tells a developer what's wrong and where the problem is, as opposed to crashing some random time later).

0
2
patrickstar
Bronze badge

This doesn't apply to userland. Obviously, userland stuff should never be able to kill the kernel.

The only changes required are within the kernel itself - so unless you are a kernel developer you don't have to care one iota. And if you are a kernel developer, there are regularly breaking changes made between versions, so it's not like you could sit around twiddling your thumbs if it wasn't for this.

If your code is in the mainline kernel, Kees Cook has already made any required changes for you.

Userland code won't be affected in any way. It won't have to know anything about this and no observable behavior will change in any way (well, the kernel will panic in case it triggers certain kernel bugs, but it probably would have panicked regardless).

0
0
patrickstar
Bronze badge

Re: Linus Torvalds is not a Security Expert

Note: 'stock' Linux kernel.

As in straight from a vanilla distro or whatever, with standard config and no customization.

0
0
patrickstar
Bronze badge

Re: Linus Torvalds is not a Security Expert

No. Have you even read the patch?

This is not some signature-based engine to detect kernel exploits or whatever you seem to believe it is.

What the patch intends to do is basically restrict copies to/from userland to the memory regions where it's valid to do so. (You do know what that sentence means, right? *)

This is not something which is going to have "false positives" that randomly and unexpectedly show up sometime in the future. If it's properly implemented ( == all relevant areas whitelisted), then if it ever triggers it's because of an actual kernel bug ( == trying to copy outside those areas, either intentionally or unintentionally). If you introduce a new potential area as part of adding some code and forget to whitelist it, it's not going to randomly cause the computer to turn into a bomb later. It's going to crash the very first time, 100% of the time, that you try to run your shiny new code.

It's not some fuzzy guess or Bayesian logic. Either an address is within those areas or it's not.

The only scenario where you'd have actual "false positives" would be if the CPU ended up doing something other than what the code actually says due to some hardware issue, and that's obviously not related to the code itself.

* Well, either you don't or you have a much wider definition of "false positives" than I do. Mine should have been perfectly clear from the mentioning of 'properly implemented'.

0
0
patrickstar
Bronze badge

Re: ... most projects don't have project managers like Linus Torvalds.

Most Linux kernel developers are in fact paid to do so*. Just that it's typically not Linus who pays them.

Many work at Red Hat, IBM, Intel, etc. - or Google, like Kees Cook. This is literally his day job, not something he is fiddling with as a hobby.

* Well, probably not most in terms of total number of contributors. But definitely in terms of total lines of code.

2
0
patrickstar
Bronze badge

Re: Aircraft Engine Example

Yes - but not for any of the hard realtime stuff. Linux is not a hard realtime kernel any more than say FreeBSD or Windows is.

Typically for the smallest systems you don't really have an OS, just a scheduler and some libs. Or something like VxWorks, which is just one step above that.

For the larger ones you normally use a microkernel like QNX or Integrity RTOS. There's also RTLinux which is a (commercial) microkernel that can run Linux as a preemptible process. Integrity has some virtualization stuff as well that lets you run hard realtime stuff in one VM and Linux in another.

There's the PREEMPT_RT patch for Linux of course which does improve the timing characteristics of standard Linux and is usable in some scenarios, but it's a far cry from a full RTOS. And you would definitely use a very custom kernel for that kind of task, so whether or not stock Linux is too kernel panic happy on certain errors isn't exactly relevant.

5
1
patrickstar
Bronze badge

Re: Did Google implemented it on its servers to test it fully and at scale?

Does Google actually have fanboys? I thought it only had victims, i.e. those who have given up and surrendered to the almighty Google overlords.

8
0
patrickstar
Bronze badge

Re: Build statues in honor of Linus

Kees Cook is not some diversity hire.

He's a long-time Linux kernel developer and head of the Kernel Self-Protection Project.

Not that he has been doing a particularly good job at that, or shown much security clue, but certainly more clue than Linus.

4
17
patrickstar
Bronze badge

Re: Aircraft Engine Example

Actually, pilots do reset (read: power cycle by pulling circuit breakers) the computers in planes to resolve various issues as part of the standard checklists.

But nitpicks aside - critical embedded systems (and not-so-critical as well) use hardware watchdog timers for exactly the reason that you can't trust the software to never, ever crash. Even if you had guaranteed 100% absolutely perfect software/firmware, there are still scenarios like voltage spikes, cosmic rays, slightly off-spec-components, etc that can cause random glitches.

You'd never get something like an engine controller approved if it didn't have a proper hardware watchdog. And probably not if it was running a standard Linux, BSD, Windows, etc. kernel either. To start with, none of them are hard realtime systems.

Arguing about the optimal behavior in a safety-critical system isn't even slightly relevant for a general purpose OS kernel.

12
2
patrickstar
Bronze badge

Re: Linus Torvalds is not a Security Expert

If you run hard safety-critical systems on a stock Linux kernel, you are in for a world of hurt anyways.

As for false positives - with this kind of mitigation there are no false positives. If it's properly implemented, triggering it means there's a kernel bug and not someone joking around in userland. If the system continued to run without the mitigation, it was sheer luck, and you don't know for how long. At least in the case of copying TO the kernel. In the case of copying FROM, it's the userland process triggering it that's gonna malfunction instead.

3
5
patrickstar
Bronze badge

Re: Design

SELinux has little to nothing to do with exploit mitigation. It's an access control system. In the normal case, it doesn't stop someone from pwning the kernel and disabling SELinux - in fact, kernel exploits regularly do this.

To be fair, there are some scenarios where a proper SELinux ruleset can prevent you from getting to the point where a kernel exploit can actually be executed, but it's not the main purpose.

9
0
patrickstar
Bronze badge

Re: Design

This is Linus displaying exactly the attitude that many people (me included) have been complaining about for well over a decade.

He is stuck in a 90's mindset when it comes to security.

Back then it was a common delusion that we could somehow just fix/avoid all memory corruption bugs, and introducing mitigations (even from the start, with the very first implementations of noexec stack, which later expanded into DEP) was seen as somehow being "impure". Most people have advanced since then, but apparently not Linus.

He has grudgingly accepted SOME mitigations due to outside pressure, but clearly he hasn't understood why they are actually needed or why lots of work remains to be done.

What others have realized is that there are always going to be bugs in this kind of software. Some of them will turn out to be exploitable security issues. Even if you somehow magically fix all of them at some point in time, new ones are going to be introduced.

And the proper mitigations can be very, very effective at preventing exploitation. Sometimes you can kill entire bug classes. Other times it makes exploitation less reliable ( == more likely to draw attention due to stuff crashing), more complex ( == raising market prices for exploits, thus reducing the number of attackers having access to them and making the rest less likely to risk them against all potential targets) and/or require chaining bugs, thus requiring new exploits as soon as one of them is killed.

There aren't fewer security issues in the Linux kernel now than, say, 10 years ago. This in itself should be all the evidence needed to conclude that exploit mitigations are needed.

And yes, security issues are fundamentally different from other bugs. Not only because of their potentially severe (unlimited) damage, but also because of how they should be dealt with. You shouldn't just fix them and move on. You need to actually learn from past bugs to prevent introducing similar ones in the future, and to find those that slipped by earlier.

Now that we are living in a world where your adversary might very well be an intelligence agency with unlimited funding, and not just some random kid or criminal gang, proper software security - where exploit mitigations have an important role to play - is more important than ever.

Though, Kees Cook doesn't exactly have a stellar record when it comes to kernel security work, so I'm sure this patch is crap for other reasons...

22
21

Does UK high street banks' crappy crypto actually matter?

patrickstar
Bronze badge

Re: 2 Factor Authentication

In addition to the issues with the phone company and the networks themselves (SS7 and other vectors), there's also the fact that nowadays the phone might very well be the same device they are banking from. Or it might be hooked up to the computer used for banking regularly and thus be susceptible to being compromised that way. So even non-SMS-based schemes are off. Plus you can't expect ALL your customers to have smartphones. No, really!

It's better than nothing, but still... While I am not much into the whole 'false sense of security' thing in this case - it's clearly better than just a password - if you're gonna roll out 2FA you better do it properly from the start. Getting people tokens and instructing them in using them isn't THAT much more work than getting everyone's cellphone numbers and instructing them in how that works.

You really want an actual separate hardware token. That also means there are a lot fewer opportunities for things like shoulder-surfing PIN codes, since you enter it a lot less often and probably not in a lot of public places.

4
0
patrickstar
Bronze badge

As far as I know, not a single cent has ever been stolen because of sub-optimal TLS settings...

The real push should be to enable 2FA - I've actually seen claims that not all UK banks have it...? Totally absurd if that's actually the case.

8
1

Inside Internet Archive: 10PB+ of storage in a church... oh, and a little fight to preserve truth

patrickstar
Bronze badge

ITYM "and if it's accessible via SciHub".

Easy to automatically find PMIDs and DOIs and link them straight there as well...

2
0

Parity's $280m Ethereum wallet freeze was no accident: It was a HACK, claims angry upstart

patrickstar
Bronze badge

Re: Piece of p**s to think up a new crypto currency.

I doubt the customers had signed an agreement that let the bank hold their money and later confiscate parts of it to pay off their own debts (both things happened, by the way).

You are not allowed to use funds held for customers to prop up a failing business - end of story. If this wasn't banks but an actual sane line of business, the executives would rightly be in jail and personally liable for the full amount they stole and/or gambled away from their customers.

I can't believe anyone here is actually defending a bunch of crony capitalists colluding with the governments to make money at everyone else's expense...

1
1
patrickstar
Bronze badge

Re: Piece of p**s to think up a new crypto currency.

Try telling the people who had money in Cyprus during the crisis that getting robbed by Merkel et al. wasn't so bad after all.

2
5

WikiLeaks drama alert: CIA forged digital certs imitating Kaspersky Lab

patrickstar
Bronze badge

Re: HTTPS much?

This has nothing to do with browser security. It's the cert used by the backdoor when it's phoning home. If someone tried serving up a HTTPS web site using it, the browser would rightly flag the cert as being invalid.

The only purpose is to look a bit better if someone sniffs the traffic. Unless you actually verify the cert - which network monitoring tools typically don't - it'll look like it's just a Kaspersky AV product phoning home.

While I agree that TLS in general and the entire CA security model in particular is fundamentally flawed, unfortunately it's the only universal thing we have for encrypting HTTP traffic for the foreseeable future. Even just using self-signed certs is many, many times better than sending the traffic unencrypted, since at the very least you now need an active attack as opposed to passive traffic sniffing to see it. Plus you get forward secrecy if the proper TLS magic is supported by both parties.
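As a sketch of how cheap the self-signed option is (file paths and the CN are made up for illustration):

```shell
# Generate a throwaway self-signed cert. Encryption and (with ECDHE)
# forward secrecy work fine with it - only the CA identity binding,
# i.e. protection against active MITM, is missing.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=selfsigned.test" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -subject
rm -f /tmp/demo-key.pem /tmp/demo-cert.pem
```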

3
0

US government seizes Texas gun mass murder to demand backdoors

patrickstar
Bronze badge

Re: Easy to crack (for any governments engineers)

No. Provided that he just used a PIN and not a proper passphrase, all you have to do is at most bruteforce the PIN. Depending on the particular attack there might be a limit to how fast you can actually try PINs, but a 4-digit PIN would still be crackable within a reasonable time period. Like, let's say it takes you 10 seconds to try each PIN. Then it'd still just be a bit more than 24h. A lot of people, me included, have even cracked 4-digit PINs (or scanned the similar amount of phone numbers in the phreaking days, etc) by hand in various contexts.

The whole purpose of this 'secure enclave' thing (at least in this context) is that it holds the actual encryption key but won't give it up without the proper PIN, and also enforces limits/delays on PIN attempts. This lets you achieve a decent security level (far more crackable than 'proper strong' encryption, but it still needs time/a decent budget) without having to enter anything more than a PIN to use the phone.

As to the previous poster, what you describe is probably a NAND mirroring attack. This indeed worked against older iPhones but doesn't work against the newer ones. Search the Fine Web for details.

I don't do iPhone stuff, but I think that now you either need some software/firmware/hardware bug, or have a long nice chat with the surly uncooperative chip using pretty darn expensive chip reversing gear (SEM, FIB, yadda yadda).

0
0

Don't worry about those 40 Linux USB security holes. That's not a typo

patrickstar
Bronze badge

Re: Another day, another bug

What I meant to say was that if you write software without any regard for security - with things like basic overflows in unbounded string operations (strcpy, strcat, sprintf et al.), spawning external processes in insecure ways, etc - chances are the bugs will go undiscovered longer if it's closed source than if it's open. While they are normally pretty easy to find by fuzzing, firing up grep (or whatever) on the source tree is still less effort than implementing whatever protocol it speaks in a fuzzer.

But that's not the kind of bugs that plague any major software today.

Any argument that either open or closed source is inherently more or less secure is, of course, totally bogus. But you can definitely draw the conclusion from today's vulnerability landscape that there's no inherent all-encompassing advantage for open source projects.

And to elaborate on my previous argument a bit: I really, really doubt there's significantly more people studying Chrome or Firefox than Edge source for security vulnerabilities before a version hits production. You simply can't rely on people doing this kind of work for free. And these projects are basically entire universes of their own - you need to have lots of experience with each particular project to be able to audit them in any meaningful way.

Regardless of the source code availability or distribution model you need to have a solid security team that are actually in the loop about all the internals.

You can't rely on the mythic "lots of eyeballs" because there are simply not going to be a lot of people reading that kind of source code at all, let alone people who are good at finding vulnerabilities in it. And you need solid exploit mitigations in place, because there ARE gonna be vulnerabilities that get discovered by 'bad guys' long before any 'good guys', as long as you stay within current software development paradigms.

Google has clearly understood this, probably Firefox too.

For the record, I hate Google with a passion and avoid their products like the plague, but at least their security efforts are pretty good when it comes to Chrome... Certainly a lot better than many open source projects AND commercial closed source vendors (not MS on a good day, though).

0
0
patrickstar
Bronze badge

Re: Another day, another bug

Remember what happened last year? No?

A Linux kernel bug that's been in there for 10 years was fixed. Because it was caught in the wild. Turned out Bad Guys <TM> had been using it for many, many years until finally their luck gave out...

Gazillions of people had read the vulnerable code, but no one except whoever wrote that exploit ever spotted it before.

Very few people are good at finding security vulnerabilities in software, even with full source code availability. Even fewer are willing to do this painstaking work for free. Open source might have an inherent advantage when it comes to the really simple stuff you can basically 'grep' a codebase for (simple overflows and such), but that's not very relevant for any major software these days.

Apart from kernels, the most attacked software is probably web browsers. Firefox and Chrome (both open source) bugs are certainly caught in the wild with some frequency. Edge/IE bugs are certainly not more frequent in the wild, though comparisons are hard because of their different popularity and usage patterns (Chrome has bigger market share; Edge/IE probably a higher % of interesting targets, because lots of enterprises are stuck on them).

And most bugs that are discovered by "good guys" in those are found by fuzzing, not reading the source code.

In case you don't know (since you obviously have no software security background whatsoever, considering the statement you just made), fuzzing is not dependent on having the source code. Admittedly it does help somewhat, because it means you can build with ASAN et al., but apart from that it only helps with root-causing, which is only relevant if you are either an attacker or trying to fix the bug.

0
1
patrickstar
Bronze badge

Re: Wakey, wakey.

Like I have explained, there are also lots of scenarios where an attacker DOES have physical access but it's either not complete (like just having access to the screen, keyboard and some USB ports), the attacker only has a small amount of time (walking past a computer and sticking a USB stick into it vs. having to take the damn thing apart or at least reboot it), or simply that the disk is encrypted, the console locked, and you need to get the encryption key from memory to be able to access the data.

The whole "you're screwed if an attacker has physical access" near-truism simply doesn't apply for all values of "physical access", "attacker" and "screwed".

If it was universally true, there wouldn't be any reason to use full-disk encryption. The entire purpose of that is to protect against attackers with physical access, after all.

Or even the real classics like screen lockers, console login prompts, BIOS/bootloader passwords, or lockable computer cases. Much less tamper-resistant hardware or such, where you can actually calculate how much time/money is needed to bypass it.

Threat models and layered defense, you know...

2
0
patrickstar
Bronze badge

Re: Meanwhile...

Imagine if the disk is encrypted, no user is logged in and/or the screen is locked, and you can't do a cold-boot attack. Now the ability to execute code by plugging stuff into various orifices (on the computer, not on yourself) suddenly becomes a very, very relevant security issue.

Not to mention various kiosk-mode systems with exposed USB ports...

Or perhaps you ever plug in USB sticks that have been used on another system? Then this also suddenly becomes a security issue. A wormable security issue, even.

As is publicly known to have been done against Windows systems in the past - see Stuxnet for the most famous example. I wouldn't be surprised if these and other vulnerabilities have been exploited against Linux as well, just not made headlines in doing so... or perhaps never even been discovered by the victims.

4
0
patrickstar
Bronze badge

Re: Wasn't that the primadonna maintainer project

Linus has stated in public that he does not consider security vulnerabilities any different from other bugs. That's a pretty apathetic attitude to security concerns in my book...

And he has basically told real experts trying to improve the security of the Linux kernel to go fuck themselves (probably not literally - I'd expect him to use much more creative insults than that). See the refusal to interact with the Grsecurity guys in any meaningful way, for example, and the half-assed Kernel Self Protection Project that followed public pressure to improve the situation (which, by the way, is most certainly not composed of 'real [security] experts').

Plus, black hat kernel security wizards are paid handsomely for their efforts at doing black hat kernel security stuff nowadays. You can't just ask them nicely to start doing work for free instead and expect anything but a chorus of laughs.

9
9

'Lambda and serverless is one of the worst forms of proprietary lock-in we've ever seen in the history of humanity'

patrickstar
Bronze badge

Re: "really depends on one enorrmous binary firmware"

Are you assuming genders just because of the username "JulieM"? What if the person identifies as an attack helicopter, with the preferred pronoun "it"?

Anyways - just wanted to point out that binary blobs to get random HW going aren't what people are really complaining about with the Pi. It's the fact that the entire fundamental firmware / "BIOS" is closed source and locked down. You can even get x86 systems that are far more open than the Pi (see libreboot).

You wouldn't see people bitching (as much) if you just needed some blob to get graphics going, for example, and could just avoid it if you don't need that - or even better, get basic unaccelerated graphics without it. But as it is now, if you somehow managed to remove the closed-source stuff, the Pi wouldn't just fail to boot - it would never even execute a single instruction.

0
0
patrickstar
Bronze badge

Re: AWS Lambda “lock-in”

I wouldn't consider a language where you don't even have to declare variables for anything where actually important things are at stake...

0
0
patrickstar
Bronze badge

Re: "really depends on one enorrmous binary firmware"

This - i.e. pluggable firmware for various hardware - is not (only) what he's referring to.

The entire GPU firmware on the RPi is closed source, to the point where you can even buy licenses (!) to enable additional features (like the MPEG2 decoder). The hardware is there, it's just disabled until you convince the firmware that you've paid for using it.

And there's no way around using the GPU and thus running closed-source code, because the GPU is what actually initializes and boots the Pi from the SD card (yes, really, the GPU, look it up!). It basically does the job of what would be BIOS/EFI on a PC. And there's no "libreboot" or whatever equivalent for the Pi.

The Pi is far from an open platform despite schematics and such being available. I get it - this was probably the only reasonable option given the price point - but if you're an open source purist you most definitely shouldn't use the Pi.

I doubt the GPU is happy to give up all its firmware, so it's actually worse than running closed source software in general. At least in the software case you have the option of reverse engineering it, without having to bring out a scanning electron microscope and focused ion beam workstation.

1
0
patrickstar
Bronze badge

Re: AWS Lambda “lock-in”

Why would you ever want to write anything that's actually important in either Python or JavaScript?

They are, beyond reasonable doubt, two of the worst languages for the purpose.

I know you can compile other languages to JS, but still...

14
17

NetBSD, OpenBSD improve kernel security, randomly

patrickstar
Bronze badge

Re: It doesn't matter that it doesn't relocate in RAM while running

The 'common interface point' is the syscall interface. This doesn't have to reveal anything about the underlying memory layout, any kernel addresses, etc. In fact, when it does, it's considered a security issue and fixed.

See my earlier posting giving an example of a kernel address leak via a syscall. This turned into https://nvd.nist.gov/vuln/detail/CVE-2017-14954

Syscalls on x86/x64 are typically done via the 'syscall' instruction (or the classic way of using a software interrupt, e.g. int 0x80 on Linux and int 0x2e on Windows). This does not, in itself, reveal any information that would be useful to an attacker. Userland code just invokes the magic instruction, and some time later execution resumes, and typically a register has changed so that it now holds the return value/error code. That's it.

0
0
patrickstar
Bronze badge

The really important question that remains after reading the kgraft presentation is, of course, whether "getties" really is the plural of "getty"?

(PS. Note that the presentation somewhat ambiguously refers to the 'World view' checking code as a 'trampoline'. This can indeed safely be removed once everything is done. The JMP at the start of the original function will remain, however. Also note that if they had more cooperation from the compiler, like the Windows hotpatch scheme, instead of ftrace piggybacking on the GCC profiling code, they could presumably do this without any delay whatsoever on the other CPUs - but I suspect this was evaluated and rejected for various reasons.)

0
0
patrickstar
Bronze badge

kgraft is essentially based on the principles I've described, with some neat hacks on top. It does, indeed, work on a function-by-function basis, by inserting a trampoline at the start of the original.

On top of this it has a mechanism that ensures everyone 'sees' (well, executes) a consistent view, in case the changed behavior of one function is dependent on the changed behavior of another.

It doesn't actually replace functions "in-place" or anything like that, for the obvious reasons, including what you mentioned: if the new version is bigger than the original, you'd have to move everything after it, which can't be done for any function that might have function pointers to it, or any other external reference (like interrupt handlers).

See e.g. http://events.linuxfoundation.org/sites/events/files/slides/kGraft.pdf (slide 11):

"-Callers are never patched

-Rather, callee's NOP is replaced by a JMP to the new function

- So a JMP remains forever

- But this takes care of function pointers, including in structures"

So the original stays around, but I believe old patched versions do get freed up when they are superseded by a later patch.

You could presumably remove (well, zero out) the code after the trampoline eventually.

However, if you're going to do this for the entire kernel, after one full "move" you are in exactly the scenario which was discussed here earlier - all function calls just go to a trampoline that calls the actual function. With exactly the same issues - just done a lot more inefficiently.

TL;DR kgraft doesn't actually MOVE code, which is what we're discussing here. It diverts execution. It's meant for small patches, not shuffling the entire kernel around.

0
0
patrickstar
Bronze badge

kgraft and similar schemes don't actually delete the old code. To do that, you will need to stop CPUs to make sure they aren't in the middle of executing the code you are deleting. Just like I described.

The scheme it's describing (INT3 yadda yadda) is so it can insert the jumps into code that's not been specifically prepared for it. Just like I described...

(Hmm - noticing a pattern here. Might have something to do with the fact that I have actually implemented a lot of this in other contexts...)

There's a bit of discussion regarding hotpatching in Windows here and how they solved that same issue, in that case by having the code be prepared for it beforehand: https://blogs.msdn.microsoft.com/oldnewthing/20110921-00/?p=9583/

kgraft and similar schemes make far smaller changes than moving the entire damn kernel would. Even if it did nuke old code, it's making tiny changes to individual functions, which typically are not interrupt handlers or any of the other tricky cases. It's simply not comparable.

If you're gonna MOVE the kernel code, you need to have all CPUs stop at a safe point.

This would typically mean that you have to stop them one by one as they reach it, not just suddenly stop them all at once.

You'll either have to re-route IRQs from the CPUs one by one as they are being stopped or live with a potentially large random delay in serving IRQs from the CPUs stopped before the others.

Then once they're stopped, or sit spinning at some code you won't be removing, with interrupts disabled, you have a lot of work to do.

For one, you need to locate all function pointers in memory and change them. How would you even do that? You'd need some sort of managed memory scheme to know what's a relevant pointer and what's not, similar to a mark/sweep GC with exact tracing. At the very least, you need full cooperation from ... everything in the kernel. Including all drivers. Good luck with that.

If you want to move the static data as well, you're in for exponentially more fun. You might as well stick a compacting mark/sweep GC in the kernel and be done with it.

How long would all of this take? The answer is most likely "too long" in a lot of cases. At the very least it's utterly unpredictable beforehand.

And for all this you will have accomplished... nothing. Maybe you have prevented some cache/paging side channel attack or something that takes longer to execute than the interval between two "total kernel re-mixes", but that's not what actual kernel exploits use in the wild.

Not to mention that none of this exercise helps you move anything other than the kernel code and possibly static data. You have just moved the kernel code, which is good for stopping things like ROP (e.g. to disable SMEP), but there are lots of other attack vectors you haven't affected at all.

So to sum up:

How much work would be needed to do it? A lot.

Would the resulting impact on performance, responsiveness and timing cause a problem for anyone? Yes, for a lot of users, and the exact impact would be dependent on the specific combination of kernel, drivers, workload, etc.

How much would it improve security? Very close to nothing.

0
0
patrickstar
Bronze badge

Generally speaking, live patching is done by putting the new code somewhere else entirely and inserting a jump to it at the start of the code being replaced. So basically you insert a trampoline where one didn't exist before.

If you have some degree of cooperation from the code to be patched (eg hotpatching in Windows), this can be done even though others may be executing the code at the moment. However, if you're going to remove the old code, you need to prevent execution of it regardless (like stopping all CPUs and resuming their execution at a known location).

0
0
patrickstar
Bronze badge

Re: Stating the obvious?

Uhm, the kernel code itself is read-only. Kernel overflows target the stack and heap, which have to be read/write for obvious reasons.

0
0

Official Secrets Act alert went off after embassy hired local tech support

patrickstar
Bronze badge

Grasses (the kinds you walk on, not the kinds you smoke) contain silica to deter animals from eating them, causing a very predictable evolutionary arms race between the teeth of grazers and the stuff they're grazing on.

5
0

UK.gov joins Microsoft in fingering North Korea for WannaCry

patrickstar
Bronze badge

Re: I blame

You ARE aware that the open source SMB implementation (Samba) has a horrible security history, right? Quite possibly worse than Microsoft's, actually.

0
0
patrickstar
Bronze badge

Re: Downgrade to XP

XP was definitely vulnerable, but as I've understood it the specific exploit used in WannaCry didn't work reliably against it. So usually it ended up bugchecking XP boxes instead of compromising them.

So you still have to migrate off XP, disable SMB, and/or patch them somehow before someone else comes around with an exploit for XP.

I suspect the confusion is because it didn't ALWAYS fail against XP, but worked against SOME of them while working against (almost?) ALL Win7 boxes.

0
0

You're designing an internet fridge. Should you go for fat HTML or a Qt-pie for your UI?

patrickstar
Bronze badge

HTML is fine for small, simple UIs for uncomplicated tasks.

For anything above that, please take one for mankind and remove yourself from the gene pool if you even consider it. This also applies if you think "FPS" is relevant for the UI of a fridge.

14
0

Samsung to let proper Linux distros run on Galaxy smartmobes

patrickstar
Bronze badge

Re: And what do telcos think about this?

Why would they care what you do to the phone, as long as you are not hacking up the radio and interfering with other customers because of that?

(N.B. This is an actual question, not snark!)

0
0
