* Posts by Uffe Seerup

68 posts • joined 10 Dec 2007


Microsoft drops rush Internet Explorer fix for remote code exec hole

Uffe Seerup

Re: Pro Tip

> A simple fix for this is to not allow browsers to run under admin accounts by default.

If you do not turn off UAC - you are never running with admin capability by default on Windows. On Windows - unlike Unix - your identity is separated from your privileges.

When you log in using an administrator account, you retain the identity, but all administrative privileges are stripped from the *token* that is created. Security tokens on Windows are infinitely more capable than the naive Unix "effective user" thingy.


Hold that upgrade: Critical bug in .NET 4.6 'breaks applications'

Uffe Seerup

Re: So What's the Solution?

There's a registry setting that controls whether the new JITter or the old one is used. Switch back to the classic JITter until this problem has been fixed.
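For anyone hunting for the switch: the value reported at the time was a DWORD named useLegacyJit under the .NET Framework key (the name is recalled from memory here - verify against Microsoft's advisory before applying it):

```
Windows Registry Editor Version 5.00

; Force the pre-4.6 (legacy) JIT compiler machine-wide
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework]
"useLegacyJit"=dword:00000001
```

Delete the value (or set it to 0) once a fixed RyuJIT ships.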


Get root on an OS X 10.10 Mac: The exploit is so trivial it fits in a tweet

Uffe Seerup

Re: Congratulations on repeating exploits before they can be fixed

> Congratulations on repeating exploits in detail before they can be fixed by anyone...

When Stefan Esser tweets it, you can consider it public knowledge. At that point, raising awareness can be seen as a service to the public. El Reg does us all a service here.

I agree that security researchers should not just blurt out exploit code. I totally support responsible disclosure. But once it is out there, the bad guys certainly know about it. Telling the good guys about it so that they can prepare for it is a good thing.

Apple has fixed this in a beta of OS X. If they believe that they can silently fix security errors and nobody will know about them until they publish the advisory, then Apple is being *incredibly* naive.

I would venture a guess that Stefan Esser has diff'ed the binaries or diff'ed decompiled binaries for changes between the same utilities in different versions. Any change is potentially a security hole that has been fixed.

This is trivial, especially if you are actively looking for security vulnerabilities. Publishing code with fixed vulnerabilities is even better (from a bad guy's view) than disclosing through an advisory: at that point the attacker has everything needed to create an exploit, while the potential victims are unaware of the threat and thus cannot defend against it.

This one would light up with the extra checks on the allowed paths. From there it is easy to infer that the current version does something incomplete.
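As a toy illustration of the technique (Python, with made-up disassembly strings - nothing here is real OS X code), diffing two listings of the same utility immediately highlights a newly added check:

```python
import difflib

# Two hypothetical disassembly listings of the same utility, before and
# after a silent fix. The strings are invented for this sketch.
old_listing = """\
mov rdi, [path]
call open_log
ret
"""
new_listing = """\
mov rdi, [path]
call check_path_allowed
call open_log
ret
"""

# Lines that appear only in the patched binary point straight at the fix.
added = [line[1:] for line in difflib.unified_diff(
    old_listing.splitlines(), new_listing.splitlines(), lineterm="")
    if line.startswith("+") and not line.startswith("+++")]

print(added)  # the new path check stands out
```

Scale the same idea up to real binaries (BinDiff and friends automate it) and a "silent" patch is anything but silent.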

> Yellow journalism.

It doesn't mean what you think it means.

> However, the article does not Emphasise that you must first have privileged access through an app. Double yellow, click bait.

A compromised Firefox or Safari process is a local user. This will allow an attacker to go full root on a system from there. But I am sure that there is no chance of a vulnerability in Firefox or Safari? or mail clients? or SSH?

Are you one of those who also dismisses threats of malware against OS X by referring to how it will ask you for a password before installing anything? Well, with this one there will be no password prompt, but the attacker can install anything.

Uffe Seerup

The real culprit

Is the deliberately holed *nix security model. Once again a SUID/setuid utility strikes.

Because of SUID, the *nix security model is not a security boundary. A security boundary guarantees that every access is checked against an access policy or permission set. By design, the *nix model is that if you are root you bypass all security checks.

It is a deliberate hole, drilled in the model out of necessity, since the model is otherwise not capable of expressing the permissions needed in modern environments.

This is going to bite again and again, just as it has been responsible for numerous vulnerabilities and exploits in the past.
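For readers unfamiliar with setuid: it is just a mode bit that tells the kernel to run a program with the file *owner's* effective UID instead of the caller's. A minimal Python sketch of setting and inspecting the bit (on a harmless temp file, not an executable):

```python
import os
import stat
import tempfile

# The setuid bit makes the kernel run an executable with the owner's
# effective UID -- the deliberate hole discussed above. Here we merely
# set and inspect the bit on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o4755)              # rwsr-xr-x: setuid bit + mode 755
mode = os.stat(path).st_mode
is_setuid = bool(mode & stat.S_ISUID)

print(is_setuid)
os.remove(path)
```

When the owner of such an executable is root, every parser bug in it becomes a potential privilege escalation.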


Microsoft attaches Xbox stream bait to Windows 10 hook

Uffe Seerup

Streaming high-end games rendered on XBox One to laptops and tablets

Or even play multiplayer on separate screens (XBox + Win 10 PC) streamed from the same XBox.

That's why.

Even without a beefy GPU you can stream the game to a laptop or tablet, to which you can incidentally also connect the XBox controllers.

More here: http://www.xbox.com/en-US/windows-10


OPEN WIDE: Microsoft Live Writer authoring tool going open source

Uffe Seerup

Re: What licence?

> Microsoft have made source code available before, with a license that said something like: "If you could have seen this source code, and you ever make money out of software in future, Microsoft can sue you for copyright infringement.

In the past Microsoft has released some source code under a *shared* source license. Maybe that's what you are thinking of. That is not *open* source, however, and Microsoft has never used the term "open source" to describe the shared source license.

Microsoft has been on a roll lately releasing products as *open* source. Each and every time Microsoft has said *open* source it has been an OSI approved license, usually MIT, Apache or MS-PL. Yes, MS-PL is also OSI approved.


It's 2015 and Microsoft has figured out anything can break Windows

Uffe Seerup

Re: Surely...

"Except they don't. Because - again, like a goddamned broken record - you are counting every security issue in every package of a distro against the core Windows OS, without regard to vulnerability type or severity."

Sorry, Trevor, but you are wrong. Let's take the latest full year (2014), and let's take Windows 8.1 and compare it to *just* the Linux kernel. From there you can add X and Gnome/KDE to get to the same functional level as Windows 8.1. But just the kernel:

Linux kernel: http://www.cvedetails.com/vulnerability-list/vendor_id-33/year-2014/Linux.html

Windows 8.1: http://www.cvedetails.com/vulnerability-list/vendor_id-26/product_id-26434/year-2014/Microsoft-Windows-8.1.html

Linux kernel, year 2014: 135

Windows 8.1, year 2014: 38

For the year 2015 so far the numbers are 60/40 in Linux's favor, but keep in mind that it is not a full year and that it counts only KERNEL vulnerabilities for Linux versus ALL vulnerabilities for Windows 8.1.

Let's go back to 2012-2013 then. Windows 8.1 did not have a full year of 2013, so let's compare Windows 7 to Linux (kernel only again) for 2013:

Linux kernel for year 2013: 189 vulns

Windows 7 for year 2013: 100 vulns.

Linux kernel for year 2012: 116 vulns

Windows 7 for year 2012: 44 vulns.

Again, contrary to your claims this is counting only Linux KERNEL vulns against a fully functional Windows.

So it would appear that you are incorrect, Trevor.


Wrestling with Microsoft's Nano Server preview

Uffe Seerup

Pets or cattle?

> But if it runs on its own hardware, say some kind of appliance, you may need some direct access if the network components don't work for some reason.

The mantra is: you should not manage your servers like pets; you should manage them like cattle. As Snover said: "If one gets ill you do not check it into the animal hospital - you fire up the barbecue". While I personally would not like to eat a sick animal, I totally get the idea when it comes to servers.

If a server becomes unresponsive you nuke it and re-install it using whatever method you used originally (PXE). Your environment based on PowerShell DSC or Chef or Puppet should ensure that the server comes up configured like the rest of the herd. If that fails you discard it.

You have to consider that the target for Nano is not your basement hobby server. It is servers on (huge) datacenter scale *and* single workload VMs.

When the datacenter is built from containers with hundreds or thousands of servers in each container, you do NOT send in a repairman (veterinarian) when one misbehaves. You disable it and chalk it up to the cost of doing business. When enough servers have failed you may consider refurbishing them.

Your infrastructure should already be resilient to server failures. As soon as a server fails, its workload should shift to other servers, either as part of clustering or hot standby or super-fast provisioning. Either way, the only reason to try to salvage a server should be HW savings - not to make services available again. If you depend on salvaging a bad server for service availability, you are doing it wrong.

Which means that you should be in no rush. Whatever was on that server was redundant (in the sense that it is available elsewhere) and you can just re-commission it at a time of your choosing and with no regard to data. I.e. re-install.

Uffe Seerup

The why and what for

Per Microsoft's Jeffrey Snover (chief architect for Windows Server), Nano Server is *primarily* a scratch-your-own-itch refactoring.

The biggest user of Windows Server - by far - is Microsoft Azure. If you can save 20% on the size, you can increase VM density by 25%. If you can save 50% you can double the VM density.

Hard disk footprint has been reduced by a factor of 20. That is a *massive* saving at Azure scale.

OS RAM usage is down considerably as well. Fewer features mean fewer patches (both bug fixes and security patches) and consequently fewer reboots. Microsoft investigated how many of the 2014 patches touched the components in Nano and concluded that 80% of them would not have been required, as they concerned components not in Nano.

But turn the question around: why does a *server* - by definition a machine whose primary task is to run a workload - need to have a *command interpreter*, a *shell* and an *editor*, even very basic ones?

Why should you need to log in to a machine over SSH, start a command interpreter on the server and issue commands? Why would you want a Ruby interpreter? Every extra component has to be maintained and adds to the attack surface.

Microsoft has come to this realization late, but at least they now go the whole way and may very well take it a bit further.

Ideally the remote server is "just a server" with a standard interface to control and configure it and no way to log in locally. That is what Server Nano is.

Btw, PowerShell has this nice property that it can submit "script blocks". Script blocks are semi-compiled script, so while MS will still need some PowerShell infrastructure on Nano, they could very well cut away the *shell* part of it - leaving only the execution engine. Already today - if you use PowerShell remoting - you can send scripts to the remote end that are not just text. They are parsed and turned into a PS script block locally and then sent to the remote PS engine. The upshot is that you can create scripts that refer to *local* script files but execute them remotely. PS will send the parsed script blocks for those files across the wire.


Microsoft points PowerShell at Penguinistas

Uffe Seerup

It is a "platform play"

Yes, it aims at solving some of the same problems as Chef, Puppet et al. But unlike those, PowerShell DSC follows industry standards for datacenter/enterprise management.

That said, PowerShell DSC is not at all comparable to the full-featured Chef/Puppet offerings. Instead (according to Microsoft's Jeffrey Snover - the inventor of PowerShell) Microsoft would like other vendors to build management products on top of the open platform.

Chef and Puppet may be available as open source as well as commercial offerings - but they do not adhere to any published standards, hence you get locked in when you base your datacenter management on one of those products.

DSC builds upon Management Object Format (MOF) files, which can be used to declaratively describe the desired state of a node. MOF is a format standardized by the Distributed Management Task Force (DMTF) (see http://dmtf.org/), along with standards for interacting with nodes.
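For the curious, a DSC-compiled MOF looks roughly like this (a hypothetical fragment - the resource and property values are illustrative, not copied from a real compilation):

```
/* Desired state: ensure a file exists with given contents */
instance of MSFT_FileDirectoryConfiguration as $MotdFile
{
    ResourceID = "[File]MotdFile";
    DestinationPath = "/etc/motd";
    Contents = "Managed by DSC";
    Ensure = "Present";
};
```

Note there is no imperative code at all: the node's Local Configuration Manager (or the OMI server on Linux) is responsible for making reality match the declaration.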

The open source OMI server for Linux implements the OMI standard of the Open Group. The OMI server takes the MOF files and applies the configuration to the nodes.

It is all open source and based on open industry standards supported by a number of tech companies.

Chef recipes are written in Ruby. By using Ruby in a clever fashion the recipes look almost declarative. But they are still Ruby. Imagine if Chef could use MOF files instead.


Microsoft HoloLens or Hollow Lens? El Reg stares down cyber-specs' code

Uffe Seerup

Re: No Holography In Evidence. It's More Of The Old 'Pepper's Ghost'.

Not so sure about that. Microsoft has not said anything either way, except for mentioning that they've had to develop a "holo chip".

I have quizzed some of my colleagues who were at Build and tried them. Specifically, they said that they focused on objects in the real world, and that the holograms appeared sharp when focusing at that distance.

That, to me, suggests that there's more going on than simply stereoscopic lenses. If they were simply lenses that overlay an image a few centimeters from your eyes, the image would be blurred when you focus your eyes on an object 2 meters away. Try for yourself: Hold a finger in front of your eye and see if you can focus on that at the same time as you look at an object even just 50 cm away (and vice versa).

The limited viewport seems to be a dealbreaker. If they do not solve that it will see very limited usage.

But at this point you have absolutely *nothing* substantiating your claim that it is simply stereoscopic lenses, while there is at least some indication that there's more going on.


Entity Framework goes 'code first' as Microsoft pulls visual design tool

Uffe Seerup

Re: I ran into serious problems with EF

"LINQ supports data types that are queries, using a .NET array type where the variable itself uses lazy evaluation"

No, that is not how LINQ composes queries. LINQ supports IQueryable&lt;T&gt;. An IQueryable can be composed with other queries, more clauses etc. In general, when you compose using IQueryable, the result will also be (should also be) an IQueryable.

"it doesn't retrieve or compute the result until your program actually accesses the value"

(Guessing here) It sounds like you are talking about a *property* with a *getter* which is then evaluated on each reference. If it does not return an IQueryable, it cannot compose lazily with other queries. If it is indeed an "array" type as you describe, that is definitely not the way to do it.

"For example, you can point your GUI to one of these, have it show 10 records from a query, if the query has 50 results it will only retrieve 1-10 until you scroll down. Well, in theory -- EF doesn't support these, it loads the whole enchilada (all 50 records) into RAM and then "lazily" loads values out of that in-memory copy as you use it (it doesn't generate any optimized SQL at all.)"

EF does indeed support paged queries. Use Skip() and Take(): Skip to skip n results, Take to retrieve the next m results. I suspect that you may be using some GUI element that does not use this EF feature.
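For illustration, here is the lazy Skip/Take composition pattern sketched in Python, with a toy Query class standing in for IQueryable (the class and its methods are invented for this sketch; EF itself translates the composed query into paging SQL at enumeration time):

```python
class Query:
    """Toy stand-in for IQueryable<T>: composes lazily, executes on demand."""
    def __init__(self, source, skip=0, take=None):
        self._source, self._skip, self._take = source, skip, take

    def skip(self, n):
        # Returns a NEW composed query; nothing is fetched yet.
        return Query(self._source, self._skip + n, self._take)

    def take(self, m):
        return Query(self._source, self._skip, m)

    def to_list(self):
        # Only here would EF translate the composed query into paging
        # SQL and hit the database; we just slice an in-memory list.
        end = None if self._take is None else self._skip + self._take
        return self._source[self._skip:end]

records = list(range(50))                  # pretend table with 50 rows
page = Query(records).skip(10).take(10)    # composed, nothing executed yet
print(page.to_list())                      # -> rows 10..19 only
```

The point is that each call returns a new composed query, and only the final enumeration touches the data source - so only the requested page is ever retrieved.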


Windows 10 feedback: 'Microsoft, please do a deal with Google to use its browser'

Uffe Seerup

Re: Stop. Just Stop.

> Fully agree. And your justifiable rant reminds me of the question I've been meaning to put to anyone more experienced with the windoze environment than I am.

You are looking for Windows Applocker: http://technet.microsoft.com/en-us/library/dd759117.aspx

(part of Windows since Windows 7 - before that there were software restriction policies).

With Applocker you can enforce a policy where executables are only allowed to launch from a few protected folders, such as Program Files and Program Files x86. Or you can set a policy that only select publishers are whitelisted, e.g. Microsoft, Adobe etc.


Vanished blog posts? Enterprise gaps? Welcome to Windows 10

Uffe Seerup

> What I don't really like is the requirement of having an online Microsoft account

There is no such requirement. In fact, I tried to set it up on a Surface Pro 3 (mistake) - but because it doesn't come with drivers for the SP3 (no Internet connection during install) I had *only* the local account.

But even if you *do* have a network, the Microsoft account is still optional.

Uffe Seerup

Shutdown is by default a mix between shutdown and hibernate

It is the *kernel* that hibernates. All user programs and user sessions are terminated.

When you shut down, Windows sends shutdown signals to all user processes in all user sessions. The kernel and the services in session 0 (including device drivers) are hibernated, i.e. their state is written to disk.

Upon start, Windows will check that the hardware signature of the machine (ntdetect) is still the same, and if so it will read the state from disk rather than go through a full boot process.

If the drivers are up to it, you should not see any effect from this apart from the (much) faster booting.


Desktop, schmesktop: Microsoft reveals next WINDOWS SERVER

Uffe Seerup

Re: hate for powershell?

Trevor, you claim that PowerShell is "generally despised as a means of day to day administration".

That's a very unspecific and unverifiable claim. When I come across an admin who uses arcane tools or self-automates through the GUI by manually following "manuals", I always demonstrate to them how they can accomplish the tasks using PowerShell. And the response is generally overwhelmingly positive. So I have the exact opposite experience from you; although like yours it is still anecdotal.

PowerShell *is* the primary tool. Through the module feature it has quickly grown to become the hub of CLI administration on Windows boxes. I dare you to find an area that cannot be administered using PowerShell.

> But for standardised deployments, policy pushes, standardised updates or even bulk migrations any of the proper automation/orchestration tools are just flat out better.

The right tool for the job. On Windows there are still group policies, which remain the most important vehicle for pushing policies. PowerShell is not intended to replace GPOs. PowerShell *can* fill the gaps where group policies cannot reach.

> And for those situation where you're fighting fires, an intuitive GUI is way better, especially for those instances where you're fighting fires on an application you haven't touched in months, or even since it was originally installed.

> This is - to put it mildly - a different role than is served by the CLI in Linux.

If you are "fighting fires" in an application you haven't touched in months and find the GUI better for that, how come you think the Linux CLI is better for that? If your problem is unknown/forgotten instrumentation, *discovery* is what you are after. A GUI can certainly help you there, but I frankly cannot see how a *Linux CLI* would offer more discovery.

In fact, PowerShell offers *much more* in the discovery department than any Linux CLI or scripting language. Granted, you have to be familiar with a few central concepts of PowerShell: modules, the naming convention, Get-Member, Get-Help, Get-Command. However, these are very basic concepts that you are not likely to forget even if you just use PowerShell occasionally.

> The CLI in Linux is much more mature, with a multitude of scripting languages grown around it, and the majority of application configuration done as flat text files, not as XML. (Systemd can die in a fire.)

More mature? How about *old*? Case in point: the arcane bash syntax with its convoluted parser that has placed hundreds of thousands of systems at risk.

Yes, the Unix tradition is to use plain text files. The Windows tradition is to use APIs, possibly backed by XML files. There are advantages and disadvantages to each approach. Personally, I prefer the API approach as it makes it easier to create robust scripts and procedures that will also carry forward across versions, even if the underlying format changes.

Uffe Seerup

Re: Powershell 5 and W7 / Svr2K8r2

It is the *preview* that installs exclusively on 8.1 and Server 2012R2.

I was looking for a source on which platform the *final* product will be available on.

PowerShell 4 was *also* platform limited during the preview phase, but was backported afterwards.

Uffe Seerup

Re: Powershell 5 and W7 / Svr2K8r2

> It doesn't look like Powershell 5 will be backported to Windows 7 or Server 2008/R2?


Uffe Seerup

Re: User Interface

> There are many UNIX shells, and it's impossible to compare all

> of them with Powershell and come out with the conclusion that

> "Powershell is the bestest",

Indeed, a comparison with the goal of crowning "the best" makes little sense. Not least because much of PowerShell makes sense mostly on Windows and would add little value on *nix systems. Conversely, the *nix shells with their line-of-text-centric processing are often at odds with Windows, where you typically need to control things through APIs, XML or similar.

> ... has an interactive prompt that, from my trials, can only be

> accessed while "on" the machine or over RDP

Then you have not tried PowerShell since version 1.0. That first version had no built-in remoting, but could be used across SSH connections.

In PowerShell 2.0 a lot of remoting and job features were added. Many commands take a -ComputerName parameter where you can pass a list of hosts where the command should execute remotely but return the output to the controlling console. There is a general Invoke-Command that can execute commands, scripts, functions on remote machines simultaneously and marshal the output back.

Uffe Seerup

Re: User Interface

> Really? Get back to us when it can do full system job control.

For job control:

Debug-Job, Get-Job, Receive-Job, Remove-Job, Resume-Job, Start-Job, Stop-Job, Suspend-Job, Wait-Job.

Some of these have aliases for shorthand use:

gjb -> Get-Job, rcjb -> Receive-Job, rjb -> Remove-Job, rujb -> Resume-Job, sajb -> Start-Job, spjb -> Stop-Job, sujb -> Suspend-Job, wjb -> Wait-Job

For process control:

Debug-Process, Get-Process, Start-Process, Stop-Process, Wait-Process

Again, with aliases:

gps -> Get-Process, kill -> Stop-Process, ps -> Get-Process, saps -> Start-Process, spps -> Stop-Process, start -> Start-Process

You can start jobs on remote computers without first "shelling" into them. Output from the jobs is marshalled back to the controlling console. It even works if the remote job stops and queries for a value (using Read-Host): the prompt will appear on the controlling console when you poll for output/status of the job.

Scripts can be workflows. Workflow scripts are restartable, even across system restarts.

So, between using a remote GUI admin interface and scripting, Windows Server is very well covered.

Uffe Seerup

Re: User Interface

> Let's hope the version has a user interface suitable for use on a server...

PowerShell 5


Stunned by Shellshock Bash bug? Patch all you can – or be punished

Uffe Seerup

Re: what else lurks

> Well, the attack is based on a feature of Bash.

No - it is a *bug*. The ability to define functions in env vars was the feature. The unintended consequence of using a poorly implemented parser was that it proceeded to *execute* text that may come *after* the function definition in the env var.

> This means that it's been "out in the open" for the entire existence of the feature

Nope. Nobody ever considered the possibility that extraneous text following such a function definition would be executed *directly*. At least, we *hope* that it was the good guys who found this first. But we really don't know.

> It also points out why it's a bad idea to have so much running with root permissions, besides not sanitizing input.

Yes. And why SUID/setuid is such a monumentally bad idea. It is a deliberate hole in a too simplistic security model.

> The equivalent on a Windows system would be to pass in PowerShell script and .NET binaries through the http request, and then run it all with Administrator permissions.

Not sure I agree. That is such an obviously bad idea that nobody would ever do it. Shellshock is - after all - a bug (unintended behavior). However, there are multiple historic reasons on Unix/Linux that tend to amplify this bug:

1: The (dumb) decision to use env vars to pass parameters through the Common Gateway Interface (CGI). Env vars are stubbornly persistent: they are inherited by all child processes. This makes security auditing much harder: you cannot simply audit the script that *received* control directly from CGI; you also have to consider every process that can ever be launched by the script, its child processes or their descendants.

2: The inadequate and too restrictive Unix security model. This led to the invention of SUID/setuid (a hole in the security boundary) because there were still a lot of tasks that "plain users" (or least-privilege daemons) legitimately needed to perform - such as printing, listening on sockets etc. Rather than refining the security model, SUID punched a deliberate hole in it. This means that user accounts are frequently not granted rights to access a *resource* or a syscall - rather they are granted the right to launch an *executable* (SUID) that executes as an effective user (often root, sadly) who has access to the resource. That has created a culture where you all but *need* to launch processes through system() calls to get the job done. With a proper security model where such rights could be *delegated*, you would not have to invoke external executables.

3: Old software that lingers on even in modern operating systems. It has long been accepted that the bash parser is, well, quirky. The way that it is semi-line-oriented, for instance (definitions only take effect on a line-by-line basis). Today, parser construction is undergraduate stuff; all the principles are well known. The bash code in question was written in another era and by another community where those principles were perhaps not as well known as they are today.
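Point 1 can be demonstrated in a few lines (Python here, purely to show the inheritance; the HTTP_USER_AGENT value is the classic Shellshock probe string, which is inert text to any non-bash process):

```python
import os
import subprocess
import sys

# CGI pushes request data (headers, query string) into environment
# variables, and env vars are inherited by every child process. So a
# bash launched several processes deep still sees the attacker's value.
os.environ["HTTP_USER_AGENT"] = "() { :;}; echo pwned"

# A child process -- any child, in any language -- inherits it verbatim.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['HTTP_USER_AGENT'])"],
    capture_output=True, text=True)

print(child.stdout.strip())
```

A vulnerable bash spawned anywhere in that process tree would have *executed* the trailing `echo pwned` instead of merely printing the string - which is why auditing only the first CGI script was never enough.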


Munich considers dumping Linux for ... GULP ... Windows!

Uffe Seerup

Re: 1800 jobs


1) The Microsoft HQ is already there. They'll move 15 kilometers to the south in 2016.

2) The decision has already been made, planning is completed and they are about to start building - if they have not already.

3) The decision was taken almost a year ago, during the previous administration. The current Stadtrat was elected this March!

It could still be a case of a few mayors having a pet project and wanting to make an impression. But trying to connect the HQ move to this decision is disingenuous.


Panic like it's 1999: Microsoft Office macro viruses are BACK

Uffe Seerup

Re: js and pdf proprietary extension, @big_D

> It's the same as in MS Office

It's similar. But MS Office (since Office 2010) also *sandboxes* documents that have been received from the Internet zone. This applies to files received through email or downloaded through a browser (all browsers support this).

Such files contain a marker in an alternate data stream that specifies that the file came from the "Internet zone".
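That marker is the Zone.Identifier alternate data stream (the "mark of the web"); its payload is a tiny INI fragment, where ZoneId 3 denotes the Internet zone:

```
[ZoneTransfer]
ZoneId=3
```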

When Office opens such a file it will open it in a sandboxed process. The entire process runs in "low integrity mode" - and thus whatever its macros may try to do - even if enabled - they will be restricted to the permissions of the low-integrity process.


Microsoft C# chief Hejlsberg: Our open-source Apache pick will clear the FUD

Uffe Seerup

Not Apache httpd (the "server"), but the Apache *license*

Microsoft open sourced their C# and VB compiler under the Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0), which includes grant of patent license:

"Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted"

Uffe Seerup

Re: I hope Apple do similar

The Common Language Infrastructure (CLI) has been placed as a standard under ISO. It is open for anyone to implement (as the Mono project did). There is no situation comparable to Java where you had to implement the full stack and pass a (closed source) compliance test.

A prerequisite for ISO adopting the standard was that Microsoft granted any patents essential for the implementation on RAND terms. Microsoft actually granted free right-to-use for any patents necessary to implement the CLI. And they placed it under the Community Promise.

This is the C# (and VB) *compiler*. Mono already had their own compiler (and did a good job at that - sometimes beating Microsoft to market on new language features) - but not like this one with incremental compilation and every stage of compilation open for interception and/or inspection by the host process.

For years we've heard "It's a trap. Microsoft will sue and force Mono underground". Well, they cannot sue anyone for implementing the ISO standard*. Now they cannot sue anyone for using their own open sourced compiler. There are still a few libraries in the .NET stack which have not been open sourced or put under an ISO standard - but they get fewer and fewer and all the important ones (ASP.NET, MVC, all of the core libraries etc) are now open.

*"Well they can just sue anyway and use their army of lawyers to force you out of business" someone will still claim. Well, no. The community promise has legal estoppel. Should Microsoft sue, a lawyer can point to the promise (a "one-sided contract") and stand a very good chance of having the case dismissed outright.

Uffe Seerup

Re: Thought I was losing my mind..

> That sound like a reusable parser, which is not novel or unusual.

I never claimed it was novel. However, "reusable parser" hardly does Roslyn justice. First of all, it is not just the "parser" - the lexer, scanner, parser, type analysis, inference engine, code generation etc. are all open for hooking by a hosting process.

Furthermore the "compiler" supports incremental compilation, where the host process can subscribe to events about incremental changes for each step of the compilation process (lexer, parser, type analysis, codegen, etc). This allows virtual "no compile" scenarios where the output appears to always be in sync with the source - because only the parts actually affected by each code change will be recompiled. The typical compiler - even with a reusable parser - has a much more coarse-grained dependency graph than what is available in Roslyn.

> Most IDEs nowadays use this approach for syntax highlighting and associated warnings.

Indeed - but to support those features, IDEs often implement more than half of their own compiler (excluding the codegen), because they need more sophisticated error recovery, more information about types/type analysis and (much) more sophisticated pattern recognition - to identify possible code smells or refactoring candidates - than what a reusable parser typically offers.

In Roslyn you can hook into the compiler/parser and directly use the "compiler" to perform even global refactorings - simply by manipulating the compiler structures.

> That's not the same as releasing the blueprints of a sophisticated virtual machine implementation, it's just documenting the bytecode.

Releasing the specification ensures that the *specification* is known. Having it placed under ISO means that Microsoft cannot just change it on a whim. It goes to trust. Remember the debacle about how Office documents were defined by whatever MS Office did? How many other execution systems have been standardized in a similar way? The Java VM spec is still owned by Oracle and not by any vendor-independent standards organization.

Incidentally, while the JIT probably has some optimization tricks up its sleeve, most of the important optimizations happen at the semantic level within the compiler that is now open source. This will benefit Mono a lot. Once they switch to Roslyn it should be interesting to see what happens over at the computer language shootout.

Uffe Seerup

CLI is an open specification

Microsoft's *implementation* is not open source (yet), but the specification is an open ISO specification.


Microsoft grants a patent license for any Microsoft patent necessary to implement the spec.

Mono has implemented a CLI according to the specification.

The specification has also been placed under the Community Promise


Uffe Seerup

Re: Thought I was losing my mind..

The Roslyn "compiler" is so much more than a mere producer of bytecode. The big thing of Roslyn is how it opens up its internal services and allows tool makers (IDE authors, refactoring authors, reporting/analytics, metaprogramming, code generators, etc etc) to use the parts of the compiler that applies to their problem.

For instance, Microsoft has demonstrated how Roslyn can, in a few lines of code, be used to implement style warnings and automate style fixes (such as missing curly braces) in a way that *any* IDE or tool based on Roslyn can pick up and use right away.

BTW - the "core" .NET infrastructure (the CLR and the core libraries) are already open specifications which anyone can implement the way Mono did. The specifications has been adopted as ISO standards and comes with patent grants (you are automatically granted license to any Microsoft patent necessary to implement the CLR and core libraries). Furthermore, the CLR and core libraries specification has been placed under the Community Promise to address concerns that Microsoft could just sue anyway and win not by being right but by having more lawyers.

The Community Promise carries (at least in the US) legal estoppel - meaning that it would be trivial to have a case where Microsoft asserts its patents against an implementation thrown out of court outright, possibly with sanctions against the lawyers who brought such a frivolous case. Meaning that Microsoft would have a hard time finding such lawyers.


Linux distros fix kernel terminal root-hole bug

Uffe Seerup

Re: In the Microsoft World

Got a link?



Powershell terminal sucks. Is there a better choice?

Uffe Seerup


I am sorry mam, but your pedantic tone is not bearable to me. Most of your post is dedicated to teaching me and telling me how I was wrong and you're right all the way.

Please do not take this the wrong way, but that seems pretty ad-hominem to me.

You have been arguing the relative merits of PowerShell and Bash from a point where you even admit that you do not know PowerShell. In that setting you should expect that opponents will try to teach you about it.

For me, I thank both you and H4rmony for the debate. :-).

@H4rmony: May I save the link with your take on ACLs for future use? It is precisely my sentiment, but I could never have explained it as elegantly as that!

All the best

Uffe Seerup

Re: @h4rmony

That's lovely. I had no idea those existed.

Then try out-gridview and be blown away :-)

Uffe Seerup


First, MS Windows didn't have file permissions at all, remember that time, thus fat32 and earlier filesystems still don't have proper file permissions.

Sorry, that is BS. Windows NT was a clean-room implementation which had nothing to do with the Win9x line, except that it exposed some of the same APIs.

All current Windows versions are based on the original Windows NT - which had proper ACLs (and network-ready SIDs instead of 16-bit numeric uids/gids) from the start.

The *nix systems had it from day one

Wrong. The *nix systems had me-us-everyone rwx permissions from day one. No ACLs.

*nix uids and gids were designed for a single authority, in an era when nobody foresaw that you would one day need to network the credentials (multiple authorities); hence you now need to map uids and resort to other quirks. Windows SIDs were designed to support multiple authorities and networked credentials from the start.

The *nix model only had standard users and root, and the privileges were fixed: some things a user could do, everything else required root. Permissions were (and still are) extremely coarse-grained. And *nix permissions *still* only apply to file system objects. The need for more securable resources has led to designs where e.g. processes are "mapped" to file system objects.

All modern *nixes now have ACLs. But you typically have to install/enable them explicitly, and much of the tooling is still not geared towards ACLs. You yourself have used examples here that assume a single owner and "octal permission flags", for instance.

The lack of tool integration, and the fact that ACLs compete with the traditional file system permissions, makes them 2nd class citizens. They are frequently ignored by tool and script authors. "find" is a very good example.

In computer science (especially language design) we use the term "first class" for kinds of objects that "fit in" analogous to similar or corresponding kinds of objects. When a kind of object is limited in its use cases, we consider it a 2nd class citizen.

You have many *nix tools (and GUI applications) that still assume the traditional *nix rwx me-us-world.

Uffe Seerup

Re: @h4rmony

How so? It only specializes in finding files and printing information on that.

That's 2 things already.

It is called "find" because it should find files. The UNIX principle would be to leave the "printing" to another tool. The find part overlaps with several other tools. The "printing" part is an entire printing language - private to the find tool and not in common with any other tool.

You write that "it's just the C printf". No it is not. C's printf does not know anything about owners, file sizes, rwx permissions, block sized. C's printf understands how to format strings and numbers passed to it as parameters. C's printf does not understand %u. C's printf does not go out and obtain information by itself. It is merely the name and excessive use of the % character that is common.

But even if you want us to ignore that, find *also* starts and executes new processes (-exec). Cookbooks frequently use that capability. The find command can even prompt the user for an ok to continue (-ok).

That's 3 (to be generous).

The date/time formatting is another area where find excels. But why does a file finding utility have to know about formatting dates and times?

That's 4.

Then there's the expression language. One can argue whether it is done well, but a very, very large part of find is dedicated to this expression language and to optimizations/tweaks for its evaluation. I'm not referring to the abundance of tests available; I am referring to the parentheses and logical operators. Find has its own format for these, incompatible with other tools that also do boolean expressions.

That's 5.

Then there are strange options such as -quit.

The find command mixes at least 5 independent "things", not one thing. There is no reason for this other than the limitations of constant text formatting and parsing, and the limited bandwidth and expressiveness of a line-oriented text format. Many of these "things" overlap with functionality of the shell itself (like the "tests") and with other tools. "ls" also excels at printing - only it's a different way to specify the format, and the format is married to the tool.

I am sorry if this sounds like I think find is a bad tool. It is not - it is actually an awesome tool. But it is undeniably complex. And the skills you need to master it do not help you at all when you move on to ls, ps, du or other popular tools.
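To make the point concrete, here is a minimal sketch (GNU find assumed - -printf is a GNU extension; the paths are made up for the demonstration) of a single find invocation doing three jobs at once: filtering with its private expression language, printing with its private formatting language, and executing a program per match:

```shell
# Demo files: one above and one below the 1 KiB threshold
mkdir -p /tmp/find-demo
head -c 2048 /dev/zero > /tmp/find-demo/big.txt
head -c 1    /dev/zero > /tmp/find-demo/small.txt

# One invocation, three responsibilities:
#   expression language (-type, -size), formatting language (-printf),
#   and process execution (-exec)
find /tmp/find-demo -type f -size +1k \
     -printf '%f is %s bytes\n' \
     -exec true {} \;
```

Only big.txt survives the size test, so this prints a single formatted line and spawns one process - filtering, formatting and execution all inside one tool.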

While PS is a jack of all trades, right?

It has been pointed out to you that PowerShell has taken a responsibility such as finding/locating items and separated it from responsibilities such as managing processes, formatting output, performing date/time arithmetic and evaluating predicates. Those other responsibilities have been generalized and compose with all the others.

Thus, PS tools tend to be smaller and have (far) fewer options. And yes, you then tend to use more commands on a pipeline, because you pass the objects on to the tool that excels at its (single) task.

Consider these commands:

Get-ChildItem | Where Length -gt 2MB | Format-Table Name,Length

Get-ChildItem | Where Length -gt 2MB | Format-Wide Name -Col 8

Get-ChildItem | Where Length -gt 2MB | Format-List FullName,Length,Attributes

Get-ChildItem | Where Length -gt 2MB | ConvertTo-Html

Get-ChildItem | Where Length -gt 2MB | ConvertTo-Json

Get-ChildItem | Where Length -gt 2MB | ConvertTo-Csv

Get-ChildItem | Where Length -gt 2MB | Select FullName,Length,Attributes,LastWriteTime | Out-GridView

Notice how filtering and formatting/printing are left to generalized cmdlets which have filtering/formatting as their single responsibility.

Uffe Seerup

Re: @h4rmony

Is any of that possible with PS and dir command?

Yes, of course it is.

(I still don't know the secret behind formatting code in these forums :-( )


ls *, */*, */*/* -pv f | Get-Acl | ? {
    $f.Name -match '.*img[1-4]{1,3}\..*' -and
    $f.LastAccessTime.AddDays(10) -ge (Get-Date) -and
    $f.Length -gt 1MB -and $f.Length -lt 2MB -and
    $_.Owner -eq 'john123' } |
  ft {$f.FullName}, {$f.Length}, AccessToString



"ls" lists childitems. Here it lists 3 levels.

"Get-Acl" retrieves ACLs for the files on the pipeline.

"?" (an alias for "Where-Object") filters using a boolean expression.

"ft" (an alias for Format-Table) formats the select columns.

Keep in mind that the OSes are different. There are no "octal permissions" on Windows. Windows has true first-class ACLs. (ACLs were added later on *nix, and are still not fully integrated - e.g. there is no "octal" representation of them.)

The only "trick" used here is the pipeline variable (the pv parameter of the ls command). It saves the file in a variable which can then be used in the filter expression.

You will no doubt note how this is "longer" than the specialized find command. However, you should also note how the filter can evaluate any expression. H4rmony alluded to this previously: once you need to filter on anything "find" has not been coded for, you are out of luck. PowerShell can draw on anything that can be evaluated as an expression.

Uffe Seerup

Re: @Uffe, on find

I don't like the -exec function myself and prefer xargs

Interesting. So how would you write the example using xargs?

-exec is just that alias of that pipe you were mentioning when talking about PS options

I think I am missing something here (honestly). Could you clarify, please?

Uffe Seerup

Re: bash

Because text is more universal than any (other) object

Really? How do you describe an object graph (like a tree or a network) using text?

Uffe Seerup

PASH @TheVogon

Pash hasn't been updated since 2008, when it stalled. It is a defunct project.

PowerShell would not be a good fit for Unix/Linux anyway. Unix/Linux does not expose management interfaces through an object model that PowerShell could use.

For Linux/Unix, Bash is still the way to go.

Uffe Seerup

Re: AWK, which you don't seem to know

(I have not figured out of to use pre and code tags, any help?)

find . -printf "%u\n" | sort -u

(you might use the 2>/dev/null to rid of the permission problems if any)

Ah. Yes, otherwise any 'permission denied' errors will clutter the output. So the command really should be:

<code>find . -printf "%u\n" 2>/dev/null | sort -u</code>

(find all files from the current location and below; for each file output the owner name followed by a newline; discard errors/warnings; sort all the "lines" and return only the unique ones)

Compare that to

<code>ls -recurse | get-acl | select -uniq owner</code>

(find all objects from current location and below; for the objects get the access-control-lists; for each access-control-list get the unique owners)

h4rmony's observation was that in bash, fitting commands together frequently requires text munging (formatting and immediately reparsing), whereas in PowerShell the properties/objects "fit" together.

BTW, do dir or ls operator in PS have as many features as find?

Now that is a really good question. The answer, of course, is that in PowerShell you only have Get-ChildItem (aliased to ls). It does not have as many options as find. However, that is not for the reason you think: find is a monster command which violates the "unix'y principle" (do one thing and do it well). find:


* traverses files and directories (as does ls - but with different capabilities)

* has multiple options to control the output format (why?)

* has multiple options to perform "actions", among them the ability to execute other programs (!), including the ability to prompt the user (!)

* has an entire expression language which is different from the expressions used in the shell itself and from those used in other utilities.

Furthermore, find has many options for dealing with filenames that may contain spaces. It has these options precisely because text parsing is brittle and error-prone. In the past this has led not only to unstable scripts but to severe security problems as well!

No other tool can reuse the expression language. Awk has its own "language", as do grep, ls and even the shell itself ("tests").
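The whitespace brittleness mentioned above is easy to reproduce. A hedged sketch (the file name is made up): naive word-splitting of find's output miscounts a name containing a space, while the NUL-delimited -print0/-0 pair - options that only exist because of this problem - gets it right:

```shell
mkdir -p /tmp/space-demo
touch '/tmp/space-demo/a b.txt'   # one file, with a space in its name

# Word splitting sees two "files" where there is one:
for f in $(find /tmp/space-demo -type f); do echo "$f"; done | wc -l

# NUL-delimited output keeps the name intact (one line):
find /tmp/space-demo -type f -print0 | xargs -0 -n1 | wc -l
```

The first count is 2, the second is 1 - exactly the kind of silent corruption that has bitten scripts parsing find's text output.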

Now, compare that to PowerShell:

* The Get-ChildItem cmdlet traverses files and directories (and outputs objects). It works on all location types, i.e. file system objects but also SQL locations, IIS locations, the certificate store, the registry, etc.

* The Get-ACL cmdlet retrieves ACL objects for items that are passed to it.

* The Sort-Object (aliased to "sort") sorts objects of any type by using property names or expressions.

* Output formatting is handled by separate cmdlets, avoiding the need for each and every command to define output formatting options. Do only one thing and do it well.

PowerShell expressions are "script blocks" defined by the shell and thus common to *all* commands, whether packaged with PowerShell or in 3rd-party modules. You use the same expression language, and each tool is relieved from having to implement parsing, compilation, etc.

Uffe Seerup

Re: AWK, which you don't seem to know

why using find? Why to pipe it to uniq if there is a way to handle it with -u?, reminds me of peculiar pipes like "cat file | grep..." or "cat file |sed ....")

I think the point was that he wanted the entire tree processed (recursively). ls -al doesn't do that.

or even

ls -al | awk '$1 ~ /x/{print $3}'| sort -u

This does not process the tree; only the current directory. Even so, the equivalent PowerShell command would be:

ls | get-acl | select -uniq owner

Now, which is more *readable*?

AWK is a serious language, please don't diminish it. It's not "a part of bash" or any other shell.

awk is quite awesome. It is a functional text-processing language with nice features and beautiful consistency. I don't think the point was to *diminish* it. Rather, the point was that the power awk brings simply is not *needed* to compose advanced PowerShell commands. Such is the power of using objects instead of text.

If you didn't get it, it doesn't mean it is no good.

ahem. I am *really* tempted to use this opportunity for a snide remark.

2) OOP makes syntax less readable, more programming-like

Look at the above examples. Which is more readable? Honestly?

There is a reason that *nix systems never though an OOP shell, other than an experiment, like python shell

Well, if you believe all the grapes you cannot reach are sour, go ahead and believe that is why *nix systems never thought of an OOP shell. I choose to believe it is because *nix systems never had a system-wide object model (and still haven't). If you were to process objects in *nix, what would they be? Java objects? Without an interoperable object model, the value of an object-oriented shell would be severely limited. Ksh actually has an object model - but it is internal to ksh itself and not really a system-wide object model.

Windows has COM and .NET - both of which PowerShell uses as object models in its type system.

Another important difference is that on *nix, configuration is generally kept in line-oriented text files, which makes tools like sed, awk and grep essential for reading and changing configuration.
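A sketch of that text-oriented configuration style, using a made-up config file (the path and keys are hypothetical, merely sshd_config-like; GNU sed's -i flag is assumed):

```shell
conf=/tmp/demo.conf
printf 'Port 22\nPermitRootLogin yes\n' > "$conf"

grep '^Port' "$conf"                       # read a setting
sed -i 's/^Port .*/Port 2222/' "$conf"     # change it in place
grep '^Port' "$conf"                       # confirm the change
```

Everything - reading, matching, rewriting - is line-oriented text processing, which is exactly what the classic *nix toolchain is optimized for.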

Windows has an abstracted configuration model, where the actual configuration storage is abstracted away behind a configuration API. To configure something you invoke API functions rather than change text files. Those APIs are almost always either COM or .NET based (or WMI).

Hence, it makes sense that *on Windows* you use an object oriented shell and on *nix'es you use a text oriented shell. PowerShell would not be a good fit for *nix. Bash is not a good fit for Windows.

The pretext of this was that the OP wanted to learn PowerShell *for Windows*. Deal with it.

Uffe Seerup

Re: Questionable reason to use powershell over bash. Part II

If it's not an ad bullshit, and you do mean parallelizing tasks, than it's awesome, however, it always has limitations, plus with parallel and some ad-hoc tools you can parallelize tasks inside your scripts with Bash as well.

Interesting. So Bash parallel execution allows the parallel parts of the script to run in the same environment, with read/write access to the same shell variables and synchronization mechanisms?

Again, depends on the goal you're trying to accomplish, tramp-mode (in GNU Emacs), GNU Parallel and/or GNU Screen come to my mind.

Hmm. In PowerShell you can *script* multiple remote sessions. For example, you can start by opening sessions to 50 hosts. Each host will be a "session". You can now execute commands in each individual session while the state within each session is carried over from command to command, and the result of each command (success code) is marshalled back to the controlling console. This means the controlling script can exert control over each individual command executed on the remote hosts. This is not just piping a script over to be executed and having the final result reported back - this is fine-grained control over each command executed remotely, as part of a script.

And yes - if you execute commands in parallel using the Invoke-Command cmdlet (alias icm) you can control how many parallel executions you want.

9) PowerShell web access.

What the hell is that, what for?

For remote control using a well-known and firewall-friendly protocol (https).

Sounds very fishy and insecure to me.

Yeah well. Any time you open up your infrastructure to the outside world, it incurs a certain amount of risk. See it as an alternative to OpenSSH, only you can integrate with web-based authentication mechanisms and reduce the number of different authentication mechanisms, assuming that you already have a web security regime in place. So, do you open an SSH port, or simply enable PWA on an existing administrative or intra-corporate website? PWA defaults to "no access" and must be configured before it will use PowerShell remoting to the internal hosts. The commands/modules available and the access levels allowed can be configured as well.

10) Superior security features, e.g. script signing, memory encryption..

"Memory encryption" sounds redundant, did you turn off ASLR?

Memory encryption in PowerShell is used to ensure that sensitive data - such as passwords and private keys - is stored encrypted in memory. This has nothing to do with ASLR. ASLR is *not* encryption (where did you get that idea?).

Heartbleed demonstrated what can happen to passwords and private keys stored unencrypted in memory. When you read a password/private key through the Read-Host -AsSecureString cmdlet or similar, it is read and encrypted character by character and is *never* stored in cleartext in memory. The encryption is based on the Data Protection API (DPAPI), ensuring that not even an administrator using a tool to scan the memory will be able to decrypt the password. There are lots of ways passwords in memory can bleed out; Heartbleed demonstrated one (nasty) way. Other channels are backups, swap files, memory dumps, etc.

What precludes you from signing or encrypting a Bash script? You can also automate it by writing an easy bash function to check for a signature and execute a script, you can package every script or a number of them inside the Debian/RedHat or other package container with signatures to install it system-wide.

In PowerShell this is built-in security, and the default level is quite restrictive. It can also be controlled through group policies, making it trivial to enforce policies such as script signing across an entire organization. Sure, you can build *something* with bash, but how do you *enforce* it?

How do you sign a bash script, btw?

Correct me if I am wrong, but I had the impression that Debian/RedHat package signatures are only checked when *installing* the package. PowerShell script signing is used to ensure that scripts executing on a computer have been approved by some authority and have not been tampered with.

Furthermore, the script execution policies can block scripts obtained through browsers/mail clients/instant messengers etc., or e.g. scripts copied from a non-intranet zone. Files obtained through such channels are "tainted" with their origin, and PowerShell can be set to disallow execution of such scripts. This is a level of protection against phishing.

11) Do that in bash?

Do what again, an explanation is required, if you don't mean quantum theory.

I think he was referring to using a browser to connect to a command prompt and typing commands/running scripts with intellisense support - right in the browser.

12) Strongly typed stripting, extensive data types,

Why not scripting in C then?

Because C is not script-friendly. C is not type-safe. And C's scope rules interfere with REPLs.

"e.g first class xml support and regex support right in the shell."

What kind of regex is there, BTW? Is it as good and efficient as in grep, sed, awk or perl?

Yes. It is even better. While it is of the "perl" variant, it even allows some nice tricks such as balancing groups (matching only properly balanced open/close parentheses). PowerShell/.NET regexes are *very* efficient; the .NET regex engine can even compile regexes all the way to machine code.

No, it won't. Someone wrote that by default PS tries to read the data in memory, unless you use some non obvious and ugly syntax to do ti the right way.

I assume you are referring to how PowerShell processes pipelines of objects? PowerShell actually *does* process objects in a progressive manner - "pulling" each result object through the pipeline one at a time. So if that was what you were referring to, you are wrong (although *some* cmdlets, e.g. sort, *will* collect all objects before starting to produce output).

14) Instrumentation, extensive tracing, transcript and *source level* debugging of scripts.

Of course, it needs a lot of debugging thanks to its complexity.

If you develop scripts for any type of automation, you will appreciate debugging capabilities. This sounds rather like a case of sour grapes.

The main thing is though, how PS compares with Bash and other shells in usability, ease and power as shell, an envelop between utilities and processes? That is the real question.

To each their own. Bash is robust and well established. But it is not without quirks. To someone well versed in Bash, the concepts of PowerShell can be daunting. You will be challenged on "wisdom" you simply accepted as fact before. In my experience, the hardest part for "bashers" to get about PowerShell is that it was made for objects and APIs instead of text files.


Powershell Terminals

Uffe Seerup

Re: Powershell Terminals

Use the Integrated Scripting Environment (ISE). Start it from a shortcut or by simply typing "ise" in the command window. It is always available, even on a gui-less server.

The ISE has auto-suggestions, sane cut-and-paste, multiple session tabs, remote tabs, a command builder window, snippets, source-level debugging, etc.

Alternatively use another console program, such as Console2.


MS brandishes 'Katana' HTTP/2.0 server

Uffe Seerup

Re: .Net of course

Mono. Mono. Mono!

It is not intended to be a production server. It makes perfect sense to implement a PoC in a language with high productivity (using e.g. LINQ and - probably more relevant in this case - async/await asynchronous methods) combined with good performance.

Async/await makes creating asynchronous methods (much) easier while still having the methods resemble the logical flow of the application.

Uffe Seerup

Re: C# runs fully compiled

But it still runs within the .NET VM and is subject to its checks and workings.

There is no such thing as a .NET VM.

There is the common language runtime (CLR) which (as the name hints) is more of a library and initialization code. There is no "virtual machine" that interprets byte codes.

You can host the CLR in your own process (.NET is actually a COM "server"), although the compilation is performed by a JIT compilation service.

When you ask the CLR to run a .NET executable, it loads the assembly, uses reflection to find the entry method and executes that method. At that point the CLR compiles the method from the MSIL code of the assembly (or takes it from the cache if it has already been compiled) and invokes the compiled code. If the method invokes other methods, each callee may initially be a compilation stub which compiles the target method and replaces the reference to the stub with a pointer to the compiled method. Subsequent invocations thus directly invoke the already compiled method.

.NET code (at least on the Windows platforms) executes fully compiled.

Sure, part of the code is turned into native opcodes, still it is different from a fully compiled native application which runs fully on the processor directly.

No, all of the code is turned into native instructions. All of it. It may not happen until just before the method is executed, but all of the code that is eventually executed is compiled code.

The difference from fully compiled native code is that the compiled code is obtained dynamically from a compiler (or cache) on the target platform, i.e. applications are distributed as MSIL and depend on the MSIL compiler being available on the target computer.

There is even a tool (ngen.exe) that will pre-populate the cache with *all* of your application code compiled into native code, alleviating the need for JIT compilation.

You may want to read this: http://msdn.microsoft.com/en-us/library/9x0wh2z3(v=vs.90).aspx

Uffe Seerup

C# runs fully compiled

That MSIL somehow runs interpreted is a common misunderstanding. When a C# program executes it executes fully compiled.

C# is compiled *ahead* of execution, in principle on a method-by-method basis (in reality multiple classes are compiled at once). When a method has been executed *once*, subsequent invocations simply run the compiled method.

MSIL was never intended to be an interpreted language. From the start it was designed to be compiled to its final representation on the target architecture. Because it is type-safe and statically typed, it also does not need the typical runtime type checks that dynamic languages suffer from.


What's the most secure desktop operating system?

Uffe Seerup

sudo is not a security model

sudo is a kludge, developed because the underlying model lacks proper delegation of privileges. It is not part of a "model" - indeed the sudoers file exists in parallel with, and in competition with, the real (but inadequate) file system permissions.

sudo breaks one of the most important security principles: the principle of least privilege. sudo is a SUID root utility and will run *as root* with *unlimited* access.

Some Linux distros now use Linux Capabilities (although these have not been standardized). Had capabilities existed when Unix was created, we never would have had the abomination that is sudo.

Many vulnerabilities in utilities that must be started with sudo have led to system compromises *because* of the violation of least privilege. Sendmail allows you to send a mail, but it requires you to run it as root. So you run it with sudo, allowing users to sudo sendmail. But a simple integer underflow (like this one: http://www.securiteam.com/exploits/6F00R006AQ.html) can now lead to total system compromise!

The security problems with sudo and other SUID root utilities are well-known so please do not try to pass it off as a superior "model". It was always and remains a kludge that is used to drill holes in a too simplistic, file-system oriented security model of the 1970ies.

How is a security auditor supposed to audit the capabilities of users? Once a user is allowed to execute binaries with root privileges through sudo or other SUID roots, the security auditor has no way of knowing what can be done through those utilities, short of overseeing the process by which they were compiled and distributed. The operating system cannot guarantee that the file system privileges restrict the users, as they can be bypassed by sudo/SUIDs. Compare that to operating systems with security models where the permissions are actually guaranteed to restrict the account.

SELinux has a security model. Sudo is not a security model; it is a drill that destroys security models.


The good and the bad in Hyper-V's PowerShell

Uffe Seerup

Re: @Uffe Seerup

That is justthe default behavior!!! BTW, It has nothing to do with the setuid root of sudo!

It has EVERYTHING to do with the setuid root of sudo! That is the very way setuid works! Sudo may be instructed to *drop* to another user (the -u option), but it starts as root (because the owner is root), and that is the default.

You may want to read this instructive article:


At the end you will find this: "The general rule of thumb for setuid and setgid should always be, 'Don't Do It.' It is only in rare cases that it is a good idea to use either of these file permissions, especially when many programs might have surprising capabilities that, combined with setuid or setgid permissions, could result in shockingly bad security."


Let me repeat: the operating system sees that the executable has the setuid flag. It then launches the process with the file's owner as the effective user. If the owner is root, the process is running as root. A root process can drop to another user (seteuid: http://man7.org/linux/man-pages/man2/setegid.2.html). Sudo will do that prior to executing the specified tool if the -u option is specified.
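You can see the mechanism for yourself with a small sketch (the file name is made up): the setuid bit is just a mode bit that any owner can set on their own file; the kernel grants root privileges only when the owner happens to be root, as it does for /usr/bin/sudo:

```shell
demo=/tmp/setuid-demo.sh
printf '#!/bin/sh\nid -u\n' > "$demo"
chmod 4755 "$demo"            # the leading 4 is the setuid bit

ls -l "$demo" | cut -c1-10    # shows -rwsr-xr-x: 's' replaces the owner 'x'
```

Since we - not root - own this file, executing it gains nothing. But the exact same bit on a root-owned binary such as sudo makes every invocation start with root as the effective user.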

Don't believe me? You may believe the Linux man page for the sudo command: "Because of this, care must be taken when giving users access to commands via sudo to verify that the command does not inadvertently give the user an effective root shell."

Okay, than tell us how to effectively exploit this "vulnerability"

See above sudo man page security note.

or read this caveat from the same sudo man page:

"There is no easy way to prevent a user from gaining a root shell if that user has access to commands allowing shell escapes.

If users have sudo ALL

there is nothing to prevent them from creating their own program that gives them a root shell regardless of any '!' elements in the user specification.

Running shell scripts via sudo can expose the same kernel bugs that make setuid shell scripts unsafe on some operating systems (if your OS supports the /dev/fd/ directory, setuid shell scripts are generally safe)."

There have been many, many vulnerabilities in setuid programs. In general, a vulnerability in a setuid-root process means assured system compromise because the attacker will be running as root. Ping is a setuid-root utility. Here is an example of a vulnerability: http://www.halfdog.net/Security/2011/Ping6BufferOverflow/

A user who has execute access to ping with this vuln has root access and can compromise a system.

> If on your system you have two regular users and know the passwords for each one you can jump from one user to the other via "su anotheruser". Notice, you never become root here.

It may look that way, but technically you are incorrect. su works by running as root and *then* dropping its effective uid to the other user. Only root can do that, because changing the effective uid is a privileged operation. In this case the time spent as root is very short, so I can see how you may be confused.

Another setuid/SUID vulnerability: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-3485

However, this is beside the point. Sudo exists because you need it. There is no delegation of privileges in standard *nix or in POSIX. You cannot delegate the right to change the system time; only root can do that. So to change the system time you run "sudo date -s ...". Sudo is setuid root and starts as root. By default it runs the tool you specify as root as well, so the "date -s ..." succeeds and sets the time. Had you specified "sudo -u someuser date -s ..." (drop to that user), you would not have succeeded in setting the time. Why? Because only root can change the system time.

Windows does not need an equivalent to sudo (and no, it is not "runas"), because 1) it is a liability and 2) privileges in Windows can be delegated.

Uffe Seerup

Re: @Uffe Seerup

> You keep repeating this, but this is wrong! There is no way you ever get uid=0 automatically by just running sudo.

(sigh). Try typing "sudo whoami" or "sudo id". So, what did it answer? On my system it says "root"!

So who am I when I run a command through sudo? Answer: root.

More proof: run "sudo bash" and in the new shell do a "ps aux | grep bash". What processes are listed? Which user do they indicate they run as? On my system both the "sudo bash" process and one "bash" process run as root.

When will you get this? sudo is a setuid/SUID tool. A setuid tool runs with the owner of the file as the effective user of the process. You may restrict *who* can call sudo in the sudoers file, but you *cannot* change the fact that sudo starts as root.

> By some reason you think that the "setuid root bit" is equated to "becoming root"

Yes! yes! If you have execute permission to the file then you are becoming root when you execute it. Not "equated" to becoming root. You become root. Plain and simple. This is basic Unix/Linux stuff and I shouldn't have to explain this to a Linux advocate. Something is wrong with this picture...
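The underlying distinction here is the real uid versus the effective uid, and you can inspect both without root (a sketch: in a plain shell the two match, while inside a "sudo bash" both would report 0):

```shell
# Real uid: the user who logged in.
# Effective uid: the identity whose privileges the process exercises.
# A setuid binary is precisely a program where these differ at start.
echo "real uid:      $(id -ru)"
echo "effective uid: $(id -u)"
```

This is also why "sudo whoami" answers root: by default sudo sets both the real and the effective uid of the child to the target user, which is root unless -u says otherwise.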

Q: Tell me, how do I generate a report of the rights/privileges of a certain user?

A: That depends on how much info you want. I'd suggest running the id command:

Proving my point. The id tool does *not* generate a report of what a user can do, because whatever id reports can be bypassed through sudo and setuid tools. The owner of a resource has no way of determining who has access to that resource when setuid-root tools may simply bypass the permissions. To correctly assess which users could access your resource you would have to figure out which setuid tools exist on the system, who has execute rights on those setuid-root tools, which tools are allowed to execute through sudo and which users have access to those tools through sudo, and finally (and crucially!) what each of these tools can do. Some tools, like "find", can actually execute commands(!), and other tools (editors) allow you to start new shells/processes. Such tools are very dangerous when allowed through sudo.
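So an honest answer to the question would involve an audit along these lines (a sketch only; paths, mount layout and sudo availability vary from system to system):

```shell
# 1. Every setuid or setgid executable on the root filesystem.
#    Permission errors are silenced, and "|| true" keeps the script
#    going when non-root users cannot read every directory.
find / -xdev \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null || true

# 2. What the current user may run through sudo, if sudo is present.
sudo -l 2>/dev/null || echo "cannot query sudo policy from here"
```

And even then you would still need to know, for every tool listed, whether it can be coaxed into running arbitrary commands.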