* Posts by Uffe Seerup

57 posts • joined 10 Dec 2007

Entity Framework goes 'code first' as Microsoft pulls visual design tool

Uffe Seerup

Re: I ran into serious problems with EF

"LINQ supports data types that are queries, using a .NET array type where the variable itself uses lazy evaluation"

No, that is not how LINQ composes queries. LINQ supports IQueryable&lt;T&gt;. IQueryable can be composed with other queries, additional clauses etc. In general, when you compose using IQueryable, the result will also be (should also be) an IQueryable.

"it doesn't retrieve or compute the result until your program actually accesses the value"

(Guessing here) It sounds like you are talking about a *property* with a *getter* which is then evaluated on each reference. If it does not return an IQueryable - it cannot compose lazily with other queries. If it is indeed an "array" type as you describe - that is definitely not the way to do it.

"For example, you can point your GUI to one of these, have it show 10 records from a query, if the query has 50 results it will only retrieve 1-10 until you scroll down. Well, in theory -- EF doesn't support these, it loads the whole enchilada (all 50 records) into RAM and then "lazily" loads values out of that in-memory copy as you use it (it doesn't generate any optimized SQL at all.)"

EF does indeed support paged queries. Use Skip() and Take(): Skip to skip n results, Take to retrieve the next m results. I suspect that you may be using some GUI element that does not use this EF feature.
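To illustrate the paging pattern (a sketch in PowerShell against plain in-memory objects - the range is an invented stand-in for a query result; in EF the same Skip()/Take() calls on an IQueryable are translated into paged SQL by the provider):

$results = 1..50                                       # stands in for the 50-row result
$page2 = $results | Select-Object -Skip 10 -First 10   # only rows 11-20 flow through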


Windows 10 feedback: 'Microsoft, please do a deal with Google to use its browser'

Uffe Seerup

Re: Stop. Just Stop.

> Fully agree. And your justifiable rant reminds me of the question I've been meaning to put to anyone more experienced with the windoze environment than I am.

You are looking for Windows AppLocker: http://technet.microsoft.com/en-us/library/dd759117.aspx

(part of Windows since Windows 7 - before that there were software restriction policies).

With AppLocker you can enforce a policy where executables are only allowed to launch from a few protected folders, such as Program Files and Program Files (x86). Or you can set a policy where only select publishers are whitelisted, e.g. Microsoft, Adobe etc.
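As a sketch (assuming the AppLocker PowerShell module that ships with Windows 7/Server 2008 R2 and later, run from an elevated prompt), a publisher/path policy can be generated from an existing installation and merged into the effective policy:

Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse |
    New-AppLockerPolicy -RuleType Publisher, Path -User Everyone |
    Set-AppLockerPolicy -Merge   # merge the generated rules into the local policy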


Vanished blog posts? Enterprise gaps? Welcome to Windows 10

Uffe Seerup

> What I don't really like is the requirement of having an online Microsoft account

There is no such requirement. In fact, I tried to set it up on a Surface Pro 3 (mistake) - but because it doesn't come with drivers for the SP3 (no Internet connection during install) I had *only* the local account.

But even if you *do* have a network, the Microsoft account is still optional.

Uffe Seerup

Shutdown is by default a mix between shutdown and hibernate

It is the *kernel* that hibernates. All user programs and user sessions are terminated.

When you shut down, Windows sends shutdown signals to all user processes in all user sessions. The kernel and services in session 0 (including device drivers) are hibernated, i.e. their state is written to disk.

Upon start, Windows checks that the hardware signature of the machine (ntdetect) is still the same, and if so it reads the state from disk rather than going through a full boot process.

If the drivers are up to it, you should not see any effect from this apart from the (much) faster booting.
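For what it's worth, shutdown.exe lets you pick either behaviour explicitly (the /hybrid switch exists on Windows 8 and later, if memory serves):

shutdown /s /hybrid /t 0   # hybrid shutdown: kernel/session 0 state is hibernated
shutdown /s /t 0           # classic full shutdown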


Desktop, schmesktop: Microsoft reveals next WINDOWS SERVER

Uffe Seerup

Re: hate for powershell?

Trevor, you claim that PowerShell "is generally despised as a means of day to day administration".

That's a very unspecific and unverifiable claim. When I come across an admin who uses arcane tools or self-automates through the GUI by manually following "manuals", I always demonstrate to them how they can accomplish the tasks using PowerShell. And the response is generally overwhelmingly positive. So I have the exact opposite experience from you; although like yours it is still anecdotal.

PowerShell *is* the primary tool. Through the module feature it has quickly grown to become the hub of CLI administration on Windows boxes. I dare you to find an area that cannot be administered using PowerShell.

> But for standardised deployments, policy pushes, standardised updates or even bulk migrations any of the proper automation/orchestration tools are just flat out better.

The right tool for the job. On Windows there are still group policies, which remain the most important vehicle for pushing policies. PowerShell is not intended to replace GPOs. PowerShell *can* fill the gaps where group policies cannot reach.

> And for those situation where you're fighting fires, an intuitive GUI is way better, especially for those instances where you're fighting fires on an application you haven't touched in months, or even since it was originally installed.

> This is - to put it mildly - a different role than is served by the CLI in Linux.

If you are "fighting fires" in an application you haven't touched in months and find the GUI better for that, how come you think the Linux CLI is better for that? If your problem is unknown/forgotten instrumentation, *discovery* is what you are after. A GUI can certainly help you there, but I frankly cannot see how a *Linux CLI* would offer more discovery.

In fact, PowerShell offers *much more* in the discovery department than any Linux CLI or scripting language. Granted, you have to be familiar with a few central concepts of PowerShell: modules, the naming convention, Get-Member, Get-Help, Get-Command. However, these are very basic concepts that you are not likely to forget even if you just use PowerShell occasionally.
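For example, a complete discovery session using nothing but those built-ins:

Get-Command -Noun Service          # which commands deal with services at all?
Get-Service | Get-Member           # what properties/methods do the objects carry?
Get-Help Start-Service -Examples   # worked examples, straight from the shell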

> The CLI in Linux is much more mature, with a multitude of scripting languages grown around it, and the majority of application configuration done as flat text files, not as XML. (Systemd can die in a fire.)

More mature? How about *old*? Case in point: the arcane bash syntax with its convoluted parser that has placed hundreds of thousands of systems at risk.

Yes, the Unix tradition is to use plain text files. The Windows tradition is to use APIs, possibly backed by XML files. There are advantages and disadvantages to each approach. Personally, I prefer the API approach as it makes it easier to create robust scripts and procedures that will also carry forward across versions, even if the underlying format changes.

Uffe Seerup

Re: Powershell 5 and W7 / Svr2K8r2

It is the *preview* that installs exclusively on 8.1 and Server 2012R2.

I was looking for a source on which platforms the *final* product will be available.

PowerShell 4 was *also* platform limited during the preview phase, but was backported afterwards.

Uffe Seerup

Re: Powershell 5 and W7 / Svr2K8r2

> It doesn't look like Powershell 5 will be backported to Windows 7 or Server 2008/R2?

Source?

Uffe Seerup

Re: User Interface

> There are many UNIX shells, and it's impossible to compare all

> of them with Powershell and come out with the conclusion that

> "Powershell is the bestest",

Indeed, a comparison with the goal of crowning "the best" makes little sense. Not least because much of PowerShell makes sense mostly on Windows and would add little value on *nix systems. Conversely, the *nix shells with their line-text-centric processing are often at odds with Windows, where you typically need to control things through APIs, XML or similar.

> ... has an interactive prompt that, from my trials, can only be

> accessed while "on" the machine or over RDP

Then you have not tried PowerShell since version 1.0. This first version had no built-in remoting, but could be used across SSH connections.

In PowerShell 2.0 a lot of remoting and job features were added. Many commands take a -ComputerName parameter where you can pass a list of hosts where the command should execute remotely but return the output to the controlling console. There is a general Invoke-Command that can execute commands, scripts, functions on remote machines simultaneously and marshal the output back.
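A small sketch (the host names are invented):

Invoke-Command -ComputerName web01, web02, web03 -ScriptBlock { Get-Service W3SVC }
# runs on all three hosts in parallel; the output objects are marshalled back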

Uffe Seerup

Re: User Interface

> Really? Get back to us when it can do full system job control.

For job control:

Debug-Job, Get-Job, Receive-Job, Remove-Job, Resume-Job, Start-Job, Stop-Job, Suspend-Job, Wait-Job.

Some of these have aliases for shorthand use:

gjb -> Get-Job, rcjb -> Receive-Job, rjb -> Remove-Job, rujb -> Resume-Job, sajb -> Start-Job, spjb -> Stop-Job, sujb -> Suspend-Job, wjb -> Wait-Job

For process control:

Debug-Process, Get-Process, Start-Process, Stop-Process, Wait-Process

Again, with aliases:

gps -> Get-Process, kill -> Stop-Process, ps -> Get-Process, saps -> Start-Process, spps -> Stop-Process, start -> Start-Process

You can start jobs on remote computers without first "shelling" into those. Output from the jobs is marshalled back to the controlling console. It even works if the remote job stops and queries for a value (using Read-Host): the prompt will appear on the controlling console when you poll for output/status of the job.
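Tying a few of those together (a minimal local sketch):

$job = Start-Job -ScriptBlock { Get-EventLog -LogName System -Newest 50 }
Wait-Job $job | Receive-Job   # block until finished, then collect the output
Remove-Job $job               # clean up the completed job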

Scripts can be workflows. Workflow scripts are restartable, even across system restarts.

So, between using a remote GUI admin interface and scripting, Windows Server is very well covered.

Uffe Seerup

Re: User Interface

> Let's hope the version has a user interface suitable for use on a server...

PowerShell 5


Stunned by Shellshock Bash bug? Patch all you can – or be punished

Uffe Seerup

Re: what else lurks

> Well, the attack is based on a feature of Bash.

No - it is a *bug*. The ability to define functions in env vars was the feature. The unintended consequence of using a poorly implemented parser was that it proceeded to *execute* text that may come *after* the function definition in the env var.
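The widely circulated test demonstrates it (bash here, since the bug is bash's; a patched shell prints only "test"):

env x='() { :;}; echo vulnerable' bash -c 'echo test'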

> This means that it's been "out in the open" for the entire existence of the feature

Nope. Nobody ever considered the possibility that extraneous text following such a function definition would be executed *directly*. At least, we *hope* that it was the good guys who found this first. But we really don't know.

> It also points out why it's a bad idea to have so much running with root permissions, besides not sanitizing input.

Yes. And why SUID/setuid is such a monumentally bad idea. It is a deliberate hole in a too simplistic security model.

> The equivalent on a Windows system would be to pass in PowerShell script and .NET binaries through the http request, and then run it all with Administrator permissions.

Not sure I agree. That is such an obviously bad idea that nobody would ever do it. Shellshock is - after all - a bug (unintended behavior). However, there are multiple historic reasons on Unix/Linux that tend to amplify this bug:

1: The (dumb) decision to use env vars to pass parameters through the Common Gateway Interface (CGI). Env vars are stubbornly persistent: they are inherited by all child processes. This makes security auditing much harder: you cannot simply audit the script that *received* control directly from CGI; you also have to consider all processes that can ever be launched from the script, its child processes or descendants.

2: An inadequate and too restrictive Unix security model. This led to the invention of SUID/setuid (a hole in the security boundary) because there were still a lot of tasks that "plain users" (or least-privilege daemons) legitimately needed to perform - such as printing or listening on sockets. Rather than refining the security model, SUID punched a deliberate hole in it. This means that user accounts are frequently not granted rights to access a *resource* or a syscall - rather they are granted the right to launch an *executable* (SUID) that will execute with an effective user (often root, sadly) who has access to the resource. Which means that you have created a culture where you all but *need* to launch processes through system() calls to get the job done. With a proper security model where such rights could be *delegated*, you would not have to invoke external executables.

3: Old software that lingers on even in modern operating systems. It has long been accepted that the bash parser is, well, quirky. The way it is semi-line oriented, for instance (definitions only take effect on a line-by-line basis). Today, parser construction is undergraduate stuff; all the principles are well known. The bash code in question was written in another era and by another community where those principles were perhaps not as well known as they are today.


Munich considers dumping Linux for ... GULP ... Windows!

Uffe Seerup

Re: 1800 jobs

Except:

1) The Microsoft HQ is already there. They'll move 15 kilometers to the south in 2016

2) The decision has already been made, planning completed and they are about to start building - if they have not already

3) The decision was taken almost a year ago during the previous administration. The current Stadtrat was elected this March!

It could still be a case of a few mayors having a pet project and wanting to make an impression. But trying to connect the HQ move to this decision is disingenuous.


Panic like it's 1999: Microsoft Office macro viruses are BACK

Uffe Seerup

Re: js and pdf proprietary extension, @big_D

> It's the same as in MS Office

It's similar. But MS Office (since Office 2010) also *sandboxes* documents that have been received from the Internet zone. This applies to files received through email or downloaded through a browser (all browsers support this).

Such files contain a marker in an alternate data stream that specifies that the file came from the "Internet zone".

When Office opens such a file it will open it in a sandboxed process. The entire process runs in "low integrity mode" - and thus whatever its macros may try to do - even if enabled - they will be restricted to the permissions of the low integrity process.
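The marker is easy to inspect from PowerShell (PowerShell 3.0 or later for the -Stream parameter; the file name here is invented):

Get-Content .\report.docx -Stream Zone.Identifier   # shows [ZoneTransfer] / ZoneId=3
Unblock-File .\report.docx                          # removes the Internet-zone marker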


Microsoft C# chief Hejlsberg: Our open-source Apache pick will clear the FUD

Uffe Seerup

Not Apache httpd (the "server"), but the Apache *license*

Microsoft open sourced their C# and VB compiler under the Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0), which includes grant of patent license:

"Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted"

Uffe Seerup

Re: I hope Apple do similar

The Common Language Infrastructure (CLI) has been placed as a standard under ISO. It is open for anyone to implement (as the Mono project did). There is no situation comparable to Java where you had to implement the full stack and pass a (closed source) compliance test.

A prerequisite for ISO adopting the standard was that Microsoft granted any patents essential for the implementation on RAND terms. Microsoft actually granted a free right-to-use for any patents necessary to implement the CLI. And they placed it under the Community Promise.

This is the C# (and VB) *compiler*. Mono already had their own compiler (and did a good job at that - sometimes beating Microsoft to market on new language features) - but not one like this, with incremental compilation and every stage of compilation open for interception and/or inspection by the host process.

For years we've heard "It's a trap. Microsoft will sue and force Mono underground". Well, they cannot sue anyone for implementing the ISO standard*. And now they cannot sue anyone for using their own open sourced compiler either. There are still a few libraries in the .NET stack which have not been open sourced or put under an ISO standard - but they get fewer and fewer, and all the important ones (ASP.NET, MVC, all of the core libraries etc.) are now open.

*"Well they can just sue anyway and use their army of lawyers to force you out of business" someone will still claim. Well, no. The community promise has legal estoppel. Should Microsoft sue, a lawyer can point to the promise (a "one-sided contract") and stand a very good chance of having the case dismissed outright.

Uffe Seerup

Re: Thought I was losing my mind..

> That sound like a reusable parser, which is not novel or unusual.

Never claimed it was novel. However, "reusable parser" hardly does Roslyn justice. First of all, it is not just the "parser" - the lexer/scanner, parser, type analysis, inference engine, code generation etc. are all open for hooking by a hosting process.

Furthermore, the "compiler" supports incremental compilation, where the host process can subscribe to events about incremental changes for each step of the compilation process (lexer, parser, type analysis, codegen, etc). This allows virtually "no compile" scenarios where the output appears to always be in sync with the source - because only the parts actually affected by each code change will be recompiled. The typical compiler - even with a reusable parser - has a much more coarse-grained dependency graph than what is available in Roslyn.

> Most IDEs nowadays use this approach for syntax highlighting and associated warnings.

Indeed - but to support those features, the IDEs often implement more than half of their own compiler (excluding the codegen), because they need more sophisticated error recovery, more information about types/type analysis and (much) more sophisticated pattern recognition - to identify possible code smells or refactoring candidates - than what is typically offered by a reusable parser.

In Roslyn you can hook into the compiler/parser and directly use the "compiler" to perform even global refactorings - simply by manipulating the compiler structures.

> That's not the same as releasing the blueprints of a sophisticated virtual machine implementation, it's just documenting the bytecode.

Releasing the specification ensures that the *specification* is known. Having it placed under ISO means that Microsoft cannot just change it at a whim. It goes to trust. Remember the debacle about how Office documents were defined by what MS Office did? How many other execution systems have been standardized in a similar way? The Java VM spec is still owned by Oracle, not by any vendor-independent standards organization.

Incidentally, while the JIT probably has some optimization tricks up its sleeve, most of the important optimizations happen at the semantic level within the compiler that is now open source. This will benefit Mono a lot. Once they switch to Roslyn it should be interesting to see what happens over at the computer language shootout.

Uffe Seerup

CLI is an open specification

Microsoft's *implementation* is not open source (yet), but the specification is an open ISO specification.

http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-335.pdf

Microsoft grants patent license for any Microsoft patent necessary to implement the spec.

Mono has implemented a CLI according to the specification.

The specification has also been placed under the Community Promise

http://www.microsoft.com/openspecifications/en/us/programs/community-promise/covered-specifications/default.aspx

Uffe Seerup

Re: Thought I was losing my mind..

The Roslyn "compiler" is so much more than a mere producer of bytecode. The big thing about Roslyn is how it opens up its internal services and allows tool makers (IDE authors, refactoring authors, reporting/analytics, metaprogramming, code generators, etc.) to use the parts of the compiler that apply to their problem.

For instance, Microsoft has demonstrated how Roslyn in a few lines of code can be used to implement style warnings and automate style fixes (such as missing curly braces) in a way that *any* IDE or tool based on Roslyn can pick up and use right away.

BTW - the "core" .NET infrastructure (the CLR and the core libraries) is already an open specification which anyone can implement the way Mono did. The specifications have been adopted as ISO standards and come with patent grants (you are automatically granted license to any Microsoft patent necessary to implement the CLR and core libraries). Furthermore, the CLR and core libraries specification has been placed under the Community Promise to address concerns that Microsoft could just sue anyway and win not by being right but by having more lawyers.

The Community Promise has (at least in the US) legal estoppel - meaning that it will be trivial to have a case where Microsoft asserts its patents against an implementation thrown out of court outright - possibly even with sanctions against the officers of the court who brought such a frivolous case. Meaning that Microsoft would have a hard time finding such lawyers.


Linux distros fix kernel terminal root-hole bug

Uffe Seerup

Re: In the Microsoft World

> Got a link?

http://arstechnica.com/apple/2008/04/report-microsoft-fastest-to-issue-os-patches-sun-slowest/


Powershell terminal sucks. Is there a better choice?

Uffe Seerup

Debate

> I am sorry mam, but your pedantic tone is not bearable to me. Most of your post is dedicated to teaching me and telling me how I was wrong and you're right all the way.

Please do not take this the wrong way, but that seems pretty ad-hominem to me.

You have been arguing the relative merits of PowerShell and Bash from a point where you even admit that you do not know PowerShell. In that setting you should expect that opponents will try to teach you about it.

For me, I thank both you and H4rmony for the debate. :-).

@H4rmony: May I save the link with your take on ACLs for future use? It is precisely my sentiment, but I could never explain it as elegantly as that!

All the best

Uffe Seerup

Re: @h4rmony

> That's lovely. I had no idea those existed.

Then try out-gridview and be blown away :-)

Uffe Seerup

ACLs

> First, MS Windows didn't have file permissions at all, remember that time, thus fat32 and earlier filesystems still don't have proper file permissions.

Sorry, that is BS. Windows NT was a clean-room implementation which had nothing to do with the Win9x line, except it exposed some of the same APIs.

All current Windows versions are based on the original Windows NT - which had proper ACLs (and network-ready SIDs instead of 16-bit numeric uids/gids) from the start.

> The *nix systems had it from day one

Wrong. The *nix systems had me-us-everyone rwx permissions from day one. No ACLs.

*nix uids and gids were designed for a single authority, in an era when it was not foreseen that you would one day need to network the credentials (multiple authorities); hence you now need to map uids and use other kinds of quirks. Windows SIDs were designed to support multiple authorities and allow for networked credentials from the start.

The *nix model only had standard users and root, and the privileges were fixed: some things a user could do, everything else required root. Permissions were (and still are) extremely coarse grained. And *nix permissions *still* only apply to file system objects. The need for more securable resources has led to designs where e.g. processes are "mapped" to file system objects.

All modern *nixes now have ACLs. But you typically have to install/enable them explicitly, and much of the tooling is still not geared towards ACLs. You yourself have used examples here where you assume a single owner and "octal permission flags", for instance.

The lack of tool integration and the fact that ACLs compete with the traditional file system permissions make them 2nd class citizens. They are frequently ignored by tool and script authors. "find" is a very good example.

In computer science (especially language design) we use the term "first class" for kinds of objects that "fit in" analogously to similar or corresponding kinds of objects. When a kind of object is limited in its use cases, we consider it a 2nd class citizen.

You have many *nix tools (and GUI applications) that still assume the traditional *nix rwx me-us-world.

Uffe Seerup

Re: @h4rmony

How so? It only specializes in finding files and printing information on that.

That's 2 things already.

It is called "find" because it should find files. The UNIX principle would be to leave the "printing" to another tool. The find part overlaps with several other tools. The "printing" part is an entire printing language - private to the find tool and not in common with any other tool.

You write that "it's just the C printf". No it is not. C's printf does not know anything about owners, file sizes, rwx permissions, block sizes. C's printf understands how to format strings and numbers passed to it as parameters. C's printf does not understand %u as "user name". C's printf does not go out and obtain information by itself. It is merely the name and the excessive use of the % character that is common.

But even if you want us to ignore that, find *also* starts and executes new processes (-exec). Cookbooks frequently use that capability. The find command can even prompt the user for an OK to continue.

That's 3 (to be generous).

The date/time formatting is another area where find excels. But why does a file finding utility have to know about formatting dates and times?

That's 4.

Then there's the expression language. One can argue whether it is done well, but a very, very large part of find is dedicated to this expression language and to optimizations/tweaks for its evaluation. I'm not referring to the abundance of tests available. I am referring to the parentheses and logical operators. Find has its own format for these, incompatible with other tools that also do boolean expressions.

That's 5.

Then there are strange options such as -quit

The find command mixes at least 5 independent "things". Not one thing. There is no reason for this other than the limitations of the constant text formatting and parsing, and the limited bandwidth and expressiveness of a line-oriented text format. Many of these "things" overlap with functionality of the shell itself (like the "tests") and with other tools. "ls" also excels at printing - only it's a different way to specify the format, and the format is married to the tool.

I am sorry if this sounds like I think find is a bad tool. It is not. It is actually an awesome tool. But it is undeniably complex. And the skills you need to master it do not help you at all when you move on to ls, ps, du or other popular tools.

> While PS is a jack of all trades, right?

It has been pointed out to you that PowerShell has been able to take a responsibility such as finding/locating items and separate it from responsibilities such as managing processes, formatting output, performing date/time arithmetic and interpreting propositions. Those other responsibilities have been generalized and compose with all other responsibilities.

Thus, PS tools tend to be smaller and have (far) fewer options. And yes, then you tend to use more commands on a pipeline, because you pass the object on to the tool that excels at its (single) task.

Consider these commands:

Get-ChildItem | Where Length -gt 2MB | Format-Table Name,Length

Get-ChildItem | Where Length -gt 2MB | Format-Wide Name -Col 8

Get-ChildItem | Where Length -gt 2MB | Format-List FullName,Length,Attributes

Get-ChildItem | Where Length -gt 2MB | ConvertTo-Html

Get-ChildItem | Where Length -gt 2MB | ConvertTo-Json

Get-ChildItem | Where Length -gt 2MB | ConvertTo-Csv

Get-ChildItem | Where Length -gt 2MB | Select FullName,Length,Attributes,LastWriteTime | Out-GridView

Notice how filtering and formatting/printing is left to generalized cmdlets which have filtering/formatting as their single responsibility.

Uffe Seerup

Re: @h4rmony

> Is any of that possible with PS and dir command?

Yes, of course it is.

(I still don't know the secret behind formatting code in these forums :-( )

-----------------------
ls *, */*, */*/* -pv f | Get-Acl | ?{
    $f.Name -match '.*img[1-4]{1,3}\..*' -and
    $f.LastAccessTime.AddDays(10) -ge (get-date) -and
    $f.Length -gt 1MB -and $f.Length -lt 2MB -and
    $_.Owner -eq 'john123' } |
    ft {$f.FullName},{$f.Length},AccessToString
-----------------------

#Explanation:

"ls" lists childitems. Here it lists 3 levels.

"Get-Acl" retrieves ACLs for the files on the pipeline.

"?" (an alias for "Where-Object") filters using a boolean expression.

"ft" (an alias for Format-Table) formats the select columns.

Keep in mind that the OSes are different. There are no "octal permissions" on Windows. Windows has true first-class ACLs. (ACLs were added later on *nix, and are still not fully integrated - e.g. there is no "octal" representation).

The only "trick" used here is the pipeline variable (the pv parameter of the ls command). It saves the file in a variable which can then be used in the filter expression.

You will no doubt note how this is "longer" than the specialized find command. However, you should also note how the filter expression can reach out and evaluate *any* expression. H4rmony alluded to this previously: once you need to filter on anything "find" has not been coded for, you are out of luck. PowerShell can draw on anything that can be evaluated as an expression.

Uffe Seerup

Re: @Uffe, on find

> I don't like the -exec function myself and prefer xargs

Interesting. So how would you write the example using xargs?

> -exec is just that alias of that pipe you were mentioning when talking about PS options

I think I am missing something here (honestly). Could you clarify, please?

Uffe Seerup

Re: bash

Because text is more universal than any (other) object

Really? How do you describe an object graph (like a tree or a network) using text?

Uffe Seerup

PASH @TheVogon

Pash hasn't been updated since 2008, when it stalled. It is a defunct project.

PowerShell would not be a good fit for Unix/Linux anyway. Unix/Linux does not expose management interfaces in an object model that could be used by PowerShell.

For Linux/Unix, Bash is still the way to go.

Uffe Seerup

Re: AWK, which you don't seem to know

(I have not figured out how to use pre and code tags, any help?)

> find . -printf "%u\n" | sort -u

> (you might use the 2>/dev/null to get rid of the permission problems if any)

Ah. Yes, otherwise error lines containing 'permission denied' will be reported as an "owner". So the command really should be:

<code>find . -printf "%u\n" 2>/dev/null | sort -u</code>

(find all files from current location and below, for each file output a string with the owner-name suffixed with a new-line, ignore errors/warnings; sort all the "lines" and return only unique lines)

Compare that to

<code>ls -recurse | get-acl | select -uniq owner</code>

(find all objects from current location and below; for the objects get the access-control-lists; for each access-control-list get the unique owners)

h4rmony's observation was that in bash, fitting commands together frequently requires text munging (formatting and immediate re-parsing), whereas in PowerShell the properties/objects "fit" together.

> BTW, do dir or ls operator in PS have as many features as find?

Now that is a really good question. The answer, of course, is that in PowerShell you only have Get-ChildItem (aliased to ls). It does not have as many options as find. However, that is not for the reason you think: find is a monster command which violates the "unix'y principle" (do one thing and do it well).

find:

* traverses files and directories (so does ls - but with different capabilities)

* has multiple options to control output format (why?)

* has multiple options to perform "actions", among which is the ability to execute other programs (!), including the ability to prompt the user (!)

* find has an entire expression language which is different from expressions used in the shell itself, different from expressions used in other utilities.

Furthermore, find has many options for dealing with filenames that may contain spaces. It has these options exactly because text parsing is brittle and error-prone. In the past this has led not only to unstable scripts but to severe security problems as well!

No other tool can reuse the expression language. Awk has its own "language", as does grep and ls and even the shell itself ("tests").

Now, compare that to PowerShell:

* The Get-ChildItem cmdlet traverses files and directories (and outputs objects). It works on all location types, i.e. file system objects but also SQL locations, IIS locations, the certificate store, the registry etc.

* The Get-ACL cmdlet retrieves ACL objects for items that are passed to it.

* The Sort-Object (aliased to "sort") sorts objects of any type by using property names or expressions.

* Output formatting is handled by separate cmdlets, avoiding the need for each and every command to define output formatting options. Do only one thing and do it well.

PowerShell expressions are "script blocks" defined by the shell and thus common to *all* commands, whether packaged with PowerShell or in 3rd party modules. You use the same expression language, and each tool is relieved from having to implement parsing, compilation etc.
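For example, the very same script block filter syntax works against processes, files or anything else on a pipeline:

Get-Process | Where-Object { $_.WorkingSet -gt 100MB }
Get-ChildItem | Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-7) }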

Uffe Seerup

Re: AWK, which you don't seem to know

> why using find? Why to pipe it to uniq if there is a way to handle it with -u? (reminds me of peculiar pipes like "cat file | grep ..." or "cat file | sed ...")

I think the point was that he wanted the entire tree processed (recursively). ls -al doesn't do that.

> or even

> ls -al | awk '$1 ~ /x/{print $3}' | sort -u

This does not process the tree; only the current directory. Even so, the equivalent PowerShell command would be:

ls | get-acl | select -uniq owner

Now, which is more *readable*?

> AWK is a serious language, please don't diminish it. It's not "a part of bash" or any other shell.

awk is quite awesome. It is a functional text-processing language with nice features and beautiful consistency. I don't think the point was to *diminish* it. Rather, the point was that the power awk brings simply is not *needed* to compose advanced PowerShell commands. Such is the power of using objects instead of text.

> If you didn't get it, it doesn't mean it is no good.

ahem. I am *really* tempted to use this opportunity for a snide remark.

> 2) OOP makes syntax less readable, more programming-like

Look at the above examples. Which is more readable? Honestly?

> There is a reason that *nix systems never thought of an OOP shell, other than an experiment, like python shell

Well, if you believe all the grapes you cannot reach are sour, go ahead and believe that was why *nix systems never thought of an OOP shell. I choose to believe that it is because *nix systems never had a system-wide object model (and still haven't). If you were to process objects in *nix, what would they be? Java objects? Without an interoperable object model, the value of an object-oriented shell would be severely limited. Ksh actually has an object model - but it is internal to ksh itself and not really a system-wide object model.

Windows has COM and .NET - both of which PowerShell uses as object models in its type system.

Another important difference is that on *nix configuration is generally text based in line-oriented text files. Which makes tools like sed, awk and grep essential for reading and changing configuration.

Windows has an abstracted configuration model, where the actual configuration storage is abstracted away behind a configuration API. To configure something you invoke API functions rather than change text files. Those APIs are almost always either COM or .NET based (or WMI).

Hence, it makes sense that *on Windows* you use an object-oriented shell and on *nixes you use a text-oriented shell. PowerShell would not be a good fit for *nix. Bash is not a good fit for Windows.

The premise of this was that the OP wanted to learn PowerShell *for Windows*. Deal with it.

Uffe Seerup

Re: Questionable reason to use powershell over bash. Part II

> If it's not an ad bullshit, and you do mean parallelizing tasks, than it's awesome, however, it always has limitations, plus with parallel and some ad-hoc tools you can parallelize tasks inside your scripts with Bash as well.

Interesting. So Bash parallel execution allows the parallel parts of the scripts to run in the same environment, with read/write access to the same shell variables and synchronization mechanisms?

> Again, depends on the goal you're trying to accomplish, tramp-mode (in GNU Emacs), GNU Parallel and/or GNU Screen come to my mind.

Hmm. In PowerShell you can *script* multiple remote sessions. For example you can start by opening sessions to 50 hosts. Each host will be a "session". You can now execute commands in each individual session while the state within each session is carried over from command to command, and the result of each command (success code) is marshalled back to the controlling console. Which means that the controlling script can exert control over each individual command executed on the remote hosts. This is not just piping a script over to be executed and having the final result reported back. This is about having fine-grained control over each command executed remotely - as part of a script.
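A sketch of what that looks like ($hosts and the commands are invented for illustration):

$sessions = New-PSSession -ComputerName $hosts                               # e.g. 50 host names
Invoke-Command -Session $sessions -ScriptBlock { $svc = Get-Service W3SVC }  # state is kept...
Invoke-Command -Session $sessions -ScriptBlock { $svc.Status }               # ...per session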

And yes - if you execute commands in parallel using the Invoke-Command cmdlet (alias icm) you can control how many parallel executions you want.

> 9) PowerShell web access.

> What the hell is that, what for?

For remote control using a well-known and firewall-friendly protocol (https).

> Sounds very fishy and insecure to me.

Yeah well. Any time you open up your infrastructure to the outside world, it incurs a certain amount of risk. See it as an alternative to OpenSSH, only you can integrate with web based authentication mechanisms and minimize the number of different authentication mechanisms, assuming that you already have a web security regime in place. So, do you open an SSH port, or simply enable PWA on an existing administrative or intra-corporate website? PWA defaults to "no access" and must be configured to allow PWA to use PowerShell remoting to the internal hosts. The commands/modules available and the access levels allowed can be configured as well.

> 10) Superior security features, e.g. script signing, memory encryption..

> "Memory encryption" sounds redundant, did you turn off ASLR?

Memory encryption in PowerShell is used to ensure that sensitive data - such as passwords and private keys - is stored encrypted in memory. This has nothing to do with ASLR. ASLR is *not* encryption (where did you get that idea?).

Heartbleed demonstrated what can happen to passwords and private keys stored unencrypted in memory. When you read a password/private key through the Read-Host -AsSecureString cmdlet or similar, the password/private key is read and encrypted character-by-character and is *never* stored in cleartext in memory. The encryption is based on the Data Protection API (DPAPI), ensuring that not even an administrator using a tool to scan the memory will be able to decrypt the password. There are lots of ways passwords in memory can bleed out. Heartbleed demonstrated one (nasty) way. Other channels are backups, swap files, memory dumps etc.
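A minimal sketch (the account name is invented):

$pw = Read-Host -Prompt 'Password' -AsSecureString   # encrypted character by character
$cred = New-Object System.Management.Automation.PSCredential('svc-account', $pw)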

> What precludes you from signing or encrypting a Bash script? You can also automate it by writing an easy bash function to check for a signature and execute a script, you can package every script or a number of them inside the Debian/RedHat or other package container with signatures to install it system-wide.

In PowerShell this is built-in security, and the default level is quite restrictive. And it can be controlled through group policies, making it trivial to enforce policies about e.g. script signing across an entire organization. Sure you can build *something* with bash, but how do you *enforce* it?

How do you sign a bash script, btw?

Correct me if I am wrong, but I had the impression that Debian/RedHat package signatures are only checked when *installing* the package. PowerShell script signing is used to ensure that scripts executing on a computer have been approved by some authority and have not been tampered with.
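For the record, signing a script is a one-liner once you hold a code-signing certificate (a sketch; the file name is invented and the certificate is assumed to be in the personal store):

$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath .\deploy.ps1 -Certificate $cert
Set-ExecutionPolicy AllSigned   # enforce: unsigned or tampered scripts will not run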

Furthermore, the script execution policies can block scripts obtained through browsers/mail clients/instant messengers etc., or e.g. scripts copied from a non-intranet zone. Files obtained through such channels are "tainted" with their origin, and PowerShell can be set to disallow execution of such scripts. This is a level of protection against phishing.

> 11) Do that in bash?

> Do what again, an explanation is required, if you don't mean quantum theory.

I think he was referring to using a browser to connect to a command prompt and typing commands/running scripts with intellisense support - right in the browser.

> 12) Strongly typed scripting, extensive data types,

> Why not scripting in C then?

Because C is not script friendly. C is not type-safe. C's scope rules interfere with REPLs.

"e.g first class xml support and regex support right in the shell."

What kind of regex is there, BTW? Is it as good and efficient as in grep, sed, awk or perl?

Yes. It is even better. While it is the "perl" variant, it even allows some nice tricks such as parenthesis balancing (matching only properly balanced open-close parentheses). PowerShell/.NET regexes are *very* efficient. Actually the .NET regex engine supports compiling regexes all the way to machine code.
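A sketch of the balancing trick (plain .NET regex, runs as-is in PowerShell):

$balanced = '^(?:[^()]|(?&lt;open&gt;\()|(?&lt;-open&gt;\)))*(?(open)(?!))$'
'(a(b)c)' -match $balanced   # True  - the parentheses balance
'(a(b c'  -match $balanced   # False - an open parenthesis is never closed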

> No, it won't. Someone wrote that by default PS tries to read the data in memory, unless you use some non obvious and ugly syntax to do it the right way.

I assume you are referring to how PowerShell processes pipelines of objects? PowerShell actually *does* process objects in a progressive manner - "pulling" each result object through the pipeline one at a time. So if that was what you were referring to, you are wrong (although *some* cmdlets, like e.g. sort, *will* collect all objects before starting to produce output).
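An easy way to observe the streaming (paths invented; -First even stops the upstream cmdlets once it has what it needs, PowerShell 3.0+):

Get-ChildItem C:\Windows -Recurse -ErrorAction SilentlyContinue |
    Where-Object Length -gt 10MB |
    Select-Object -First 3   # the first matches print as soon as they are found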

> 14) Instrumentation, extensive tracing, transcript and *source level* debugging of scripts.

> Of course, it needs a lot of debugging thanks to its complexity.

If you develop scripts for any type of automation, you will appreciate debugging capabilities. This sounds rather like a case of sour grapes.

> The main thing is though, how PS compares with Bash and other shells in usability, ease and power as a shell, an envelope between utilities and processes? That is the real question.

To each their own. Bash is robust and well established. But it is not without quirks. To someone well versed in Bash, the concepts of PowerShell can be daunting. You will be challenged on "wisdom" you simply accepted as fact before. In my experience, the hardest part for "bashers" to get about PowerShell is how it was made for objects and APIs instead of text files.


Powershell Terminals

Uffe Seerup

Re: Powershell Terminals

Use the Integrated Scripting Environment (ISE). Start it from a shortcut or by simply typing "ise" in the command window. It is always available, even on a gui-less server.

The ISE has auto-suggestions, sane cut-and-paste, multiple session tabs, remote tabs, a command builder window, snippets, source-level debugging etc.

Alternatively use another console program, such as Console2.


MS brandishes 'Katana' HTTP/2.0 server

Uffe Seerup

Re: .Net of course

> Mono. Mono. Mono!

It is not intended to be a production server. It makes perfect sense to implement a PoC using a language which has high productivity (using e.g. LINQ and - probably more relevant in this case - async/await asynchronous methods) combined with good performance.

Async/await makes creating asynchronous methods (much) easier while still having the methods resemble the logical flow of the application.

Uffe Seerup

Re: C# runs fully compiled

> But it still runs within the .NET VM and is subject to its checks and workings.

There is no such thing as a .NET VM.

There is the common language runtime (CLR) which (as the name hints) is more of a library and initialization code. There is no "virtual machine" that interprets byte codes.

You can host the CLR in your own process (.NET is actually a COM "server"), although the compilation is performed by a JIT compilation service.

When you ask the CLR to run a .NET executable, you ask the CLR to load the assembly, use reflection to find the entry method and ask the CLR to execute that method. At that time the CLR will compile the method from the MSIL code of the assembly (or take it from the cache if it has already been compiled) and invoke the compiled code. If the method invokes other methods, those may initially point to compilation stubs: invoking the stub compiles the target method and replaces the reference to the stub with a pointer to the compiled method. Subsequent invocations will thus directly invoke the already compiled method.

.NET code (at least on the Windows platforms) executes fully compiled.

> Sure, part of the code is turned into native opcodes, still it is different from a fully compiled native application which runs fully on the processor directly.

No, all of the code is turned into native instructions. All of it. It may not happen until just before the method is executed, but all of the code that is eventually executed is compiled code.

The difference from fully compiled native code is that the compiled code is obtained dynamically from a compiler (or cache) on the target platform, i.e. applications are distributed as MSIL and depend on the MSIL compiler service being available on the target computer.

There is even a tool (ngen.exe) that will pre-populate the cache with *all* of your application code compiled into native code, alleviating the need for JIT compilation.
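For example (the framework path varies with the installed version; the assembly name is invented):

& "$env:WINDIR\Microsoft.NET\Framework64\v4.0.30319\ngen.exe" install C:\Apps\MyApp.exe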

You may want to read this: http://msdn.microsoft.com/en-us/library/9x0wh2z3(v=vs.90).aspx

Uffe Seerup

C# runs fully compiled

That MSIL somehow runs interpreted is a common misunderstanding. When a C# program executes it executes fully compiled.

C# is compiled *ahead* of execution, in principle on a method-by-method basis (in reality multiple classes are compiled at once). When a method has been executed *once*, subsequent invocations simply run the compiled method.

MSIL was never intended to be an interpreted language. From the start it was designed to be compiled to the final representation on the target architecture. Because it is type-safe and statically typed, it also does not need the typical type checks that dynamic languages suffer from.


What's the most secure desktop operating system?

Uffe Seerup

sudo is not a security model

sudo is a kludge, developed because of a lacking underlying model in which privileges cannot be properly delegated. It is not part of a "model" - indeed the sudoers file exists in parallel with, and competes with, the real (but inadequate) file system permissions.

sudo breaks one of the most important security principles: the principle of least privilege. sudo is a SUID root utility and will run *as root* with *unlimited* access.

Some Linux distros now use Linux Capabilities (although these have not been standardized). Had capabilities existed when Unix was created, we never would have had the abomination that is sudo.

Many vulnerabilities in utilities that must be started with sudo have led to system compromises *because* of the violation of least privilege. Sendmail allows you to send a mail. But it requires you to run it as root. So you run it with sudo, allowing users to sudo sendmail. But a simple integer underflow (like this one: http://www.securiteam.com/exploits/6F00R006AQ.html) can now lead to total system compromise!

The security problems with sudo and other SUID root utilities are well known, so please do not try to pass it off as a superior "model". It was always and remains a kludge that is used to drill holes in the too simplistic, file-system oriented security model of the 1970s.

How is a security auditor supposed to audit the capabilities of users? Once a user is allowed to execute binaries with root privileges through sudo or other SUID roots, the security auditor has no way of knowing what can be done through those utilities, short of overseeing the process by which they were compiled and distributed. The operating system cannot guarantee that the file system privileges are restricting the users, as they can be bypassed by sudo/SUIDs. Compare that to operating systems with security models where the permissions are actually guaranteed to restrict the account.

SELinux has a security model. Sudo is not a security model; it is a drill that destroys security models.


The good and the bad in Hyper-V's PowerShell

Uffe Seerup

Re: @Uffe Seerup

> That is just the default behavior!!! BTW, it has nothing to do with the setuid root of sudo!

It has EVERYTHING to do with setuid root of sudo! It is the very way setuid works! Sudo may be instructed to *drop* to another user (-u option), but it starts as root (because the owner is root) and that is the default.

You may want to read this instructive article:

http://www.techrepublic.com/blog/security/understand-the-setuid-and-setgid-permissions-to-improve-security/2857

At the end you will find this "The general rule of thumb for setuid and setgid should always be, “Don’t Do It.” It is only in rare cases that it is a good idea to use either of these file permissions, especially when many programs might have surprising capabilities that, combined with setuid or setgid permissions, could result in shockingly bad security".

Nice.

Let me repeat. The operating system sees that the executable has the setuid flag. It then launches the process with the owner as the effective user. If the owner is root - then the process is running as root. A root process can drop to another user (seteuid http://man7.org/linux/man-pages/man2/setegid.2.html). Sudo will do that prior to executing the specified tool if the -u option is specified.

Don't believe me? You may believe the Linux man page for the sudo command: "Because of this, care must be taken when giving users access to commands via sudo to verify that the command does not inadvertently give the user an effective root shell."

> Okay, than tell us how to effectively exploit this "vulnerability"

See above sudo man page security note.

or read this caveat from the same sudo man page:

"There is no easy way to prevent a user from gaining a root shell if that user has access to commands allowing shell escapes.

If users have sudo ALL

there is nothing to prevent them from creating their own program that gives them a root shell regardless of any '!' elements in the user specification.

Running shell scripts via sudo can expose the same kernel bugs that make setuid shell scripts unsafe on some operating systems (if your OS supports the /dev/fd/ directory, setuid shell scripts are generally safe)."

There have been many, many vulnerabilities in setuid programs. In general, a vulnerability in a setuid root process means assured system compromise because the attacker will be running as root. Ping is a setuid root utility. Here is an example of a vulnerability: http://www.halfdog.net/Security/2011/Ping6BufferOverflow/

A user who has execute access to ping with this vuln has root access and can compromise a system.

> If on your system you have two regular users and know passwords for each one you can jump from one user to the other via "su anotheruser". Notice, you never become a root here.

It may look that way, but technically you are incorrect. su works by running as root and *then* dropping euid to the other user. Only root can do that because changing euid is a privileged operation. In this case the time spent as root is very short, so I can see how you may be confused.

Another setuid/SUID vulnerability: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-3485

However, this is beside the point. Sudo exists because you need it. There is no delegation of privileges in standard *nix or in POSIX. You cannot delegate the right to change the system time. Only root can do that. So to change the system time you run "sudo date -s ...". Sudo is setuid root and starts as root. By default it runs the tool you specify as root as well, so the "date -s ..." succeeds and sets the time. Had you specified "sudo -u someuser date -s ..." (drop to a user), you would not have succeeded in setting the time. Why? Because only root can change the system time.

Windows does not need an equivalent to sudo (and no, it is not "runas"), because 1) it is a liability and 2) privileges in Windows can be delegated.

Uffe Seerup

Re: @Uffe Seerup

> You keep repeating this, but this is wrong! There is no way you ever get uid=0 automatically by just running sudo.

(sigh). Try typing "sudo whoami" or "sudo id". So, what did it answer? On my system it says "root"!

So who am I when I run a command through sudo? Answer: root.

More proof: "sudo bash" and in the new shell do a "ps -aux|grep bash". What processes are listed? Which user do they indicate they run as? On my system it says that "sudo bash" and one "bash" process run as root.

When will you get this? sudo is a setuid/SUID tool. A setuid tool runs with the owner of the file as the effective user of the process. You may restrict *who* can call sudo in the sudoers file, but you *cannot* change the fact that sudo starts as root.

> By some reason you think that the "setuid root bit" is equated to "becoming root"

Yes! yes! If you have execute permission to the file then you are becoming root when you execute it. Not "equated" to becoming root. You become root. Plain and simple. This is basic Unix/Linux stuff and I shouldn't have to explain this to a Linux advocate. Something is wrong with this picture...

> Q: Tell me, how do I generate a report of the rights/privileges of a certain user?

> A: Depends how much info you want. I'd suggest running the id command:

Proving my point. The id tool does *not* generate a report of what a user can do, when what a user can do can be bypassed by sudo! The owner of a resource has no way of determining who has access to his resource when tools with setuid root may just bypass it. To correctly assess which users could access your resource you'll have to figure out which setuid tools exist on the system, who has execute rights on the setuid root tools, which tools are allowed to execute through sudo and which users have access to those tools through sudo, and finally (and crucially!) what each of these tools can do. Some tools, like "find", can actually execute commands(!) and other tools (editors) allow you to start new shells/processes. Such tools are very dangerous when allowed through sudo.

Uffe Seerup

Re: @Uffe Seerup

> As a matter of fact, mechanism or utility like sudo is absent on MS Windows. Runas is more of an su.

That is correct. So how does Windows do it? It has proper delegatable rights and privileges:

http://technet.microsoft.com/en-us/library/bb457125.aspx

One of the most important security principles is called Principle of least privilege.

http://en.wikipedia.org/wiki/Principle_of_least_privilege

SUID/setuid tools like sudo are a violation of that principle. The utilities run with sudo will run with root as the effective user and thus have privileges far beyond what is needed for the specific operation.

In Windows, if a user or a service needs to be able to back up the entire system, you can delegate that specific privilege to that user or to the service account. That does not mean that the user gains system-wide root privileges, not even briefly.

In Windows, the administrator and the administrators group have privileges not because they are hardwired into the system, but because the privileges have been granted to the administrators group. Those privileges can be removed and/or delegated to other users/groups. There is no account which inherently bypasses all security checks and auditing. This adheres to the Principle of least privilege.
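The result of such grants is directly visible on the token (built-in tool):

whoami /priv   # lists the privileges the current token holds, e.g.
               # SeBackupPrivilege if "Back up files and directories" was granted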

> So it will always ask you to authenticate yourself for a user you're trying to be, not for the user you currently are.

Yes, if you use runas. Runas is not an attempt to create a sudo. Sudo is not needed on Windows. Instead you assign privileges directly to the users that need them. Security bracketing is achieved through the UAC prompt: if you have actually been granted powerful privileges, Windows strips those privileges from your token when you log on. When you go through the UAC prompt you are *not* becoming administrator, as many - especially those with a *nix background - seem to believe. Instead the original privileges are simply restored to the token for the duration of the specific process. So instead of assigning God privileges to the user, it merely re-applies the privileges already granted to the user.

Take a look at how simple yet sophisticated a permission system could be made with sudo.

That is anything but simple - especially from a security administrator's viewpoint. How is the SA supposed to audit the privileges of a user when he has to consider text files on each and every system, plus multiple concurrent and competing security mechanisms (file permissions, permissions on pseudo files, AppArmor or SELinux security descriptors, and then sudoers), any of which may override the others?

Tell me, how do I generate a report of the rights/privileges of a certain user?

2
0
Uffe Seerup

Re: @Uffe Seerup

First sudo can be executed "as another user" by an allowed user, not necessarily a root

You are the one who is confused. sudo can be invoked by an allowed user, but it runs as root. Allowed users are listed in sudoers, but during execution sudo is definitely running with root as the effective user (euid==0). Really!

sudo and su when used with a username option ("sudo -u anotheruesr") don't get the uid=0 if that user is not root.

Correct. But "anotheruser" cannot invoke privileged syscalls. Only root can. To invoke privileged syscalls you need to have root as the effective user, and when doing so you receive privileges far beyond what is needed for any single operation. Standard users cannot change the time on a *nix system.

For instance, look at this BSD function: http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/sys/settimeofday.c?rev=1.3

Clearly requires euid==0 to succeed when setting time. No other user can do it. You *need* to be root (as effective user) to be able to make that syscall.

To be fair, Linux has introduced Linux capabilities: http://linux.die.net/man/7/capabilities

As you can read from that page:

For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero).

But also note:

15) Where is the standard on which the Linux capabilities are based?

There used to be a POSIX draft called POSIX.6 and later POSIX 1003.1e. However after the committee had spent over 10 years, POSIX decided that enough is enough and dropped the draft. There will therefore not be a POSIX standard covering security anytime soon.

There is no standard *nix way of describing capabilities. While some Linux distros certainly support SUID with capabilities instead of simply root, this is by no means universal, and in general other *nix implementations (including a large number of Linux distros) still set SUID root on a number of utilities.

Take a look here: http://aplawrence.com/Basics/sudo.html:

The sudo program itself is a setuid binary. If you examine its permissions, you will see:

---s--x--x 1 root root 81644 Jan 14 15:36 /usr/bin/sudo

So sudo *will* run as root. Every time. You may be able to instruct it to drop to another user before invoking a utility/tool. But sudo is most often invoked exactly because you want to invoke privileged syscalls. So dropping to another standard user rather defeats the purpose.

2
0
Uffe Seerup

Re: @Trevor

No, I can't use the old ones because they aren't ubiquitous. The Text editor *NEEDS* to be part of the CLI and installed on BLOODY EVERYTHING with PowerShell on it.

I am curious. If a server is running headless, why would you need even a text editor? Why not use PowerShell's remoting capabilities, e.g. from PowerShell ISE?

Invoke-Command (alias icm) can execute script files remotely, even when the script file resides locally on the controlling host. It can execute single script blocks or script files as part of the same session. The script executing remotely can even refer to local variables (something that bash cannot do, and one reason you might otherwise choose to "jump" (SSH) to the remote machine).
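
A minimal sketch (the server name and script file are hypothetical; the $using: scope assumes PowerShell 3.0 or later):

    $svcName = 'Spooler'                     # a variable in the local session
    Invoke-Command -ComputerName web01 -ScriptBlock {
        Get-Service -Name $using:svcName     # $using: reaches back into the local session
    }
    Invoke-Command -ComputerName web01 -FilePath .\Configure-Node.ps1   # local file, executed remotely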

0
0
Uffe Seerup

Re: @Uffe Seerup

I think you're confusing something here. Su and sudo do delegate the privileges if allowed.

I'm not confused. Sudo is itself a setuid/SUID utility. The tools that are executed with sudo are executed with effective user 0 (root). They can do *anything* they want.

Su and sudo do not delegate privileges at all. They execute with root as effective user, which is *not* delegation. During the execution you are *root* and any vulnerability in the tool can allow an attacker root access *even* if you restricted his access through sudoers.

The protection you have with sudo is that it checks the sudoers list first to see if you are allowed to execute the utility through sudo. The privileged syscalls still only check whether you are root.

If you want to execute a privileged syscall from your own app/daemon, you will have to become root *or* make a system() call to sudo if you try to manage privileges that way.

3
0
Uffe Seerup
Happy

Re: Often overlooked about PowerShell

Well it could in theory by passing a shared memory key on the command line but its hardly ever done because if you need such close coupling you'd use a library, not a seperate program.

The problem is that the shared memory key does not tell the receiver much about the object being passed. There is no ubiquitous object model available on *nix. A type-safe object model with runtime type information - like .NET or Java (or even COM) - makes passing objects orders of magnitude easier.

Lacking a common object model, the sender and receiver need to have very specific knowledge about the objects being exchanged.

That is why PowerShell is a very good fit for Windows (and not for *nix) and why a flat-stream shell like bash is a better fit on *nix than it is on Windows. PowerShell can leverage the objects already exposed through the Windows APIs (.NET, WMI, COM).
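
To illustrate, in the pipeline below every stage receives live .NET objects and reads typed properties; no stage parses the text output of the previous one:

    Get-Process |
        Where-Object { $_.WorkingSet64 -gt 100MB } |   # filter on a typed property
        Sort-Object WorkingSet64 -Descending |
        Select-Object Name, Id, WorkingSet64           # project columns from the objects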

And you think thats a BAD thing?? The MS kool aid has certainly done its job on you!

Nah. I just think that being able to delegate the actual privileges my app needs would be awesome.

Why would my service have to become an all-powerful God just to configure a printer? In *nix you don't delegate privileges. The syscalls are not protected by a granular security model; only root can invoke certain privileged syscalls. Anyone who wants to call a privileged syscall has to become all-powerful root and gain privileges far beyond what is needed. Because that is an unmanageable security risk, SUID (setuid) tools are created and access to *those* is restricted. Internally they run as root, but only a limited set of users are allowed to invoke the tools.

That design has (at least) two problems: 1. It requires you to invoke the functionality through system() calls, because the functionality is exposed exclusively through tools (unless you are root). 2. The tool is *still* running in God mode; a single vulnerability in such a utility can allow an attacker unlimited access to the system. Thankfully a number of Linux distros have started to use the more fine-grained Linux capabilities, which are uncannily similar to privileges in Windows.

Oh excellent. So something someone else has written has admin priviledges running inside your process. I can see that ending well. Not.

I was not talking about code "someone else" had written. I was talking about a developer first creating a scriptable management interface (in a library) and then leveraging that library to *also* create a management GUI.

Under Windows you do not run your own services/websites with admin privileges. If the service needs some admin capabilities one can delegate those privileges to the service account. Just like you generally try to avoid running daemons as root on *nix.

1
0
Uffe Seerup
Happy

Re: Often overlooked about PowerShell

So let me get this straight - a developer wants to use some external functionality so he'll embed an entire powershell engine in it which then calls out to a seperate program to run inside the engine which then manipulates the memory in his app?

Errr, no.

Look here: http://msdn.microsoft.com/en-us/library/windows/desktop/ee706610(v=vs.85).aspx

It is more like:

1. A developer creates cmdlets which constitute the management of his server application, e.g. setting permissions for users, configuring rules, setting failover policies etc. Now he can manage his application through the CLI and scripts.

2. Then the developer wants to create a GUI for easy, guided management. He starts to create a management GUI application and realizes that he already has the functionality; a GUI merely adds concepts such as lists with current/selected items etc. The developer creates the list views by calling his Get-UserPermission, Get-Rule and Get-FailoverPolicy cmdlets. Those cmdlets return objects which have methods and properties. Properties become list columns and methods are candidates for actions, as are his cmdlets such as New-UserPermission, New-Rule etc. The GUI application data-binds to the properties and methods. So the management GUI application ends up as a simple shell around the cmdlets.

As a bonus(!) the GUI application can even show the user the command each action corresponds to. If the user wants to, he can copy-paste the command and later execute it from the command line. This is what Exchange does.

Had the cmdlets been traditional "external commands" of a shell, that would be a *really* bad idea, because external commands run in their own process and have very narrow input/output channels where you have to serialize all input/output to/from text. But these are cmdlets, and you can pass the very objects that were used to data-bind the lists. Cmdlets can manipulate the objects and/or produce new objects as results. Objects can have events, so when the objects are changed by the cmdlets, the changes raise notification events which the GUI app listens to and uses to update/synchronize the lists.

The cmdlets are the Command Object of the common Command design pattern.
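
A minimal sketch of such a cmdlet, written as a PowerShell advanced function (Get-UserPermission and its properties are the hypothetical names from the example above; a vendor would typically ship a compiled cmdlet class with the same shape):

    function Get-UserPermission {
        [CmdletBinding()]
        param(
            [Parameter(Mandatory)]
            [string]$UserName
        )
        # Emit a structured object: a GUI host can data-bind list columns
        # directly to these properties; a CLI user sees them as output.
        [pscustomobject]@{
            UserName   = $UserName
            Permission = 'Read'
            Resource   = '\\server\share'
        }
    }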

I'm sorry , but could you explain how exactly thats better than just loading a a 3rd party dll and calling someFunction() other than allowing crap developers to do things they otherwise wouldn't know how to do?

Well, loading a 3rd party DLL is exactly what is going on. The PowerShell engine is itself a DLL - not an entire process. You simply load System.Management.Automation.dll in your application. When you initialize the engine you tell it which modules it should load from the start, so you tell it to load your management module (a DLL) with your cmdlets. Cmdlets are really .NET classes where the parameters are public properties.
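
A minimal sketch of that initialization, written here in PowerShell for brevity (a C# host uses exactly the same types from System.Management.Automation.dll; the module and cmdlet names are hypothetical):

    $iss = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()
    $iss.ImportPSModule('MyManagement')       # preload the vendor's cmdlet module
    $ps = [PowerShell]::Create($iss)          # an engine instance, in-process
    $null = $ps.AddCommand('Get-UserPermission').AddParameter('UserName', 'alice')
    $results = $ps.Invoke()                   # live .NET objects, not serialized text
    $ps.Dispose()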

(In the same way some unix "coders" just use system() all over the place because the Posix API is apparently too complicated for them to grok)

No - not the same way. system() calls have an incredibly narrow communication channel and your host application cannot easily share objects with the processes it spawns this way. That makes the approach brittle and error prone. Still, because of the limited security model in *nix, where some syscalls can only be invoked as root, developers are sometimes *forced* to do this. Sometimes it is a choice between using system() to run a SUID utility or demanding that your app run with *root* privileges. The PowerShell cmdlets run in-process and the host application can simply share .NET objects with the cmdlets. If the host application subscribes to events from objects it passes to cmdlets, it is notified about changes and can update active views accordingly.

2
0
Uffe Seerup
Holmes

Re: shells, configs, editors etc

There is a few things that a non-MS person might notice here.

Indeed

Why not having one or a few editable config files to accomplish all described tasks plus a million of other things? No, I am not talking about the abominable Windows registry or XML gibberish. It's a common practice for the *nix systems to have a human readable/editable file, or a directory residing in /etc/ or else.

Um. Why not use text files for everything? Because text files do not scale. Take a look at what Trevor was doing:

1: Joined a domain. A domain join is actually a complicated transaction in which trusts are established: domain accounts become trusted users of the computer. Would you establish such trusts by editing text files on every single computer? And what about the relationship part of the equation: how would you query an organization for its computers (which may be offline)? By editing yet another text file listing all the computers? Who would keep that in sync? Trust relationships cannot be established by editing a single text file, but they *can* be established through a single command - which is what Trevor did. Would you prefer that an organization keep a text file of all authorized computers? Is this amateur hour? Text files do not scale as a configuration store.

2: Trevor used commands to grant privileges. Would you prefer an enterprise use old-style passwd files to maintain their directory of users, accounts and privileges? Really? Or do you recognize that, well, text files are not really suitable for storing credentials and privileges?

3: Trevor used cmdlets to discover network adapters and their configuration. On Linux you'd use ifconfig or ip. Or would you rather edit a text file?

4: Trevor then installed and activated failover clustering. I can see how the "installation" conceivably *could* be done by editing some text file. However, you *will* still need to activate it somehow. A command, perhaps?

5: New VMs can be created and started using cmdlets. Would you configure a new VM by editing a text file? How about space allocation?
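
For reference, this is roughly what those steps look like as commands (the cmdlets are real on Server 2012 and later; the names and values are of course just examples):

    Add-Computer -DomainName corp.example.com -Restart    # 1: join a domain
    Get-NetAdapter | Get-NetIPAddress                     # 3: discover adapters and their configuration
    Install-WindowsFeature -Name Failover-Clustering      # 4: install the clustering feature
    New-VM -Name web01 -MemoryStartupBytes 2GB -NewVHDPath D:\VMs\web01.vhdx -NewVHDSizeBytes 40GB
    Start-VM -Name web01                                  # 5: create and start a VM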

For this purpose, MS would need to come up with not only alternatives to a *nix shells, but also with a decent editor, like vi(m) or GNU Emacs. Yes, it also remains to teach, convince your users that it is a good thing to use, a mere trifle ... not!

Abstracting configuration so it is accessed through an interface has numerous advantages. For instance, the configuration store can change and evolve while the commands maintain backwards compatibility. By NOT blindly copying the *nix text-file obsession it has become easier to create robust scripting for Windows.

Another advantage (also illustrated by Trevor here) is that the command approach allows computers to be configured through scripting from a remote controller. Yes, you *could* create scripts and send them across SSH. But with remote commands you can simply issue the command remotely and not worry about which version of the text configuration file format the script has to work with on that particular system.
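
For example (a sketch; nodes.txt is a hypothetical list of server names):

    $nodes = Get-Content .\nodes.txt
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        Get-WindowsFeature -Name Failover-Clustering      # runs on all nodes, fanned out in parallel
    }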

Otherwise, the main idea of pretty much every article dedicated to PS is "See, you can do it with PS as well, without any GUI really, Yaaaay!!!"

Well, yes. Exactly. And because you can do it with PS you can do it with a GUI as well. You see, the GUI can host the PS engine. Look at how the Server Manager in Server 2012 can be used to administer multiple remote systems, issuing commands to execute simultaneously and robustly across multiple nodes.

4
3
Uffe Seerup
Thumb Up

Often overlooked about PowerShell

Is how it is so much more than just a shell. Unlike e.g. bash and other general CLI shells, PS has been designed with a hostable engine. An admin application can easily host a PS engine and let the cmdlets operate directly on the application's in-memory objects. With a traditional text-based shell that is not possible, because the "objects" manipulated by the shell are not sharable in memory and the shell runs in its own process, isolated from other running apps.

What this means is that vendors of servers/devices that need to expose a management interface can implement the manageability *once*, as PowerShell cmdlets. *Then* they can create a GUI admin module or admin website which merely uses the cmdlets through a hosted PowerShell engine. That is what Exchange does. That is what the new Server Manager in Server 2012 does.

No need to implement manageability twice, once for CLI and once for GUI. Simply implement cmdlets and build the GUI on top of the cmdlets. You get scriptability, a CLI and a GUI. The fact that a vendor can follow this pattern and get two management interfaces means that we will continue to have both options: a concise and scriptable CLI as well as guided and exploratory GUI interfaces.

4
3

Microsoft raises 'state of the art' son of NTFS

Uffe Seerup
Mushroom

Restart Manager

The purpose of Restart Manager is to allow for transactional changes to open or closed files. Despite its name it is not primarily about restarting the system. Rather, if a process holds an exe or dll open (because it is running as a service or application), RM can determine which processes to restart. Processes voluntarily register with RM, and they can let RM preserve state (open documents, changes, cursor/scroll positions etc). RM can restart the app/service and bring it back into the same state. This beats just replacing files, which can easily leave a process in an unknown state (started with version 1.2 and suddenly the libraries it loads dynamically are version 2.0).

RM is the reason system restarts are rare on Windows nowadays.

It is also the reason why the "restart badge" *sometimes* mysteriously disappears from the start button. That happens when RM has determined that files scheduled for replacement are being held open by processes which have *not* enlisted with RM (and thus RM must assume it cannot just restart those processes without risk of losing state). RM actually monitors the open files, and if they are suddenly closed (because you closed the app) it *will* replace the files en bloc and remove the restart badge.

Ever wondered how Windows 7 can start Chrome, Word etc., reopen the same pages and scroll to the position from right before the system was shut down (or lost power)? That's RM working with well-behaved apps.

0
0

Grow up, Google: You're threatening IT growth

Uffe Seerup
FAIL

Google? Invented AJAX?

Actually, AJAX was invented by Microsoft (though the term "AJAX" was only coined years later); more specifically, by the Outlook team.

The Outlook team were the ones who came up with XMLHttpRequest, which became the kingpin of AJAX. And they did it for exactly that reason: to make Outlook Web Access more fluid and to allow JavaScript to update the DOM asynchronously without re-requesting the entire page.

http://en.wikipedia.org/wiki/XMLHttpRequest#History_and_support

Google is good at copying, though.

3
1

Ellison wrestles Google to strangle 'unofficial' Java

Uffe Seerup
Alert

Yes, .NET specs are open. Java's not so much

"[.NET is open] as long as you get it from Novel since only then you're covered by a license to the patents."

Wrong. You are mixing things up here. The core .NET (C# and the Common Language Infrastructure) has been ISO and ECMA standardized. They are *also* covered by Microsoft's "Community Promise", which has legal ("estoppel") value: Microsoft has forfeited any right to sue any implementation for infringement of patents necessary to implement the covered standards and specifications.

So right there, if Google had chosen C#/CLR they would not have been in this position. The Community Promise and the RAND requirements of ISO/ECMA cover this use case perfectly.

Mono goes beyond the core C#/CLI and implements a number of APIs developed by Microsoft for the full .NET. On top of the Community Promise, Microsoft has granted the Mono project the right to distribute without risk of patent infringement. *This* is the pledge which "only" covers Novell customers, i.e. anyone who downloads Mono/Moonlight from Novell. This is not to say that Mono *actually* infringes any MS patents - just that they or their customers will *never* be sued for something like that.

Google became vulnerable to patent litigation from Oracle because the patent grant of OpenJDK only covers *full* implementations (no supersets/subsets). Google chose to implement a VM and only support a number of core classes, presumably to wiggle around licensing Java ME/MIDs (or requiring device vendors to license it).

Sun *never* relinquished control of Java. They only open-sourced the *implementation* of Java SE. The spec was always controlled fully by Sun (and now Oracle), even though they appeared to take advice from the community through the JCP. Had Sun allowed Java to be standardized through ISO or ECMA, this would never have happened.

2
1
