24 posts • joined 10 Dec 2007
Re: .Net of course
Mono. Mono. Mono!
It is not intended to be a production server. It makes perfect sense to implement a PoC using a language with high productivity (e.g. LINQ and, probably more relevant in this case, async/await asynchronous methods) combined with good performance.
Async/await makes creating asynchronous methods (much) easier while still having the methods resemble the logical flow of the application.
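The same async/await pattern exists in Python, so here is a minimal sketch of the point (Python standing in for C#; the `fetch_user` function is purely illustrative): the method reads like ordinary sequential logic even though each `await` suspends instead of blocking a thread.

```python
import asyncio

# Hypothetical async fetch: the code resembles the logical flow of
# the application, yet each 'await' yields control to the event loop
# instead of blocking a thread.
async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.01)          # stands in for a network call
    return {"id": user_id, "name": "alice"}

async def main() -> str:
    user = await fetch_user(42)        # reads like a sequential call
    return user["name"]

print(asyncio.run(main()))             # prints: alice
```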
Re: C# runs fully compiled
But it still runs within the .NET VM and is subject to its checks and workings.
There is no such thing as a .NET VM.
There is the common language runtime (CLR) which (as the name hints) is more of a library plus initialization code. There is no "virtual machine" that interprets bytecode.
You can host the CLR in your own process (.NET is actually a COM "server"), although the compilation is performed by a JIT compilation service.
When you ask the CLR to run a .NET executable, you ask it to load the assembly, use reflection to find the entry method and execute that method. At that point the CLR compiles the method from the assembly's MSIL (or takes it from the cache if it has already been compiled) and invokes the compiled code. Calls to methods that have not yet been compiled initially go through compilation stubs: the stub compiles the target method and replaces the reference to the stub with a pointer to the compiled code. Subsequent invocations thus directly invoke the already compiled method.
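The stub-replacement mechanism described above can be sketched in a few lines (Python standing in for the CLR's native machinery; `jit_compile` and `MethodSlot` are illustrative names, not real CLR APIs): the first call goes through a stub that compiles the method and rebinds the slot, so later calls pay no compilation cost.

```python
# Sketch of lazy per-method compilation with stub replacement.
# Not the real CLR -- just the shape of the mechanism.
compile_count = 0

def jit_compile(source):
    global compile_count
    compile_count += 1
    return lambda x: source(x)         # pretend this emits native code

class MethodSlot:
    def __init__(self, source):
        def stub(x):
            # First call: compile, rebind the slot, then run.
            self.target = jit_compile(source)
            return self.target(x)
        self.target = stub
    def __call__(self, x):
        return self.target(x)          # later calls hit compiled code

square = MethodSlot(lambda x: x * x)
print(square(3), square(4), compile_count)   # prints: 9 16 1
```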
.NET code (at least on the Windows platforms) executes fully compiled.
Sure, part of the code is turned into native opcodes, still it is different from a fully compiled native application which runs fully on the processor directly.
No, all of the code is turned into native instructions. All of it. It may not happen until just before the method is executed, but all of the code that is eventually executed is compiled code.
The difference from fully compiled native code is that the compiled code is obtained dynamically from a compiler (or cache) on the target platform, i.e. applications are distributed as MSIL and depend on the MSIL compiler service being available on the target computer.
There is even a tool (ngen.exe) that will pre-populate the cache with *all* of your application code compiled into native code, alleviating the need for JIT compilation.
You may want to read this: http://msdn.microsoft.com/en-us/library/9x0wh2z3(v=vs.90).aspx
C# runs fully compiled
That MSIL somehow runs interpreted is a common misunderstanding. When a C# program executes it executes fully compiled.
C# is compiled *ahead* of execution, in principle on a method-by-method basis (in reality multiple classes are compiled at once). When a method has been compiled *once*, subsequent invocations simply run the compiled method.
MSIL was never intended to be an interpreted language. From the start it was designed to be compiled to the final representation on the target architecture. Because it is type-safe and statically typed, it also does not need the typical runtime type checks that dynamic languages suffer from.
sudo is not a security model
sudo is a kludge, developed because the underlying model lacks proper privilege delegation. It is not part of a "model" - indeed the sudoers file exists in parallel with, and competes with, the real (but inadequate) file system permissions.
sudo breaks one of the most important security principles: the principle of least privilege. sudo is a SUID root utility and will run *as root* with *unlimited* access.
Some Linux distros now use Linux Capabilities (although these have not been standardized). Had capabilities existed when Unix was created, we never would have had the abomination that is sudo.
Many vulnerabilities in utilities that must be started with sudo have led to system compromises *because* of the violation of least privilege. Sendmail allows you to send a mail. But it requires you to run it as root. So you run it with sudo, allowing users to sudo sendmail. But a simple integer underflow (like this one: http://www.securiteam.com/exploits/6F00R006AQ.html) can then lead to total system compromise!
The security problems with sudo and other SUID root utilities are well-known, so please do not try to pass it off as a superior "model". It was always, and remains, a kludge used to drill holes in the too simplistic, file-system oriented security model of the 1970s.
How is a security auditor supposed to audit the capabilities of users? Once a user is allowed to execute binaries with root privileges through sudo or other SUID root tools, the security auditor has no way of knowing what can be done through those utilities, short of overseeing the process by which they were compiled and distributed. The operating system cannot guarantee that the file system permissions restrict the users, as they can be bypassed by sudo/SUIDs. Compare that to operating systems with security models where the permissions are actually guaranteed to restrict the account.
SELinux has a security model. Sudo is not a security model; it is a drill that destroys security models.
Re: @Uffe Seerup
That is just the default behavior!!! BTW, it has nothing to do with the setuid root of sudo!
It has EVERYTHING to do with setuid root of sudo! It is the very way setuid works! Sudo may be instructed to *drop* to another user (-u option), but it starts as root (because the owner is root) and that is the default.
You may want to read this instructive article:
At the end you will find this "The general rule of thumb for setuid and setgid should always be, “Don’t Do It.” It is only in rare cases that it is a good idea to use either of these file permissions, especially when many programs might have surprising capabilities that, combined with setuid or setgid permissions, could result in shockingly bad security".
Let me repeat. The operating system sees that the executable has the setuid flag. It then launches the process with the owner as the effective user. If the owner is root - then the process is running as root. A root process can drop to another user (seteuid http://man7.org/linux/man-pages/man2/setegid.2.html). Sudo will do that prior to executing the specified tool if the -u option is specified.
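The point that the setuid bit is just a mode bit on the executable file can be demonstrated directly (a Python sketch on a throwaway temp file; for a root-owned file like /usr/bin/sudo the same bit is what makes the kernel launch the process with euid 0):

```python
import os, stat, tempfile

# The setuid bit is an ordinary mode bit on the file. At exec() time
# the kernel makes the file's *owner* the process's effective user --
# which, for a root-owned binary like /usr/bin/sudo, means euid 0.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRWXU | stat.S_ISUID)   # the 'chmod u+s' that sudo carries

has_setuid = bool(os.stat(path).st_mode & stat.S_ISUID)
print(has_setuid)                             # prints: True
os.remove(path)
```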
Don't believe me? You may believe the Linux man page for the sudo command: "Because of this, care must be taken when giving users access to commands via sudo to verify that the command does not inadvertently give the user an effective root shell."
Okay, then tell us how to effectively exploit this "vulnerability"
See above sudo man page security note.
or read this caveat from the same sudo man page:
"There is no easy way to prevent a user from gaining a root shell if that user has access to commands allowing shell escapes.
If users have sudo ALL
there is nothing to prevent them from creating their own program that gives them a root shell regardless of any '!' elements in the user specification.
Running shell scripts via sudo can expose the same kernel bugs that make setuid shell scripts unsafe on some operating systems (if your OS supports the /dev/fd/ directory, setuid shell scripts are generally safe)."
There have been many, many vulnerabilities in setuid programs. In general, a vulnerability in a setuid root process means assured system compromise because the attacker will be running as root. Ping is a setuid root utility. Here is an example of a vulnerability: http://www.halfdog.net/Security/2011/Ping6BufferOverflow/
A user who has execute access to ping with this vuln has root access and can compromise a system.
If on your system you have two regular users and know passwords for each one you can jump from one user to the other via "su anotheruser". Notice, you never become a root here.
It may look that way, but technically you are incorrect. su works by running as root and *then* dropping euid to the other user. Only root can do that because changing euid is a privileged operation. In this case the time spent as root is very short, so I can see how you may be confused.
Another setuid/SUID vulnerability: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-3485
However, this is beside the point. Sudo exists because you need it. There is no delegation of privileges in standard *nix or in POSIX. You cannot delegate the right to change system time. Only root can do that. So to change system time you run "sudo date -s ...". Sudo is setuid root and starts as root. By default it runs the tool you specify as root as well, so the "date -s ..." succeeds and sets the time. Had you specified "sudo -u someuser date -s ..." (drop to that user) you would not have succeeded in setting the time. Why? Because only root can change system time.
Windows does not need an equivalent to sudo (and no it is not "runas"), because 1) it is a liability and 2) privileges in Windows can be delegated.
Re: @Uffe Seerup
You keep repeating this, but this is wrong! There is no way you ever get uid=0 automatically by just running sudo.
(sigh). Try typing "sudo whoami" or "sudo id". So, what did it answer? On my system it says "root"!
So who am I when I run a command through sudo? Answer: root.
More proof: "sudo bash" and in the new shell do a "ps -aux|grep bash". What processes are listed? Which user do they indicate they run as? On my system it says that "sudo bash" and one "bash" process run as root.
When will you get this? sudo is a setuid/SUID tool. A setuid tool runs with the owner of the file as the effective user of the process. You may restrict *who* can call sudo in the sudoers file, but you *cannot* change the fact that sudo starts as root.
For some reason you think that the "setuid root bit" is equated to "becoming root"
Yes! yes! If you have execute permission to the file then you are becoming root when you execute it. Not "equated" to becoming root. You become root. Plain and simple. This is basic Unix/Linux stuff and I shouldn't have to explain this to a Linux advocate. Something is wrong with this picture...
Q:Tell me, how do I generate a report of the rights/privileges of a certain user?
A: Depends how much info you want. I'd suggest running id command :
Proving my point. The id tool does *not* generate a report of what a user can do when file permissions can be bypassed by sudo! The owner of a resource has no way of determining who has access to his resource when setuid root tools may simply bypass the permissions. To correctly assess which users could access your resource you would have to figure out which setuid tools exist on the system, who has execute rights on the setuid root tools, which tools are allowed to execute through sudo and which users have access to those tools through sudo, and finally (and crucially!) what each of these tools can do. Some tools, like "find", can actually execute commands(!) and other tools (editors) allow you to start new shells/processes. Such tools are very dangerous when allowed through sudo.
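The first step of such an audit, enumerating the setuid binaries on a system, is typically done with `find / -perm -4000`. A sketch of the same walk in Python (demonstrated on a throwaway directory rather than the real filesystem):

```python
import os, stat, tempfile

def find_setuid(root):
    """Walk a tree and report files carrying the setuid bit --
    roughly what 'find root -perm -4000' does."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_mode & stat.S_ISUID:
                    hits.append(path)
            except OSError:
                pass                      # unreadable entries are skipped
    return hits

# Demo on a temp directory: one plain file, one with the setuid bit.
d = tempfile.mkdtemp()
plain = os.path.join(d, "plain"); open(plain, "w").close()
suid  = os.path.join(d, "suid");  open(suid, "w").close()
os.chmod(suid, stat.S_IRWXU | stat.S_ISUID)
print(find_setuid(d))                     # only the 'suid' file shows up
```

Note that this only tells the auditor *which* binaries run as root, not *what* they can be made to do - which is exactly the audit gap described above.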
Re: @Uffe Seerup
As a matter of fact, mechanism or utility like sudo is absent on MS Windows. Runas is more of an su.
That is correct. So how does Windows do it? It has proper delegatable rights and privileges:
One of the most important security principles is called Principle of least privilege.
SUID/setuid tools like sudo are a violation of that principle. The utilities run with sudo will run with root as the effective user and thus have privileges far beyond what is needed for the specific operation.
In Windows, if a user or a service needs to be able to back up the entire system, you can delegate that specific privilege to that user or to the service account. That does not mean that the user gains system-wide root privileges, not even briefly.
In Windows, the administrator and the administrators group have privileges not because they are hardwired into the system, but because the privileges have been granted to the administrators group. Those privileges can be removed and/or delegated to other users/groups. There is no account which inherently bypasses all security checks and auditing. This adheres to the Principle of least privilege.
So it will always ask you to authenticate yourself for a user you're trying to be, not for the user you currently are.
Yes, if you use runas. But runas is not an attempt to create a sudo. Sudo is not needed on Windows. Instead you assign privileges directly to the users that need them. Security bracketing is achieved through the UAC prompt: if you have actually been granted powerful privileges, Windows strips those privileges from your token when you log on. When you go through the UAC prompt you are *not* becoming administrator, as many - especially those with a *nix background - seem to believe. Instead the original privileges are simply restored to the token for the duration of the specific process. So instead of assigning God privileges to the user, UAC merely re-enables the privileges already granted to the user.
Take a look at how simple yet sophisticated a permission system could be made with sudo.
That is anything but simple - especially from a security administrator's viewpoint. How is the SA supposed to audit the privileges of a user when he has to consider text files on each and every system and multiple concurrent, competing security mechanisms (file permissions, permissions of pseudo files, AppArmor or SELinux security descriptors and then sudoers), any of which may bypass the others?
Tell me, how do I generate a report of the rights/privileges of a certain user?
Re: @Uffe Seerup
First sudo can be executed "as another user" by an allowed user, not necessarily a root
You are the one who is confused. sudo runs as root but can be invoked by an allowed user. Allowed users are listed in sudoers, but during execution sudo is definitively running with root as the effective user (euid==0). Really!
sudo and su when used with a username option ("sudo -u anotheruser") don't get uid=0 if that user is not root.
Correct. But "anotheruser" cannot invoke privileged syscalls. Only root can. To invoke privileged syscalls you need to have root as the effective user. And when doing so you receive privileges far beyond what is needed for any single operation. Standard users cannot change the time on a *nix system.
For instance, look at this BSD function: http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/sys/settimeofday.c?rev=1.3
Clearly requires euid==0 to succeed when setting time. No other user can do it. You *need* to be root (as effective user) to be able to make that syscall.
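The all-or-nothing shape of that check can be modeled in a few lines (a sketch, not the kernel code; `settimeofday_check` is an illustrative name): the syscall boundary knows only "euid 0 or not", so there is no narrower "may set the clock" right to hand out.

```python
import errno

def settimeofday_check(euid):
    """Model of the traditional Unix privilege check guarding
    settimeofday: effective UID 0 passes, everyone else gets EPERM.
    There is no finer-grained right that could be delegated."""
    if euid != 0:
        return -errno.EPERM       # non-root callers are rejected outright
    return 0                      # root passes -- with *all* privileges

print(settimeofday_check(0), settimeofday_check(1000))   # prints: 0 -1
```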
To be fair, Linux has introduced Linux capabilities: http://linux.die.net/man/7/capabilities
As you can read from that page:
For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero).
But also note:
15) Where is the standard on which the Linux capabilities are based?
There used to be a POSIX draft called POSIX.6 and later POSIX 1003.1e.
However after the committee had spent over 10 years, POSIX decided
that enough is enough and dropped the draft. There will therefore not
be a POSIX standard covering security anytime soon.
There is no standard *nix way of describing capabilities. While some Linux distros certainly support SUID with capabilities instead of simply root, this is by no means universal, and in general other *nix implementations (including a large number of Linux distros) still use SUID root on a number of utilities.
Take a look here: http://aplawrence.com/Basics/sudo.html:
The sudo program itself is a setuid binary. If you examine its permissions, you will see:
---s--x--x 1 root root 81644 Jan 14 15:36 /usr/bin/sudo
So sudo *will* run as root. Every time. You may be able to instruct it to drop to another user before invoking a utility/tool. But sudo is most often invoked exactly because you want to invoke privileged syscalls. So dropping to another standard user rather defeats the purpose.
No, I can't use the old ones because they aren't ubiquitous. The Text editor *NEEDS* to be part of the CLI and installed on BLOODY EVERYTHING with PowerShell on it.
I am curious. If a server is running headless, why would you need even a text editor? Why not use PowerShell's remoting capabilities, e.g. from PowerShell ISE?
Invoke-Command (alias icm) can execute script files remotely, even when the script file resides locally on the controlling host. It can execute single script blocks or script files as part of the same session. The script executing remotely can even refer to local variables (something that bash cannot do, and a reason you might otherwise "jump" (SSH) to the remote machine).
Re: @Uffe Seerup
I think you're confusing something here. Su and sudo do delegate the privileges if allowed.
I'm not confused. Sudo is itself a setuid/SUID utility. The tools that are executed with sudo are executed with effective user 0 (root). They can do *anything* they want.
Su and sudo do not delegate privileges at all. They execute with root as effective user, which is *not* delegation. During the execution you are *root* and any vulnerability in the tool can allow an attacker root access *even* if you restricted his access through sudoers.
The protection you have with sudo is that it checks the sudoers list first to see if you are allowed to execute the utility through sudo. The privileged syscalls still only check whether you are root.
If you want to execute a privileged syscall from your own app/daemon you will have to become root *or* make a system() call to sudo, if you try to manage privileges that way.
Re: Often overlooked about PowerShell
Well, it could in theory, by passing a shared memory key on the command line, but it's hardly ever done because if you need such close coupling you'd use a library, not a separate program.
The problem is that the shared memory key does not tell the receiver much about the object being passed. There is no ubiquitous object model available in *nix. A type-safe object model with runtime type information - like .NET or Java (or even COM) - makes passing objects orders of magnitude easier.
Lacking a common object model, the sender and receiver need to have very specific knowledge about the objects being exchanged.
That is why PowerShell is a very good fit for Windows (and not for *nix) and why a flat-stream shell like bash is a better fit on *nix than it is on Windows. PowerShell can leverage the objects already exposed through the Windows API (.NET, WMI, COM).
And you think thats a BAD thing?? The MS kool aid has certainly done its job on you!
Nah. I just think that being able to delegate the actual privileges my app needs would be awesome.
Why would my service have to become an all-powerful God just to configure a printer? In *nix you don't delegate privileges. The syscalls are not protected by a granular security model; only root can invoke certain privileged syscalls. Anyone who wants to call a privileged syscall has to become all-powerful root and gain privileges far beyond what is needed. Because that is an unmanageable security risk, SUID (setuid) tools are created and access to *those* are restricted. Internally they run as root but only a limited set of users are allowed to invoke the tools.
That design has (at least) 2 problems: 1. It requires you to invoke the functionality through system() calls because the functionality is exposed exclusively through tools (unless you are root). 2. The tool is *still* running in God mode. A single vulnerability in such a utility can allow an attacker unlimited access to the system. Thankfully a number of Linux distros have started to use more fine-grained Linux capabilities. Which are uncannily similar to privileges in Windows.
Oh excellent. So something someone else has written has admin priviledges running inside your process. I can see that ending well. Not.
I was not talking about code "someone else" had written. I was talking about a developer first creating a scriptable management interface (in a library) and then leveraging that library to *also* create a management GUI.
Under Windows you do not run your own services/websites with admin privileges. If the service needs some admin capabilities one can delegate those privileges to the service account. Just like you generally try to avoid running daemons as root on *nix.
Re: Often overlooked about PowerShell
So let me get this straight - a developer wants to use some external functionality so he'll embed an entire powershell engine in it which then calls out to a separate program to run inside the engine which then manipulates the memory in his app?
Look here: http://msdn.microsoft.com/en-us/library/windows/desktop/ee706610(v=vs.85).aspx
It is more like:
1. A developer creates cmdlets which constitutes management of his server application like e.g. setting permissions for users, configuring rules, setting failover policies etc. Now he can manage his application through CLI and scripts.
2. Then the developer wants to create a GUI for easy, guided management. He starts to create a management GUI application. He realizes that he already has the functionality; only, in a GUI you often use concepts such as lists with current/selected items etc. The developer creates the list views by calling his Get-UserPermission, Get-Rule, Get-FailoverPolicy cmdlets. Those cmdlets return objects which have methods and properties. Properties become list columns and methods are candidates for actions, as are his cmdlets such as New-UserPermission, New-Rule etc. The GUI application data-binds to the properties and methods. So the management GUI application ends up as a simple shell around the cmdlets.
As a bonus(!) the GUI application can even tell the user what the actions look like when executed. If the user wants to he can copy-paste the command and later execute it from the command line. This is what Exchange does.
Had the cmdlets been traditional "external commands" of a shell, that would be a *really* bad idea, because external commands run in their own process and have very narrow input/output channels where you have to serialize all input/output to/from text. But these are cmdlets, and you can pass the very objects that were used to data-bind the lists. Cmdlets can manipulate the objects and/or produce new objects as a result. Objects can have events, so when the objects are changed by the cmdlets, the changes raise notification events which the GUI app listens to and uses to update/synchronize the lists.
The cmdlets are the Command Object of the common Command design pattern.
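A sketch of that Command pattern (Python standing in for .NET cmdlets; `Rule` and `GetRuleCommand` are illustrative names, not a real API): one command object serves both a CLI and a GUI, and because it returns live objects rather than text, a GUI edit mutates the very object the command handed out.

```python
# One command object, two front-ends. The command returns live
# objects, not flattened text -- the core of the cmdlet argument.
class Rule:
    def __init__(self, name, enabled):
        self.name, self.enabled = name, enabled

class GetRuleCommand:
    def __init__(self, store):
        self.store = store
    def invoke(self):
        return list(self.store)            # objects, not serialized text

store = [Rule("allow-http", True), Rule("deny-telnet", False)]
cmd = GetRuleCommand(store)

# CLI front-end: formats the returned objects as console text.
cli_lines = [f"{r.name}\t{r.enabled}" for r in cmd.invoke()]

# GUI front-end: binds the *same* objects to a list view; ticking a
# checkbox mutates the shared object directly.
gui_rows = cmd.invoke()
gui_rows[1].enabled = True

print(cli_lines[0], store[1].enabled)      # prints: allow-http	True True
```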
I'm sorry, but could you explain how exactly that's better than just loading a 3rd party dll and calling someFunction(), other than allowing crap developers to do things they otherwise wouldn't know how to do?
Well, loading a 3rd party DLL is exactly what is going on. The PowerShell engine is itself a DLL - not an entire process. You simply load System.Management.Automation.dll in your application. When you initialize the engine you tell it which modules it should load from the start. You tell it to load your management module (a DLL) with your cmdlets. Cmdlets are really .NET classes where the parameters are public properties.
(In the same way some unix "coders" just use system() all over the place because the Posix API is apparently too complicated for them to grok)
No - not in the same way. system() calls have an incredibly narrow communication channel, and your host application cannot easily share objects with the processes it spawns this way. That makes the approach brittle and error prone. Still, because of the limited security model in *nix, where some syscalls can only be made as root, developers are sometimes *forced* to do this. Sometimes it is a choice between using system() to run a SUID utility or demanding that your app has *root* privileges. PowerShell cmdlets run in-process and the host application can simply share .NET objects with the cmdlets. If the host application subscribes to events from objects it passes to cmdlets, it is notified about changes and can update active views accordingly.
Re: shells, configs, editors etc
There are a few things that a non-MS person might notice here.
Why not having one or a few editable config files to accomplish all described tasks plus a million of other things? No, I am not talking about the abominable Windows registry or XML gibberish. It's a common practice for the *nix systems to have a human readable/editable file, or a directory residing in /etc/ or else.
Um. Why not use text files for everything? Because text files do not scale. Take a look at what Trevor was doing:
1: Joined a domain. A domain join is actually a complicated transaction where trusts are established. Domain accounts become trusted users of the computer. Would you establish such trusts by editing text files on every single computer? And what about the relationship part of the equation: how would you query an organization for its computers (which may be offline)? By editing another text file listing all the computers? Who would keep that in sync? Trust relationships cannot be established by editing a single text file, but they *can* be established through a single command. Which is what Trevor did. Would you really prefer that an organization keep a text file of all authorized computers? Is this amateur hour? Text files do not scale as a configuration store.
2: Trevor used commands to grant privileges. Would you prefer an enterprise use old-style passwd files to maintain their directory of users, accounts and privileges? Really? Or do you recognize that, well, text files are not really suitable for storing credentials and privileges?
3: Trevor used cmdlets to discover network adapters and their configuration. On linux you'd use ifconfig or ip. Or would you rather edit a text file?
4: Trevor then installed and activated failover clustering. I could see how the "installation" conceivably *could* be done by editing some text file. However, you *will* need to somehow activate it. A command perhaps?
5: New VMs can be created and started using cmdlets. Would you configure a new VM by editing a text file? How about space allocation?
For this purpose, MS would need to come up with not only alternatives to a *nix shells, but also with a decent editor, like vi(m) or GNU Emacs. Yes, it also remains to teach, convince your users that it is a good thing to use, a mere trifle ... not!
Abstracting configuration so it is accessed through an interface has numerous advantages. For instance, the configuration store can change and evolve while the commands maintain backwards compatibility. By NOT blindly copying the *nix text-file obsession it has become easier to create robust scripting for Windows.
Another advantage (also illustrated by Trevor here) is that the command approach allows computers to be configured through scripting from a remote controller. Yes, you *could* create scripts and send them across SSH. But with remote commands you can just issue the command from remote and not worry about which version of the text configuration file format the script has to work with on that particular system.
Otherwise, the main idea of pretty much every article dedicated to PS is "See, you can do it with PS as well, without any GUI really, Yaaaay!!!"
Well, yes. Exactly. And because you can do it with PS you can do it with a GUI as well. You see, the GUI can host the PS engine. Look at how the Server Manager in Server 2012 can be used to administer multiple remote systems, issuing commands to execute simultaneously and robustly across multiple nodes.
Often overlooked about PowerShell
Is how it is so much more than just a shell. Unlike e.g. bash and other general CLI shells, PS has been designed with a hostable engine. An admin application can easily host a PS engine and let the cmdlets operate directly on the application's in-memory objects. With a traditional text-based shell that is not possible, because the "objects" manipulated by the shell are not sharable in memory and the shell runs in its own process, isolated from other running apps.
What this means is that vendors of servers/devices that need to expose a management interface can implement the manageability *once* as PowerShell cmdlets. *Then* they can create a GUI admin module or admin website which merely uses the cmdlets through a hosted PowerShell engine. That is what Exchange does. That is what the new Server Manager in Server 2012 does.
No need to implement manageability twice, once for CLI and once for GUI. Simply implement cmdlets and build the GUI on top of the cmdlets. You get scriptability, CLI and GUI. The fact that a vendor can follow this pattern and achieve two management interfaces means that we will continue to have both options: a concise and scriptable CLI as well as guided and exploratory GUI interfaces.
The purpose of Restart Manager is to allow for transactional changes to open or closed files. Despite its name it is not primarily about restarting the system. Rather, if a process holds an exe or dll open (because it is running as a service or application), RM can determine which processes to restart. Processes voluntarily register with RM and can let RM preserve state (open documents, changes, cursor/scroll positions etc). RM can restart the app/service and bring it back into the same state. This beats just replacing files, which can easily leave a process in an unknown state (started as version 1.2 and suddenly the libraries it loads dynamically are version 2.0).
RM is the reason system restarts are rare on Windows nowadays.
It is also the reason why *sometimes* the "restart badge" mysteriously disappears from the start button. That happens when RM has determined that files scheduled for replacement are being held open by processes which have *not* enlisted with RM (and thus RM must assume it cannot just restart the processes without risk of losing state). RM actually monitors the open files, and if they are suddenly closed (because you closed the app) it *will* replace the files en bloc and remove the restart badge.
Ever wondered how Windows 7 can start Chrome, Word etc, open the same pages and scroll to the position right before the system was shut down (or lost power)? That's RM working with well-behaved apps.
Google? Invented AJAX?
Actually, AJAX was invented by Microsoft - more specifically the Outlook team (the term "AJAX" itself was only coined years later).
Google is good at copying, though.
Yes, .NET specs are open. Java's not so much
"[.NET is open] as long as you get it from Novel since only then you're covered by a license to the patents."
Wrong. You are mixing things up here. The core .NET (C# and the Common Language Infrastructure) has been ISO and ECMA standardized. They are *also* covered by Microsoft's "Community Promise", which has legal ("estoppel") value. Microsoft has forfeited any right to sue any implementation for infringement of patents necessary to implement the covered standards and specifications.
So right there: had Google chosen C#/CLR they would not have been in this position. The community promise and the RAND requirements of ISO/ECMA cover this use case perfectly.
Mono goes beyond the core C#/CLI and implements a number of APIs developed by Microsoft for the full .NET. On top of the community promise, Microsoft has granted the Mono project the right to distribute without risk of patent infringement. *This* is the pledge which "only" covers Novell customers, i.e. anyone who downloads Mono/Moonlight from Novell. This is not to say that Mono *actually* infringes any MS patents - just that they and their customers will *never* be sued for something like that.
Google became vulnerable to patent litigation from Oracle because the patent grant of OpenJDK only covers *full* implementations (no super/subset). Google chose to implement a VM and only support a number of core classes. Presumably to wiggle around licensing (or require device vendors to license) Java ME/MIDs.
Sun *never* relinquished control of Java. They only open sourced the *implementation* of Java SE. The spec was always controlled fully by Sun (and now Oracle), even though they appeared to take advice from the community through the JCP. Had Sun allowed Java to be standardized through ISO or ECMA, this would never have happened.
Could not have said it any better.
And there is a clue in the fact that this is a meta *http-equiv* tag. You can actually add this site-wide or even server-wide as an HTTP header instead. That would be a single operation for the sites you *know* to be compatible only with IE7 and earlier, rather than a change on each page.
Also, notice that the tag content is "extensible" and can easily allow for other browsers. Going forward we may well see other browsers use this mechanism too, as doctype switching is inadequate.
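For reference, the per-page form looks like this (`IE=EmulateIE7` is the documented IE8 compatibility token):

```html
<!-- Per page: ask IE8+ to render this page with the IE7 engine -->
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
```

Site-wide, the same value can be sent as a response header instead - in Apache (assuming mod_headers is loaded) a single `Header set X-UA-Compatible "IE=EmulateIE7"` directive, or the equivalent custom-header setting in IIS.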
Future standards will be ambiguous in some details and will contain bugs, just as they always have. Yes, standards can contain bugs! Browsers *will* experience incompatibilities because of this. Some browsers *will* need to change their rendering (or ECMAScript engine) as a result of errata and disambiguation efforts. Consider how browsers interpret (and round) relative (percentage) widths.
IE has been lagging far behind the other browsers and thus IE is the browser most in need of this tag. But *every*single*browser* will experience problems like these and may need a tag like this as well if the changes are big enough or as a result of a demand for fidelity by web designers.
Re: Release Time
"Well, do not underestimate Microsoft and Apple. I'm fairly certain that if they REALLY want to, they could release a patch within a couple of days."
I don't know about Apple, but Microsoft cannot do it because it is NOT Microsoft's fault. It was Flash (made by *Adobe*) that was exploited.
And before you jump on the "FF/Ubuntu would protect better" bandwagon, that is NOT the case. In FF, plugins (like Flash) execute in the FF process, which was started by you and has all of your privileges. A Flash vuln on Linux is just as devastating as on Windows.
In fact, if it were not for the stupidity of Adobe - who actively circumvented the extra layer of security of Vista+IE7 - the opposite would have been true. FF+Ubuntu would have been vulnerable, Vista+IE7 would not.
@Olivier "Hacker went for value"
There was a prize of $20,000 on day one, $10,000 on day two and $5,000 on day three, in *addition* to the laptop. And the prizes for the remaining laptops were still on offer: contestants did make attempts at pwning them on both day 2 and day 3, and the contest continued after the MacBook Air was pwned.
According to the hacker who took the Vista machine using a Flash vuln, he could have brought down any of the others using the same vuln, with a few hours' tweaking.
Just some basic facts
IE on Vista by default runs under a low-privilege account. Basically all it can do is to access the web and write to a secluded cache on disk. It cannot read or write files anywhere else, not even from/to the logged on user who launched IE. This is called protected mode.
Now, sometimes users need to download and save files and/or upload files (photos etc.). To this end Vista uses a "broker process" (shown as ieuser.exe in the task manager). This broker process implements a few functions such as file saving and reading. It talks to the plugins, which can request its services but cannot control it. Even if a plugin is vulnerable to an exploit and the entire IE process is pwned, the process is still limited in what it can do by this design.
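The design principle is worth spelling out. Here is a minimal sketch of the broker pattern (hypothetical names and simulated consent; the real ieuser.exe protocol is not public): the untrusted side can only *request* a narrow operation, and all policy lives on the broker's side.

```csharp
using System;

// The low-privilege renderer/plugin sees only this narrow surface.
interface IDownloadBroker
{
    // Returns false unless the (simulated) user approved the save.
    bool SaveWithUserConsent(byte[] content, string suggestedName);
}

class PromptingBroker : IDownloadBroker
{
    readonly bool userApproves;
    public PromptingBroker(bool userApproves) { this.userApproves = userApproves; }

    public bool SaveWithUserConsent(byte[] content, string suggestedName)
    {
        // In the real design a broker-owned dialog picks the path; the
        // untrusted caller never controls where the bytes land.
        if (!userApproves) return false;
        Console.WriteLine($"broker saved {content.Length} bytes as {suggestedName}");
        return true;
    }
}

class PwnedRenderer
{
    static void Main()
    {
        IDownloadBroker broker = new PromptingBroker(userApproves: false);
        // Even fully compromised code can only ask - and here is denied:
        bool ok = broker.SaveWithUserConsent(new byte[] { 1, 2, 3 }, "evil.exe");
        Console.WriteLine(ok ? "saved" : "denied");
    }
}
```

Contrast that with a broker exposing arbitrary file read/write and process launch, as described below: once those operations are on the trusted side of the boundary, the sandbox buys you nothing.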
Linux (Ubuntu) has nothing akin to this. On a typical Linux system Firefox executes under the logged-in user's account. If FF gets pwned, your userspace is owned and the process may delete/change/ftp your files away. I believe the same is the case for OS/X.
The Vista model is clearly more secure than running the browser under your own account.
So how did this pwnage of Vista happen, you ask? Because Adobe in their wisdom decided that the standard broker process did not meet their needs. For some reason (documented in the flash "type library") the broker process can read/write/create/delete files and launch applications! (go figure). Such a broker process effectively circumvents *any* security precautions imposed by the protected mode. So, the *extra* security of IE does not help one iota when plugin developers are this stupid. When you do something like this you'd better A) absolutely limit the functionality implemented by the broker process and B) audit the living daylight out of that inherently risky code. I still cannot fathom why Flash should be able to launch applications.
But the fact remains that the same APIs exist in Flash on *all platforms*. On Vista the implementation simply sits outside the plugin (to break out of the sandbox).
That is why the winner of the Vista machine was confident that he could have used it on Ubuntu or OS/X as well. It was a Flash vuln. Cross platform. He didn't gain admin rights; he just got to execute a process as the logged-on user. All the platforms are vulnerable to this.
But the same API is available.
BTW, the "broker process" on Vista is called "Flash Helper" in the task manager. That's accurate, I suppose. It just leaves out that the ones it is helping are the blackhats.
What LINQ is not and what it is.
LINQ is not SQL.
LINQ *is not* SQL.
LINQ is also not SQL embedded in a language.
Even though LINQ introduces a few keywords with an uncanny likeness to SQL DML, these are but syntactic sugar.
var q = from c in db.Customers where c.City=="London" select c;
is equivalent to (and indeed translated into):
var q = db.Customers.Where(c => c.City=="London");
When working with LINQ I have found that the SQL-like query syntax is often more verbose than the method syntax, as in the above example.
LINQ to SQL is a way to use LINQ for querying and updating a database on a SQL Server.
@Chris: LINQ will not pull in the entire table. This is where the "expression trees" come in. A boolean expression used in a query, like c.City == "London" (C# syntax), is represented as an expression tree. LINQ to SQL inspects the tree and generates the equivalent SQL.
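To make that concrete: a lambda assigned to an `Expression<Func<...>>` is captured as a data structure, not compiled to a delegate, and a provider can walk it. A small self-contained sketch (the `Customer` class is made up for illustration):

```csharp
using System;
using System.Linq.Expressions;

class Customer { public string City = ""; }

class Demo
{
    static void Main()
    {
        // The same source text as a query predicate, but captured as *data*:
        Expression<Func<Customer, bool>> pred = c => c.City == "London";

        // A provider like LINQ to SQL walks this tree and emits e.g.
        //   WHERE [City] = 'London'
        var body = (BinaryExpression)pred.Body;
        Console.WriteLine(body.NodeType);                             // Equal
        Console.WriteLine(((MemberExpression)body.Left).Member.Name); // City
        Console.WriteLine(((ConstantExpression)body.Right).Value);    // London
    }
}
```

Because the provider sees the whole predicate before anything executes, only the matching rows ever cross the wire - nothing is filtered client-side.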
Why is it necessary to muck around with error-prone strings to query a database? It makes it a pain to work with databases. Even simple queries blow up when using multiple parameters.
Why is it necessary to loop through my arrays and lists to locate items which satisfy certain criteria? Why can't I query, join, intersect and project over something as first-class as arrays and collections, when these operations are perfectly well understood on something as foreign as a database?
Why is it necessary to use different techniques for querying relational data, hierarchical data (XML), in-memory collections, directory services, system instrumentation, reflection or web services? Wouldn't it be great if the way to query these data sources were just variants over the same theme? That is what LINQ is.
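The in-memory flavour (LINQ to Objects) needs nothing but System.Linq, and the same operators work on any IEnumerable. A small self-contained example:

```csharp
using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        int[] primes = { 2, 3, 5, 7, 11, 13 };
        int[] odds   = { 1, 3, 5, 7, 9, 11, 13 };

        // Set operations, filtering and projection - no hand-written loops.
        var result = primes
            .Intersect(odds)     // {3, 5, 7, 11, 13}
            .Where(p => p > 4)   // {5, 7, 11, 13}
            .Select(p => p * 2); // {10, 14, 22, 26}

        Console.WriteLine(string.Join(",", result)); // 10,14,22,26
    }
}
```

Swap the arrays for a database table, an XML document or an LDAP directory and the query shape stays the same - only the provider behind it changes.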
Microsoft has supplied LINQ "application" for SQL Server, XML and in-memory objects/lists. But LINQ is open for anyone to build their own "LINQ to LDAP" or "LINQ to comma-delimited-files".
IMO LINQ will have a deep impact on how future programming languages are shaped. To enable LINQ, C# and VB.NET have also acquired constructs from functional and logic programming. Expression trees are well suited to designing smarter, declarative validation and rule engines.
Microsoft has clearly taken the lead here. Don't be surprised to see a push to incorporate something like LINQ into Java, Ruby (which actually has most of the necessary parts, except expression trees) and other popular programming languages.
Some facts wrong, too much focus on databases
While you touch on the subject that LINQ is actually not specifically targeted at databases, what you mostly talk about in this article is actually LINQ to SQL. LINQ to SQL is actually an extension to LINQ which - as you say - addresses the impedance mismatch between OO programs and a specific relational database, SQL Server.
LINQ to SQL *does* have an abstraction (mapping) layer. While it is not as fully fledged as some ORMs, LINQ to SQL actually does allow for several common changes in the table structure (adding/removing/renaming columns, adding tables etc.). The mapping layer will work to absorb changes just like any ORM solution out there.
Your claim that LINQ writes directly to the tables is flat out wrong. LINQ to SQL observes the objects which were retrieved from the database. At the programmer's request it will use the mapping information to update/insert changed/newly registered objects. So, LINQ to SQL uses a retained, mapped approach, not a direct approach.
Contrary to your claims, LINQ to SQL will also work with stored procedures, for both querying and updating/inserting. Again the mapping layer allows the programmer to specify on a per class basis how the objects are retrieved/updated: By SQL DML or by stored procedures.
However, LINQ to SQL is actually "LINQ to SQL Server". Microsoft will not create LINQ to Oracle or to any other database. Microsoft leaves it to the vendors or the communities to implement those. To this end it is worth mentioning that LINQ is not just "embedded SQL", but is built upon a number of other sound language enhancements.
I disagree that LINQ is a way to let programmers forget (or not learn) SQL. LINQ to SQL will generate SQL for you, but for the more complicated queries you should really not rely on generated SQL. Most of all, LINQ to SQL drastically simplifies writing the simple queries, especially when they are required to take parameters.
A very important aspect of LINQ to SQL is also the fact that it finally and conclusively ousts the exploit-prone string manipulation of queries. No more queries in strings. Queries are written using the native boolean expression syntax of your language of choice.
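To illustrate, here is the same predicate shape against LINQ to Objects (a provider like LINQ to SQL turns the captured variable into a proper query parameter; the `Customer` class is made up for the example). User input stays a plain value, so there is nothing to inject:

```csharp
using System;
using System.Linq;

class Customer { public string City = ""; public string Name = ""; }

class Demo
{
    static void Main()
    {
        var customers = new[]
        {
            new Customer { City = "London", Name = "Ada" },
            new Customer { City = "Paris",  Name = "Blaise" },
        };

        // Hostile input is never spliced into a query string - it is just
        // an ordinary value compared against the City property.
        string city = "London'; DROP TABLE Customers;--";
        Console.WriteLine(customers.Count(c => c.City == city)); // 0

        city = "London";
        Console.WriteLine(customers.First(c => c.City == city).Name); // Ada
    }
}
```

Compare that with building `"SELECT * FROM Customers WHERE City = '" + city + "'"` by hand, where the first input above would be a textbook SQL injection.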
My experience with LINQ (not just LINQ to SQL) is that it will drastically change (for the better) how I program any set/list-manipulating code, not just database results. Why should I have to loop through an array to locate items with specific properties? With LINQ I can just formulate the criteria declaratively and be done with it.