So secure that even the Linux.org site got hacked and taken offline?
If even the core gurus can't get it right, methinks you've swallowed an untruth.
You should remove your tin-foil hat - I think you're overheating
"It is high time that Microsoft were actively prevented from using anticompetitive measures, technical or economic, to corrupt the market. Regulators should get off their behinds and unbundle Windows now!"
Microsoft operated from 2002 -> March 2011 under DOJ oversight and in compliance with the 2002 DOJ consent decree. The DOJ has kept a close eye on all of Microsoft's business dealings to make sure that it completely changed the way it did business.
OEMs are entirely free to ship machines running Linux if they want, and some do. Dell offers Linux as an option on all its servers, for example. They used to offer a range of PCs running Linux too but quit that business because NOBODY BOUGHT THEM.
With the razor-thin margins that OEMs operate under, it costs them too much to sell and support PCs preinstalled with free OSes.
Then you should learn to read.
Win8 WILL boot if SB is disabled, but it will be unable to validate that its core system binaries haven't been tampered with.
How else will Win8 be able to install and run on a non-UEFI PC (like the Sony Vaio laptop I am sitting in front of running Win8 dev preview today)?
The end user can/should be able to make a choice as to whether or not they want to disable Secure Boot.
This is an OEM issue and has nothing at all to do with Microsoft.
If someone was considering installing and operating Linux, rebooting their PC, hitting F12 (or similar) and disabling secure boot will be the easiest part of the process.
They'll have FARRRR more technical things to deal with just to get the OS installed and running than something as simple as changing a BIOS setting.
... in a time of economic uncertainty, Wall St. loves execs who cut heads. MS grew from around 30K heads in 2000 to more than 95K heads in 2009 - it was a sound, although cold, plan to cut some dead wood loose.
The space on a CPU die taken up by instruction decoders is dwarfed by the space taken up by caches and other logic.
While many of today's CISC processors do indeed include many instructions, and thus the logic circuitry to decode those instructions, most decent CISC processors actually decode CISC into internal RISC-like operations that the core executes.
Also, don't forget that RISC ISAs are not as efficient as, for example, x86/x64 in terms of code density: an x86 core can retrieve several instructions per memory read, whereas a fixed-width RISC processor fetches fewer instructions per read.
To counter this, ARM developed the Thumb and then Thumb-2 instruction sets, which require fewer bits to encode the supported instructions, but they're still not quite as dense as the x86 ISA.
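To make the density argument concrete, here's a toy Python sketch. The per-instruction byte counts are invented for illustration; real code density depends heavily on the workload and the compiler:

```python
# Invented per-instruction sizes (bytes) for one short code sequence.
ENCODINGS = {
    "x86": [1, 2, 3, 2, 5, 1],      # variable-length; often 1-3 bytes
    "ARM": [4, 4, 4, 4, 4, 4],      # classic ARM: fixed 4-byte instructions
    "Thumb-2": [2, 2, 4, 2, 2, 4],  # mixed 2- and 4-byte encodings
}

def avg_bytes_per_instruction(sizes):
    """Mean encoded size: lower means denser code, so each memory read
    delivers more instructions to the pipeline."""
    return sum(sizes) / len(sizes)

for isa, sizes in sorted(ENCODINGS.items()):
    print(f"{isa:8s} {avg_bytes_per_instruction(sizes):.2f} bytes/instruction")
```

With these made-up numbers the ranking comes out as the post describes: x86 densest, Thumb-2 close behind, classic fixed-width ARM least dense.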
HOWEVER, considering Google's specialist needs, there's little to stop them working with ARM and a fab specialist to create an ARM core that is quite specific to Google's needs, potentially eschewing any instructions it doesn't need whilst adding instructions that specifically aid the kind of processing that its algorithms require, thus producing a processor that offers a net benefit over today's x86 cores.
Then they could pack several cores per chip and several chips per board into dense server racks (with much lower power and thermal envelopes) to form algorithm-crunchers offering FAR higher levels of processing density than they get today.
Whether these processors will be useful outside Google is another matter, however.
... until your closing paragraphs.
"It's classic Microsoft. It's the attitude that led to COM and DCOM instead of using Corba"
WRONG: MS chose not to back CORBA because IBM was bullying its way around the standardization effort and making CORBA overly prescriptive in parts (that didn't need it) and woefully relaxed in others (that did). COM/DCOM won the distributed object wars. CORBA lost. Who was right?
"and that saw Microsoft "tune" Java and distribute a version not compatible with Sun Microsystems' implementation"
Sun won the case with a VERY weak ruling. Sun quickly released JNI so that they could start complaining and push MS into court, because they realized they'd given over too much power to MS, including the right to augment Java with platform-specific enhancements.
Frankly, MS' legal team probably did more damage to the case than MS did. A proficient team would have won the case hands down in MS' favor.
"Now it seems for all the talk of having learned its lessons on standards in IE 8, IE 9 will be a case of moving slowly and selectively where it suits the company and using its own approaches elsewhere."
WRONG - they're weighing up time vs. resources vs. features. There are literally hundreds of "standards" currently boiling away, hoping to become "real" standards in the future. Imagine you run the IE team. You have $100 - where do you spend it:
b) Improve rendering performance
c) Improve composition support & better compliance with CSS
d) Improve security & reliability: process isolation & extension sandboxing
e) Improve the remaining top 3 customer pain points based on actual telemetry/stats and customer feedback
f) Implement support for any number of random other technologies which may never gain widespread use
I'm guessing you'd spend VERY little on f).
What would be the point in building complete support for a version of HTML which hasn't yet been ratified? How many sites are likely to widely use the new HTML flavor before SP1 can be rolled out?
"It's an approach that will continue to leave developers struggling to support multiple browser architectures and invariably defaulting to Microsoft first and everyone else second based on market share, while leaving standards advocates and browser rivals as frustrated and angry as ever."
No two browsers support every standard in the same way. There will ALWAYS be differences that web developers have to code around. Until the W3C does a better job of accurately defining standards and producing *real* compliance test suites this problem will persist ... even if MS closed the IE team down tomorrow.
Now I *NEED* both WTF *and* FAIL icons!!!
Google have well and truly shown their spots and their depth of character with the announcement of ChromeOS.
Not only is it now clear that ChromeOS is nothing more than a way for them to get users' eyes in front of their ads so that they can earn more money, but they've also shown how little stomach they truly have to take on Microsoft and provide a credible alternative to Windows.
For all their bluster and shameless grandstanding, they've revealed themselves to be nothing more than the self-promoting bunch of failmeisters that many of us suspected them of being.
What I want to know is why, when it is eventually released, it will have taken them 2 years to create their own Linux distro?
Most of the great unwashed don't have access to 24x7x365 wireless internet connectivity.
What's the point in carrying with you a computer which is only usable when within range of an internet connection to which you are a subscriber/member?
I think that they should stop whatever they're smoking at Google's campus - the inmates have CLEARLY taken over the asylum now.
Whilst Vista isn't the speed demon it was originally touted to be, it's certainly not the perf whore many are claiming it to be.
If you're seeing major perf issues with Vista, you should:
1) Remove unnecessary start-up apps from your Startup group and via MSCONFIG.
2) Update! Make sure you're fully patched via Windows Update. You ARE running SP1, right?
3) Check your HDDs - major sudden perf degradation often indicates a dying HDD.
4) Check your 3rd party drivers - graphics drivers in particular have improved enormously over the last 2 years.
However, you'll probably enjoy life more if you move to Win7 - it is Windows Redefined.
Paris because she too enjoys a dubious reputation.
MS have already plugged both the UAC and Rundll issues outlined above in RC builds.
Nothing to see here - please move along.
... but Microsoft already has "Compute Shaders" built into DirectX 11:
MS has been working on taking advantage of the enormous latent processing horsepower lying dormant in most people's machines for many years. Now they have something that's appreciably easy to work with (for a subject this complex) and yet VERY performant.
If you know DirectX, you won't find Compute Shaders THAT hard to get to grips with.
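For readers without a DirectX toolchain handy, the programming model can be sketched in plain Python: you write a small kernel and the runtime invokes it once per "thread" across your data. The names below are my own stand-ins; real compute shaders are written in HLSL and dispatched via the D3D11 API, in parallel on the GPU rather than in a loop:

```python
def saxpy_kernel(thread_id, a, x, y, out):
    """Kernel body, run once per 'thread': out[i] = a * x[i] + y[i]."""
    out[thread_id] = a * x[thread_id] + y[thread_id]

def dispatch(kernel, thread_count, *args):
    """Sequential stand-in for a GPU dispatch over thread_count threads."""
    for tid in range(thread_count):
        kernel(tid, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
dispatch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

The mental shift is the same one DirectX programmers make: you describe the per-element work, and the dispatch decides how it gets spread across the hardware.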
(1) built-in recovery and backup (CD, DVD, HDD; it should be able to create a slip-streamed cd/dvd of the current installation)
Already in Vista
(2) force apps to use their own files only - no registry entries by apps; apps install in their own program folder and may make shortcuts on the Start menu or desktop only
Many apps already do this, but many apps abuse the registry. Some apps do need to update the registry to store per-user settings and/or per-machine config. A few apps have to register themselves in the registry in order to allow other software to find them.
(3) no app install can require a reboot
In all reality, VERY few apps require a reboot ... but vendors opt to test less and force a reboot in the installer. Contact those app vendors and give them hell. They're just being lazy.
(4) no windows updates should require a reboot
Sometimes, updates require core OS/Kernel infrastructure to be patched which can only be done after shutting down major OS components. This requires a reboot.
However, the number of circumstances where this is truly necessary are smaller now than before and smaller still in Win7.
(5) all updates to windows create an automatic rollback file
Already in Vista.
(6) rock solid and secure (no BSODs)
BSODs are caused by buggy code running in the kernel. 90% of the time, BSODs are caused by 3rd party drivers. This is why Vista introduced a new video & printer driver architecture that reduced the amount of driver code running in the kernel. Although initial drivers were pretty buggy because the infrastructure was so new, things are A LOT better post SP1.
If you're seeing BSODs, make sure your drivers are up to date.
(7) proper device interfaces that don't change with every release driver updates should not require a reboot
Sometimes, changes to the driver infrastructure are necessary (see above). This happens VERY rarely however.
Some drivers require reboot because they cannot be shut down and restarted without rebooting the kernel.
(8) no drm
Then you don't get to play DVD/BluRay. DRM is required by RIAA etc. You don't like it? Complain to them.
(9) no nagware or registration
You gotta prove you paid for it. However, I agree that this can be made less obtrusive.
(10) apps cannot automatically register for startup (except maybe AV & Firewall
This is necessary for some hardware to work. Better still would be to make it easier for you to find out what apps are running and why.
(11) multi-processor capability and transparent support for amd/intel/?
Has always been a feature of Windows.
(12) there shoudn't be but ONE version available with ALL options loadable
Should or shouldn't? I am guessing the former.
Due to licensing, MS shipped versions sans DVD decoder, saving per-copy license fees and passing those savings on to customers that don't need DVD playback: Vista Home Basic.
(a) can anyone tell me why I can't have file manager functionality?
(b) can anyone tell me why there are still only 15 hardware interrupts available?
Hardware compatibility, mostly. More interrupts would only make your machine more complex, slower and more prone to bugs anyhow.
(c) can anyone tell me why windows can't do what Irfanview does?
Because Windows is an OS. It can't do everything for everybody. In fact, in Win7, MS is opting to move some apps (Movie Maker, Mail) to a separately downloadable Live app suite. Reduces size, bloat, bugs, security threats, etc.
(d) can anyone tell me why I can't really multi-task?
Because you're a little slow? ;) Windows has been able to Multi-task since Windows 2.0.
(e) can anyone tell my why any app can lock a window in the center of your screen?
If you mean a system modal dialog? Actually, they can't! They manage to achieve a similar effect through trickery! ;)
Hosted Exchange is just the start of a whole slew of services that will enable you to choose how much infrastructure you want to host and support vs. how much you want to have hosted for you. This is a VERY big issue for the majority of businesses today who don't have, nor want, the headaches of having to operate the systems to allow them to leverage the power of the internet.
If Microsoft (and any other technology vendor for that matter) didn't carefully explore where it might go tomorrow (TM), then they'd be out of business already.
Azure is (and will increasingly be) an extremely powerful infrastructure that has been carefully designed, taking into account Microsoft's experience of running not only one of the world's largest and most complex IT organizations, but also some of the world's largest online services (Hotmail, Spaces, XBox Live, etc).
For those that complacently sit back and assume that their non-MS OS of choice is inherently safer and more secure than Microsoft's offerings, you should educate yourselves about the facts before spouting off in public.
For January through March of 2008, Mac OS X users experienced the highest number of vulnerabilities as well as the highest number of High severity vulnerabilities, while Windows Vista users experienced the fewest, and the fewest High severity vulnerabilities.
NO operating system / application / software / user is immune from hackers and malware, and to assume otherwise is just plain stupid.
Microsoft, and any other vendor for that matter, should be commended for releasing well-tested fixes for important vulnerabilities. But they DEFINITELY deserve commendation for offering a webcast explaining the vulnerability and the fix - how often do YOUR platform/app vendors do THAT for YOU?
The fact that he gets PAID *that much* or that Oracle's shareholders approved it!
Mine's the coat with nothing in the pockets.
Don't be too quick to love Ada:
Whilst many are annoyed by DRM, it's here to stay. Why? 3 primary reasons:
1) Content publishers demand it. If the DRM was stripped, MS would be open to multiple lawsuits from any number of powerful, well-backed organizations for deliberately releasing an OS that permits people to view/listen-to DRM protected content, but without protecting the DRM rights. Don't like DRM? Then go talk to your senators and overturn RIAA.
2) DRM is actually extremely useful in a corporate setting - it allows you to determine who can read the documents/spreadsheets etc., that you create. If you DRM your Word/Excel/Etc. files, and they happen to fall into the wrong hands, then they can't be cracked and read. If only more apps and more people used DRM to protect content by default, there'd be less hullabaloo each time a civil servant left their laptop on a train and/or had it stolen!
3) DRM is actually used to protect some of your most personal data and settings within the OS itself ... things you most certainly would NOT want someone else to be able to easily obtain.
Regarding the questions about whether Win7 is faster and more stable than Vista ... I think you're going to be pleasantly surprised.
And Alexis Valance: Read the paper again - the start bar image you linked to was a screenshot of the Vista start bar, identifying the regions of the bar discussed in the article. You'll have to wait to see what the Win7 team have cooked up to make its start bar more usable and helpful.
That would be a VERY dangerous assumption.
I have seen more cultural change inside MS in the last year than in the prior decade.
Over the last 12-18 months, so much of the old guard (good and bad) has gone/changed (Bill Gates, Jim Allchin, Brian Valentine, Jeff Raikes, Kevin Johnson, Peter Moore, Charles Fitzgerald, ...) and has been replaced by strong leaders with a proven ability to ship great product on time and with high levels of quality.
Evidence? You'll have to take my word for it for now, but just wait 'til you see Win7 & Office14 - they're significantly improved over previous versions. VERY significantly.
The whole company has realigned internally and is cooperating and leveraging strengths *much* more than ever before. In the past, MS set up a lot of internal competition in a Darwinian "survival of the fittest" kind of approach to product development. Much of this insanity has gone and we're already seeing early beneficial results.
Finally, the culture within the company has changed. Much of the combative culture has gone, replaced with a healthy desire to create the best products and to compete openly and fairly with all comers.
The last point is one that I cannot stress enough. Do not assume that Microsoft will stop competing. We understand very deeply that the only way to survive is to compete ... hard. But don't assume that by competing, Microsoft is doing things in an underhand manner - most of the new guard saw what happened to Microsoft (and others) in the early 2000's, and we have no desire to be part of that.
Rest assured, old Microsoft is dead ... long live the new Microsoft.
... was to get people talking. Seems it worked! :)
More to follow.
(Paris because she REALLY knows how to get people talking! ;))
... you're not bemoaning Microsoft's focus on encouraging standards on the web whilst ensuring a great user experience for users browsing intranet sites crafted largely by people who are not professional web developers, are you?
I mean ... you're not biased in any way, right?
Of course you aren't ... I mean, even though you're CTO of one of IE's competitors ...
... you wouldn't abuse that position would you?
You would? Damn!
Mr. Walsh - great post. Gosh, I think I might post that on my wall.
To AC: "The real men will give you a clue lads": Let a "real man" give you the tip you should know by now - you want to close the last n editor windows? CTRL+F4 n times should do the trick nicely. Abandon your mouse wherever you can - it's worth the investment - you become a hell of a lot more productive VERY quickly.
To everyone complaining about versions: KNOW YOUR TOOLS. The strength of the NETFX is that it's cumulative - you install FX 3.0, it relies on FX 2.0 so installs it if it's not already there. Install 3.5 and it'll install 3.0 and 2.x if not already there.
You can't expect improved and new features without new releases of the framework. The fact that your current framework largely stays the same IS A GOOD THING. If you had to rebuild your apps and components against a new FX every time it released, you'd REALLY hate the .NET FX.
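As a rough illustration of that cumulative layering, here's a Python sketch. The version list and resolution logic are my own simplification, not the real installer's behavior:

```python
# Hypothetical model of the cumulative .NET Framework layering described
# above: installing a version pulls in the layers beneath it, and layers
# already present are left untouched.
LAYERS = ["2.0", "3.0", "3.5"]  # each layer builds on the previous one

def plan_install(target, already_installed):
    """Return the layers that must be added to reach `target`."""
    needed = LAYERS[: LAYERS.index(target) + 1]
    return [v for v in needed if v not in already_installed]

print(plan_install("3.5", set()))            # fresh box: install all three
print(plan_install("3.5", {"2.0"}))          # 2.0 present: add 3.0 and 3.5
print(plan_install("3.0", {"2.0", "3.0"}))   # nothing to do
```

The point the post makes falls straight out of the model: existing layers are never rebuilt or replaced, only added to.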
To all posters. Please think before lashing out - you don't make yourself look clever just by bitching.
Just wanted to right the apple cart here:
Microsoft, Apple, Yahoo, Real and any other online media store has been forced to bend to the will of the content publishers and implement a workable DRM solution.
While some content publishers have opted to make some of their content available for purchase without DRM, the total amount of DRM-free content is still dwarfed by DRM-ed content. Hopefully, the amount of media you can purchase sans DRM will increase over time, but that's up to the content publishers.
In Zune's case, when you buy DRM-free content, it's downloaded as MP3 files. DRM-ed content on the other hand is downloaded as protected WMA files.
This latter is (currently) particularly necessary in order to support Zune's $14.99-per-month-for-as-much-music-as-you-want-to-download plan. Whilst not everybody likes the notion of "subscribing" to the right to listen to any amount of music from a huge library of content, having been an avid subscriber since Nov 2007, I can say it's a *FANTASTIC* way to enjoy and explore FAR more music than I'd ever normally have the funds and/or inclination to.
Once you let go of the hindrance of owning physical media (and the management overhead therein), you quickly find music subscription to be THE killer reason to become a Zunester :)
The whole point of the Roomba is that you can set it and let it do its thing daily. The first few days it'll quickly fill up and stop cleaning, but once you've got most of the dust etc. up, it should be able to clean far broader areas without filling up as quickly.
The short of it: use it for a month, set a daily schedule. Clean it each day when you get home and reseat it on the charger base. Then tell me you don't love yours too a couple of weeks later!
This is like trying to compare the taste of a juicy sweet apple to the taste of some animal's innards.
It'd be interesting to see the performance of the Cell chip emulating an x86 processor running Windows/Linux/MacOS.
It may have done a half-decent job of encoding video, but it'd have been more meaningful to compare the performance of Toshiba's Cell chip to the latest from nVidia or AMD/ATI.
Paris because she too is difficult to compare to a human female!
Then you want to get yourself a Via PicoITX board ... perhaps like the one in Via's ARTiGO PC: 1GHz CPU, integrated SXGA graphics, 1GB RAM, 4 x USB, 1 x IDE, HD Audio, 100Mbps network, VGA/DVI out (serial optional).
I'd argue that 4 x USB is the absolute minimum - less than that and you'll run into issues connecting keyboards, mice, cameras, GPS, controllers, robots, etc.
See "Assembling an Artigo Pico-ITX device" (http://www.bitcrazed.com/2008/05/26/AssemblingAnArtigoPicoITXDevice.aspx) for more details and photos.
My Ferrari 4000 has been an absolute workhorse for 3 years now and is still going strong. BY FAR the best laptop I've ever had!
A few points from the posts above:
1) AMD deserves a great deal of credit for coming up with x64. At a time when Intel and HP were off running around trying to get the Itanic to float, AMD's deceptively simple approach of augmenting the instruction set that most compilers already knew was a masterstroke.
2) Fact is, CISC will generally flatten RISC in terms of perf because
a) CISC instruction streams are very dense - sometimes managing to store several instructions in the space it takes to store one RISC instruction. This keeps the CPU pipeline full whilst minimizing RAM wait times.
b) Because RISC designs were essentially benefiting from their ability to live on the outer edge of the ever-increasing clock speed curve, they're now in trouble. Clock speeds aren't increasing much and won't be for some time to come. So now they have to get more done per tick - and you can't do that if your instruction set uses sparse encodings. ARM has recognized this through its compressed instruction schemes.
3) It's not about RISC vs. CISC any more. It's not about multi-core vs. single-core either. It's all going to be, quite honestly, as Moore points out, about having chips with multiple special-purpose processing units on them. Some processors will be REALLY good at decoding and executing x86/x64. Others will be REALLY good at matrix math. Others will be really good at floating point. Others will be really good at DSP. Collect more and more of these things together per package and you have the future of computer processing.
My biggest worry isn't the number and diversity of these cores; it's how we build a memory and IO infrastructure that can effectively arbitrate all the competition for RAM and IO!
At the end of the day, a single execution thread already spends too much time waiting for memory and IO before getting real work done.
Nexox: "I suspect that Apple wants to keep their apps visually separated from 3rd party apps, so they hide some of the GUI calls they use. That's not particularly nice, but it's also not completely anti-competitive."
Tell me you're kidding? Microsoft has been found guilty of doing PRECISELY the same thing. Not all the undocumented APIs that Microsoft has been forced to publish were to do with communications and protocols and interop etc.
Apple has as much of a monopoly over software that runs on their proprietary machinery as Microsoft is claimed to have over standard PC gear. Actually, they have a proportionally far bigger monopoly on software that runs on their OS than MS has on Windows.
So why are they not held to the same standards and forced to publish and support a large number of the APIs that they've written and included in their OS - APIs that give Apple an advantage *ON THEIR OS* that they don't offer to other app developers who target that platform?
Personally, I would recommend avoiding the use of undocumented APIs wherever possible, because the minute you embed use of said APIs into your app and ship it, you have a support nightmare on your hands ... it's only a matter of time until a new version of the OS is released that removes or changes the API and busts your app.
Hey El Reg guys ... you REALLY need to add a Jobs with halo and Jobs with horns icon too. Just be sure they're big horns.
Paris because at least she documents all of her interfaces! ;)
Alas, this is sensationalist reporting that only increases the risks to all web users by reinforcing their ignorance of EV certs. That's a shame. I expect more from El Reg.
From above: "The SSL providers such as Verisign etc. all charge an arm and a leg more, and for what? SSL certs are money for old rope."
Actually, the extra cost is well worth it! When you apply for an EV cert, the CA carries out more than 10 identity checks against you, your position within the company you're applying on behalf of, the company's existence and registered place of business, whether the company is under investigation for or has been convicted of a range of fraudulent charges, etc. Often, they will in fact commission an independent, certified auditor to personally visit your place of business to ensure that the company does in fact work from that location.
Only when the checks all come back positive does the CA then issue you with an SSL cert.
So what does an EV cert say that a normal SSL cert doesn't? The cert contains the name and location of the company that owns the cert and the URL against which the cert is issued. Along with the multi-million dollar liability each EV-cert-issuing CA signs up to, the identity of sites protected by an EV cert is therefore more trustworthy than that of sites protected with a standard SSL cert, which does nothing to validate the identity of the cert's owner.
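If you want to poke at this yourself, here's a rough Python sketch: an EV cert's subject carries verified organization fields that a plain domain-validated cert lacks. The dicts below mimic the shape of the `ssl` module's `getpeercert()` output, but the field values and the exact field set tested for are illustrative, not a normative EV check:

```python
# Illustrative marker fields; real EV validation is defined by the CA/Browser
# Forum guidelines, not by subject fields alone.
EV_FIELDS = {"organizationName", "jurisdictionCountryName"}

def looks_like_ev(peercert):
    """Flatten the subject RDN tuples into a dict and check for EV markers."""
    subject = {k: v for rdn in peercert.get("subject", ()) for (k, v) in rdn}
    return EV_FIELDS <= subject.keys()

ev_cert = {"subject": ((("commonName", "www.example.com"),),
                       (("organizationName", "Example Corp"),),
                       (("jurisdictionCountryName", "US"),))}
dv_cert = {"subject": ((("commonName", "www.example.com"),),)}

print(looks_like_ev(ev_cert))  # True
print(looks_like_ev(dv_cert))  # False
```

The DV cert proves only control of a domain name; the EV cert's subject names a vetted legal entity, which is exactly the extra assurance the post describes.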
While EV-certs do nothing to prevent cross-site scripting (this is an issue that's completely orthogonal to SSL), they are an important step forward in the march to make the web a safer place to surf.
Paris because I'm sure she enthusiastically uses all forms of protection!
You may have a few poisoned cookies from MSN/Live/other LiveID authenticated sites. They'll timeout eventually, but clearing them out manually may well resolve your issues.
Paris because she clearly cleared out her cookies. Allegedly!
It'll be interesting to see how they plan to prevent access to these APIs from malicious software running on the host.
Paul: If you don't want a UI and want an absolutely stripped down OS installation specifically to run dedicated functions (or to host "functional" instances in VPCs), then ServerCore is your friend.
But if you do want a UI, you can (as I do) access my server machine's full complement of management tools, apps, etc. from anywhere in the world using RemoteDesktop which works astonishingly well ... even over low-bandwidth links.
And if I don't want to use Remote Desktop, most serious software has client-side management tools that can administer remote servers.
And if I want, I can just administer via the command line or, better still, PowerShell.
One of the biggest problems with Windows is its popularity. Quite honestly, by far the biggest threat to your Windows-based PC is running crappy software from various vendors that don't take security and reliability seriously. Good examples include Real, Adobe, Sun and Apple ... as evidenced on the Secunia site today (see below).
It doesn't matter what the OS is ... if it's pretty much #1 then chances are that it's going to see the largest volume of hackery. Just be glad Linux isn't the world's most used OS!
From Secunia (2/14/2008):
During the last 24 hours, we have seen security updates for some very popular Windows programs from four major vendors: Sun, Adobe, Apple, and Skype.
Based on these four security updates, we have gathered some statistics from our free Secunia PSI that shows a startling picture, detailing the amount of users who need to patch their computers, in order to safely do something as ordinary as surfing the Internet.
Currently, the Secunia PSI has been installed on 282,726 computers.
Unique installations, counting each application only once per computer:
Adobe Reader 8.x 172,653 61.07% of all computers affected
Apple Quicktime 7.x 133,169 47.10% of all computers affected
Sun Java 1.5.x 98,618 34.88% of all computers affected
Skype 3.x 57,496 20.34% of all computers affected
If you're using DataSets to serialize data from one component to another ... or even worse, from one machine to another, you may be paying a SERIOUS overhead for the privilege.
While the paper that Ingo Rammer and I wrote on this subject was based on .NET FX 1.1, many of the principles remain the same. I'll try to find the cycles to convert the code to NETFX 3.5 and see what the results look like with today's technologies including WCF.
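Until then, here's a language-neutral toy (Python, not .NET) showing the shape of the problem: wrapping rows in a schema-and-state-carrying container costs real bytes compared with shipping the bare rows. The field names and container layout are invented; a DataSet's XML serialization pays a far larger version of the same tax:

```python
import json

rows = [{"id": i, "name": f"item{i}"} for i in range(100)]

# DataSet-like payload: rows plus schema metadata and per-row state.
dataset_like = {
    "schema": {"columns": [{"name": "id", "type": "int"},
                           {"name": "name", "type": "string"}]},
    "tables": [{"name": "items",
                "rows": rows,
                "rowstate": ["Unchanged"] * len(rows)}],
}

fat = len(json.dumps(dataset_like))                        # container + rows
lean = len(json.dumps([[r["id"], r["name"]] for r in rows]))  # bare values

print(fat, lean, round(fat / lean, 2))  # the container is measurably bigger
```

Same data either way; the difference is pure serialization overhead, which is exactly what you pay for on every cross-machine hop.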
Multiple exit points, just like multiple exit wounds, are *DEFINITELY* things to avoid.
Having to read through and fully comprehend every state that a given method/function can be in, in order to work out just why the developer decided to jump out of one of the several (often randomly placed) exit hatches, often leads to errors - some subtle, some less so.
Reading through a piece of code that clearly disambiguates precisely in which conditions the code will act, and where it won't, resulting in a single point of exit (at the bottom of a method where everyone expects it to happen) is *FAR* easier to understand, support and maintain.
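A minimal Python sketch of the contrast (the shipping-cost logic is invented purely for illustration):

```python
def shipping_cost_multi_exit(weight_kg, express):
    """Multiple returns scattered through the body (harder to audit)."""
    if weight_kg <= 0:
        return 0.0
    if express:
        return 10.0 + 2.5 * weight_kg
    return 5.0 + 1.0 * weight_kg

def shipping_cost_single_exit(weight_kg, express):
    """Same logic, one exit at the bottom where everyone expects it."""
    cost = 0.0
    if weight_kg > 0:
        if express:
            cost = 10.0 + 2.5 * weight_kg
        else:
            cost = 5.0 + 1.0 * weight_kg
    return cost
```

Both behave identically; the second makes every condition under which a value is produced visible on the way to a single, predictable exit.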
Simon - when did you last visit a Starbucks and see all the Steve disciples pretending to study whilst posing with their iMac, iPod, iPhone and iLatte?
Morely: Almost all firewall devices are simply cut-down machines with a CPU, some RAM, a network card and some form of NV storage upon which the OS & software is stored. ALL of them are driven by software that defines their function as firewalls. All of them are updateable otherwise you'd need to throw the tin away once every couple of months as the manufacturer released improvements and enhancements. Running well engineered firewall software on your machine is (for most people) as good as running a separate physical firewall device since that software should put a hard boundary between the outside world's network and the user's environment.
I think that what the researchers were pointing out is a valid weakness in Apple's current firewall in that it can't easily arbitrate traffic based on whether the network the user is connected to is a trusted network (e.g. home or work) or an untrusted network (e.g. Starbucks, hotel, airport, etc). I'm sure they'll get around to fixing this, but I agree that it's something of a glaring omission which could well result in an increasing number of disciples getting smacked by a hacker.
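The kind of arbitration I mean can be sketched in a few lines of Python; the network names and rule sets here are invented for illustration, not taken from any real firewall:

```python
# Location-aware policy: pick a rule set based on network trust.
TRUSTED_NETWORKS = {"home-wifi", "office-lan"}

def firewall_policy(network_name):
    """Return a (made-up) rule set for the currently connected network."""
    if network_name in TRUSTED_NETWORKS:
        return {"inbound": "allow-local", "file_sharing": True}
    # Coffee shop, hotel, airport: lock everything down by default.
    return {"inbound": "deny-all", "file_sharing": False}

print(firewall_policy("home-wifi"))
print(firewall_policy("starbucks-free-wifi"))
```

Windows Vista's firewall does something along these lines with its home/work/public profiles; the researchers' complaint was that Apple's firewall had no equivalent switch.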
As MacOS increases in popularity, Apple is going to have to start taking security seriously, as serious hackers don't tend to attack a weakness because of religious or ideological positions - they tend to do it for notoriety and/or "to prove it can be done", and/or for personal gain.
Only time will tell if Apple have the maturity and ethics that result in them doing the right thing.
Understand that signing and locking are two different things. Locking is an attempt by the carrier to prevent you using your handsets on other networks and vice-versa. They want to own not only the calls you make but also the handsets that you use.
Code signing is a mechanism to ensure that you know who built the apps you're running and to let the system identify code that has been tampered with by unauthorized third parties. The only thing you need to sign code is a certificate from an approved Certificate Authority (CA). So long as the device manufacturer/carrier allows apps signed by CAs that offer reasonably priced digital signing certificates, there's nothing to stop freeware software providers from writing apps that could be installed and used on the system.
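The tamper-detection half can be sketched in Python. Note this toy uses a plain hash manifest; real code signing uses CA-issued certificates and public-key signatures so that the manifest itself can't be forged:

```python
import hashlib

def digest(code_bytes):
    """SHA-256 fingerprint of an app's code."""
    return hashlib.sha256(code_bytes).hexdigest()

# Stand-in for a signed manifest of trusted app fingerprints.
trusted_manifest = {"hello_app": digest(b"print('hello')")}

def is_untampered(app_name, code_bytes):
    """True only if the code matches the fingerprint recorded at signing time."""
    return trusted_manifest.get(app_name) == digest(code_bytes)

print(is_untampered("hello_app", b"print('hello')"))  # True
print(is_untampered("hello_app", b"print('pwned')"))  # False
```

Flip a single byte of the app and the fingerprint no longer matches, which is how the system spots modified code before running it.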