18 posts • joined Wednesday 11th July 2007 09:15 GMT
One must wonder how this can happen when MS digitally signs critical system executables. It's not hard to embed MS's public key in your product and trust files with a valid signature from their key.
Something modifies the MS system file? Signature isn't valid anymore.
Admittedly this is imperfect, not least because MS (for some bizarre reason) doesn't sign all its executable files. However, you can certainly blacklist any signed executable replaced with an unsigned executable, or any signed executable with an invalid signature.
This should dramatically reduce the false positive rate, too, particularly with critical system executables.
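The blacklisting rule above boils down to a small decision table. A minimal sketch (the state names and function are hypothetical illustrations, not any real API; a production checker would get the states from something like WinVerifyTrust):

```python
# Hypothetical sketch of the blacklist rule: flag any file whose known-good
# state was "validly signed" but whose current state is unsigned or invalid.
# Signature states (assumed labels): "valid", "invalid", "unsigned".

def should_blacklist(baseline_state, current_state):
    """Return True if a previously valid signed file now fails verification."""
    if baseline_state != "valid":
        return False  # MS never signed it, so the rule doesn't apply
    return current_state in ("invalid", "unsigned")

# A signed system file replaced with an unsigned one, or tampered with:
assert should_blacklist("valid", "unsigned") is True
assert should_blacklist("valid", "invalid") is True
# An untouched signed file, or a file that was never signed, is left alone:
assert should_blacklist("valid", "valid") is False
assert should_blacklist("unsigned", "unsigned") is False
```

The point of the `baseline_state != "valid"` guard is exactly the imperfection noted above: unsigned-from-the-factory files can't be covered by this rule, which is why the false positive reduction only applies to files MS actually signed.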
I've had a LanTec NextStar device that's essentially the same as this for almost a year. The only difference is that mine only does USB 2.0 High Speed and eSata, not FireWire. And, yes, it came with all cables.
Note that over eSATA you can run disk diagnostics like S.M.A.R.T. queries and self-tests that require the ability to send ATA commands directly. It's indescribably good for data recovery and testing.
Microsoft OneCare Live Safety Scan
"no website can run an anti-malware scan on your computer simply by your visiting the site"
Microsoft Live OneCare, or whatever it's called today, comes pretty close. See http://safety.live.com/ . It's a legit ActiveX-based web virus scanner from Microsoft. Most users would be hard pressed to distinguish a fake site from it, or vice versa.
Why on earth would I download it from Mozilla when I have a perfectly good copy integrated into my Linux distro? Sure, it's RC2+patches, but it'll be updated to 3.0-final any minute (and the RC works great anyway). The only package available directly from the Mozilla folks doesn't integrate into Linux distros at all and makes central installation for many users painful and difficult.
As usual, the Mozilla folks completely fail to consider distros. I wouldn't be too surprised if distro packagers were willing to provide .debs / .rpms (to be included in the distros' update repositories shortly thereafter) for mozilla.com to offer as alternatives to the .tar.gz bundle, but that never seems to occur to them.
They also apparently don't think about load distributed clustering ;-)
UnionFS & SquashFS
The Xandros folks appear to be unaware of SquashFS.
If you're going to use a union file system, the right way to do it for both space and performance reasons is to use SquashFS for the base file system. This compressed file system *improves* I/O because the CPU cost of compression/decompression is negligible compared to IO device bandwidth limits. Doubly so on slow flash like the eee.
Once the SquashFS image is created it can be written to a partition exactly the right size, and all the rest of the disk can be dedicated to user data and system updates using a writeable filesystem like ext3 or jffs2. ext3 is probably a better choice, even on flash, for volumes above a gig because of jffs2's long mount delays.
SquashFS is read only; to be much use in a conventional desktop you need to overlay it with a writeable FS either using a tree of symlinks or (more usefully) with UnionFS.
The LTSP thin client images at work are 453MB worth of files which fit in a 500MB ext3 partition. The SquashFS boot image created from those files is 173MB. You should expect similar results for the eeePc.
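Back-of-envelope arithmetic on those figures (the eeePC projection is an assumption based on achieving a similar ratio, not a measurement):

```python
# Compression figures from the LTSP thin client image above.
uncompressed_mb = 453   # files in the 500MB ext3 partition
squashfs_mb = 173       # resulting SquashFS boot image

ratio = uncompressed_mb / squashfs_mb
saved_mb = uncompressed_mb - squashfs_mb

print(f"compression ratio: {ratio:.1f}x, space saved: {saved_mb} MB")
# That's roughly a 2.6x ratio. If the eee's base install compresses
# similarly (assumption), well over half the space it currently occupies
# would be freed for user data.
```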
If you're going to be using a recovery/readonly and union setup, SquashFS is a no-brainer. It's bewildering that they chose ext3 instead.
I'll be rearranging my Ubuntu eeePC 701 firewall/router/accesspoint/server to use a SquashFS+UnionFS+ext3 combo shortly, and I expect to see a major space saving as well as significant performance improvements. (By the way, with Ubuntu installed and a 1TB USB HDD plugged in, the eee is a *great* access point/firewall/router/server, and dead silent too. Be prepared for a little work to get wifi going, though.)
Flash lifetimes are so long that if you write to the volume _continuously_ for years you probably still won't wear it out. Add wear leveling into the mix and other parts of the machine are vastly more likely to fail first.
If you were really worried you could change the ext3 commit interval (see "man mount", ext3 section; edit the options section of /etc/fstab to alter) and the dirty writeback interval (see /etc/sysctl.conf) so it didn't write as frequently. However, this opens you up to slightly more risk of data corruption on a crash or sudden power loss, and really won't gain you much. It's a powersaving measure on a laptop with a spinning disk but won't make much if any difference on the flash-based eee.
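For example (illustrative values only, not recommendations; the defaults are a 5 second commit interval for ext3 and 500 centiseconds for the dirty writeback interval):

```
# /etc/fstab - raise ext3's journal commit interval from the 5s default
# to 120s (illustrative value):
/dev/sda1  /  ext3  defaults,commit=120  0  1

# /etc/sysctl.conf - flush dirty pages every 120s instead of every 5s:
vm.dirty_writeback_centisecs = 12000
```

The longer both intervals are, the more unflushed data you stand to lose on a crash or power cut, which is the trade-off described above.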
Ceramics with induction power & bluetooth
There are, IIRC, keyboards out there that're smooth ceramic plates, using induction fields for keypress detection and a short range RF or bluetooth connection to the host.
If you added magnetic induction power supply to those devices, you could have an easy-to-clean power pad and a keyboard that was a neatly rounded slab with no cables or protrusions. You could just swap them around every day and drop them all in a bucket of disinfectant ... or with some hardening even autoclave them.
Despite that, hospitals seem to keep on using keyboards that require cables trailing everywhere (dirty!) and that have uneven surfaces (very dirty!).
Thin client - if only it had DVI
This has "great thin client" (be it LTSP with remote X11, RDP/ICA, or RFB/VNC) written all over it - except for one little problem.
Without a digital display option it's really not attractive for much except use as a firewall in some limited situations, as a low-power home server, or with some crappy cast-off screen as a workstation for someone you don't like very much.
As for firewalls, it turns out that the eee PC is an awfully good choice, at least if you have a VLAN switch or only care about wifi. I'm using mine as a great little ubuntu firewall/router/server and wifi access point. Once connected up to a 1TB USB HDD it does an impressive job as a file server. It runs silently with very low power consumption once you enable CPU power management, and it has plenty of grunt to spare when it's required. The built-in battery backup, keyboard, and display are just a bonus.
Hmm, I think we need a TLA soup icon.
@Stan - driver support
"[W]hen linux device drivers 'write themselves' [...] while windows drivers require constant maintenance then why the hell should intel want to get huggy-feely with MS?"
Unfortunately the situation with drivers is almost the reverse of what you describe. Most of Intel's Linux drivers are maintained by ... Intel. I think the core chipset stuff does get a lot of external contribution, but things like their graphics drivers are almost all their own ongoing work. As Linux has no API or ABI stability (let's not get into that discussion, OK?) they must fairly regularly do extra work to fix their drivers up for the latest X.org release / kernel release/ MESA / GLX / etc. That's on top of the usual work for new chip revisions, broken hardware workarounds, etc.
By contrast, Windows releases are infrequent and both the ABI and API for drivers remains almost totally unchanged for the life of the product. Sometimes it's so stable that a single driver can work on versions separated by as much as 10 years (one driver on NT4, 2k, and XP, like with NDIS drivers). Once drivers are finished, most of the work is fixing occasional bugs, tweaking the drivers for new revisions of the hardware, and implementing workarounds for the latest bunch of motherboard manufacturer and BIOS vendor screwups.
So... unfortunately I'd say it's probably much easier for Intel to maintain Windows drivers. As a Linux user this frustrates me, but at least I understand why.
Commenters are missing the point
People complaining about routable addresses being assigned to printers are missing the point. The whole point is that the code executes in the client's browser, inside the LAN. Thus they can connect to printers and all sorts of other interesting TCP/IP-using services, especially HTTP-based ones.
If such a policy isn't enforced, a whole lot more than network printers can be attacked. Think SharePoint, for example, for companies dependent on that sort of thing. While it's usually locked down by some sort of access control, that can often be NTLM-based single sign-on that the JS code could simply ride on, using the user's/browser's credentials.
... and then there's the disks.
Another lovely XServe issue I forgot to mention is that Apple like to charge so much for their hot swap disk enclosures that it's cheaper to buy disks from Apple than to buy just the enclosures. Even at Apple's often obscene disk prices. The machine comes with blanks instead of usable disk enclosures for any bays not configured with a disk at purchase.
At least they now offer SAS on the Mac Pro. I was previously stunned that they only offered 7200rpm disks to go with their ... er ... pair of quad-core Xeons. WTF?!? They could've at least offered 10K RPM SATA disks. As it is, they make sure that if you want SAS you pay through the nose - no smaller & cheaper SAS disks for OS, swap, apps, etc for you!
I'm a little bitter since my work recently went Mac despite the fact that we were able to get *much* better hardware for a lot less cash and the same apps by going for Precision workstations. So, instead of reasonably balanced Core 2 duo boxes with fast SAS disks we have massively overpowered xeon workstations with crappy 7200 rpm disks that cost almost twice as much, have inferior warranty terms, and need an entirely new server platform to be introduced to the network. Yay!
The 1RU XServe hole in the head
Again, Apple manages to avoid releasing the Apple server product people in their core small business market might actually want.
I've always wondered where the 2RU or 3RU XServe - "Now with drive bays!" - was. Apple seem to prefer to push their slow, expensive, and rather inferior XServe RAID product rather than providing a workgroup server with room for useful amounts of internal storage. Given the price, you'd think they could afford to offer a couple of different case sizes.
Our XServe also spends its entire time with the fans maxed out and screaming. It appears to be made to run in rooms air conditioned to "frigid". A larger unit would allow room for better airflow and larger, slower fans as well as being more flexible.
The problem with internal storage has become particularly galling since Apple moved from four to three internal bays on the 1RU XServe. With four bays you could at least run two RAID 1 arrays or a RAID 10; with three you're limited to a RAID 1 plus a spare, a RAID 1 across three disks for extra redundancy, a RAID 1 plus a non-redundant disk, or an inefficient RAID 5 array. A 2RU model could easily manage a much more useful six drive bays.
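The capacity cost of the lost bay is easy to see. A sketch, assuming identical disks of size S (illustrative arithmetic only):

```python
# Usable capacity of the layouts mentioned above, with identical disks of
# size S each. Units are arbitrary; this is illustrative arithmetic only.
S = 1.0

four_bays = {
    "two RAID 1 pairs": 2 * S,  # two independent mirrors
    "RAID 10":          2 * S,  # striped mirrors across all four disks
}
three_bays = {
    "RAID 1 + hot spare": 1 * S,
    "3-way RAID 1":       1 * S,  # extra redundancy, no extra capacity
    "RAID 1 + bare disk": 2 * S,  # but the bare disk is unprotected
    "RAID 5":             2 * S,  # parity overhead is worst at 3 disks
}

# With four bays every redundant layout keeps half the raw capacity usable;
# with three, the fully redundant options drop to a third.
```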
I suspect they expect small-ish businesses to just buy a Mac Pro and fit it out as a server.
Definitive downgrade rights reference
Here's the PDF from Microsoft that (astoundingly clearly, especially for a license-related document) explains the details about downgrade rights - including the procedure for installing XP on a Vista box.
Get them at the discount bin when even the publishers get sick of it
"Normal" PC game copy protection is quite bad enough.
Requiring the CD/DVD to be in the drive is ineffective (show me a game that hasn't been cracked), noisy, annoying, and increases the chance of damaging the media. Media which I can't duplicate for regular use, since that'll break the copy protection. Not that I'd need to duplicate it if I wasn't forced to keep it in the drive unnecessarily.
That's if it works. It'll break if it detects anything on the system that it even vaguely suspects might be in any way suspicious. Or it's Tuesday.
I've cracked most of the games I've legally bought because they're more convenient (no noisy, must-find-the-damn-thing DVD) and they're more reliable. This won't work with Internet multiplayer stuff, which I must simply endure.
I'm sick of it, and seriously doubt I'll bother buying any big name games or upgrading my desktop for gaming in future. Gaming isn't fun with this sort of crap, and I'm sick of having a worse "customer experience" than the people who aren't customers at all because they downloaded the damn thing off the Internet.
I work at a newspaper, and the most evilly licensed software we use (things like QuarkXPress, $1500 publishing software) is about on par with a $100 game from a publisher with a massively overinflated sense of self-importance. High-end, oft-pirated software from saner companies like Adobe and Microsoft (yes, Microsoft!) has much smoother and saner licensing than a cheap ENTERTAINMENT PRODUCT.
Many of my favourite games had the copy protection patched out in later versions because even the developers/publishers got sick of it. Maybe it's best to buy games if and only if the devs cut the crapware out - grab them from the $20 discount bin. It'll save me a bunch on video hardware, too.
Run an update server?
I'm always bewildered by the lack of ISP-centered update servers. Almost all the ingredients are there:
- OSes that already support central patching via company-wide update servers (Mac OS X recent-ish, most Linux, Windows XP/Vista).
- Strong crypto and signing to ensure a hostile ISP can't attack user machines
- Strong crypto and signing to ensure machines can get verification of update manifests & update notifications from a central source (so hostile ISP update servers can't hide/delay patches and exploit vulnerable machines)
Unfortunately, the aforementioned OSes are designed to get patches as part of larger central management facilities from trusted servers. They provide no mechanism for discovery and use of untrusted patch mirrors - even though such a method would not be hard (no harder than, say, automatic proxy discovery) and could be made quite safe.
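The trust split isn't hard in principle: fetch the manifest over the trusted, signed channel from the vendor, pull the bulk data from any untrusted ISP mirror, and accept the bytes only if they hash to what the manifest says. A minimal sketch of the verification step (hashes only; a real scheme would also verify the vendor's signature over the manifest itself):

```python
import hashlib

# Sketch: the manifest, obtained from the vendor over a trusted channel,
# maps each patch to its expected SHA-256 digest. The payload bytes come
# from an untrusted mirror and are accepted only if the digests match.

def verify_patch(payload: bytes, expected_sha256: str) -> bool:
    """Accept mirror-supplied bytes only if they hash to the vendor's digest."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# A tampered or truncated download from a hostile mirror simply fails:
good = b"patch contents published by the vendor"
digest = hashlib.sha256(good).hexdigest()
assert verify_patch(good, digest)
assert not verify_patch(b"malicious substitute", digest)
```

This is also why a hostile mirror's worst remaining trick is withholding or delaying patches, which is exactly what signed, centrally fetched update notifications are needed to defeat.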
With Apple and Linux machines both facing tens or hundreds of megabytes per patch, and the wide deployment of Windows, you'd think they'd be wondering about this. Networks like Akamai help ... but not as much as pushing patch hosting onto the ISP as a bandwidth saving would.
At least this one is opt-in
Unlike the last filtering scheme, pushed by the Tasmanian conservative Senator Harradine, this filtering scheme is at least optional.
OK by me. It won't work, and it doesn't thrill me to have government money going to something like this over better uses, but at least they have the decency to recognise that adults can make their own choices this time. I'd prefer that people just buy filtering software if they want filtering, however, or use an ISP that offers such services (they already exist). Sure, it's not very effective, but the only thing that ever will be is a well-maintained whitelist - something people just don't want to admit.
The last scheme was universally ignored by ISPs. For all I know it might still be in force - I can't even find references to it anymore. I suspect this one will go the same way.
This is one of the reasons "DRM" schemes are so unattractive. They not only remove true ownership, right of first sale, etc, but permit the company you have "bought" the "content" from to revoke your access to it at any point. They may have to compensate you, but as Google has demonstrated there are ways to do so that make using the compensation more hassle than it's worth (handily reducing the claim rate).
I'm itching to pay for Internet-delivered shows. I live in Australia, where we get everything a year later than everybody else, TV is full of ads so cheap they're painful to watch, and any show can be moved or cancelled if there's overtime in an Australian Rules football game or the cricket. They also enjoy randomizing episode order. Cable/satellite aren't significantly better than free-to-air. Given all this, it'd be desirable to just be able to pay a little bit to download the show, high quality & ad free, off the 'net.
No such luck. The only services like that at present charge video rental prices (AU$7/night or so) for DRMed, poor-quality, unreliable, and slow downloads. Yay. Bittorrent continues to rule the airwaves of Australia to the point where TV networks are really feeling the bite (as are ISPs' uplinks!). Despite this, they seem totally unwilling to compete on the same level by providing a faster, more consistent, more reliable and legal alternative to bittorrented shows at remotely sane prices.
Except in Australia
"it seems unlikely that anyone will launch an iPhone [plan] without unlimited data"
... except in Australia, where you should expect to pay in dollars per KILObyte. Mobile data providers here charge like the world has a finite and non-refilling pool of packets so each one you send is the use of some precious resource. It's amazing.
They offer slightly saner per-KB prices if you agree to spend hundreds or thousands of dollars per month to buy fixed data allowances, but even then the prices are appalling.
It's remotely possible that a smartphone that's built around data services, like the iPhone, might shake the market up a bit. But I doubt it. Don't be surprised if iPhone users here have to pay hundreds of dollars a month to get basic web browsing and mail.