The original Xbox really was PC components. The 360 is quite a different beast, using PowerPC derived processors. I'm sure that it is running a Windows variant under the covers, but this is more likely to be WinCE than XP.
The rest of the hardware is pretty much generic, but what do you expect? Memory is memory, disks are disks, and even the wireless and controllers will be using off-the-shelf chips. This is the same with PS3s, Wiis and even Macs.
Shame on you
The BBC version has to be the definitive version. Especially if played with a 6502 2nd processor and a bitstick.
It just shows...
...that people ignore things that they don't understand.
And if he had started buying things on Amazon from their accounts, they would have thrown their hands in the air and run around crazily, blaming Amazon, Starbucks, their ISP, the Government and everyone else but themselves.
I'm beginning to think that the Internet is too dangerous to let Joe Public loose on it! Maybe we need an Internet driving test before allowing them to connect.
As soon as
Fedora has an equivalent of an LTS release, I'll consider switching.
I just can't rework my primary systems each time a new release comes out. I don't have the time.
Mind you, my confidence in Ubuntu has been severely dented recently. I haven't switched to Lucid yet, because I cannot get suspend/resume or sound working well enough on my trusty Thinkpad T30, things that just worked without problem on Hardy. And my netbook, running Jaunty, tells me that there are no further updates for that release. Still, I'll put the netbook remix of Meerkat on that, just to see what it is like.
Don't think so,
and I would be interested to know whether they run at all!
I seem to remember that they weren't the most reliable of devices when they were current! And that strange offset flip-up screen and fixed keyboard.
I think that the term used for these and similar devices was 'luggable computers'.
We all know there are problems with X. We know that the abstraction between client and server does not suit all types of application and can be an apparent performance barrier.
BUT (and this is a big but)
If you know X, and work in an environment with many networked systems all of which understand X, then the benefits of the abstraction are HUGE. Don't suggest that VNC can fill the gap, because unless you have a big fat network, the performance is crap compared to properly written X apps. If you've not worked in such an environment, you may not know this, but your view risks throwing the baby out with the bath water.
One of the problems, as I see it, is that because font handling in X was based on the fonts that the server knows about, rather than the fonts that a client wanted to use, some applications appeared to display poorly. To get round this, both KDE and Gnome introduced models that meant that font glyphs were effectively loaded by each client when the client started, often multiple times.
This increased the X server memory footprint and client startup time no end, and almost completely broke the efficient font model that X had (and still has!). In my view, the best way of handling non-standard fonts is to have a font server somewhere (either locally or on the network) and have a mechanism for font-picky applications to add the FreeType or Type1 scalable fonts to that server, either for the duration of the run, or permanently.
Similarly, the way that some applications treat pixmaps (and Java is one of the worst culprits, wanting to do its own graphics abstraction) means that X performance is much worse than it needs to be for such apps.
X.org is making what I think is a very sensible move to OpenGL based rendering, especially if it has network abstraction built in (I've not checked). This should allow good 3D performance, and as we can see allows the gloss to be added. What we need now is a well recognised, resource controlled window manager and application framework. Whether this is Gnome or KDE, or something completely different remains to be seen, but introducing a new default must be backed up by allowing users who want to remain with what they like to do so.
I actually cannot stand the netbook remixes, even on systems with small screens. They are just too prescriptive, and just get in the way unless all you want to do is what they provide. I use Gnome on my EeePC 701 (800x480), and windows only occasionally fall off the edge of the screen, so I don't see the need for the netbook remix.
"Hooked up to a Beeb"
As long as the two channels were in phase and not skewed, and you could do without the motor control.
I had no end of trouble with stereo players and the Beeb. Ended up making a cable that would only connect the left or the right channel, but never both. Always recommended a good mono tape recorder to other people.
Beebs were remarkably tolerant of the tape speed. There was a tape deck you could buy that had an adjustable speed controller on the motor. You could speed it up by nearly 10% before you had any loading issues. Really made a difference for the longer games. Some game manufacturers advertised faster loading speeds by actually recording with a slower tape drive before duplication, so they were faster on normal players.
Ahhh. Gone are the days of *OPT 1,2 followed by *LOAD and then by swapping the tapes and a *SAVE with the correct parameters to copy tape games.
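For anyone whose memory needs jogging, the ritual went something like this (filename, addresses and lengths here are invented for illustration):

    *TAPE
    *OPT 1,2
    *LOAD "" 3000

With the extended messages on, the CFS reports each block along with the file's length and execution address. Swap tapes, press record, and write it back out with the numbers you noted:

    *SAVE "GAME" 3000 +4A00 5800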
Someone has actually got a BBC micro user guide on the net at "http://central.kaserver5.org/Kasoft/Typeset/BBC/Intro.html". Bizarre, but welcome. I didn't have to risk opening my decrepit and fragile copy in order to refresh my memory of the *OPT numbers for extended info for CFS.
Tranny radio listening was a very popular communal pastime
Especially at 12:45 on Tuesday lunchtimes, when the BBC chart was announced. That ages me!
And yes, radio did promote conversation in kids. See how it is portrayed in "The Boat that Rocked" to get a feel of '60s and '70s radio culture. Because of the cost of record players and radios, music listening was a communal pastime. You would take your new LP around to a mate's house to listen to, rather than giving them a copy, as happens now. A household would normally have one TV, one record player and a couple of battery powered radios at most.
The revolution started with the "Compact Cassette", which allowed you to tape your mates' records, and continued through the Walkman era as cassette decks in Hi-Fis got better. In some respects, this paused a bit when CDs first came along, as they were originally a read-only medium.
I still remember the stir rare-earth magnets caused when they appeared in headphones, which were the other revolution of the time. They allowed music to actually sound passable at low voltages, compared to the crap piezoelectric earphones for transistor radios or the bulky, current-hungry cans that were used on Hi-Fi systems.
@AC re: "authoriser". That's not the point
The Computer Misuse Act 1990 defines what is criminal, and it is legislation passed by Parliament.
An EULA falls into the category of a License (End User License Agreement), so it is contract law rather than criminal law.
License agreements can be (and often are) challenged in the courts, and can be deemed unreasonable. I'm sure that, if I looked hard enough, I could find a precedent where just what you have described has been judged unreasonable, but you have to be careful about the jurisdiction of the court system looking at the case.
Even if a bad EULA were found reasonable, the penalty for infringing it would be financial rather than custodial, and may not even be enforceable (for example, if the EULA is judged in Texas, which is often where these things are tested, and you are in the UK, then so long as you didn't visit the US, it is unlikely that any action could be taken).
BTW. If you are a Windows user, stop and really read the conditions on the Microsoft EULA that you almost certainly agreed to when you 'accepted' (whatever that means for a pre-installed system) them without checking. I think you will be surprised (and maybe a bit frightened) by what you've signed up for!
Anybody any data
on how long PCs with SSD storage last before requiring the SSD to be replaced?
I know that flash memory is getting better, but even with data shuffling and sparing, I expect to see SSDs needing to be replaced before spinning disks. Any chance of such devices lasting 6 years of daily use?
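For the back-of-the-envelope crowd, the naive arithmetic is less scary than you might think. Assuming (and these figures are plucked from the air) a 64GB drive, 10,000 erase cycles per cell, perfect wear-levelling, and 10GB written per day:

    $ echo '64 * 10000 / 10 / 365' | bc
    175

175 years in the ideal case. Write amplification and imperfect wear-levelling will eat a big chunk of that, but even a 20x penalty leaves you comfortably past the 6-year mark. Whether real drives behave anything like the naive sum is exactly what I'd like data on.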
Upward and downward. It depends
on whether the wing spars running through the fuselage are connected to each other and are stiff. If they do not flex or kink, then the fuselage will be suspended from the wing when flying; but if they flex, or are not connected together, then the inboard ends of the spars will move in the opposite direction to the wing tips, pivoting around the point where the wings enter the fuselage.
From what I can see, there is a join in the spars on the mid-line of the fuselage. This could potentially be a weak point, possibly allowing the spar to kink at the join. Adding bracing to prevent this happening looks like it is a very good idea. Would it not have been an idea to stagger the joins on the individual spars? Or have additional uncut straws to reinforce it so that the joins did not occur at the same point? Oh well, too late now!
I would worry about the weight distribution. I think it looks like it will be tail heavy. The design looks like the sort of thing you would use for a powered plane, which has significant weight at the front (the engine). Do you get the opportunity to see if it will glide before attaching it to the balloon?
I have been really upset by the tricks Google are using to make sure that you have your data connection turned on all the time. I must check whether there is an outgoing firewall app in the Android Marketplace.
I'm surprised that some critical comments got through. Mine, which was posted before any of these, was rejected, even though it said nothing worse than many that got through. I wonder how many from other people were rejected as well.
I still want a reason to be given when a comment is rejected. I know that this is a moderated forum, but I was not abusive or insulting, the grammar and spelling were at least fair to good, and I was commenting on the technical accuracy of the article, although I did accuse it of being a barely disguised advert. But so did the first comment that got through.
Could we at least know for sure that the comments are not moderated by the author of the article? That would at least give us confidence that critical comments have a fair chance to get through.
"fitted to the operational carrier"
How is this going to work? I'm sure that removing and refitting will require both carriers to be out of service at the same time!
Does he think that it's like a child seat in a car: unstrap, move, and strap? If he does, I think he should go on board a carrier when a catapult is operating, and feel how much the ship is affected when a heavy jet is launched. It takes time to make the necessary heavy-duty attachments to keep a plane-flinger safe and posing no risk to the ship, aircraft and people.
In the process
In case you hadn't noticed, his sysadmin blogs are already being carried as articles. Stirred up some interesting comments as well.
...I always put Simon down as an ale drinker.
Two mentions of that hideous brew called 'lager' in a single episode. I suppose I can forgive the last one, if it was an instrument of financial torture used on the PFY.
Wasted so much time
trying to get things to work, and have just given up.
I could, on the odd occasion, get audio to work, but I have never managed to get my Buffalo Linkstation Live, which is supposed to be DLNA compliant, to actually serve any video, even video encoded in one of the supposedly supported formats. This was to a number of clients including Xbox 360, Windows clients, and open source clients.
The whole concept of specifying the container and codecs required in the standard is just a cynical attempt at building obsolescence into consumer electronic devices to guarantee future sales! It sucks, and anybody who says otherwise is either a marketing shill, or just does not understand.
Nowadays, where possible, I stream stdout to stdin using SSH as the transport and mplayer as the player. It hasn't got the gloss of a nice GUI, but it just works anywhere you have knocked a known port through the network. You don't even need to share anything. And I get to avoid running the hacker-friendly UPnP protocol, which will advertise the complete capabilities of the systems on your network to anybody who can snoop it.
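For the curious, the whole thing boils down to a one-liner along these lines (host and path invented for illustration):

    ssh user@mediabox cat /srv/video/film.mkv | mplayer -cache 8192 -

The remote cat writes the file to stdout, SSH carries it over the one open port, and mplayer plays from stdin. No media server, no discovery protocol, nothing advertised.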
Had exactly the same
Only with PowerPoint. The teacher would not even allow the presentation to be shown, even though the presentation style they had been taught was so simple (background and basic titles and bullet points only) that it could have been done using ANYTHING!
This makes Office Home and Student edition the only piece of Microsoft software I have purchased in around 10 years that did not come bundled with a computer! (I do not copy licensed software.)
Not that easy
In order to hijack a range of IP addresses, you have to subvert a core ISP, or find some way of injecting false BGP (or whatever they use nowadays) information into the wider network. You have to be trusted, and at particular points in the network, to have your BGP information believed by your neighbours.
While I am not saying this is impossible, it is so fundamental in the operation of the Internet as a whole that if this is compromised, the operation of the whole Internet is at risk.
To El Reg: to see whether an IP address is where you think it is, you can try using traceroute (oh, sorry, tracert for Windows users) to see where the packets appear to go. While it is not a sure-fire thing (traceroute can be blocked easily, and some routers do not respond), you may get sufficient clues from the names of the routers that have DNS entries to guess at the routing of the packets. If this does not work, you might try a ping -R (UNIX/Linux only?) to get the return path of the packets.
There are probably many better tools, but Dig (although I still use nslookup), traceroute, ping, netcat, telnet, nmap, wireshark and other tools such as nessus should all be in the metaphorical toolbox of people who want to diagnose network problems.
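As a quick worked example of the sort of thing I mean (hostname and address are invented):

    $ traceroute www.example.com   # do the routers' reverse-DNS names look like the right networks?
    $ dig +short www.example.com   # what does the name actually resolve to?
    $ ping -R -c 4 192.0.2.10      # record-route: shows the return path, where routers honour it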
@AC re: @Me. Yes, I know. Bad me.
Since being able to directly reply to a comment, I have developed a bad habit of assuming that it will be obvious what I'm replying to. But this is not a threaded forum (thank goodness) so this is not always clear. This is not the first time this has happened, and I'm annoyed with myself for it.
I meant amanfrommars, of course. I may be reading the wrong comment trails, of course, but I don't think I've seen a comment by him for several weeks.
Where is he, anyway?
I've missed him.
"allows you to view the screens of the hosted VMs and switch between them"
It's part of the design, and it requires some external hardware (so this may be enough for you to argue), but the PowerVM hypervisor on IBM Power systems is architected so that (using the HMC or IVM partition) the console screen of every LPAR can be opened from the management console.
The same was true for the daddy of all type 1 hypervisors, IBM VM, which also allows a single master terminal to display the consoles of each of the partitions (IIRC; it's been some years since I worked with it, and most customers used to want physical consoles anyway).
Yes, I know they are text only, but when you have OSs that do not rely on a GUI to run, why do you need anything other than a text-mode console? You have X11 and, if you must, virtual consoles over VNC for that.
Where you are getting confused is believing that VMware and Citrix invented the type 1 hypervisor. They did not.
Of course, anyone in the know realises that there is really no such thing as a "Bare Metal" hypervisor because, as has already been pointed out, VM is an operating system (it used to be actually booted from disk!) and PowerVM is actually a locked-down (and not even particularly cut-down) Linux kernel in flash storage. But hey! It's convenient for the vendors to sell the hypervisor as a firmware "Black Box" that the customer needs to know nothing about, particularly when the security people come snooping.
For bog's sake. It's easy (although costly).
Zone your network using firewalls. Wireless access appears in one zone, which does NOT have any critical servers in it. Employ a capable network engineer or two, and let them build a working relationship with the security people.
Control the keys using the strongest authentication all your official devices can use, preferably based on something like RADIUS. Regularly change any PSKs that you are forced to use, and only circulate the changed keys to people with registered devices.
Query all devices using a device checker probe (something as simple as nmap or wireshark should be able to get most devices) and track down any unauthorized devices. Scan for unauthorized wireless networks in the vicinity, and attempt to identify whether it is the coffee shop downstairs, or a rogue access-point in the building (I'm serious, it happened somewhere I worked!). Make sure that all laptops physically attached to the wired network have wireless services turned off (including 3G 'dongles' and Bluetooth). Run regular security scans on laptops to check that this is the case.
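As a sketch of the sort of probe I mean (the subnet is an assumption; substitute your own):

    # ping-sweep the wireless subnet and list every host that answers
    nmap -sP 192.168.10.0/24 -oG - | awk '/Up$/ {print $2}'

Compare the list against your register of authorized devices; anything unexpected gets tracked down.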
Put simple services (like printing and possibly mail access) within the DMZ. Allow devices on the DMZ controlled access to the Internet, and then back in to your corporate gateways exactly as if they were coming in from the Internet. Knock specific holes, controlled by the strongest access control you have, in the inward-looking firewall for any apps that absolutely have to be accessed from mobile devices. Argue the case for blocking every single one, until you have been convinced that it is necessary and appropriate controls are in place.
Review these holes regularly, and have strong procedures to track leavers and joiners. Ban, with the strongest penalties, the sharing of IDs and the revealing of PSKs to non-authorized users. Lock services to specific IDs using strong authentication, preferably using one-shot password devices.
Be prepared to use VPN for any really critical services, especially those containing private or critical data. Select your approved devices carefully, to make sure that they meet all the security requirements. If there are vulnerabilities known on your mobile device of choice, make sure you have appropriate AV software deployed and updated.
If you are paranoid, consider using glass coatings on the windows to control the leakage of the WiFi signal out of the building, but if you are that worried, you should probably not use wireless services at all. Work out how far your wireless networks spread outside of your controlled space, using normal devices and focused antennas as well. Show the controlling managers this, with a live demonstration.
And above all, if you value your business, JUST DON'T USE WIRELESS SERVICES. This should include wireless keyboards, and any future wireless USB technology. If the MD objects, put a reasoned argument that the very business itself is at risk if the network is compromised. And if you are over-ruled, either be prepared to give in, lodging an "I Told You So" letter somewhere in the business, or to resign on principle.
It is clear that the "Block everything, then allow only what's essential" principle applies here.
"cannot afford enough jets for the two ships"
The intention is that only one ship would be at sea at any one time, so why the need for aircraft for both? If, as expected, the delivery of the carriers is staggered, once the Prince of Wales has finished its acceptance trials, Queen Elizabeth will be ready for its first R&R and minor refit.
Going by how the old Audacious class Ark was run, the aircraft would fly off to a land-based airfield when the ship returned to its home port, and would only join again once the ship was back at sea and had passed its seaworthiness trials.
You would need more than one ship's worth, but less than two, to account for aircraft maintenance cycles.
Oops, silly me.
I meant to say CDC SMD (Storage Module Device) drives, not SMB. How memory fades.
@Ocular Sinister. Experience tells me otherwise
When Dapper Drake (6.06) was the LTS release, by the time Hardy Heron (8.04) came along, many of the packages in the repository were functionally stable. This meant that you may get bug fixes, but you would probably not get a bump of the version.
If you were adventurous, you could add the 'backports' repository to the list of subscriptions, and get a select few packages at the same level as a more recent Ubuntu release.
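For anyone who has not done it, it is one extra line in your sources list (release name used as an example):

    # append to /etc/apt/sources.list, then refresh the package index
    deb http://archive.ubuntu.com/ubuntu dapper-backports main restricted universe multiverse
    sudo apt-get update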
As a result, even though dapper was still 'supported', it began to be very difficult to put .deb files from the Debian repository onto Dapper, because the prerequisite libraries would not be present. Ditto compiling up stuff from source.
Hardy does not appear to be quite so prone to this, now Lucid is available, but you can see it starting to happen, especially with third-party software like the BBC iPlayer.
I'm sure that if you joined the Ubuntu developer community, offering to make the backports repository more complete, you would be welcomed with open arms. But until then, the current developer community will be more interested in putting recent versions of the packages into the latest-and-greatest releases, not into the older ones. I myself would love to do this, but personal commitments do not leave me with the time to do it at the moment.
It's a shame, as I believe that ordinary users would be best served putting a LTS release on their systems and leaving it there for the lifetime of the system.
Strangely enough, I did a Windows XP to Windows 7 upgrade recently (on one of my kids' gaming rigs), and it was much easier than I expected, at least using a second disk and a parallel Windows 7 retail install to make a dual-booting system. I do not think I had to re-license anything. All the programs installed on the XP drive were identified and recognised, and ran without problems. These were mostly games, but did include Office.
Microsoft must be doing something right!
In the 70's, you could never mount / read-only. The ability to do this only came about when Sun implemented their diskless model, where all of the files that would be modified on the / partition (often the files in /etc such as passwd, utmp, wtmp, and mtab) were moved into /var, specifically so / could be mounted read-only on diskless workstations. I'm a bit vague about Sun timelines (I was working with PDP11s and Bell Labs versions of UNIX at the time), but I would guess that this happened around 1983, a few years after Sun was set up, with the release of the Sun2 workstations.
In this model, / and /usr were remote read-only mounts, /var was a remote read/write mount specifically for that workstation, and /home was a read/write shared mount for user files.
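In /etc/fstab terms, a diskless workstation's view looked roughly like this (server name and export paths are invented):

    server:/export/root/ws1   /      nfs  ro  0 0
    server:/export/usr        /usr   nfs  ro  0 0
    server:/export/var/ws1    /var   nfs  rw  0 0
    server:/export/home       /home  nfs  rw  0 0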
@AC re. Sensible compromises
Drive letters were antiquated when MS used them in DOS 1.1!
UNIX already had a fully hierarchical filesystem years before Bill went to see IBM.
The concept of filesystems on separate partitions really goes back to the original Bell Labs V6 and V7 code for PDP11s, where the partition sizes were hardcoded into the device driver for RP disks (no on-disk partition tables there!), and when the smaller RK disks were barely large enough for / or /usr.
Each device could have a maximum of 8 partitions defined, and the definition of the partitions had to work with all drives of that type present in the system. IIRC, it was normal practice to make one partition span the whole device, two more cover half of the device each, and a further four cover a quarter of the device each. It was, of course, stupid to try to mount the overlapping filesystems, or use the wrong minor device, but this model gave flexibility.
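To make that concrete, the table baked into the driver for a single 32MB drive might have read something like this (purely illustrative, 512-byte blocks):

    minor 0:  blocks     0 - 65535   whole drive, 32MB
    minor 1:  blocks     0 - 32767   first half, 16MB
    minor 2:  blocks 32768 - 65535   second half, 16MB
    minor 3:  blocks     0 - 16383   first quarter, 8MB
    (...and so on for the remaining three quarters)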
My old Systime 5000E (a PDP11/34E in Systime covers, circa 1982) had 2x32MB CDC SMB disks, with a controller hacked to look like an RP controller with RP03 drives, with overlapping partitions of 1x32MB, 2x16MB and 4x8MB. I had / on one 8MB partition (formatted to use just 6MB, with the last 2MB used as swap space), /usr on another 8MB partition, and then used the remaining 16MB as a /user filesystem, which was equivalent to /home on a Linux or more modern UNIX system. There was no /var or /opt at that time, as Sun were only just thinking about diskless systems. A second drive had a single 32MB partition for the /ingres filesystem (which actually had the whole of BSD 2.6 [for which I sadly do not have a copy of the tape] unpacked in it), which contained the Ingres database code and all of the defined databases.
It was the only real way to manage such systems. If you are really interested in knowing what was involved in setting up ancient UNIX systems, I suggest that you start here http://minnie.tuhs.org/PUPS/Setup/v7_setup.html, and then browse the rest of the UNIX Heritage Society's site.
BTW, I started on Version 6, although I have put the link in for Version 7 as that is regarded as the point where UNIX really started to fragment.
I prefer two partitions (but I am a UNIX sysadmin)
It's swings and roundabouts. I tend to use a separate home partition so that I can dry-run a new release while keeping access to all of my files in both releases.
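Concretely, each release's /etc/fstab gets its own root line but the same home line (device names invented):

    /dev/sda1  /      ext3  defaults  0 1
    /dev/sda3  /home  ext3  defaults  0 2

The trial release goes on another partition (say /dev/sda2) with an identical /home line, so both releases see the same user files.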
Unfortunately, this is not a perfect solution, as quite often the configuration files for the dozens of apps and utilities change between major releases. You often watch informational messages about configuration files being 'converted' to a new version, then find that they no longer work with the old OS. This broke the sound on my Thinkpad between 6.06 (Dapper) and 8.04 (Hardy), both LTS releases.
I've never been satisfied that 10.04 is ready to switch to, because there are sound, display and suspend problems, so I am still running Hardy. One day, I will boot Lucid, update everything, and all my problems will be over, but I'm not holding my breath, and I don't want to switch away from LTS releases for my main systems.
Non-nuclear carriers are stupid and escorts are required
The problem with the current strategy is that the gas-turbine powered carriers have such a restricted range that they cannot really go anywhere without Replenishment At Sea (RAS or whatever it is called now). And if you reduce the escort fleet to enough to cover the operational carrier plus some in refit, what the hell is going to protect the tankers that are necessary to provide the fuel? How to cripple the British Navy? Sink all the RFAs.
And Lewis is assuming that all of the Navy is operating in the same place at the same time. What is needed to protect HMS Ocean, which has helicopters but no fixed wing aircraft? Its speed is only about 20 knots IIRC, so it could be subject to attack by a conventional sub with some dash capability. It is possible that this ship may be deployed separately from the carrier.
We need a capable and relatively sizeable escort fleet, although possibly with slightly different capabilities, to provide some degree of flexibility. If all we can do is deploy the fleet in a fixed configuration as Lewis is suggesting, then it fixes the way it operates for the next 30 years or so. Not exactly the forward looking attitude Lewis says he is suggesting.
I agree that the Type 45s are about as relevant as the Type 82 was 40 years ago, of which all but one was cancelled. But the batch 3 Type 22 frigates proved themselves to be immensely flexible vessels, so a combination of these (with something like Aster), plus some ocean-going, gunboat-sized anti-pirate and fishery protection vessels, possibly large enough to operate a single simple helicopter like the old Wasp, is necessary. Equip them with some relatively heavy, rapid-reaction 30-60mm weapons to dissuade fast-boat pirates, some towed-array sonar, and fit a capable containerized AA weapon. Something not dissimilar to the Swedish upgraded Stockholm class.
And the radar picket vessels that Lewis talks about must be able to defend themselves, so they must be multi-purpose vessels. As they operate many miles from the fleet, they must have some AS detection capability, even if their main purpose is radar early warning. This assumes that they are necessary at all. The only real reason we had Type 42s (like Sheffield) for this was because the through-deck cruisers (what we now call the current generation of [Harrier] carriers) were not large enough to operate any AWACS aircraft.
I've got one
and it is a decent enough phone which was a free upgrade from Orange on the monthly plan I pay for. I wanted an HTC Desire, but was not prepared to pay the £150 they wanted as an upgrade charge (I'm tight-fisted, I know).
The only serious problems with it that I can really point to are that 400x240 is really too small a resolution for browsing, although pinch-to-zoom makes it a bit more bearable, and that I had to change the settings on the GPS receiver in order to make it work at all.
Other minor niggles specific to the phone are that it requires a strong WiFi signal, and its performance is significantly worse than any of the laptops I have in the house. Also, the battery indicator is inaccurate: the other day it dropped from 50% to 15% (the low battery warning point) in about 10 minutes while browsing the Android Market over WiFi. Two days' moderate use with data and WiFi exhausts the battery.
Finger marks are very obvious, and I end up polishing them off every time I use it.
Most of the Orange apps require connections via data services and will not work over WiFi.
I have other problems with it, but they are mostly Android related. My previous phone was a Palm Treo 650, and the reliance on data services that Android imposes for pretty much everything bugs me. I had only a small data allowance on my phone plan, which I blew in about 10 minutes when I first got the phone. I ended up changing the APN just to stop it connecting, and then virtually nothing worked. Most apps that checked the state of the data connection would hang for 30-40 seconds when starting. My Palm had complete control of the data connection, and apps would start up just as quickly whether or not data services were enabled. As I said, this is an Android issue.
I find the array of settings that Android and Samsung offer difficult to navigate, but I guess that is inevitable when phones get so complex.
Other than that, it works as a phone, the audio and video performance is acceptable to good, and although it is slower than a Desire, you get used to it. I've yet to find an app that does not work. It'll do for a while.
I still miss the simplicity of the Treo, however.
Data gathering, or medication control.
I can see that this will be used for all sorts of things in the medical world, but that in itself is worrying.
Let's assume that it may allow feedback-controlled drug release for people with chronic pain or insulin dependence.
It would appear to me that it would no longer just be your private data at the mercy of the security of the network, but your very life. Imagine if a 'hacker' could gain access and trigger an overdose of whatever drug is being dispensed.
(BTW for all you big-brother conspiracy theorists, consider a scenario where somebody could release drugs remotely to calm a crowd).
I never said UNIX was invulnerable. Only a fool would claim that any OS is absolutely secure, and I can think of several ways to target a UNIX system, but most involve some form of social engineering together with lax root administration.
But in the write-up of the worm on the Symantec site http://www.symantec.com/connect/blogs/distilling-w32stuxnet-components, it is quite clear that in order to infect the SCADA PLC, normal Windows methods are involved, although what is described is very sophisticated.
I'm glad that dual-vendor redundant systems are involved in our power stations, but I would guess that the next generation will probably have less of this, because Windows is slowly taking over the world.
"No operating system is bullet proof". Too true, but at least UNIX model privilege separation and no write access for ordinary users to the system directories means that not only do you have to get code running in a system, you have to get it running as a user with escalated privileges in order to do serious damage. A double hurdle. And there are even better security models available.
Let's face it. The security and application installation model on Windows pre-Vista was just terminally flawed. It required serious knowledge of Windows to allow it to work in a secure manner. This is why those systems are such ripe targets, not just because there are so many instances of Windows. And I am prepared to argue this point out with anybody, although preferably over a pint rather than in these forums.
If there is software running nuclear power stations that needs admin rights to run, then I laugh at their folly! Or I would if I did not live within 20 miles of one!
Lots of things you have written trigger outrage in me, but I believe that, to encapsulate what you've said, it's always an economic argument. And not to keep service prices to the customer down, but to increase shareholder return.
My view is that some things should be beyond raw bean-counter accountant economics, and safety is number one on this list.
And the argument that other OS's would be equally exploitable is just fatuous. If the account that you logged in with to use the PC for everyday work did not have write access to the PLC code, then ordinary everyday use of the system would not expose the control system to infection. This would be the case if it were designed to use a UNIX-type OS, or QNX, or VMS, or, in fact, any OS that did not evolve from a 'Personal Computer'.
I am not saying that it would be totally safe, and your point about nothing being totally secure is quite true, but the times when the system would be vulnerable would be significantly reduced, mainly to system updates.
Part of the underlying problem is that as Windows is not regarded as a secure OS, many generations of programmers have grown up without having to think of making their code work in a system with anything like a decent security model. I've come across this time and time again when I get to install a piece of software on a UNIX or Linux box that was written by a PC programmer, and find that you have to have log and configuration files globally writeable, and even worse, whole directory trees in a similar state.
It is possible to control user access on a Windows system running NTFS 5 or later, but too few people care enough to design their software to install and run in a safe manner.
In fact, the underlying user and file permission model on NT-based systems is actually much better than UNIX's (and this is a UNIX bigot saying this - the UNIX model is actually quite simple and restrictive), but how many people know how to use the policy editor to take advantage of this? If you are a Windows programmer, do you set up multiple accounts? Do you have a specific account to install the software that is neither an admin account nor a day-to-day user account? Do you use the access-rights user and group attributes to control who can do what with which file? Do you even know what acledit is?
The simple answer is no! I would say that, almost without exception, Windows programmers just don't think that way. (BTW, if you are a Windows programmer who does take the required amount of care, wade in and prove me wrong!) I believe that Windows administrators have more idea than the application developers, and that is just because they have been burned too often by the vulnerabilities of Windows.
If you grew up in a UNIX or VMS or even a RACF world, you would understand this or you would not get work.
Maybe I was wrong.
I cannot point to where I picked this up from, which is why I questioned whether I remembered it correctly, but I'm sure that I did read it at one time. Possibly it was an earlier agreement, or maybe one of the other types of arrangement that Microsoft had. I accept that the posting may be partially wrong.
But the mere fact that the keys are still available does not really prove that you are still allowed to use them. Maybe someone who actually has a subscription can check their agreement, and quote or paraphrase what it says about lapsed subscriptions.
I have just read what you are allowed to do with the software you obtain through TechNet from the technet.microsoft.com website. There is an interesting quote regarding the use of the software: "Access over 70+ full-version Microsoft software for evaluation purposes only".
In the License terms there is also:
"Evaluation Software. One user may install and use copies of the evaluation software listed in the COMPONENTS.TXT file, even if you obtained a server license. You may use the evaluation software only to evaluate it. You may not use it in a live operating, in a staging environment or with data that has not been sufficiently backed up."
and later in the same document:
"SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement."
I believe that these terms taken together would allow Microsoft to judge that long-term use of a particular license may not be for evaluation purposes (yes, I did read the "without any time or feature limits", but this is then qualified "for evaluation purposes only") and this would be enough to allow them to disable a license if they thought that the use was no longer for evaluation.
And the moral is - Please read the terms and conditions that you agree to, especially with Microsoft. You may not get what you think.
Legal Disclaimer: All of the quotes are taken directly from Copyrighted material contained on a Microsoft web site, and the rights remain with Microsoft in accordance with the text contained at http://www.microsoft.com/About/Legal/EN/US/IntellectualProperty/Copyright/default.aspx
I think you need to read the agreement AGAIN
Part of the agreement, if I remember correctly, is that you are only permitted to use the keys that you obtain through Technet while you maintain the subscription.
As soon as you stop paying the subscription, you need to buy new, full licences, or uninstall the software. So you can expect any keys that you used to start failing the Windows Genuine Advantage test sometime in the future.
This was the main reason I never took advantage of the apparently favourable conditions offered. I did not want to tie myself into a long-term agreement with MS where they could repeatedly demand money from me at their own terms.
I would laugh at you all for dancing with the devil, if it were not so tragic.
I believe that there is a different issue here that may be changed by solid state memory.
My thoughts are that it is an addressing issue. Currently, if you think about it, data on persistent media is accessed via a filesystem, indirected through some form of adapter, across some form of interlink one or more times, and then to a disk.
All of these levels provide addressing information of one kind or another, which may or may not be abstracted one or more times. This is required because of the inherent limitations on the size of disks, the number of disks per device bus, and the number of devices and interlinks available. Over and over, this has to be re-worked as disk sizes reach the next barrier. This is expensive, time consuming, and slows down what can be done.
With solid state memory, it is in theory possible to implement a block- or even byte-addressed space as large as your address width allows. Let's allocate 256-bit addressing, giving a space of around 10 to the 77th power bytes, which should be enough for anybody (famous repeated last words; maybe make it 512 bits). We don't have to make this all physically addressable immediately. Expose this as a global address space to ALL of your systems. Call this a Storage Bus Address (SBA - I claim trademark and any copyright and patent rights over the name and concepts). Allow SBA virtual mapping so that you can expose parts of your global filestore to individual systems, and maybe allow slow interconnects to use fewer address lines.
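The arithmetic holds up, by the way (bc wraps long output; shown unwrapped here):

    $ echo '2^256' | bc
    115792089237316195423570985008687907853269984665640564039457584007913129639936

That is about 1.2 x 10^77.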
Put the resilience in the managing device (two- or three-way mirroring with multi-bit error correction), and make the memory hot-swappable in manageable chunks. Add secure page or 'chunk (of address space)' level access security, using a global name space and cryptographic keys to protect one system's data from another. Add in some geographical mirroring at any level you like for protection.
Once you have done this, you can abstract the interconnect between your servers in any way you like, provided that you maintain the access semantics. Make it closely coupled (at internal bus speeds), or distance coupled depending on the access speed you require.
Change all the OS's to implement this large space addressing for their persistent store (it's easier with some, like Plan 9 and IBM i, than others), initially as a filesystem, but ultimately as a flat address space in later incarnations of the OS. This could even be added into the processor address space, but I think that would require more changes in system and OS design.
I think that the revolution will come when persistent storage is addressed like this, and it could be done fairly easily, but would require industry agreement. This may be what prevents it.
This is me blue-sky dreaming, but I don't see why it can't happen.
I'm sure that Microsoft and the EFF are in this for different reasons.
Microsoft are treading a narrow path. They don't want their patents overturned, but they do want this one. They have clearly failed to convince the appeals process, but if a review is granted, it would enshrine by precedent a process that may further favour companies with big pockets.
The EFF wants software patents, at least, to be more stringently examined before being granted, with a preferred outcome of a ruling that software is not patentable.
So it's a dangerous game for both parties, but at least it would air the problems somewhere it might do some good.
You've found the Linux equivalent of Catch 22!
If an OS has no suitable apps, people will not consider it.
If nobody uses the OS, applications will not be written.
Having any apps available to do video manipulation is a step in the right direction, especially in the home market. I've used Avidemux for the last few years to trim and combine video files. It Works For Me.
Care to expand on this? I admit that Ubuntu is not perfect, and in some respects going the wrong way at the moment IMHO, but it is a much more end-user-targeted distribution than Fedora, where you have to run to keep up, or OpenSuSE, where it sometimes seems that the opposite is true, or the niche hobbyist distros (and I include Debian here, even though it is the basis for Ubuntu).
Ubuntu has a large repository that is kept up to date, a documented lifetime for each of the releases (I tend to keep to LTS releases because my computers are tools, and spending time maintaining the OS is not high on my list of things-I-have-to-do), and an easy-to-understand patching strategy, and it is actually targeted at ordinary users rather than hobbyists. These are serious plus points for someone exploring Linux. Not everybody likes to wear hair shirts.
The other thing that Ubuntu is doing is reaching towards the critical mass where it is taken seriously by computer and software suppliers for mass consumption devices. RedHat or SuSE Enterprise releases will never appear in this segment of the market, merely because of the model being followed.
It is quite true that if you are building a business model around Linux, you may choose a more business-oriented distribution, but Canonical are looking in that direction as well.
There can never be a one-size-fits-all distro, but what we are looking at here is what is prevalent. You don't have to like it, but if you make statements like you have, I feel that you have to justify them.
Like with children
you should not always give them what they want. Sometimes they just don't understand the consequences.
The thing that gets me
is that this is not new revenue. It's merely moving a fixed amount of purchasing power from one stream to another. If the operators get a slice of this, then it will be by 'stealing' it from someone else.
What I keep seeing is that businesses believe that they are making new money by offering this type of service. This is just plain wrong.
If I buy a cinema ticket, I do not want to pay more to buy it using my phone. Yet realistically, do people actually believe that they will pay less with new transaction types, especially if they are paying a monthly fee for the privilege? And more interested parties will be taking a piece of the action.
Time is indeed money, but only up to a point.
And sometimes, it's nice knowing that the £50 in my wallet is still there as long as I don't spend it. Someone might steal it, but nobody can legitimately spend it without my knowledge. I'm almost afraid to look at my bank account sometimes, because I have so many direct debits and other transactions, often on irregular days of each month. I think that the same will be true of any e-payment system.
You're making an assumption
that the signal does not go beyond your house. If it does, a neighbour can join your network, or possibly you will end up with a single network spanning more than one house. Combined with UPnP, this could mean that all your media and Internet devices are visible and available. Shiver.
If you want to guarantee privacy, then you should set your own key, and to do that, you need the Windows application. This is why it sucks.
There have been Linux (actually POSIX-compliant) tools, but I have only tested them on the older HomePlug and HomePlug Turbo devices.
Podules were for Acorn Archimedes machines, although I suppose that the A3000 was branded as a BBC micro. Not the classic 8-bit 6502A based BBC A and B though!
Liability insurance needed
Bearing in mind how much noise is made by environmentalists about oil contamination if a ship founders, one wonders how much liability insurance would need to be carried by a company operating nuclear merchant ships.
If a nuclear warship is damaged/founders, you expect (and this has been the case so far) that the nation operating the ship will carry the burden of recovery and clean-up of the wreck. There would need to be some guarantee that sufficient resource would be available to prevent nuclear contamination from a merchantman.
If there were arguments and delays after such an accident involving a merchant ship, leading to nuclear contamination, then the outrage that would follow would make Deepwater Horizon look like a mere storm in a teacup.
I'd also expect that these nucular wessels (sorry, couldn't resist) would also have to be operated far from Somali pirates!
BTW. The description for the warning exclamation icon reads "All hands man the pumps, run for the hills, batten down the hatches and so forth", so I thought it was appropriate.
Played with one owned by a friend
While it worked, it felt unnatural, and the keys, particularly around the edges, were a bit unreliable.
It just didn't feel right with no physical keys, and the flatness meant that anybody used to typing got aching hands quite quickly. I never saw him use it much in the following months. It was a clever and an impressive gadget though.
I found a better solution for sending texts quickly was to link my (then) Nokia phone to my laptop by IR, but that would rather defeat the purpose when using a smartphone. I got a Palm Treo, installed Graffiti, and used that instead. I wish I could use a stylus and Graffiti on my current android phone (I know, both are possible, but Graffiti appears to have been pulled from the Android Market at the moment!)
Our Choices, which became a Blockbuster during the big change a few years ago, is fairly clean and tidy, staffed with enthusiastic people, and is never empty of customers.
I live in a rural town, with the nearest large retailer over 25 miles away. We've lost Woolworths and Currys (games), and our small WH Smiths does not sell a large range of DVDs or any games at all.
The Blockbuster is the only remaining local outlet with a reasonable range of DVDs and games to purchase, and has the added benefit of renting both. The only alternative is the restricted range of DVDs that our local Tesco sells; as this is a rural branch, it only runs to the top 30 or so DVDs and even fewer games, or the £5 bargain-bin titles.
If we lose our Blockbuster, with the really poor rural broadband provision and no cable TV, it will make the area even less attractive to the resident youth. We are already seeing a serious upward change in the age demographic as the young leave when they can.
Yes, we can buy from Amazon or Play. Yes, we can download (slowly, and encumbered by usage caps). Yes, we can get titles from LoveFilm, but the postal service is already going down the tube. We appear to only get every-other-day mail deliveries as it is, and this will only get worse.
What am I supposed to do when one of my kids asks for a rental or a game for the weekend? Or for an extra game controller? Modern kids just don't seem to understand "it'll be here next week". They want it NOW.
It would be interesting
to see if bone would gradually permeate the foam over time.
My thoughts are that if it is similar in strength to human bone, it may break, but if over time ordinary bone grows through, it may be able to heal with ordinary bone, without further surgical intervention. Now that would be revolutionary. It may completely change the lives of people who currently have to go through serious bone grafting after injury.
I am not in any way associated with a medical profession, and I am just idly speculating, so I'm sure someone will say that this can't happen. Still...
The problem is...
...that this works fine if all you want is what they offer.
As soon as you get tied in, and decide, say, that you want to use a product that they do not offer, such as a particular new network type, or a better HSM product, or a particular data visualization package to integrate with your MIS, you suddenly find that you either can't, or will compromise the gold-standard support they offer by changing the software stack.
This is a nirvana for corporate sales droids, especially if they can talk to the customer managers rather than their techies (it's amazing how often I have found that businesses will allow the managers to talk to salesmen without having techies present nowadays).
You end up getting steered down a path that ties you in to a vendor's products, then, when you can't get what you need working, to the vendor's consulting arm, all of which will be chargeable.