1696 posts • joined 15 Jun 2007
For bog's sake. It's easy (although costly).
Zone your network using firewalls. Wireless access appears in one zone, which does NOT have any critical servers in it. Employ a capable network engineer or two, and let them build a working relationship with the security people.
Control the keys using the strongest authentication all your official devices can use, preferably based on something like RADIUS. Regularly change any PSK keys that you are forced to use, and only circulate the changed keys to people with registered devices.
Query all devices using a device-checker probe (something as simple as nmap or Wireshark should be able to find most devices) and track down any unauthorized devices. Scan for unauthorized wireless networks in the vicinity, and attempt to identify whether each one is the coffee shop downstairs or a rogue access point in the building (I'm serious, it happened somewhere I worked!). Make sure that all laptops physically attached to the wired network have wireless services turned off (including 3G 'dongles' and Bluetooth). Run regular security scans on laptops to check that this is the case.
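The 'registered devices only' part of that check is trivial to script. A minimal sketch, assuming you can get a list of MAC addresses seen on the wire (say, parsed from `nmap -sn` output); the MAC addresses and the register here are entirely made up for illustration:

```python
# Hypothetical sketch: flag MAC addresses seen on the network that are
# not in the register of approved hardware. The MACs are invented, and
# in practice 'seen_macs' would be parsed from something like the
# output of 'nmap -sn 192.168.1.0/24'.
REGISTERED = {
    "00:1a:2b:3c:4d:5e",  # corporate laptop (made-up example)
    "00:1a:2b:3c:4d:5f",  # network printer (made-up example)
}

def find_rogues(seen_macs):
    """Return, sorted, the MACs seen on the wire that are not registered."""
    return sorted({m.lower() for m in seen_macs} - REGISTERED)

print(find_rogues(["00:1A:2B:3C:4D:5E", "DE:AD:BE:EF:00:01"]))
# -> ['de:ad:be:ef:00:01']
```

Anything the diff turns up is either a device that needs registering, or something that should not be on your network at all.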
Put simple services (like printing and possibly mail access) within the DMZ. Allow devices on the DMZ controlled access to the Internet, and then back in to your corporate gateways exactly as if they were coming in from the Internet. Knock specific holes, controlled by the strongest access control you have, in the inward-looking firewall for any apps that absolutely have to be accessed from mobile devices. Argue the case for blocking every single one, until you have been convinced that it is necessary and that appropriate controls are in place.
Review these holes regularly, and have strong procedures to track leavers and joiners. Ban, with the strongest penalties, sharing of IDs and revealing PSKs to non-authorized users. Lock services to specific IDs using strong authentication, preferably using one-time password devices.
Be prepared to use VPN for any really critical services, especially those containing private or critical data. Select your approved devices carefully, to make sure that they meet all the security requirements. If there are vulnerabilities known on your mobile device of choice, make sure you have appropriate AV software deployed and updated.
If you are paranoid, consider using glass coatings on the windows to control the leakage of the WiFi signal out of the building, but if you are that worried, you should probably not use wireless services at all. Work out how far your wireless networks spread outside of your controlled space, using normal devices and focused antenna as well. Show the controlling managers this, and demonstrate it as well.
And above all, if you value your business, JUST DON'T USE WIRELESS SERVICES. This should include wireless keyboards and any future wireless USB technology. If the MD objects, put a reasoned argument that the very business itself is at risk if the network is compromised. And if you are over-ruled, either be prepared to give in, lodging an "I Told You So" letter somewhere in the business, or to resign on principle.
It is clear that the "Block everything, then allow only what's essential" principle operates here.
Where is he, anyway?
I've missed him.
"cannot afford enough jets for the two ships"
The intention is that only one ship would be at sea at any one time, so why the need for aircraft for both? If, as expected, the delivery of the carriers is staggered, once the Prince of Wales has finished its acceptance trials, Queen Elizabeth will be ready for its first R&R and minor refit.
Going by how the old Audacious-class Ark was run, the aircraft would fly off to a land-based airfield when the ship returned to its home port, and would only join again once the ship was back at sea and had passed its seaworthiness trials.
You would need more than one ship's worth, but less than two, to account for aircraft maintenance cycles.
Oops, silly me.
I meant to say CDC SMD (Storage Module Device) drives, not SMB. How memory fades.
@Ocular Sinister. Experience tells me otherwise
When Dapper Drake (6.06) was the LTS release, by the time Hardy Heron (8.04) came along, many of the packages in the repository were functionally stable. This meant that you might get bug fixes, but you would probably not get a bump of the version.
If you were adventurous, you could add the 'backports' repository to the list of subscriptions, and get a select few packages at the same level as a more recent Ubuntu release.
As a result, even though dapper was still 'supported', it began to be very difficult to put .deb files from the Debian repository onto Dapper, because the prerequisite libraries would not be present. Ditto compiling up stuff from source.
Hardy does not appear to be quite so prone to this, now Lucid is available, but you can see it starting to happen, especially with third-party software like the BBC iPlayer.
I'm sure that if you joined the Ubuntu developer community, offering to make the backports repository more complete, you would be welcomed with open arms. But until then, the current developer community will be more interested in putting recent versions of the packages into the latest-and-greatest releases, not into the older ones. I myself would love to do this, but personal commitments do not leave me with the time to do it at the moment.
It's a shame, as I believe that ordinary users would be best served putting a LTS release on their systems and leaving it there for the lifetime of the system.
Strangely enough, I did a Windows XP to Windows 7 upgrade recently (on one of my kids' gaming rigs), and it was much easier than I expected, at least using a second disk and a parallel Windows 7 retail install to make a dual-booting system. I do not think I had to re-license anything. All the programs installed on the XP drive were identified and recognised, and ran without problems. Admittedly, these were mostly games, though they did include Office.
Microsoft must be doing something right!
In the 70s, you could never mount / read-only. The ability to do this only came about when Sun implemented their diskless model, where all of the files that would be modified on the / partition (often the files in /etc such as passwd, utmp, wtmp, and mtab) were moved into /var, specifically so / could be mounted read-only on diskless workstations. I'm a bit vague about Sun timelines (I was working with PDP11s and Bell Labs versions of UNIX at the time), but I would guess that this happened around 1983, a few years after Sun was set up, with the release of the Sun2 workstations.
In this model, / and /usr were remote read-only mounts, /var was a remote read/write mount specifically for that workstation, and /home was a read/write shared mount for user files.
@AC re. Sensible compromises
Drive letters were antiquated when MS used them in DOS 1.1!
UNIX already had a fully hierarchical filesystem years before Bill went to see IBM.
The concept of filesystems on separate partitions really goes back to the original Bell Labs V6 and V7 code for PDP11s, where the partition sizes were hardcoded into the device driver for RP disks (no on-disk partition tables there!), and when the smaller RK disks were barely large enough for / or /usr.
Each device could have a maximum of 8 partitions defined, and the definition of the partitions had to work with all drives of that type present in the system. IIRC, it was normal practice to make one partition span the whole device, two more cover half of the device each, and four more cover a quarter of the device each. It was, of course, stupid to try to mount overlapping filesystems, or to use the wrong minor device, but this model gave flexibility.
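The 1 + 2 + 4 overlapping layout can be modelled in a few lines. This is purely illustrative (a real V7 driver hardcoded its own table, in disk blocks, not MB), but it shows both the flexibility and the overlap hazard:

```python
# Illustrative model of a V7-style hardcoded partition table for a
# 32MB drive: one partition spanning the whole device, two halves,
# and four quarters. Offsets/sizes are in MB for readability; a real
# driver would have hardcoded these in disk blocks.
DISK_MB = 32
PARTITIONS = {                                     # minor number: (offset, size)
    0: (0, 32),                                    # whole device
    1: (0, 16), 2: (16, 16),                       # halves
    3: (0, 8), 4: (8, 8), 5: (16, 8), 6: (24, 8),  # quarters
}

def overlaps(a, b):
    """True if partitions a and b share any space (mounting both = bad)."""
    (ao, asz), (bo, bsz) = PARTITIONS[a], PARTITIONS[b]
    return ao < bo + bsz and bo < ao + asz

print(overlaps(0, 3))   # the whole disk overlaps every quarter -> True
print(overlaps(3, 4))   # adjacent quarters do not overlap      -> False
```

Pick the whole-device minor, the two halves, or the four quarters depending on how you want to carve the drive up, and simply never mount two minors whose ranges intersect.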
My old Systime 5000E (a PDP11/34E in Systime covers, circa 1982) had 2x32MB CDC SMB disks, with a controller hacked to look like an RP controller with RP03 drives, with overlapping partitions of 1x32MB, 2x16MB and 4x8MB. I had / on one 8MB partition (formatted to use just 6MB, with the last 2MB used as swap space), /usr on another 8MB partition, and then used the remaining 16MB as a /user filesystem, which was the equivalent of /home on Linux or a more modern UNIX system. There was no /var or /opt at that time, as Sun were only just thinking about diskless systems. A second drive had a single 32MB partition for the /ingres filesystem (which actually had the whole of BSD 2.6 [for which I sadly do not have a copy of the tape] unpacked in it), which contained the Ingres database code and all of the defined databases.
It was the only real way to manage such systems. If you are really interested in knowing what was involved in setting up ancient UNIX systems, I suggest that you start here http://minnie.tuhs.org/PUPS/Setup/v7_setup.html, and then browse the rest of the UNIX Heritage Society's site.
BTW, I started on Version 6, although I have put the link in for Version 7 as that is regarded as the point where UNIX really started to fragment.
I prefer two partitions (but I am a UNIX sysadmin)
It's swings and roundabouts. I tend to use a separate home partition so that I can dry-run a new release while keeping access to all of my files in both releases.
Unfortunately, this is not a perfect solution, as quite often the configuration files for the dozens of apps and utilities change between major releases. You often see informational messages about configuration files being 'converted' to a new version, and then find that they no longer work with the old OS. This broke the sound on my Thinkpad between 6.06 (Dapper) and 8.04 (Hardy) (both LTS releases).
I've never been satisfied that 10.04 is ready to switch to, because there are sound, display and suspend problems, so I am still running Hardy. One day, I will boot Lucid, update everything, and all my problems will be over, but I'm not holding my breath, and I don't want to switch away from LTS releases for my main systems.
Non-nuclear carriers are stupid and escorts are required
The problem with the current strategy is that the gas-turbine powered carriers have such a restricted range that they cannot really go anywhere without Replenishment At Sea (RAS or whatever it is called now). And if you reduce the escort fleet to enough to cover the operational carrier plus some in refit, what the hell is going to protect the tankers that are necessary to provide the fuel? How to cripple the British Navy? Sink all the RFAs.
And Lewis is assuming that all of the Navy is operating in the same place at the same time. What is needed to protect HMS Ocean, which has helicopters but no fixed-wing aircraft? Its speed is only about 20 knots IIRC, so it could be subject to attack by a conventional sub with some dash capability. It is possible that this ship may be deployed separately from the carrier.
We need a capable and relatively sizeable escort fleet, although possibly with slightly different capabilities, to provide some degree of flexibility. If all we can do is deploy the fleet in a fixed configuration as Lewis is suggesting, then it fixes the way it operates for the next 30 years or so. Not exactly the forward looking attitude Lewis says he is suggesting.
I agree that the Type 45s are about as relevant as the Type 81 was 40 years ago, of which all but one was cancelled. But the batch 3 Type 22 frigates proved themselves to be immensely flexible vessels, so a combination of these (with something like Aster), plus some ocean-going, gunboat-sized anti-pirate and fishery protection vessels, possibly large enough to operate a single simple helicopter like the old Wasp, is necessary. Equip them with some relatively heavy, rapid-reaction 30-60mm weapons to dissuade fast-boat pirates and some towed-array sonar, and add a capable containerized AA weapon. Something not dissimilar to the Swedish upgraded Stockholm class.
And the radar picket vessels that Lewis talks about must be able to defend themselves, so they must be multi-purpose vessels. As they operate many miles from the fleet, they must have some AS detection capability, even if their main purpose is radar early warning. This assumes that they are necessary at all. The only real reason we had Type 42s (like Sheffield) for this was because the through-deck cruisers (what we now call the current generation of [Harrier] carriers) were not large enough to operate any AWACS aircraft.
I've got one
and it is a decent enough phone which was a free upgrade from Orange on the monthly plan I pay for. I wanted an HTC Desire, but was not prepared to pay the £150 they wanted as an upgrade charge (I'm tight-fisted, I know).
The only serious problems with it that I can really point to are that 400x240 is really too small a resolution for browsing, although pinch-to-zoom does make it a bit more bearable, and that I had to change the settings on the GPS receiver in order to make it work at all.
Other minor niggles specific to the phone are that it requires a strong WiFi signal, and its performance is significantly worse than that of any of the laptops I have in the house. Also, the battery indicator is inaccurate: the other day it dropped from 50% to 15% (the low-battery warning point) in about 10 minutes while browsing the Android Market over WiFi. Two days of moderate use with data and WiFi exhausts the battery.
Finger marks are very obvious, and I end up polishing them off every time I use it.
Most of the Orange apps require connections via data services and will not work over WiFi.
I have other problems with it, but they are mostly Android-related. My previous phone was a Palm Treo 650, and the reliance on data services that Android imposes for pretty much everything bugs me. I had only a small data allowance on my phone plan, which I blew in about 10 minutes when I first got the phone. I ended up changing the APN just to stop it connecting, and then virtually nothing worked. Most apps that checked the state of the data connection would hang for 30-40 seconds when starting. My Palm had complete control of the data connection, and apps would start up just as quickly whether or not data services were enabled. As I said, this is an Android issue.
I find the array of settings that Android and Samsung offer difficult to navigate, but I guess that is inevitable when phones get so complex.
Other than that, it works as a phone, the audio and video performance is acceptable to good, and although it is slower than a Desire, you get used to it. I've yet to find an app that does not work. It'll do for a while.
I still miss the simplicity of the Treo, however.
Data gathering, or medication control.
I can see that this will be used for all sorts of things in the medical world, but that in itself is worrying.
Let's assume that it may allow feedback-controlled drug release for people with chronic pain or insulin dependence.
It would appear to me that it would no longer be just your private data at the mercy of the security of the network, but your very life. Imagine if a 'hacker' could gain access and trigger an overdose of whatever drug was being dispensed.
(BTW for all you big-brother conspiracy theorists, consider a scenario where somebody could release drugs remotely to calm a crowd).
I never said UNIX was invulnerable. Only a fool would claim that any OS is absolutely secure, and I can think of several ways to target a UNIX system, but most involve some form of social engineering together with lax root administration.
But in the write-up of the worm on the Symantec site http://www.symantec.com/connect/blogs/distilling-w32stuxnet-components, it is quite clear that in order to infect the SCADA PLC, normal Windows methods are involved, although what is described is very sophisticated.
I'm glad that dual-vendor redundant systems are involved in our power stations, but I would guess that the next generation will probably have less of this, because Windows is slowly taking over the world.
"No operating system is bullet proof". Too true, but at least UNIX model privilege separation and no write access for ordinary users to the system directories means that not only do you have to get code running in a system, you have to get it running as a user with escalated privileges in order to do serious damage. A double hurdle. And there are even better security models available.
Let's face it. The security and application installation model on Windows pre-Vista was just terminally flawed. It required serious knowledge of Windows to allow it to work in a secure manner. This is why those systems are such ripe targets, not just because there are so many instances of Windows. And I am prepared to argue this point out with anybody, although preferably over a pint rather than in these forums.
If there is software running nuclear power stations that needs admin rights to run, then I laugh at their folly! Or I would if I did not live within 20 miles of one!
Lots of things you have written trigger outrage in me, but I believe that, to encapsulate what you've said, it's always an economic argument. And not to keep service prices to the customer down, but to increase shareholder return.
My view is that some things should be beyond raw bean-counter accountant economics, and safety is number one on this list.
And the argument that other OSes would be equally exploitable is just fatuous. If the account that you logged into to use the PC for everyday work did not have write access to the PLC code, then ordinary everyday use of the system would not expose the control system to infection. This would be the case if it were designed to use a UNIX-type OS, or QNX, or VMS, or, in fact, any OS that did not evolve from a 'Personal Computer'.
I am not saying that it would be totally safe; your point about nothing being totally secure is quite true. But the times when the system would be vulnerable would be significantly reduced, mainly to system updates.
Part of the underlying problem is that, as Windows is not regarded as a secure OS, many generations of programmers have grown up without having to think about making their code work in a system with anything like a decent security model. I've come across this time and time again when I install a piece of software on a UNIX or Linux box that was written by a PC programmer, and find that you have to make log and configuration files globally writable and, even worse, leave whole directory trees in a similar state.
It is possible to control user access on a Windows system running NTFS 5 or later, but not enough people care enough to design their software to install and run in a safe manner.
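As an aside, the kind of globally-writable sprawl described above is easy to audit for on a UNIX box. A minimal, pure-stdlib sketch (the function name is my own, and you would point it at whatever tree the installer touched):

```python
# Minimal sketch: walk a directory tree and report anything that is
# world-writable (the 'other' write bit set) -- the sort of sloppy
# install described above. Point it at the tree you want to audit.
import os
import stat

def world_writable(root):
    """Return paths under root (files and directories) writable by 'other'."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.lstat(path).st_mode & stat.S_IWOTH:
                hits.append(path)
    return sorted(hits)
```

Anything it reports in a log or configuration directory is exactly the kind of thing a PC-bred installer leaves behind.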
In fact, the underlying user and file permission model on NT-based systems is actually much better than UNIX's (and this is a UNIX bigot saying so - the UNIX model is actually quite simple and restrictive), but how many people know how to use the policy editor to take advantage of it? If you are a Windows programmer, do you set up multiple accounts? Do you have a specific account to install the software that is neither an admin account nor a day-to-day user account? Do you use the access-rights user and group attributes to control who can do what with which file? Do you even know what acledit is?
The simple answer is No! I would say that, almost without exception, Windows programmers just don't think that way. (BTW, if you are a Windows programmer who does take the required amount of care, wade in and prove me wrong!) I believe that Windows administrators have more idea than the application developers, and that is just because they have been burned too often by the vulnerabilities of Windows.
If you grew up in a UNIX or VMS or even a RACF world, you would understand this or you would not get work.
Maybe I was wrong.
I cannot point to where I picked this up from, which is why I questioned whether I remembered it correctly, but I'm sure that I did read it at one time. Possibly it was an earlier agreement, or maybe one of the other types of arrangement that Microsoft had. I accept that the posting may be partially wrong.
But the mere fact that the keys are still available does not really prove that you are still allowed to use them. Maybe someone who actually has a subscription can check their agreement, and quote or paraphrase what it says about lapsed subscriptions.
I have just read what you are allowed to do with the software you obtain through TechNet from the technet.microsoft.com website. This appears to be an interesting quote regarding the use of the software: "Access over 70+ full-version Microsoft software for evaluation purposes only".
In the License terms there is also:
"Evaluation Software. One user may install and use copies of the evaluation software listed in the COMPONENTS.TXT file, even if you obtained a server license. You may use the evaluation software only to evaluate it. You may not use it in a live operating, in a staging environment or with data that has not been sufficiently backed up."
and later in the same document:
"SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement."
I believe that these terms taken together would allow Microsoft to judge that long-term use of a particular license may not be for evaluation purposes (yes, I did read the "without any time or feature limits", but this is then qualified "for evaluation purposes only") and this would be enough to allow them to disable a license if they thought that the use was no longer for evaluation.
And the moral is - Please read the terms and conditions that you agree to, especially with Microsoft. You may not get what you think.
Legal Disclaimer: All of the quotes are taken directly from Copyrighted material contained on a Microsoft web site, and the rights remain with Microsoft in accordance with the text contained at http://www.microsoft.com/About/Legal/EN/US/IntellectualProperty/Copyright/default.aspx
I think you need to read the agreement AGAIN
Part of the agreement, if I remember correctly, is that you are only permitted to use the keys that you obtain through Technet while you maintain the subscription.
As soon as you stop paying the subscription, you need to buy new, full licences, or uninstall the software. So you can expect any keys that you have used to fail the Windows Genuine Advantage test sometime in the future.
This was the main reason I never took advantage of the apparently favourable conditions offered. I did not want to tie myself into a long-term agreement with MS where they could repeatedly demand money from me at their own terms.
I would laugh at you all for dancing with the devil, if it were not so tragic.
I believe that there is a different issue here that may be changed by solid state memory.
My thoughts are that it is an addressing issue. Currently, if you think about it, data in current persistent media is accessed via a filesystem, indirected to some form of adapter, across some form of interlink one or more times, then to a disk.
All of these levels provide addressing information of one kind or another, which may or may not be abstracted one or more times. This is required because of the inherent limitations on the size of disks, the number of disks per device bus, and the number of devices and interlinks available. Over and over, this has to be re-worked as disk sizes reach the next barrier. This is expensive, time consuming, and slows down what can be done.
With solid state memory, it is in theory possible to implement a block-addressed or even byte-addressed space as large as your address width allows. Let's allocate 256-bit addressing, giving a space of about 10 to the 77th power addresses, which should be enough for anybody (famous repeated last words; maybe make it 512 bits). We don't have to make this all physically addressable immediately. Expose this as a global address space to ALL of your systems. Call this a Storage Bus Address (SBA - I claim trademark and any copyright and patent rights over the name and concepts). Allow SBA virtual mapping so that you can expose parts of your global filestore to individual systems, and maybe allow slow interconnects to use fewer address lines.
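The "10 to the 77th" figure checks out. A quick back-of-the-envelope in Python:

```python
# Back-of-the-envelope check on the address-space sizes quoted above:
# a 256-bit address gives roughly 10^77 distinct addresses, and the
# suggested 512-bit fallback gives roughly 10^154.
for bits in (64, 128, 256, 512):
    magnitude = len(str(2 ** bits)) - 1   # order of magnitude in base 10
    print("%3d bits -> about 10^%d addresses" % (bits, magnitude))
```

Even the 64-bit row (about 10^19) dwarfs any single disk, but a shared global space for every system and site is what pushes the requirement up towards the larger widths.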
Put the resilience in the managing device (two- or three-way mirroring with multi-bit error correction), and make the memory hot-swappable in manageable chunks. Add secure page- or 'chunk (of address space)'-level access security, using a global name space and cryptographic keys to protect one system's data from another. Add in some geographical mirroring at any level you like for protection.
Once you have done this, you can abstract the interconnect between your servers in any way you like, provided that you maintain the access semantics. Make it closely coupled (at internal bus speeds), or distance coupled depending on the access speed you require.
Change all the OSes to implement this large-space addressing for their persistent store (it's easier with some, like Plan 9 and IBM i, than others), initially as a filesystem, but ultimately as a flat address space in later incarnations of the OS. This could even be added into the processor address space, but I think that would require more changes in system and OS design.
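You can get a small taste of 'persistent store as flat address space' today with a memory-mapped file. A toy sketch (the file name and size are arbitrary examples; a real SBA-style design would work at the OS level rather than per file):

```python
# Toy sketch of treating persistent storage as a flat, byte-addressed
# space: mmap a file and read/write it like memory. The file name and
# size are arbitrary examples.
import mmap
import os

def open_store(path, size):
    """Map 'size' bytes of the file at 'path', creating it if needed."""
    if not os.path.exists(path) or os.path.getsize(path) < size:
        with open(path, "wb") as f:
            f.truncate(size)
    f = open(path, "r+b")
    return f, mmap.mmap(f.fileno(), size)

f, store = open_store("demo.store", 4096)
store[0:5] = b"hello"     # a plain 'memory' write...
store.flush()             # ...that persists on disk
print(bytes(store[0:5]))  # -> b'hello'
```

The writes survive reopening the file, which is the essence of the idea: no filesystem call in the data path, just loads and stores against an address range.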
I think that the revolution will come when persistent storage is addressed like this, and it could be done fairly easily, but would require industry agreement. This may be what prevents it.
This is me blue-sky dreaming, but I don't see why it can't happen.
I'm sure that Microsoft and the EFF are in this for different reasons.
Microsoft are treading a narrow path. They don't want their own patents overturned, but they do want this one to be. They have clearly failed to convince the appeals process, but if a review is granted, it would enshrine by precedent a process that may further favour companies with deep pockets.
The EFF wants software patents, at the least, to be more stringently examined before being granted, with a preferred outcome of a ruling that software is not patentable.
So it's a dangerous game for both parties, but at least it would air the problems somewhere it might do some good.
You've found the Linux equivalent of Catch 22!
If an OS has no suitable apps, people will not consider it.
If nobody uses the OS, applications will not be written.
Having any apps available to do video manipulation is a step in the right direction, especially in the home market. I've used Avidemux for the last few years to trim and combine video files. It Works For Me.
Care to expand on this? I admit that Ubuntu is not perfect, and in some respects going the wrong way at the moment IMHO, but it is a much more end-user-targeted distribution than Fedora, where you have to run to keep up, or OpenSuSE, where it sometimes seems that the opposite is true, or the niche hobbyist distros (and I include Debian here, even though it is the basis for Ubuntu).
The fact that Ubuntu has a large repository that is kept up to date, has a documented lifetime for each release (I tend to stick to LTS releases because my computers are tools, and spending time maintaining the OS is not high on my list of things-I-have-to-do), has an easy-to-understand patching strategy, and is actually targeted at ordinary users rather than hobbyists, is a serious plus point for someone exploring Linux. Not everybody likes to wear hair shirts.
The other thing that Ubuntu is doing is reaching towards the critical mass where it is taken seriously by computer and software suppliers for mass-consumption devices. RedHat or SuSE Enterprise releases will never appear in this segment of the market, merely because of the model being followed.
It is quite true that if you are building a business model around Linux, that you may choose a more business oriented distribution, but Canonical are looking in that direction as well.
There can never be a one-size-fits-all distro, but what we are looking at here is what is prevalent. You don't have to like it, but if you make statements like you have, I feel that you have to justify them.
Like with children
you should not always give them what they want. Sometimes they just don't understand the consequences.
The thing that gets me
is that this is not new revenue. It's merely moving a fixed amount of purchasing power from one stream to another. If the operators get a slice of this, then it will be by 'stealing' it from someone else.
What I keep seeing is that businesses believe that they are making new money by offering this type of service. This is just plain wrong.
If I buy a cinema ticket, I do not want to pay more to buy it using my phone. Yet, unrealistically, people actually believe that they will pay less with new transaction types, especially when they are paying a monthly fee for the privilege. And more interested parties will be taking a piece of the action.
Time is indeed money, but only up to a point.
And sometimes, it's nice knowing that the £50 in my wallet is still there as long as I don't spend it. Someone might steal it, but nobody can legitimately spend it without my knowledge. I'm almost afraid to look at my bank account sometimes, because I have so many direct debits and other transactions, often on irregular days of each month. I think that the same will be true of any e-payment system.
You're making an assumption
that the signal does not go beyond your house. If it does, a neighbour can join your network, or possibly you will end up with a single network spanning more than one house. Combined with UPnP, this could mean that all your media and Internet devices are visible and available. Shiver.
If you want to guarantee privacy, then you should set your own key, and to do that, you need the Windows application. This is why it sucks.
There have been Linux (actually POSIX-compliant) tools, but I have only tested them on the older HomePlug and HomePlug Turbo devices.
Podules were for Acorn Archimedes machines, although I suppose that the A3000 was branded as a BBC micro. Not the classic 8-bit 6502A based BBC A and B though!
Liability insurance needed
Bearing in mind how much noise is made by environmentalists about oil contamination if a ship founders, one wonders how much liability insurance would need to be carried by a company operating nuclear merchant ships.
If a nuclear warship is damaged/founders, you expect (and this has been the case so far) that the nation operating the ship will carry the burden of recovery and clean-up of the wreck. There would need to be some guarantee that sufficient resource would be available to prevent nuclear contamination from a merchantman.
If there were arguments and delays after such an accident involving a merchant ship, leading to nuclear contamination, then the outrage that would follow would make the Deepwater Horizon affair look like a mere storm in a teacup.
I'd also expect that these nucular wessels (sorry, couldn't resist) would also have to be operated far from Somali pirates!
BTW. The description for the warning exclamation icon reads "All hands man the pumps, run for the hills, batten down the hatches and so forth", so I thought it was appropriate.
Played with one owned by a friend
While it worked, it felt unnatural, and the keys, particularly around the edges, were a bit unreliable.
It just didn't feel right with no physical keys, and the flatness meant that anybody used to typing got aching hands quite quickly. I never saw him use it much in the following months. It was a clever and an impressive gadget though.
I found a better solution for sending texts quickly was to link my (then) Nokia phone to my laptop by IR, but that would rather defeat the purpose when using a smartphone. I got a Palm Treo, installed Graffiti, and used that instead. I wish I could use a stylus and Graffiti on my current android phone (I know, both are possible, but Graffiti appears to have been pulled from the Android Market at the moment!)
Our Choices, which became a Blockbuster during the big change a few years ago, is fairly clean and tidy, staffed by enthusiastic people, and is never empty of customers.
I live in a rural town, with the nearest large retailer over 25 miles away. We've lost Woolworths and Currys (games), and our small WH Smith does not sell a large range of DVDs or any games at all.
The Blockbuster is the only remaining local outlet with a reasonable range of DVDs and games to purchase, and has the added benefit of renting both. The only alternative is the restricted range of DVDs that our local Tesco sells, and as this is a rural branch, that only runs to the top 30 or so DVDs and even fewer games, or the £5 bargain bin titles.
If we lose our Blockbuster, with the really poor rural broadband provision and no cable TV, it will make the area even less attractive to the resident youth. We are already seeing a serious upward change in the age demographic as the young leave when they can.
Yes, we can buy from Amazon or Play. Yes, we can download (slowly and encumbered by usage caps). Yes, we can get titles from LoveFilm, but the postal service is already going down the tube. We appear to get only every-other-day deliveries of mail as it is, and this will only get worse.
What am I supposed to do when I get one of my kids asking for a rental or a game for the weekend? Or for an extra game controller? Modern kids just don't seem to understand "it'll be here next week". They want it NOW.
It would be interesting
to see if bone would gradually permeate the foam over time.
My thoughts are that if it is similar in strength to human bone, it may break, but if over time ordinary bone grows through, it may be able to heal with ordinary bone, without further surgical intervention. Now that would be revolutionary. It may completely change the lives of people who currently have to go through serious bone grafting after injury.
I am not in any way associated with a medical profession, and I am just idly speculating, so I'm sure someone will say that this can't happen. Still...
The problem is...
...that this works fine if all you want is what they offer.
As soon as you get tied in, and decide, say, that you want to use a product that they do not offer, such as a particular new network type, or a better HSM product, or a particular data visualization package to integrate with your MIS, you suddenly find that you either can't, or will compromise the gold-standard support they offer by changing the software stack.
This is a nirvana for corporate sales droids, especially if they can talk to the customer managers rather than their techies (it's amazing how often I have found that businesses will allow the managers to talk to salesmen without having techies present nowadays).
You end up getting steered down a path that ties you in to a vendor's products, then, when you can't get what you need working, to the vendor's consulting arm, all of which will be chargeable.
I think that the problem is that the Typhoon is one of the new generation of inherently unstable aircraft, that are only rendered flyable by the Fly-by-Computer avionics.
I'm sure that if the avionics were still operating, it would be possible to land, but if the avionics were out, there would be virtually no chance of any type of controlled landing. Hopefully, redundant systems and power supplies are installed to keep the avionics running should the primary power systems fail.
Primadonna robot footballers!
Do the robot players also get programmed to roll on the floor if another player gets within 6 inches of them during a tackle?
Dear oh dear.
I'm not sure whether what I'm saying here is a joke or not.
If you consider that civilizations are cyclic (which is not actually proven, but it's a good theory), then you need to leave some easily extracted mineral resources to allow a future civilization to progress through the equivalent of our early industrial age. Otherwise, once our reign at the top of the stack (last in, first out) comes to an end, future civilizations will get stuck in the pre-industrial age. An amusing fictional illustration of what might happen is described in "The Mote in God's Eye", by Larry Niven and Jerry Pournelle.
They're not going to be able to jump straight to a solar/green/nuclear lifestyle without going through some pretty low-grade technology! Unless you are suggesting a gap of a hundred million years to allow fossil fuels to accumulate again.
Mineral oil is actually a quite important lubricant, which may be more important in the future than using it as a fuel. Vegetable based oils are too light without significant processing (which takes energy).
Still, I'm just ranting as an office-chair speculator here.
Model M and failure rates
You really put your Model M in the dishwasher? Wow. I religiously strip all the keycaps off and wash them, and then apply a stiff brush and wet-wipes to the rest. Your solution sounds much quicker. How long do you leave it to dry?
When it comes to large numbers of similar devices, you need to look at the MTBF figures. The more of a particular device you have, the more frequently you will see one fail. I would have to look up the exact maths, but I don't think it's a simple ratio. Where I am, we have over three thousand 300GB disks, and we lose a couple every month. This does not cause a problem, because they are in a large number of separate RAID arrays with two hot spares per 10-disk array (=12 disks total). We could still be operating with three disks down in an array.
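The back-of-envelope version of the maths is simple if you assume a constant failure rate (the exponential model, which is a rough approximation and ignores infant mortality and wear-out). The MTBF figure below is an illustrative assumption, not a quoted vendor number:

```python
# Expected failures across a large population of identical devices,
# assuming a constant failure rate (exponential model).
# The MTBF value is assumed purely for illustration.

def expected_failures(population, mtbf_hours, period_hours):
    """Expected number of failures in the period, across the whole population."""
    return population * period_hours / mtbf_hours

disks = 3000
mtbf = 1_000_000        # hours; a plausible vendor-style figure (assumed)
month = 730             # hours in an average month

print(expected_failures(disks, mtbf, month))  # ~2.2 failures a month
```

With those assumed numbers you get roughly two failures a month across 3,000 disks, which matches the "couple every month" observation above.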
Memory, on the whole, seems reasonably reliable, but we have multi-bit parity on the systems, together with bit-steering (the joys of Power6 systems). This means that it is not the built-to-a-budget memory that most people put in their Wintel servers. That price premium must really buy you something.
Ubuntu 6.06 LTS - Dapper Drake
Saved you having to look it up.
I used to be able to use my PalmPilot m100 to control TVs quite well using OmniRemote (a free, downloadable app a long time before Apple got in on the act), but when I switched to a Sony Clie and then a Palm Treo, the LEDs were only strong enough to control the TV from a distance of about a metre. Not too good for a remote. I would have been able to buy a hardware add-on, but that would have spoilt the lines of the PDA/phone.
It was a good idea though, having a fully customizable remote able to control all of your media appliances. Shame it was not successful.
Interesting about consoles, though. My son uses his DS as a wireless controller on his Wii. Not sure if this is specific to particular games, but it allows multiplayer games to be played when you don't have enough controllers.
..hang on a sec, something wrong here. Circular argument. Still, IBM did something similarly silly in the printer market in the 90's.
...if you really think there is anything of note here
I'll remind you of this, sometime in the future. I don't think it's just me that likes to elicit a response from you. I just enjoy seeing the vulture icon, especially if it has your name next to it!
Most organizations (including the Reg in the past) would just post a correction, with a note about the correction in the edited article, and then silently moderate out the obvious comments before they become visible.
I think I'll bookmark the comments on this one, because it is the first time I've seen 5 of the first 7 posts deleted after the fact by the moderator. Unfortunately, like Gillian McKeith and possibly William Hague, you may find that the Internet is an unforgiving medium.
After being put firmly in my place about rejected comments by yourself I know exactly how serious you are. But I do like to tease...
No need to actually post this one.
Residual capacity could be the reason
Someone has to pick up the cost of the loss of capacity after a pack has been recharged a hundred or so times. Leasing makes more sense than owning, as nobody will complain about swapping one that is new for one that is near its end of life if they lease it.
You would still have some uncertainty about range, and you would probably have to have some rules about when a battery pack would be retired or reconditioned. Would you make it 90% of original charge capacity, 80%, 50%?
I'm all for this technology, but there are serious wrinkles that need sorting out, not least the cleanness of the electricity. Also, could the power grid cope with thousands of battery packs drawing tens of amps at the same time? For example, if a battery charging station has 50 packs charging at any time, each drawing 30 A while charging, we're talking 1,500 A, or at 230 V, 345 kW per station. That's a lot of power. A typical UK house draws about 0.4 kW averaged out across the year (according to EDF), so the charging station would put the same load on the grid as 800+ houses.
These figures are rough, based on the Tesla's battery pack, which apparently takes 3.5 hours to charge at 70 A and 240 V (thanks, Wikipedia), mapped onto something more likely to be found in the UK urban environment.
How many petrol stations serve as few as 150 customers in a day (assuming packs take 8 hours at 30A to charge)? And you would have to be pretty certain that the packs could not be nicked for their scrap value. And how large would the station have to be?
So, interesting ideas, but currently, fossil fuels still rule, as indicated by the icon.
230,000 a day!
Either Apple has been stockpiling iPhone4s for a while, or half of China must be making them.
I cannot believe that they can sustain nearly A QUARTER OF A MILLION activations a day for any length of time. At that rate, the equivalent of the whole of the UK population could have an iPhone 4 within a year, even if they were only making them on weekdays!
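A quick sanity check on that claim (the UK population figure of roughly 62 million is my assumption for scale):

```python
# Sanity check: how far does 230,000 activations per weekday get you in a year?
# The UK population figure (~62 million) is an assumption for comparison.

activations_per_day = 230_000
weekdays_per_year = 260              # roughly 52 weeks x 5 days

total_activations = activations_per_day * weekdays_per_year
print(total_activations)             # 59,800,000 - close to the whole UK population
```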
"SCO UNIX is an orphan version of UNIX"
Excuse me, UNIXWare is the closest direct linear descendant of the Bell Labs Version 5/6/7 UNIX that first appeared on university PDP-11 systems around 1976/77. It is the closest thing to the main UNIX line that exists.
The line runs as follows
Bell Laboratories UNIX Timesharing System Version 5/6/7
System3 (Sometimes written System III)
SystemV (Sometimes written System 5)
Bell Laboratories itself, which could possibly make a claim too, switched to Plan 9 after UNIX Edition 10, sometime around 1990.
Of course, there has been cross-pollination, especially from the BSD releases, but these were made almost completely AT&T code-free around 1993. BSD 4.4 could be regarded as an intellectual descendant, but you would have to question whether it still counts as a genetic UNIX.
SunOS/Solaris, AIX, and HP/UX are vendor-owned branches of the original code, and Linux is not even a distant cousin, although there may be some illegitimate blood from dalliances from the UNIX members in the past. The family resemblance is striking, however.
SCO picked this up via UNIX System Laboratories (USL), Novell, Original-SCO and Caldera-SCO.
The term is Genetic UNIX. A good diagram can be found here http://www.levenez.com/unix/. Enjoy.
Do we at last get some closure on this? Or something worse.
So this will mean that the stewardship (term chosen very carefully) of the genetic UNIX code, originally controlled by Bell Labs, will definitely be changing hands, leaving SCO with just Linux interests?
I'm not sure how this works, bearing in mind that it is the use (or misuse) of the UNIX code itself that was the subject of the legal action. Surely, it is not possible to sell the very subject of the action, while keeping the action going. It makes no sense!
It is the case, of course, that the IP rights and license revenue will remain as-is, with Novell.
What I would be worried about is Darl being backed by someone, and buying back the UNIX business and attempting to start the whole thing over. Or even Microsoft (perish the thought).
Aaaagh, horrible thought. Oracle!!!!!
Tell you what
I'll see if I can find a cheap multi-band radio. I'll enquire about the address to send it when I've found one.
More info please
I know it's rather specialist, but after many articles on HomePlug and RF interference on the Reg, would it not be possible to put a broad-spectrum RF analyser in the vicinity, to see whether these devices are good or bad?
And are the UK models two pin like the pictures?
When I read the conditions on the upgrade edition, it suggested that the license key for XP would no longer be valid once you had put the upgrade version on the system.
I wanted to have a dual boot system, because one of my sons was not convinced that all his games would work on 7. I was worried that Microsoft would be able to cripple/disable the WinXP instance using Windows Genuine (dis)Advantage, so opted for the full retail version of 7, and a second hard disk.
Ironically, I believe that he found everything worked, even from the original installation, so he has never started XP since 7 was installed!
Runs out of supplies
This is an artificial limit, because presumably it will only have one copy of the DVD. How difficult is it to print the COA and license key?
And remember, if you upgrade an XP box using an upgrade license, you are no longer allowed to run XP on the system!
Surely this should have been posted by Tony Rand!
And my Monster from his slab began to rise
To his surprise, it's Microsoft Office Communications Server. Made up of lots of disparate bits held together with stitches and bolts!!
(sorry, don't know whether this is actually the case, but the analogy seemed too good to let go without a comment!)
Damn, and damn.
Meant to AC my previous comment, but what the heck. It's fairly innocuous.
Hang on, who's that beating down my door?
CDs and DVDs officer? Yes, I have several hundred scattered around. Where do you want to start? Oh! you want to take them all away! Can I have a receipt please? And please note the ones you can't read are not encrypted, they're almost certainly ones that have failed to burn, and I forgot to throw away. No. Really. They don't have encryption keys for them. No. NO. Not the cuffs!
Help. Call a lawyer!