1635 posts • joined Friday 15th June 2007 09:17 GMT
I never said UNIX was invulnerable. Only a fool would claim that any OS is absolutely secure, and I can think of several ways to target a UNIX system, but most involve some form of social engineering together with lax root administration.
But in the write-up of the worm on the Symantec site http://www.symantec.com/connect/blogs/distilling-w32stuxnet-components, it is quite clear that in order to infect the SCADA PLC, normal Windows methods are involved, although what is described is very sophisticated.
I'm glad that dual-vendor redundant systems are involved in our power stations, but I would guess that the next generation will probably have less of this, because Windows is slowly taking over the world.
Maybe I was wrong.
I cannot point to where I picked this up from, which is why I questioned whether I remembered it correctly, but I'm sure that I did read it at one time. Possibly it was an earlier agreement, or maybe one of the other types of arrangement that Microsoft had. I accept that the posting may be partially wrong.
But the mere fact that the keys are still available does not really prove that you are still allowed to use them. Maybe someone who actually has a subscription can check their agreement, and quote or paraphrase what it says about lapsed subscriptions.
I have just read what you are allowed to do with the software you obtain through TechNet from the technet.microsoft.com website. This appears to be an interesting quote regarding the use of the software: "Access over 70+ full-version Microsoft software for evaluation purposes only".
In the License terms there is also:
"Evaluation Software. One user may install and use copies of the evaluation software listed in the COMPONENTS.TXT file, even if you obtained a server license. You may use the evaluation software only to evaluate it. You may not use it in a live operating, in a staging environment or with data that has not been sufficiently backed up."
and later in the same document:
"SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement."
I believe that these terms taken together would allow Microsoft to judge that long-term use of a particular license may not be for evaluation purposes (yes, I did read the "without any time or feature limits", but this is then qualified "for evaluation purposes only") and this would be enough to allow them to disable a license if they thought that the use was no longer for evaluation.
And the moral is - Please read the terms and conditions that you agree to, especially with Microsoft. You may not get what you think.
Legal Disclaimer: All of the quotes are taken directly from Copyrighted material contained on a Microsoft web site, and the rights remain with Microsoft in accordance with the text contained at http://www.microsoft.com/About/Legal/EN/US/IntellectualProperty/Copyright/default.aspx
I believe that there is a different issue here that may be changed by solid state memory.
My thoughts are that it is an addressing issue. Currently, if you think about it, data in current persistent media is accessed via a filesystem, indirected to some form of adapter, across some form of interlink one or more times, then to a disk.
All of these levels provide addressing information of one kind or another, that may or may not be abstracted one or more times. This is required because of the inherent limitations on the size of disks, the number of disks per device bus, and the number of devices and interlinks available. Over and over, this has to be re-worked as disk sizes reach the next barrier. This is expensive, time consuming and slows down what can be done.
With solid state memory, it is in theory possible to implement a block- or even byte-addressed space as large as the size of your address. Let's allocate 256-bit addressing, giving a 10 to the 77th power space, which should be enough for anybody (famous repeated last words, maybe make it 512 bits). We don't have to make this all physically addressable immediately. Expose this as a global address space to ALL of your systems. Call this a Storage Bus Address (SBA - I claim trademark and any copyright and patent rights over the name and concepts). Allow SBA virtual mapping so that you can expose parts of your global filestore to individual systems, and maybe allow slow interconnects to use fewer address lines.
Put the resilience in the managing device (two or three times mirror with multi-bit error correction), make the memory hot-swappable in manageable chunks. Add secure page or 'chunk (of address space)' level access security using a global name space and cryptographic keys to protect one system's data from another. Add in some geographical mirroring at any level you like for protection.
Once you have done this, you can abstract the interconnect between your servers in any way you like, provided that you maintain the access semantics. Make it closely coupled (at internal bus speeds), or distance coupled depending on the access speed you require.
Change all the OSes to implement this large space addressing for their persistent store (it's easier with some, like Plan 9 and IBM i, than others), initially as a filesystem, but ultimately as a flat address space in later incarnations of the OS. This could even be added into the processor address space, but I think that would require more changes in system and OS design.
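To make the blue-sky idea a bit more concrete, here's a toy sketch of what a flat 256-bit SBA might look like, carved into fields so that a managing device can map sparse virtual regions onto whatever memory physically exists. All the field widths and names here are invented purely for illustration - nothing like this exists.

```python
# Hypothetical sketch of the 256-bit Storage Bus Address (SBA) idea:
# pack site / device / offset fields into one flat integer address.
# Field widths are made up for illustration only.

SITE_BITS, DEVICE_BITS, OFFSET_BITS = 32, 96, 128

def make_sba(site: int, device: int, offset: int) -> int:
    """Pack the three fields into one 256-bit integer address."""
    assert site < 2**SITE_BITS and device < 2**DEVICE_BITS and offset < 2**OFFSET_BITS
    return (site << (DEVICE_BITS + OFFSET_BITS)) | (device << OFFSET_BITS) | offset

def split_sba(sba: int):
    """Unpack an SBA back into (site, device, offset)."""
    offset = sba & (2**OFFSET_BITS - 1)
    device = (sba >> OFFSET_BITS) & (2**DEVICE_BITS - 1)
    site = sba >> (DEVICE_BITS + OFFSET_BITS)
    return site, device, offset

addr = make_sba(site=7, device=42, offset=4096)
assert split_sba(addr) == (7, 42, 4096)
assert addr < 2**256          # everything fits in the global space
```

The point of the exercise: once every byte of persistent store has a single global address, the filesystem, adapter and interlink layers of indirection collapse into one mapping owned by the managing device.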
I think that the revolution will come when persistent storage is addressed like this, and it could be done fairly easily, but would require industry agreement. This may be what prevents it.
This is me blue-sky dreaming, but I don't see why it can't happen.
I'm sure that Microsoft and the EFF are in this for different reasons.
Microsoft are treading a narrow path. They don't want their patents overturned, but they do want this one. They have clearly failed to convince the appeals process, but if a review is granted, it would enshrine by precedent a process that may further favour companies with deep pockets.
EFF wants at least software patents to be more stringently examined before being granted, with a preferred outcome of a ruling that software is not patentable.
So it's a dangerous game for both parties, but at least it would air the problems somewhere it might do some good.
You've found the Linux equivalent of Catch-22!
If an OS has no suitable apps, people will not consider it.
If nobody uses the OS, applications will not be written.
Having any apps available to do video manipulation is a step in the right direction, especially in the home market. I've used Avidemux for the last few years to trim and combine video files. It Works For Me.
Care to expand on this? I admit that Ubuntu is not perfect, and in some respects going the wrong way at the moment IMHO, but it is a much more end-user targeted distribution than Fedora, where you have to run to keep up, or OpenSuSE where it sometimes seems that the opposite is true, or the niche hobbyist distros (and I include Debian here, even though it is the basis for Ubuntu).
The fact that Ubuntu has a large repository that is kept up to date, has a documented lifetime for each of the releases (I tend to keep to LTS releases because my computers are tools, and spending time maintaining the OS is not high on my list of things-I-have-to-do), has an easy-to-understand patching strategy, and is actually targeted at ordinary users rather than hobbyists, are serious plus points for someone exploring Linux. Not everybody likes to wear hair shirts.
The other thing that Ubuntu is doing is reaching towards the critical mass where it is taken seriously by computer and software suppliers for mass consumption devices. Red Hat or SuSE Enterprise releases will never appear in this segment of the market, merely because of the business model being followed.
It is quite true that if you are building a business model around Linux, that you may choose a more business oriented distribution, but Canonical are looking in that direction as well.
There can never be a one-size-fits-all distro, but what we are looking at here is what is prevalent. You don't have to like it, but if you make statements like you have, I feel that you have to justify them.
"No operating system is bullet proof". Too true, but at least the UNIX model of privilege separation and no write access for ordinary users to the system directories means that not only do you have to get code running in a system, you have to get it running as a user with escalated privileges in order to do serious damage. A double hurdle. And there are even better security models available.
Let's face it. The security and application installation model on Windows pre-Vista was just terminally flawed. It required serious knowledge of Windows to allow it to work in a secure manner. This is why those systems are such ripe targets, not just because there are so many instances of Windows. And I am prepared to argue this point out with anybody, although preferably over a pint rather than in these forums.
If there is software running nuclear power stations that needs admin rights to run, then I laugh at their folly! Or I would if I did not live within 20 miles of one!
Lots of things you have written trigger outrage in me, but I believe that to encapsulate what you've said, it's always an economic argument. And not to keep service prices to the customer down, but to increase shareholder return.
My view is that some things should be beyond raw bean-counter accountant economics, and safety is number one on this list.
And the argument that other OSes would be equally exploitable is just fatuous. If the account that you logged in to use the PC for everyday use did not have write access to the PLC code, then ordinary everyday use of the system would not expose the control system to infection. This would be the case if it were designed to use a UNIX type OS, or QNX, or VMS, or, in fact, any OS that did not evolve from a 'Personal Computer'.
I am not saying that it would be totally safe, your point about nothing being totally secure is quite true, but the times that the system would be vulnerable would be significantly reduced, mainly to system updates.
Part of the underlying problem is that as Windows is not regarded as a secure OS, many generations of programmers have grown up without having to think of making their code work in a system with anything like a decent security model. I've come across this time and time again when I get to install a piece of software on a UNIX or Linux box that was written by a PC programmer, and find that you have to have log and configuration files globally writeable, and even worse, whole directory trees in a similar state.
It is possible to control user access on a Windows system running NTFS 5 or later, but not enough people care enough to design their software to install and run in a safe manner.
In fact, the underlying user and file permission model on NT based systems is actually much better than UNIX's (and this is a UNIX bigot saying this - the UNIX model is actually quite simple and restrictive), but how many people know how to use the policy editor to take advantage of this? If you are a Windows programmer, do you set up multiple accounts? Do you have a specific account to install the software that is neither an admin account nor a day-to-day user account? Do you use the access rights user and group attributes to control who can do what with which file? Do you even know what acledit is?
The simple answer is No! I would say that almost without exception, Windows programmers just don't think that way. (BTW, if you are a Windows programmer who does take the required amount of care, wade in, and prove me wrong!). I believe that Windows administrators have more idea than the application developers, and that is just because they have been burned too often by the vulnerabilities of Windows.
If you grew up in a UNIX or VMS or even a RACF world, you would understand this or you would not get work.
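To illustrate the point about globally writeable logs and configs versus doing it properly, here's a minimal sketch contrasting the two install-time habits. The filenames and modes are throwaway examples, not any real package's layout, and the mode semantics are POSIX (on Windows, os.chmod only toggles the read-only bit).

```python
# Create a config file the careless way (world-writable, as the post
# complains PC-bred software often demands) and the careful way
# (owner writes, group reads, others get nothing), then check the modes.
import os
import stat
import tempfile

d = tempfile.mkdtemp()
careless = os.path.join(d, "app.conf")
careful = os.path.join(d, "app-secure.conf")

for p in (careless, careful):
    with open(p, "w") as f:
        f.write("loglevel=info\n")

os.chmod(careless, 0o666)   # rw-rw-rw- : any local user can tamper with it
os.chmod(careful, 0o640)    # rw-r----- : owner writes, group reads, others nothing

mode = lambda p: stat.S_IMODE(os.stat(p).st_mode)
assert mode(careless) == 0o666
assert mode(careful) == 0o640
```

The careful variant costs one extra line at install time; the careless one hands every local account a way to rewrite your configuration.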
The thing that gets me
is that this is not new revenue. It's merely moving a fixed amount of purchasing power from one stream to another. If the operators get a slice of this, then it will be by 'stealing' it from someone else.
What I keep seeing is that businesses believe that they are making new money by offering this type of service. This is just plain wrong.
If I buy a cinema ticket, I do not want to pay more to buy it using my phone. Do people really believe that they will pay less with new transaction types, especially if they are paying a monthly fee for the privilege? And more interested parties will be taking a piece of the action.
Time is indeed money, but only up to a point.
And sometimes, it's nice knowing that the £50 in my wallet is still there as long as I don't spend it. Someone might steal it, but nobody can legitimately spend it without my knowledge. I'm almost afraid to look at my bank account sometimes, because I have so many direct debits and other transactions, often on irregular days of each month. I think that the same will be true of any e-payment system.
You're making an assumption
that the signal does not go beyond your house. If it does, a neighbour can join your network, or possibly you will end up with a single network spanning more than one house. Combined with UPnP, this could mean that all your media and Internet devices are visible and available. Shiver.
If you want to guarantee privacy, then you should set your own key, and to do that, you need the Windows application. This is why it sucks.
There have been Linux (actually POSIX compliant) tools, but I have only tested them on the older HomePlug and HomePlug Turbo devices.
Liability insurance needed
Bearing in mind how much noise is made by environmentalists about oil contamination if a ship founders, one wonders how much liability insurance would need to be carried by a company operating nuclear merchant ships.
If a nuclear warship is damaged/founders, you expect (and this has been the case so far) that the nation operating the ship will carry the burden of recovery and clean-up of the wreck. There would need to be some guarantee that sufficient resource would be available to prevent nuclear contamination from a merchantman.
If there were arguments and delays after such an accident involving a merchant ship, leading to nuclear contamination, then the outrage that would follow would make Deepwater Horizon look like a mere storm in a teacup.
I'd also expect that these nucular wessels (sorry, couldn't resist) would also have to be operated far from Somali pirates!
BTW. The description for the warning exclamation icon reads "All hands man the pumps, run for the hills, batten down the hatches and so forth", so I thought it was appropriate.
Played with one owned by a friend
While it worked, it felt unnatural, and the keys, particularly around the edges, were a bit unreliable.
It just didn't feel right with no physical keys, and the flatness meant that anybody used to typing got aching hands quite quickly. I never saw him use it much in the following months. It was a clever and an impressive gadget though.
I found a better solution for sending texts quickly was to link my (then) Nokia phone to my laptop by IR, but that would rather defeat the purpose when using a smartphone. I got a Palm Treo, installed Graffiti, and used that instead. I wish I could use a stylus and Graffiti on my current android phone (I know, both are possible, but Graffiti appears to have been pulled from the Android Market at the moment!)
Our local Choices, which became a Blockbuster during the big change a few years ago, is fairly clean and tidy, staffed with enthusiastic people, and is never empty of customers.
I live in a rural town, with the nearest large retailer over 25 miles away. We've lost Woolworths and Currys (games), and our small WH Smith does not sell a large range of DVDs or any games at all.
The Blockbuster is the only remaining local outlet with a reasonable range of DVDs and games to purchase, and has the added benefit of renting both. The only alternative is the restricted range of DVDs that our local Tesco sells, and as this is a rural branch, it only runs to the top 30 or so DVDs and even fewer games, or the £5 bargain bin titles.
If we lose our Blockbuster, with the really poor rural broadband provision and no cable TV, it will make the area even less attractive to the resident youth. We are already seeing a serious upward change in the age demographic as the young leave when they can.
Yes, we can buy from Amazon or Play. Yes, we can download (slowly and encumbered by usage caps). Yes, we can get titles from LoveFilm, but the postal service is already going down the tube. We appear to only get every-other-day deliveries of mail as it is, and this will only get worse.
What am I supposed to do when I get one of my kids asking for a rental or game for the weekend? Or for an extra game controller? Modern kids just don't seem to understand "it'll be here next week". They want it NOW.
I think you need to read the agreement AGAIN
Part of the agreement, if I remember correctly, is that you are only permitted to use the keys that you obtain through Technet while you maintain the subscription.
As soon as you stop paying the subscription, you need to buy new, full licences, or uninstall the software. So you can expect any keys that you used to fail the Windows Genuine Advantage test sometime in the future.
This was the main reason I never took advantage of the apparently favourable conditions offered. I did not want to tie myself into a long-term agreement with MS where they could repeatedly demand money from me at their own terms.
I would laugh at you all for dancing with the devil, if it were not so tragic.
It would be interesting
to see if bone would gradually permeate the foam over time.
My thoughts are that if it is similar in strength to human bone, it may break, but if over time ordinary bone grows through, it may be able to heal with ordinary bone, without further surgical intervention. Now that would be revolutionary. It may completely change the lives of people who currently have to go through serious bone grafting after injury.
I am not in any way associated with a medical profession, and I am just idly speculating, so I'm sure someone will say that this can't happen. Still...
The problem is...
...that this works fine if all you want is what they offer.
As soon as you get tied in, and decide, say, that you want to use a product that they do not offer, such as a particular new network type, or a better HSM product, or a particular data visualization package to integrate with your MIS, you suddenly find that you either can't, or will compromise the gold-standard support they offer by changing the software stack.
This is a nirvana for corporate sales droids, especially if they can talk to the customer managers rather than their techies (it's amazing how often I have found that businesses will allow the managers to talk to salesmen without having techies present nowadays).
You end up getting steered down a path that ties you in to a vendor's products, then, when you can't get what you need working, to the vendor's consulting arm, all of which will be chargeable.
I think that the problem is that the Typhoon is one of the new generation of inherently unstable aircraft, that are only rendered flyable by the Fly-by-Computer avionics.
I'm sure that if the avionics were still operating, it would be possible to land, but if the avionics were out, there would be virtually no chance of any type of controlled landing. Hopefully, redundant systems and power supplies should be installed to keep the systems running in case the primary power systems fail.
Primadonna robot footballers!
Do the robot players also get programmed to roll on the floor if another player gets within 6 inches of them during a tackle?
Dear oh dear.
I'm not sure whether what I'm saying here is a joke or not.
If you consider that civilizations are cyclic (which is not actually proven, but it's a good theory), then you need to leave some easily extracted mineral resources to allow a future civilization to progress through the equivalent of our early industrial age. Otherwise, once our reign at the top of the stack (last in, first out) comes to an end, future civilizations will get stuck in the pre-industrial age. An amusing fictional illustration of what might happen is described in "The Mote in God's Eye", by Larry Niven and Jerry Pournelle.
They're not going to be able to jump straight to a solar/green/nuclear lifestyle without going through some pretty low grade technology! Unless you are suggesting a gap of 100 million years to allow fossil fuels to accumulate again.
Mineral oil is actually a quite important lubricant, which may be more important in the future than using it as a fuel. Vegetable based oils are too light without significant processing (which takes energy).
Still, I'm just ranting as an office-chair speculator here.
Model M and failure rates
You really put your Model M in the dishwasher? Wow. I religiously strip all the keycaps off and wash them, and then apply a stiff brush and wet-wipes to the rest. Your solution sounds much quicker. How long do you leave it to dry?
When it comes to large numbers of similar devices, you need to look at the MTBF figures. The more of a particular device you have, the more frequently you will see one fail. I would have to look up the exact maths, but I don't think it's a simple ratio. Where I am, we have over three thousand 300GB disks, and we lose a couple every month. This does not cause a problem, because they are in a large number of separate raid arrays with two hot spares per 10 disk array (=12 disks total). We could still be operating with three disks down in an array.
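As a back-of-envelope check on those disk numbers: under the usual exponential failure model (constant failure rate), the expected failure count does scale as a simple ratio of population and hours to MTBF. The 1.2 million hour MTBF below is an assumed typical figure for enterprise disks of that era, not one from the post.

```python
# Expected monthly failures for a fleet of disks, assuming a constant
# failure rate (exponential model): failures = fleet_size * hours / MTBF.
HOURS_PER_MONTH = 730          # 8760 hours per year / 12
mtbf_hours = 1_200_000         # assumed enterprise-disk MTBF
disks = 3000                   # fleet size from the post

expected_failures_per_month = disks * HOURS_PER_MONTH / mtbf_hours
print(round(expected_failures_per_month, 1))   # about 1.8 a month
```

Which lands neatly on the "couple every month" observed above. (Real fleets deviate from this - infant mortality and wear-out make the rate non-constant - which may be what the "not a simple ratio" instinct was remembering.)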
Memory, on the whole, seems reasonably reliable, but we have multi-bit parity on the systems, together with bit-steering (the joys of POWER6 systems). This means that it is not the built-to-a-budget memory that most people put in their Wintel servers. That price premium must really buy you something.
Ubuntu 6.06 LTS - Dapper Drake
Saved you having to look it up.
I used to be able to use my PalmPilot m100 to control TVs quite well using OmniRemote (a free, downloadable app a long time before Apple got in on the act), but when I switched to a Sony Clie and then a Palm Treo, the LEDs were only strong enough to control the TV from a distance of about a metre. Not too good for a remote. I would have been able to buy a hardware add-on, but that would have spoilt the lines on the PDA/Phone.
It was a good idea though, having a fully customizable remote able to control all of your media appliances. Shame it was not successful.
Interesting about consoles, though. My son uses his DS as a wireless controller on his Wii. Not sure if this is specific to particular games, but it allows multiplayer games to be played when you don't have enough controllers.
...if you really think there is anything of note here
I'll remind you of this, sometime in the future. I don't think it's just me that likes to elicit a response from you. I just enjoy seeing the vulture icon, especially if it has your name next to it!
Residual capacity could be the reason
Someone has to pick up the cost of the loss of capacity after a pack has been recharged a hundred or so times. Leasing makes more sense than owning, as nobody will complain about swapping one that is new for one that is near its end of life if they lease it.
You would still have some uncertainty about range, and you would probably have to have some rules about when a battery pack would be retired or reconditioned. Would you make it 90% of original charge capacity, 80%, 50%?
I'm all for this technology, but there are serious wrinkles that need sorting out, not the least of which is the cleanness of the electricity. Also, could the power grid cope with thousands of battery packs drawing tens of amps at the same time? For example, if a battery charging station has 50 packs charging at any time, which draw 30A each while charging, we're talking 1,500 amps, or at 230V, 345kW per station. That's a lot of power. A typical UK house draws about 0.4kW averaged out across the year (according to EDF), so the charging station would put the same load on the grid as 800+ houses.
These figures are rough, based on the Tesla's battery pack which apparently takes 3.5 hours to charge at 70A at 240V (thanks Wikipedia), mapped into something that is more likely to be found in the UK urban environment.
How many petrol stations serve as few as 150 customers in a day (assuming packs take 8 hours at 30A to charge)? And you would have to be pretty certain that the packs could not be nicked for their scrap value. And how large would the station have to be?
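The arithmetic above is easy to sanity-check. This just re-runs the post's own assumptions (50 packs at 30A on a 230V supply, ~0.4kW average per UK house, 8-hour charges):

```python
# Charging-station load, using the assumptions stated in the post.
packs, amps, volts = 50, 30, 230
station_kw = packs * amps * volts / 1000
assert station_kw == 345.0               # 345kW per station

house_avg_kw = 0.4                       # EDF's averaged figure, per the post
houses_equivalent = station_kw / house_avg_kw
assert houses_equivalent > 800           # the "800+ houses" claim

customers_per_day = packs * 24 / 8       # each pack takes 8 hours to charge
assert customers_per_day == 150          # 150 customers a day per station
```

All three figures in the post check out, which rather strengthens the point: a swap station is a substantial, continuous grid load serving remarkably few customers.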
So, interesting ideas, but currently, fossil fuels still rule, as indicated by the icon.
230,000 a day!
Either Apple has been stockpiling iPhone4s for a while, or half of China must be making them.
I cannot believe that they can sustain nearly A QUARTER OF A MILLION activations a day for any length of time. At that rate, the equivalent of the whole of the UK population could have an iPhone 4 within a year, even if they were only making them on weekdays!
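The "whole UK within a year" claim holds up on a quick check, taking the UK population as roughly 62 million (the approximate 2010 figure, my assumption rather than the post's):

```python
# 230,000 activations per day, weekdays only, for one year.
activations_per_day = 230_000
weekdays_per_year = 5 * 52
total = activations_per_day * weekdays_per_year
print(total)   # 59,800,000 - within a whisker of the UK population
```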
Most organizations (including the Reg in the past) would just post a correction, with a note about the correction in the edited article, and then silently moderate out the obvious comments before they become visible.
I think I'll bookmark the comments on this one, because it is the first time I've seen 5 of the first 7 posts deleted after the fact by the moderator. Unfortunately, like Gillian McKeith and possibly William Hague, you may find that the Internet is an unforgiving medium.
After being put firmly in my place about rejected comments by yourself I know exactly how serious you are. But I do like to tease...
"SCO UNIX is an orphan version of UNIX"
Excuse me, UnixWare is the closest direct linear descendant of the Bell Labs Version 5/6/7 UNIX that first appeared on university PDP-11 systems around 1976/77. It is the closest thing to being the main UNIX line that exists.
The line runs as follows
Bell Laboratories UNIX Timesharing System Version 5/6/7
System3 (Sometimes written System III)
SystemV (Sometimes written System 5)
Bell Laboratories itself, which could possibly also make a claim, switched to Plan 9 after UNIX Edition 10 sometime around 1990.
Of course, there has been cross pollination, especially from the BSD releases, but these were made almost completely AT&T code-free around 1993. BSD4.4 could be regarded as an intellectual descendant, but you would have to question whether it still counts as a genetic UNIX.
SunOS/Solaris, AIX, and HP-UX are vendor-owned branches of the original code, and Linux is not even a distant cousin, although there may be some illegitimate blood from dalliances from the UNIX members in the past. The family resemblance is striking, however.
SCO picked this up via UNIX System Laboratories (USL), Novell, Original-SCO and Caldera-SCO.
The term is Genetic UNIX. A good diagram can be found here http://www.levenez.com/unix/. Enjoy.
More info please
I know it's rather specialist, but after many articles on HomePlug and RF interference on the Reg, would it not be possible to put a broad spectrum RF analyser in the vicinity, to see whether these are good or bad?
And are the UK models two pin like the pictures?
When I read the conditions on the upgrade edition, it suggested that the license key for XP would no longer be valid once you had put the upgrade version on the system.
I wanted to have a dual boot system, because one of my sons was not convinced that all his games would work on 7. I was worried that Microsoft would be able to cripple/disable the WinXP instance using Windows Genuine (dis)Advantage, so opted for the full retail version of 7, and a second hard disk.
Ironically, I believe that he found everything worked, even from the original installation, so he has never started XP since 7 was installed!
Runs out of supplies
This is an artificial limit, because presumably it will only have one copy of the DVD. How difficult is it to print the COA and license key?
And remember, if you upgrade an XP box using an upgrade license, you are no longer allowed to run XP on the system!
And my Monster from his slab began to rise
To his surprise, it's Microsoft Office Communications Server. Made up of lots of disparate bits held together with stitches and bolts!!
(sorry, don't know whether this is actually the case, but the analogy seemed too good to let go without a comment!)
Do we at last get some closure on this? Or something worse.
So this will mean that the stewardship (term chosen very carefully) of the genetic UNIX code, originally controlled by Bell Labs, will definitely be changing hands, leaving SCO with just Linux interests?
I'm not sure how this works, bearing in mind that it is the use (or misuse) of the UNIX code itself that was the subject of the legal action. Surely, it is not possible to sell the very subject of the action, while keeping the action going. It makes no sense!
It is the case, of course, that the IP rights and license revenue will remain as-is, with Novell.
What I would be worried about is Darl being backed by someone, and buying back the UNIX business and attempting to start the whole thing over. Or even Microsoft (perish the thought).
Aaaagh, horrible thought. Oracle!!!!!
Damn, and damn.
Meant to AC my previous comment, but what the heck. It's fairly innocuous.
Hang on, who's that beating down my door?
CDs and DVDs officer? Yes, I have several hundred scattered around. Where do you want to start? Oh! You want to take them all away! Can I have a receipt please? And please note the ones you can't read are not encrypted, they're almost certainly ones that failed to burn, and I forgot to throw away. No. Really. They don't have encryption keys for them. No. NO. Not the cuffs!
Help. Call a lawyer!
Understand the law? How!
Whilst I in no way condone this person possessing these pictures, this story indicates how difficult it is to live in the modern world.
Ignorance of the law is no protection, but if this is the case, surely there is an onus on the government to make sure that the most pertinent features of existing and new laws are publicised to give people a chance to comply?
A case in point is the Coroners and Justice Act 2009 (I think), which has been discussed often in these forums, and which is largely unknown to almost everyone that I talk to in my social circles. This includes a significant number of family friends (mainly of my daughter) who have an interest in mainstream manga and animé.
I'm sure that there are titles that are regularly stocked on the shelves of high street booksellers that contain pictures of seemingly young people (often, but not exclusively, girls) in compromising positions. And the fact that they are drawings makes no difference to the legislation. Even if the worst of the titles are removed from the shelves, there will be copies in people's private collections or in the second-hand market.
Why are there not warnings on the bookshelves, on Amazon, and everywhere else that people who may have such titles will see, telling them to check their collections?
And I am still not clear about how the law can be operated. If you take general landscape pictures on a beach where young children are playing, and without any intent, capture an image of a child in a state of undress where the child is in an accidentally provocative pose (like falling on their back at the moment that the picture was taken), this picture may fall foul of the law.
Now don't get me wrong, I do not often take pictures on the beach, even though I live in a seaside town. But I could not guarantee that in the thousands of pictures I have taken over the years, that I do not have something like that either on paper, negative, or stored on CD-ROM.
And things that may have been regarded as totally innocent 60 or 70 years ago, even in quite prudish times, may also fall foul of this legislation. Acquiring such pictures before the legislation came into effect is no defence either.
So how many of us have checked our photo collections? I know I was shocked to find that one of my archive CDs has pictures taken with my first digital camera, which I let my two youngest sons play with when they were about 6 and 4. Some of these pictures are explicitly of their nether regions, taken I presume for a giggle, in the way that young kids do. I copied them wholesale to CD without checking, and backed that CD up several times as I added to the collection. This means I now have pictures of undressed young boys scattered across numerous disks. I doubt I could find all of them even if I tried. Am I a criminal? How can I prove that I did not take the pictures, or even that they are of my own kids?
In this, and several other instances, the law is definitely an ass, and so open to interpretation that I pity the poor defendants who get dragged into cases taken to court to try to set precedent.
Ahh, the PDP11
The 22 bit addressing extension on PDP11s was completely transparent to application programs. The address mapping was handled by the OS (as long as you weren't running early versions of RT-11, IIRC), and every application ran in a private address space that was either a 16 bit 64K combined address space, or one 64K Instruction space and one 64K Data space on separate-I&D systems like the PDP11/70. The OS would control the segmentation registers to do the virtual-page-to-physical-page mapping, so the programs knew nothing about it.
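The arithmetic the OS was doing behind the programs' backs can be sketched quite simply. This is my own simplified illustration of the translation (it ignores the access-control bits, kernel/user mode, and the separate I&D register sets), not real kernel code:

```python
# Simplified sketch of PDP-11 virtual-to-physical address translation.
# A 16-bit virtual address splits into a 3-bit page number and a 13-bit
# displacement; the Page Address Register (PAR) for that page holds a
# base address in 64-byte units ("clicks").

def translate(virtual, pars):
    page = (virtual >> 13) & 0o7          # top 3 bits select one of 8 pages
    displacement = virtual & 0o17777      # low 13 bits within the 8KB page
    physical = (pars[page] << 6) + displacement
    return physical & 0o17777777          # clamp to 22 bits (4MB)

# Example: page 0 mapped to physical 0o200000 (PAR value 0o2000)
pars = [0o2000, 0, 0, 0, 0, 0, 0, 0]
print(oct(translate(0o100, pars)))        # 0o200100
```

The program only ever sees its 16-bit addresses; the OS just rewrites the PARs on each context switch, which is why the extension was invisible to applications.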
There was an additional complication on UNIBUS (rather than MASSBUS or QBUS) systems, where the DMA disk adapters (like RL11s) could only write into an 18 bit real address space managed by the Unibus map, limiting where the OS put its disk buffers. This meant that if you wanted mapped DMA I/O from a disk directly into your program's address space, you had to be very careful about asking the OS to set up your address space correctly.
I had a real oddity, a system called a SYSTIME 5000E, which was, as far as I know, the only system based on a PDP11/34 which had 22 bit addressing (they normally only had 18 bit), but it did not have separate I&D. All other systems, mainly from DEC themselves, were either 16 bit non-I&D, 18 bit non-I&D, 18 bit I&D, or 22 bit I&D. SYSTIME could do this, because the processor was made using TTL logic chips on 5 or more boards plugged into a backplane, and it was possible to buy the basic processors from DEC, and build your own memory management unit, disk controllers and backplane.
This gave me real problems when I was trying to get BL Unix V7 to use the full 2MB (we could not afford the other 2MB, it would have cost about £40K) on this system! I eventually worked out that I needed to turn on 22 bit addressing and 18 bit Unibus addressing, and had to use the Calgary disk buffer modifications, and fix the start address of the buffers at 64K physical, otherwise I would get an address wrap-around during a DMA disk transfer, and wipe out the I/O vectors in the first 256 bytes of real memory about 5 seconds after booting the system. Panic!!
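The wrap-around itself is easy to reproduce arithmetically. A sketch of what I believe was happening, with the transfer address simply truncating to 18 bits (my own illustrative buffer placement, not the real addresses involved):

```python
# Illustration of the 18-bit Unibus address wrap-around during a DMA
# transfer: a buffer placed too close to the top of the 256KB Unibus
# space wraps to zero mid-transfer, straight into the interrupt vectors
# at the bottom of real memory.

UNIBUS_MASK = 0o777777                     # 18 bits = 256KB
VECTOR_TOP = 0o400                         # first 256 bytes hold the vectors

start = 0o777000                           # buffer 512 bytes below the top
clobbered = []
for offset in range(0, 1024, 2):           # a 1KB transfer, word by word
    addr = (start + offset) & UNIBUS_MASK  # hardware truncates to 18 bits
    if addr < VECTOR_TOP:
        clobbered.append(addr)

print(len(clobbered))                      # 128 words land in the vector area
```

Which is exactly why pinning the buffers at 64K physical fixed it: the transfer could then never run off the top of the 18-bit space.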
It's rather sobering to think that I ran a 12 concurrent terminal multi-user system on a 16 bit machine with 2MB of real memory, 64MB of disk space (and that was quite a lot for this class of machine in the early '80s), with each program being limited to 64KB of memory, supporting a community of over 100 users. And more than this, it ran Ingres as well (albeit slowly). My phone has much more resource than this now!
64 bit (in)compatibility with AIX 5.1
I was going to enter a long, essay type reply to this when I realised that it wasn't worth it.
I was caught by the Oracle 8i to 9i problem (but on Winterhawk SP/2 nodes, not p690's), but I reckoned that this revolved around the lock kernel extension (read device driver) that Oracle added to AIX. There was an IBM documented incompatibility here.
Being a cynic, I always thought that this was just a ploy to make customers pay an upgrade charge to Oracle, rather than a real issue with AIX. It would not surprise me if it were perfectly possible for Oracle to have issued a patch for 8i, but this would have had little financial advantage for them.
IBM issued guidance that said that the kernel extension interface was changing, but this sounds very similar to the Adaptec driver issue mentioned. So no real difference from Solaris there.
The rest of the code may well have worked, but would have been completely unsupportable. IBM claimed in the AIX 5.1 readme that the only reason to recompile 64 bit code was to take advantage of the new features of the new release, which is not unreasonable.
Your timelines are wrong.
1. What were you doing in 1990? I doubt that it was working in the UNIX field.
2. Sun did not have a logical volume manager at that time, unless you count Veritas, which was not their product. Sun Disk Suite was a charged-for add-on that was released in 1991. Nor did HP until sometime after AIX 3.1 was launched (the HP/UX logical volume manager was derived from the code IBM contributed to OSF, if I remember correctly, and was added to HP/UX 9.0 in about 1992).
3. HP/UX had SAM. Hmm, not good, and not a patch on Smit.
5. I'm sorry, Solaris definitely did not have dynamically loadable AND unloadable device drivers (I'm not talking linking here, I'm talking device drivers) at this time. I was having to sysgen systems to tell them what devices were included in the kernel. And I would ask when you think Solaris was launched, bearing in mind that Solaris 1 was a packaging option including Sun OS 4, and was not the label for the entire OS until Solaris 2, which was SVR4 based, in around 1992.
5. Sun OS was a BSD derived OS until Solaris 2 so of course it implemented BSD commands. It had a veneer of SystemV on some commands until the SVR4 initiative that was not Sun's idea (remember this). The way that you effectively chose which type of commands and environment you used was poor. You may not think this was important, but as you pointed out, this could just be an opinion.
6. I'll give you Starfire. I had forgotten about that system. As I pointed out, however, the single processor power negated some of the SMP benefits that other vendors had. But I believe that the cost per TPC was not in Sun's favor on these systems. IBM's closest system at the time was probably the S70, but they were a bit later than Starfire. S80 and p690 (regatta) closed the performance gaps.
7. OK, find them. I don't remember any, because for system management, Solaris was, and IMHO still is in the stone ages. I don't believe that Sun ever had decent hardware error handling system, and patch management appears more primitive than AIX. NFS and automount is better, granted, but even for a hardened CLI user, smit is a great fall-back when you can't remember how to do something you touch once in a blue moon. And dynamically loaded and unloaded device drivers allow you to fix a multitude of problems without needing a reboot. And all of the hot-swap hardware makes management easy.
8. Starfire again. How much were they? All IBM Power4 systems were partitionable except for the very smallest.
9. Yes? IBM's SMT on Power4 implemented two separately scheduled hardware threads on a single CPU with multiple instruction units, so more like SMT than hyperthreading. The two threads were not completely symmetrical, which is why I said sort-of.
10. Yes. And dynamic LPARing does work. Very well, in fact, as do hot-pluggable disks and adapters. Zones is quite different, and I will admit that WPARs were a direct copy of this functionality. I'm not 100% sure how well Zones splits the allocation of CPU and I/O bandwidth between the systems. LPAR overheads, mainly memory (but not CPU), are quite high, granted.
11. Starfire again. And again, how much did it cost? And what was available on the smaller systems?
12. Backward compatibility. WTF. I would bet that a 32 bit executable built on AIX 3.1 on RIOS hardware in 1990 has a greater than 70% chance of still running on Power7 running AIX7 twenty years later. How much more backward compatibility do you want? And once you get to AIX 4.3 and AIX 5.1 it will be getting to more than 95% or more. And IBM make it quite clear what features are likely to prevent an application from working. Many software vendors are still compiling their code on AIX 5.1 or AIX 5.2 knowing that their software will work on all later versions of AIX (an example of this is the AIX Toolbox for Linux Compatibility from IBM, that still proclaims to be compiled on AIX 5.1).
I have VERY RARELY (in fact I can hardly remember the last time) had to recompile a program when switching AIX releases unless I wanted to take advantage of new features of a new processor or compiler.
We both obviously have our own perception of what happened and when, but I still believe that the original statement in the article was wrong.
@El. Reg. Don't agree!
"AIX was always the laggard when it came to commercial-grade Unixes"
You need to qualify this. When AIX 3.1 on the RS/6000 was first launched back in 1990, it was streets ahead of any other commercial UNIX. It had a logical volume manager, an integrated system management utility (remember, this was a time when sysadm ruled the roost for most UNIXes), dynamically loadable device drivers, and was one of the first UNIXes that did a good job of merging SystemV with BSD flavours of commands and libraries (Sun's way of doing this was less transparent).
With the SP/2 in the mid '90s, IBM moved AIX into high-performance computing (Deep Blue et al.)
In the late 90's, they were up there with 64 bit systems, and had a nearly seamless 32/64 bit strategy that meant that the kernel you booted did not have to match the binary you were running.
For absolutely years, AIX was the leader in the Gartner manageability surveys.
Power4 systems, available in the early 2000s, implemented hardware partitioning. I'm not sure whether HP had this on the Superdome (or whatever they were called at the time), but I remember it being a real marketing differentiator at the time. Power4 also had SMT of a kind.
Power5 systems, available 2004/2005 implemented I/O virtualization, sub-cpu partitioning, and dynamic hardware allocation and de-allocation (this might have been possible on Power4, I can't remember exactly).
IBM were slow on SMP, the initial work being done by Bull with the G/J30s, but when you have systems with single CPUs running as fast as your competitors' SMP boxes, what's the hurry?
The only thing that I believe that Sun had was the containers, and to tell you the truth, I never worked at a customer where this caused a problem.
So tell me. What else were IBM lagging behind their competitors in?
I'm really not sure that you can count nuclear as a 'fossil fuel'. Uranium is dug up from the ground, yes, but it was never part of a living organism, and that is generally what a fossil is.
Remember, coal, oil and natural gas were all plants and marine creatures before they were buried in the ground.
But nuclear cannot, by its nature, be regarded as a renewable fuel. The amount is finite in/on the Earth, and I believe that this is the point you were trying to make. And I agree about nuclear being about the only low-carbon energy source, even if you include cost to build the stations.
Back in the days of Virgin.net, and with a 24x7 flat rate dial up service via Modem, I noticed something worrying. My Smoothwall firewall was reporting a huge number (100's per minute - remember it was dial up) of intrusion attempts on port 135.
I sent Virgin Support a mail pointing out that many of the addresses probing me were from within their own network, and I got a reply saying that it was a problem affecting all ISPs (it was MSBlast in the wild at the time) and that they were taking the issue very seriously, suggesting that I install a software firewall (ignoring my statement that I was using a well-regarded dedicated firewall).
And that was it. Nothing else happened at all. Eventually, the frequency of the attack dropped to a more manageable level, but not due to any obvious action on their part.
So I actually welcome people being warned that their systems may be compromised, although I do agree that in this day and age, a paper letter is probably too little, too late.
Not the afternoon play, but...
The c*nt word was used on air by a female artist (can't remember the name, can't be bothered to look it up) on Front Row, which airs on Radio 4 between 19:15 and 19:45 (well before the watershed), I guess about 5-6 years ago. It was used in relation to an art installation of a particularly sexual nature, IIRC involving models sitting in provocative postures with no underwear, a la Basic Instinct.
Mark Lawson (I think it was) made a hurried apology, together with a request to the interviewee not to use such language on air. Again, IIRC, the terms vagina and vulva were also used several times, but they were not censured. I think it was more shocking to Mark because it was a woman who said it.
Laugh, I could have crashed the car!
Being pedantic, are we talking mean, mode or median? These are all 'averages', but they have significant differences. In particular, if the lowest-paid job is at 12K and the highest at 62K, then the midpoint is 37K, regardless of the actual distribution. Consider the following set of numbers:
10 numbers, totalling 30.
Mode is 2, Mean is 3, Median is 5.5. I would point out that there's one hell of a difference between the Mode and the Median.
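How far apart the three 'averages' can sit is easy to demonstrate. A quick sketch with salary figures of my own invention (an illustrative skewed set, not the numbers above):

```python
from statistics import mean, median, mode

# Ten illustrative salaries in £K -- a made-up skewed distribution,
# the kind of shape a few high earners give to wage data
salaries = [15, 15, 15, 15, 18, 20, 22, 25, 60, 95]

print(mean(salaries))     # 30 -- dragged up by the two big earners
print(median(salaries))   # 19.0 -- half the jobs pay less than this
print(mode(salaries))     # 15 -- the most common figure
```

Quote the mean and the wage packet sounds generous; quote the mode and it looks rather less so. Which is rather the point about those £37K headlines.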
I do not have the stats, but bearing in mind how many PC first line support jobs appear to be at the 15-20K level, and how prevalent windows systems are, I've often wondered about the source of these £37K average figures.
There is one classic '50s or '60s news clip of a Union representative who says into camera something like "We will not stop this action until all of our members are being paid at least the average wage", which of course, if applied to all workers (not just his union members, admittedly), would mean that everyone would be paid the same. Still, Maths education must be better than that nowadays, mustn't it?
It's a Maths teacher, obviously.
..to providing my bank details on my tax return, not even knowing whether there is a rebate or not. I know it's a trivial issue, because I'm sure the banks would roll over and provide bank account details to a suitable request from HM Revenue and Customs, but it just feels wrong.
Let them send me a cheque if I am owed money. Don't know what will happen when cheques are withdrawn, but I'll face that one when I come across it.
You're talking chain or band line printers here. I very much doubt that a dot matrix printer, even a heavy-duty one like a Printronix, would be able to do more than three-part chemical-transfer paper.
When I was working with mainframe band printers, we were using multi-part fanfold stationery with interleaved carbon paper (not chemical transfer paper). There was a machine called a splitter, which would split the copies out and wind the carbon paper up for disposal, while leaving the two split copies neatly folded (at least, if the operator threaded it correctly). For three and more part stationery, it had to be put through further times to split each copy off. Interestingly enough, each carbon sheet had a completely legible copy of what was on the page. We also had authorized cheques with a second carbon copy, but this was for audit purposes.
I was once told that the hood on these fast printers was more than just acoustic protection, because if the band or chain broke, it was moving so fast that it would damage the hood as it flew off. Not something I would like to hit me.
Where's the old fart icon.