... to keep below the download limits your ISP imposes?
Or do you like your feed being blocked, or being hit with punitive over-limit fees? The cache tends to hold large static data that would be expensive to download again and again.
I'm sure he was known as "Sir Topham Hatt" from about book 13 (Branch Line Engines, 1961), in the story "Thomas Comes to Breakfast". I think his butler answers the phone with "Sir Topham Hatt's residence".
All of the main characters were first written about in the '50s and early '60s, when the world was mainly male-centric. Of course the primary characters were male, because that's how the world was then. It would have been a mockery if the characters had been changed when the TV series was first made, and that must be about 25 years ago, because that is the age of my oldest son.
Is there going to be some backlash about the "Thin Clergyman" being anorexic, even though this character was based on the author himself?
BTW. I would be interested to find out whether the Chris Payne who appears on the credits for the early TV episodes was the same Chris Payne who I was at University with in Durham in the late 70's. They both appear to have been interested in model railways. Anybody know?
I thought that the Navy had learned not to put reliance on a single weapon system a long time ago (the Counties suffered from this with Seaslug in the 50's and 60's). Still, if the carriers are not built, these destroyers will be of limited use anyway, even if armed.
It's funny: there is actually a direct analogue between the County-class guided missile destroyers of the Cold War era, whose job was to defend a carrier task force from low-flying aircraft, and the Type 45s that are supposed to do the same against low-flying missiles. Both are largely single-purpose, single-weapon platforms, hugely expensive to build and run, and mainly ineffective in their role.
I wonder if these missiles were ever known as Floggle Grummit missiles? I'm sure HMS Troutbridge had trouble with these in the Navy Lark.
Left hand down a bit!
The lead time for installing this type of system runs into many months, because of the associated infrastructure required to put it on the floor. This means that most of the customers who might want one on launch day will have to start planning their power, water cooling and reinforced suspended floor now in order to get one installed close to the GA date. All of the big Power6 build capability IBM has is probably already committed, so a customer probably can't buy a Power6 HPC now even if they wanted one.
Power6 575's are already water cooled. The IBM regional hardware CE's have to do a water cooling training course before they are allowed to touch the systems. The smaller systems are air cooled, and the smaller Power7 systems will also probably be air cooled as well.
And you have to remember that there are only a handful of customers in the UK who have pockets deep enough and problems large enough to warrant purchasing this model of Power7 system (see the Top500 to see who they may be).
As far as I am aware, although i (aka i/OS or OS/400) will run on these systems, it is unlikely that the applications it runs will be able to take advantage of the capabilities of one of these systems. It is unlikely that anybody will bother to LPAR them, even though it may be possible (it is certainly possible to LPAR a p6-575). Looking at p6-575s, they are every bit CHRP POWER systems, with similar entitlements and capabilities, and even the same basic AIX installs. From a system management perspective, they are just very large AIX systems.
The current zSeries systems run significantly different silicon. Although the fundamental processors are similar in design and structure, the actual instruction set, even allowing for micro-coding, is too different between Power and zSeries to allow one to run the other's programs.
Recent zSeries systems have also been air rather than water cooled. Mainframes excel in huge I/O performance rather than just basic grunt.
Of course, Linux is always an option.
I, too, would be interested in hearing how VM intend to identify what song is being transferred.
Surely, if they are calculating some form of hash from well-known MP3 files, then re-sampling, adding a quarter-second of silence at the beginning of the track, or even changing the ID3 tags could prevent them from correctly identifying it. And if they are just sampling the bit-stream and trying to match sequences of bytes, then that would be even more fragile.
I suspect that in their naivety they may try to use something like CDDB, and we all know how good that is!
Unless, that is, they have some sophisticated music analysis program that can identify the beat, melody and harmony elements of a track; but I would guess that if such technology were reliable, it would have been announced as a major advance in music analysis.
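To illustrate the fragility: here is a sketch of the sort of naive byte-hashing fingerprint I'm speculating about above (nothing VM have actually announced). It stops matching the moment anything in the file changes, leading padding included.

```python
import hashlib

def naive_fingerprint(audio_bytes: bytes) -> str:
    """A naive 'fingerprint': just the SHA-256 hash of the raw file bytes."""
    return hashlib.sha256(audio_bytes).hexdigest()

# Stand-in data, not a real mp3: the point is byte-level sensitivity.
track = b"\xff\xfb" + b"fake-mp3-frame-data" * 1000
padded = b"\x00" * 16 + track   # the same 'audio' with a sliver of leading padding

print(naive_fingerprint(track) == naive_fingerprint(track))    # True: identical bytes match
print(naive_fingerprint(track) == naive_fingerprint(padded))   # False: 16 bytes of padding defeats it
```

Real acoustic fingerprinting works on features of the decoded audio, not the file bytes, which is exactly why I doubt a simple hash scheme would survive contact with re-encoded files.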
AIX LPARs are very much like VMware ESX, but with a firmware hypervisor separating the virtual systems. Each virtual system has its own OS image, with no page sharing between instances.
WPARs are like Sun Zones/Containers, where you have a single OS image running applications in what are effectively chrooted environments, with some CPU and memory enforcement (provided by WLM) and some network virtualisation provided by loopback virtual Ethernet devices.
BTW, whoever said that C does not use sharable CSECTs obviously has not looked at the way that shared-text UNIX processes have worked for nearly 40 years!
@Julia Smith: X servers and mainframes? Traditional IBM mainframes (running MVS, IMS, CMS, TSO etc.) either never understood X or were slow to adopt it (OpenMVS, which became z/OS, has/had a POSIX compatibility layer that added X clients), but it is clearly nonsensical to have an X server running on a mainframe. IBM mainframe graphics either used channel-attached workstations (often running AIX and proprietary channel-based communications) for high-performance work, or 3278 graphics terminals for business-type graphics.
Can't really talk about other vendors' mainframe offerings, because I never had any real exposure to them.
@Mage: Understand what you are saying, but being a bit picky, I would like to point out that vt100s were not graphics-capable terminals (unless you include the box characters in the advanced video option). In the vt1XX line, you would be using a vt131 or vt132 for graphics, and the follow-ups were the vt240 and vt241 (this last being a colour terminal!). The standard they used was a proprietary ANSI extension called ReGIS, which was proposed as a standard but fell by the wayside.
Still, I would have thought that using a browser-based rendering engine will never be really efficient unless it grows into a full-blown, fully functional 3D rendering device (like OpenGL or possibly DirectX (spit)). In which case, you would have re-invented the thin client again, without it being all that thin.
The difference in cost of a fully functional computer (with disk-like storage and all) and a thin client will never be high enough to justify their deployment. Desktop computers, Netbooks, and Phones will probably merge together, all with SSD based filesystems, input, and display devices, and a real, fully functional OS under the covers. Something like Chrome will end up being effectively a presentation or compatibility layer sitting on top of the OS, but the OS will be there, and will probably be Linux based for everybody other than Microsoft (and Apple if you draw a distinction between GNU/Mach and GNU/Linux).
FreesatHD and FreeviewHD are different things. You need a satellite dish and a suitable decoder in either a set-top-box or your telly for FreesatHD.
FreeviewHD will use a normal TV aerial, together with a suitable dec....... you know what I mean.
I'm pretty pissed off, actually. I tried to be ahead of the game by making sure that all of the TVs I have in the house have Freeview boxes, only to find out now that they will be obsolete almost before I needed to install them.
And why do we need both FreesatHD and FreeviewHD? Surely one or the other could be made to cover the whole country. And why not just legislate (or even just pay) to make Sky carry the free-to-air channels, rather than inventing another incompatible satellite system?
I'm sick of the perpetual bandwagon of money-grabbing new technologies that we have to buy into in order to maintain what we already have. PCs, DVDs, game consoles, phones, TVs, media players etc.
My view is that this is capitalism gone mad. I'm not normally of this persuasion, but I'm beginning to think that governments should legislate for a minimum life for technologies, otherwise we will just be cycling raw materials between the manufacturers and the recyclers, with a brief use as devices in between.
The F4K Phantoms that the Royal Navy operated when we had real carriers were transferred to the RAF when the Audacious class Ark Royal (not the current one) was scrapped.
The last I heard was that they were doing service as long-range interceptors in Scotland (replacing the English Electric/BAC Lightnings), but this was some time ago (the Ark was scrapped about 1978, so if they were still flying, the Phantoms would be pushing 40).
The RAF also took the remaining Buccaneers.
As I understand it, the new carriers (if they are ever built) have provision in the design for arresters and catapults, but it is unlikely that the non-nuclear powerplant would be able to provide either electrical power for an electro-magnetic catapult, or steam for a steam catapult.
The simple answer would be to put a couple of Astute submarine powerplants in to replace the gas turbines, re-use the space freed up by not having to carry gas-turbine fuel and fresh water for additional weapons and provisions, provide power or steam for the catapults, and increase the range and usefulness of the carriers in general.
Or maybe they will be regarded as too expensive, marking an end of the era of British sea power. There's sod all else left!
Sorry to be picky, but UNIX pre-dates VAX by close to a decade (at least from GA), and probably RSTS as well (DEC never really embraced RSTS. I often felt that they only reluctantly accepted it due to customer pressure). I must admit, I don't remember any form of user ID switching in RSX/11, but it is over 22 years since I last used that OS.
I'm sure you could look at MULTICS to find some form of privilege escalation that is older than the lot of them.
By the way, like the knotted hanky. Weren't you a brain surgeon some time ago?
There is a verification suite owned by The Open Group which allows you to test your UNIX-like OS, and if it passes, actually call it UNIX(tm). This used to be based around the SVID (System V Interface Definition) and was called the SVVS (System V Verification Suite). The standard has developed through POSIX 1003, Spec 1170, XPG, and UNIX 93, 95, 98 and 03, and there has been a verification process for each of them.
It has been passed around a bit, but I believe that The Open Group has maintained UNIX branding (since the demise of USL, UNIX System Laboratories), separate from all of the UNIX IP and source ownership arguments.
I thought that the Linux Standard Base consortium were attempting to get UNIX 95 or 98 branding some time back, but I could be mistaken.
I personally prefer to think of a UNIX(tm) OS as being a derivative of the original Bell Labs code (a so-called genetic UNIX), but I know that I am out of date. I know that OpenMVS and z/OS on mainframes (definitely NOT genetic UNIX) have achieved branding, but as far as I am aware, that is the only non-genetic lineage to have achieved any UNIX branding.
If I remember correctly, the US patent office does not allow a third party to provide prior art during the investigation phase of a patent being granted. What has to happen is that if a third party has information about prior art, they must wait for the patent to be granted, and then challenge it in court.
This means that someone like the EFF cannot just slap a dossier of well-known information onto the desk of the patent examiner when a dodgy patent is applied for.
This strikes me as being a stupid way of doing things, as a small inventor with a patent, who is already being stung to maintain the patent, cannot prevent an infringing patent being granted when it would be cheap, but must wait and take out a costly lawsuit after the fact. This means that the US patent system unfairly favours large companies or other people with deep pockets.
This works another way as well. If a large company wants to steal a patent owned by a smaller organisation or individual, they can make their application sufficiently vague so that the patent gets granted, and then they challenge the validity of the earlier patent. These get argued out in court, so the owner of the original patent has to either stump up the cash to defend their legally granted, prior patent, come to terms with the large company (which normally involves them getting less than it is worth) or completely abandoning the original patent.
This makes US patent law the proverbial ass. But then again, the original law was probably drafted or influenced by US big industry anyway, so why would they not build a patent system that was to their advantage!
I know that I am being pedantic, but I'm sure that the original Radio 4 coverage said that Professor Nutt was "asked to resign". Now I know that there is not much difference, but I wonder what would have happened if he had refused to resign, and actually had to be removed by invocation of some contractual or legislative clause.
This is a great article, and there really are some pearls of wisdom there.
We once had this level of entrepreneurship back in the early days of ZX81s, BBC Micros and Spectrums. Kids used to spend long hours getting every erg of performance from their systems by learning how they worked, and teachers would produce innovative ways of using computers to make non-computing subjects easier to teach. Small hardware shops like Viglen and Quicksilver set up to produce reasonably priced add-ons: graphics boards, sound systems and storage systems.
This almost completely died out with the advent of the IBM-compatible PC and, especially, Windows. There was no easy and cheap way to get into doing clever things with such systems. Compilers, assemblers and debugging tools were not shipped with the OS and had to be bought. Graphics were crude and difficult to get working. The interfaces were proprietary (including the original 8-bit expansion card slots of the IBM PC), and needed quite extensive electronics to even get working.
The fun was not there, and the whole infrastructure for home-brew hardware and software from talented individuals disappeared.
But not, apparently, in the former Eastern Bloc countries. From your article, it would appear that the economic constraints and difficulties in getting equipment persisted.
I wonder how the youth of Russia are doing now. I suspect that they are tapping into the Open Source movement, and writing their code on Linux. In fact, I believe that there is a prevalence of non-European names in much of the code that I look at. Be interesting to see some research there.
I've bought lots from Morgan, from cameras (my first digital), through laptops, PCs, memory, hard-disk and DVD recorders, to GPS devices.
I'm seriously upset by the demise of one of the first places I would go to get a good price on slightly out-of-date but perfectly usable kit. I guess that the PC manufacturers are moving to build-to-order, so there is not the remainder stock that Morgan could shift so well.
I can't believe that, whatever happens, the website will remain the emporium of gadgetry that it was.
Quite honestly, bearing in mind that the originals were recorded on mono, two-track and (shock, horror) four-track analogue magnetic tape, with overdubs and all, putting them into lossless FLAC is not likely to improve the quality over MP3. Some of my first Beatles albums were on Compact Cassette, and you could hear the hiss from the master tapes over the Compact Cassette hiss (and this was before Dolby B).
Of course, when they say re-mastered, they could mean compressed, de-hissed, pitch-corrected, pseudo-stereo-ised and other cleanup methods, but if they do, then I'm not interested.
If it's not black, 7 or 12 inches across and rotating at 45 or 33rpm, then it's not really a Beatles record. I'll listen to the un-mutilated CDs where there is noise (in the car, or on a portable media player), but vinyl is the real McCoy.
"Love" was very good, but listening to it again, even though the Martins said they had to do very little to the tracks to munge them together, you can still tell where they did mess around.
Too many of today's recording artists are produced and engineered to death. They may sound good live (that is, if they are any good), but the records that come out are lifeless and flat, without any personality, and even worse if they have used pitch correction. One of the few good things about X-Factor (and even more so, Fame Academy) is that you actually get to hear the contestants warts and all (I mean, just listen to John and Edward at the moment: they're some warts!). As soon as the top three produce their albums, the music is just dead.
If they re-work the Fab Four using modern-day production techniques, then there is no doubt that newcomers to their work will just say "so what?"
Not sure which camp you're in, but if you're a Windows user, I guess that the Windows Registry is clear and understandable to you.
The number of MS technotes that start with something like "Open the registry editor, find the key HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer\NoDriveTypeAutoRun and set it to 0xFF or whatever bitmap disables autorun on the device in question according to the following table...", followed by the table with hex numbers in it for each of the device types Windows can use.
This is a REAL example, and to someone like my wife it would only be slightly less meaningful if it were written in Russian. And have you tried to work out how some services and background tasks get started on Windows?
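For the curious, that value is a bitmask with one bit per drive type. The bit meanings below are as I remember them from the MS documentation, so treat them as illustrative rather than gospel; a few lines of Python show why 0xFF is the "disable everywhere" setting:

```python
# Decode the NoDriveTypeAutoRun bitmask. Bit meanings as I recall them from
# the MS documentation for this registry value; check there before relying on them.
DRIVE_TYPES = {
    0x01: "unknown",
    0x04: "removable (floppy, USB stick)",
    0x08: "fixed (hard disk)",
    0x10: "network",
    0x20: "CD-ROM",
    0x40: "RAM disk",
    0x80: "reserved/unknown",
}

def autorun_disabled_on(mask: int) -> list:
    """Return the drive types on which autorun is disabled by this mask."""
    return [name for bit, name in sorted(DRIVE_TYPES.items()) if mask & bit]

print(autorun_disabled_on(0xFF))   # every bit set: autorun off everywhere
print(autorun_disabled_on(0x91))   # the XP-era default, as I remember it
```

Which rather proves my point: you need a decoder ring (or a script) just to understand one registry value.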
The crux of the matter is that complex operating systems require complex configuration. It's just that most people never see the Windows stuff, because it is hidden. When you need it, it is equally cryptic, regardless of the OS.
I'm sure that OSX and BSD have equally arcane incantations, but then so did RiscOS, OS/2 and probably NeXT and BeOS.
Of course, we could have all the configuration stored in XML (shudder), in which case it would be almost impossible to change any system setting without the correct tool.
You must have a much more understanding wife than mine.
I, too, can drill through solid walls. But I can't pull the *ORIGINAL* skirting boards up without ruining them; they are too solid, and they go right down to the floor (none of this leaving gaps for the carpet: I'm REALLY talking about a 100-year-old house with many of the original period fittings). And I wonder what you do about doors. From what you are describing, I'm sure your house must have been taken back to the walls and decorated in a modern manner.
If I were to run a cable from where the telephone line comes into the house (where my ADSL router and firewall are) to where my office used to be on the top floor, I would have to route around a room and its door, a hall, a staircase, a landing, another staircase, another landing and another door. At a rough guess, I would need about 50 metres of Cat 5 cable, and would end up with wire running up door frames and across ceilings. And all with a wife muttering about more spaghetti in the house. And if I wanted to get all of my kids' bedrooms wired up like this, I would have to repeat it all three more times, or find convenient cupboards with power where I could install switches.
Even if I were to run it into the *next* room where the game consoles are kept, I would still have to route it around two walls negotiating either a door frame or a fireplace (depending on which route I take), drill a hole in the wall, and then around another two walls with fitted furniture. I'd have to move tables, shelf units, desks, the HiFi and possibly carpets as well. You really think that this is easier than plugging 2 or 3 homeplug devices into the wall? You're deluded.
When I said months, I was assuming that I was doing a full time job and doing my part in running a household. I admit that if I took a week off work, I probably could wire up the house to a similar level of access, but I doubt I could do it without cables showing in any number of places. Again, plugging homeplug devices in is MUCH easier, and probably cheaper.
"it's really not hard to run Ethernet cables around old houses and flats" - Beg to differ.
Yes it is. I've a roughly 100-year-old house with mostly solid walls and a solid ground floor, and if I wanted to hide Cat 5 cable, I would either be digging into the lath and plaster of the upper-storey walls (no easy cavity in these) or into solid walls (no plasterboard anywhere), or pulling up skirting boards that have not been touched since the house was last re-wired (or, in some cases, since it was built).
Because of the size of the house, I would probably want a switch on each floor to limit the number of floor-to-floor cables (it's a three-storey house), which would need me to find a location with power for each switch. It would be much more like a commercial PDS installation than just a Cat 5 cable or two.
Also, I cannot use wireless throughout because of the solid walls that prevent wireless signals reaching the whole of the house.
It's MUCH MUCH MUCH easier (a few minutes compared with months of work) to plug two homeplug devices into the mains sockets where they are needed. And I can move them whenever I want, and I don't suffer the wrath of the Wife.
My house is a really switched on one with more computers than people, with 4 tower systems away from the room containing the ADSL router, Firewall, NAS storage, printers, and Wireless Router. I use homeplug for the towers, and wireless for the laptops (and the MacMini and Wii which have wireless built in and are in range of the wireless router). I also use a homeplug for the Xbox360, because I already had a spare homeplug, and did not want to shell out for the inflated price of the Microsoft Wireless adapter.
I have a mixture of 85Mb/s and 14Mb/s devices, but I find that two 85Mb/s devices talking together are easily faster than 802.11g (with a good signal) for transferring large files around, though obviously slower than connecting directly to the 100Mb/s switch.
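As a rough back-of-envelope comparison of those nominal link rates (the efficiency factor below is purely my guess at protocol overhead, not a measured figure):

```python
def transfer_seconds(size_mb: float, link_mbps: float, efficiency: float) -> float:
    """Rough transfer time: file size in megabytes, nominal link rate in Mb/s,
    and an assumed protocol-overhead efficiency factor (a guess, not a spec)."""
    return (size_mb * 8) / (link_mbps * efficiency)

# A 700MB file over each link, assuming ~40% real-world efficiency (my guess):
for name, mbps in [("802.11g", 54), ("HomePlug 85", 85), ("Fast Ethernet", 100)]:
    print(f"{name:>13}: {transfer_seconds(700, mbps, 0.4):6.0f} s")
```

The exact numbers depend entirely on that efficiency guess, but the ordering matches what I see in practice: homeplug beats a good 802.11g link, and wired Ethernet beats both.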
So, yes, it has solved my problems, and yes, they were real. My only concern is how rapidly I am losing the older homeplug devices to failure, and whether any of my neighbours are shortwave or ham radio operators.
Now. Go back to your modern shoe-box house and think again.
... used to have it right, but threw it away when they dropped Graffiti on the Treo, and opted for a keyboard.
I put an add-on back on my 650 that allowed quick, relatively accurate use of a stylus for texts, mails and notes, but I could never quite get the hang of Graffiti 2, especially when trying to follow a letter "l" with a space (it comes out as a "t"). It used to wear the screen, though, so screen protectors were a must.
Putting a keyboard on the Pre and Pixi is a definite retrograde step. I'm sure that there must be a market for a Graffiti add-on plus a useful case with space for a stylus, or possibly a purely screen-based Pre.
Why Palm did not graft a phone onto a Tungsten T5 or TX I don't know. It would have been a progressive and very desirable product years before the iPhone.
That the redundant VIOS for IBM i has only just made it out of the door.
For AIX, it has been possible to have two VIO servers providing multiple paths to your storage for at least three years, and it was being talked about when AIX 5.3 was announced, whenever that was. But I guess that the UNIX marketplace is a bit more aggressive, and AIX needed more of an advantage to leverage sales.
Wonder when they will get the partition mobility support.
The 6150 was only marginally underpowered when it was first released, but suffered from a lack of upgrades until close to the end of its life (about 5 years), when the 6150-135 deskside and 6151-115 desktop systems were produced. The product numbers followed the IBM PC line (the original IBM PC was a 5150), and the initial development was by the PC division, to produce a RISC-based PC.
But without the 801 processor (which has been argued by some as being the first commercially available RISC processor), there would have been no RS/6000, PowerPC, RS64, POWER, or CELL products (or, in fact, the 9371 deskside mainframe, or some of the current crop of zSeries systems that use POWER processor offshoots).
I always thought that the dial and button boxes were quite a neat idea, and I saw them used to great effect with CAD for zoom and pan operations. How well they were used was largely dependent on how much effort was put into installing them into the workstation (the desk, not the computer).
What you must remember is that this was an '80s-designed system that looked its era, and it should have been updated more frequently than it was. I can't remember how Sun-2 and Sun-3 boxes looked at the time, but I'm sure they were much less slick than the SPARC pizza boxes that appeared later.
Whilst it was possible to have async terminal and graphics head attached, it was not necessary. I had a 6150-135 fully populated with 24MB of memory (although it was only supposed to support 16) and 930MB of ESDI disk, with a Megapel adapter and 5081 model 1 17" display as my home UNIX system for several years.
The version of AIX sucked more than a bit (it was a non-paging SVR2 port in the days of SVR3 and BSD 4.3), and it was built on a hardware abstraction layer called the VRM, which isolated AIX from the hardware for disk and memory allocation and serial port configuration (the VRM did the paging, providing AIX with a larger address space than the available memory - possibly the first hypervisor outside of a mainframe environment). Anyone else remember the minidisk and devices commands?
Boy, was it noisy, and the 5081 screen was sooooooo heavy (it had a lump of concrete in it to counterbalance the weight of the display tube). I gave it away to a computing museum (complete with a full set of 30-ish install floppies and manuals) when I decided that Linux was a better way of having a UNIX-like OS in the house.
If you can get rid of the polarising nature of early LCD screens and apply a per-pixel static polarising filter, then I guess this isn't too difficult. All you would need to maintain resolution is to double either the vertical or horizontal resolution of the screen, and provide a mechanism to address every alternate pixel from each of two virtual display adaptors (or two screens on a dual-head adaptor). Registration problems would not be noticeable from a distance. I guess the technology is up to this.
It would be better still if the polarising filter could be rapidly switched on and off, whereupon you could use the same pixels and just paint alternate frames (though I'm not sure whether LCDs are responsive enough for this).
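The alternate-row addressing is trivial to sketch. This toy example (pure illustration, no particular panel or API assumed) builds a combined frame by alternating rows from a left-eye and a right-eye image, which is essentially what a line-polarised panel would display:

```python
def interleave_rows(left, right):
    """Build a combined frame by taking even rows from the left-eye image
    and odd rows from the right-eye image (toy model of a line-polarised panel)."""
    assert len(left) == len(right), "frames must have the same number of rows"
    return [left[i] if i % 2 == 0 else right[i] for i in range(len(left))]

left  = [["L"] * 6 for _ in range(4)]   # toy 4x6 'left eye' frame
right = [["R"] * 6 for _ in range(4)]   # toy 4x6 'right eye' frame
frame = interleave_rows(left, right)
print([row[0] for row in frame])        # ['L', 'R', 'L', 'R']
```

Each eye ends up seeing half the vertical resolution, which is why the screen would need double the native resolution to show a full-resolution picture to each eye.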
Would be interested in the software algorithms to analyse 2D images and create 3D projections, though. It must get it wrong sometimes, surely.
I'm similarly concerned about *one* of my XP systems. It's a 2.4GHz Celeron D (I think). When delivered, it had 256MB of memory and ran passably fast (it was bought for utility, not speed), but when my wife recently said that she could not use it any more because it was so slow, I had a look.
In 256MB, it was paging heavily. The hard disk light was almost permanently on. Virus, I thought, but no. New scans by recently downloaded copies of both virus and adware scanners showed nothing. Firewall (Smoothwall) showed no unexplainable network activity (but did show how much happens when installed apps check for new versions!) Booted into a live Linux distro just fine, showing the basic hardware was sound. Disabling all the bloated startup background processes made little difference, nor did de-fragging the hard disk.
Doubled the memory, and the machine became usable again. Looked more closely at the memory usage and the installed and running tasks, and saw that there were a whole load of Microsoft patches resident in memory (I'm not kidding, I counted nearly 100MB worth of patches), taking the basic memory footprint of the idle machine to about 260MB.
So, do Microsoft install patches as memory-resident diverts, as opposed to building a new kernel?
As an exercise, I installed another system from scratch using XP install media, and found that I could get a newly installed, unpatched XP system running quite happily in 128MB (obviously not using any heavyweight apps, and behind a firewall!). Once I installed SP2, it needed 256MB, and when I put in the required virus scanner, 256MB was no longer sufficient for acceptable performance. I did not put SP3 on, although maybe I should.
So, it is quite possible that your slow machine is not slow because of XP, but because there are loads of start-at-boot processes and MS patches using all of the available memory. If there is a modern antivirus system installed (you know, the type that intercepts and inspects [and slows down] all reads and writes to both disk and network), then it is also possible that this is what is crippling a perfectly capable machine. Decide whether it is worth re-installing. Or possibly, investigate Edubuntu, whereby you can dispense with the AV stuff. But this is more than most Windows people are comfortable doing.
So, in reference to my earlier post, the default behaviour for a machine like this will be to chuck it and buy a new one. What I don't understand is why you need half a gig of memory just to run an idle machine. Madness.
I think the comments here are missing an important aspect of the problem.
I appreciate that it has always been the case that new hardware will always run new operating systems better, but some of the systems people are complaining about aren't really old.
People (and computer companies) seem to imagine that anybody who has a system older than 2-3 years should really replace their hardware. Think what this would mean if the same were true for cars, your heating system, your cooker, or your television (scratch the television: the manufacturers are already managing to convince people that their 2-year-old 1080i LCD televisions are not 'true HD').
What is missing is ongoing support for the 2-5 year old Athlon XP and Pentium D, M and 4 machines that are still perfectly capable of the web browsing, email, word processing and home accounts that many people use them for. These are still usable machines, and the only things that will make people dump them (literally) are a hardware failure, withdrawal of support (as MS are threatening to do for XP) leading to their online banking complaining that their system is insecure, or a salesperson persuading them that what they have is lacking in some way.
If MS wish to make XP an OS of the past, they must provide an affordable upgrade path to W7, IE8, WMP 10 etc., and make sure that drivers are available for older hardware (and the same must be forced on the display and audio device manufacturers who are so keen on abandoning their old-hardware customer base). After all, I'm sure that a 3GHz Pentium 4 must be at least as fast as a 1.6GHz single-core Atom N270, which is supposed to be able to run Windows 7 without problems.
Of course, the computer manufacturers have a vested interest in shifting new hardware, and support issues with older stuff are a useful lever for them. I've long thought that MS and the hardware manufacturers are in collusion to make sure that they continue to sell the new 'shiny' things.
Computers should be commodity tools now, not subject to the whims and fads of fashion. It sickens me to see, week after week, serviceable computers and televisions being sent to the recyclers, not because they no longer work, but because the owners have been conned into thinking that the old ones are too old/slow/difficult to secure or support, and that the answer is to replace them. I'm not a Green, but the blatant waste is bordering on the criminal.
A computer should be for (it's) life, not just Christmas (sic).
It is a mistake to assume that xCat is built from the ground up. It still uses underlying components that are currently used by CSM, including NIM (NIMoL for Linux) for system image deployment and RSCT for monitoring, and they all revolve around other well-known systems such as NFS, Kerberos, and rsh/ssh (to say nothing of the open source components).
It's true that the overall gloss on the top is new, but many of the bits under the covers are the same. It's interesting to also see that IBM Director still uses NIM for Power/AIX systems.
I must admit that I believe the switch from PSSP to CSM was a bit of a dog's dinner (I understand the architectural reasons for the change: PSSP was designed around the constraints of the AIX SP/2, which were too rigid for the Cluster/1600 offering). It took a couple of years for CSM to even approach the usability that PSSP had, and I fear the same will be true for the switch from CSM to xCat2.
The real problem is that the people who write the code often do not work in the real world, and end up making assumptions about the shape of the systems. This means that, once you take into account the various networking, commercial support and security constraints placed on real-world systems by the likes of government agencies and financial organisations (the people most likely to deploy large-scale commercial clusters), very often the management tools as delivered out of the box are about as useful as a perforated condom.
Even in HPC environments, it is comparatively unusual to have the systems configured exactly as the vendors suggest. I'm currently working with a large Power6 HPC cluster, and the requirements for outward event reporting to an enterprise reporting system that is NOT Tivoli are causing more than a few problems, along with security, access and data control that IBM had no real incentive to architect solutions for. As a result, it is necessary to dig under the glossy covers of the management and deployment tools using whatever can be found to implement what is needed.
All I can hope is that xCat2 having come out of Alphaworks means that real system admins have had input to the requirements and may have spotted any potential problems, but I wonder how many 200-node Power clusters have been deployed anywhere.
I'm still not very happy about having to learn another clustering tool, though, and I've still got IBM Director to contend with in the future for non-HPC clusters.
As I remember it, it was the JFS code that IBM wrote for AIX 3.1 that was the main issue. SCO were arguing that this was a derived work from the AT&T source, although it was more like an evolution of the BSD4.4 filesystem, complete with distributed inodes and a block bitmap for the free space. The whole concept of derivative works currently appears to be sparking controversy with GPL2 as well, which just shows how convoluted US copyright law is. See http://www.theregister.co.uk/2009/10/15/black_duck_gpl_web_conference_copenhaver_radcliffe/
All the LVM and hooks to extend filesystems were as far as I am aware IBM innovations that were actually new code unique to AIX.
Subsequently, IBM contributed the original LVM and JFS code to the Open Software Foundation, where it was used by several of the vendors who implemented OSF/1 (although I don't believe that DEC used it in their implementation; they preferred their own Advanced File System [not to be confused with the Andrew File System]). This did not cause a problem, because all OSF members were already UNIX source licensees.
It is obviously now available for Linux, which caused the whole furore.
If anybody sees my copy of the Lyons annotated V6 UNIX code, I would be grateful to hear, as it appears to have fallen out of my coat pocket whilst in the cloakroom.
Not sure I understand Microsoft's mismanagement. While they were responsible for the port/rewrite of UNIX that produced Xenix, at no time have they had any control of UNIX. They bailed out of the UNIX market when the original SCO was spun off to a separate company, and the rest is a history of competition.
It is ironic to recall that at one time early in their history, Microsoft did actually see themselves as a UNIX vendor, making statements to the effect that their future was DOS desktop systems clustered around UNIX servers. They were an AT&T source code licensee.
I recollect that at the time, Xenix/286 was regarded as a poor UNIX port, as it did not adhere to man Section 2 system call semantics. I believe this was only pulled together when (the original) SCO joined AT&T, Sun and other notable UNIX vendors in the SVR4 converged UNIX venture, although I am prepared to be corrected on this.
BTW. As it is a trademark, UNIX should always be capitalised.
Anyway, I will be glad when the FUD that SCO have been peddling finally goes away.
As I understand it, because the devices are not intended to transmit on radio frequencies, they avoid a number of the standards that apply to wireless transmitters.
The standards they fall foul of are the electrical and electromagnetic interference standards, and it does look like some devices are seriously dirty in this respect. And it looks like the frequencies that RSGB are trying to protect are not actually frequencies used by the devices, but unintended harmonics of the carrier frequencies. This should be fixable in the design without making the devices unusable.
I believe that OFCOM (if they are the people who police electromagnetic interference) should crack down on the manufacturers of devices that do not meet the required standards. I wish someone would produce a list of the devices that are dirty, though. I am using some Netgear homeplug devices (which work very well), but I don't know whether they are causing a problem.
Alan uses a video watch in the episode "Move - and You're dead" (1965)
Brains uses a video watch in the episode "Day of Disaster" (1965)
and I'm sure one actually features in the movie "Thunderbirds are Go" (1966). I remember seeing a picture of the oversized model wrist with what I believe was an early colour TV, or possibly a projector, behind it. Good effects for the time, all done with models, camera tricks and pyrotechnics (not for the watch, of course, unless you include "30 Minutes After Noon")
That's mine with a copy of "The Complete Gerry Anderson" in the left pocket, and the copy of "Century 21" in the right.
Medium and large businesses do it differently from home users. Very few business PCs will be put on a desk running the same cut of the OS as was installed by the manufacturer.
Almost anybody with an IT department will have an image that they put on any new PC they take delivery of. This installs known versions of all of the standard software. It is quite normal for their 'fix' for software-borked PCs to be to re-install the initial image. It's quick and low overhead, and can be done by your average IT trainee.
Many businesses will also create updated images for their existing PC's to 'bring them up to date'.
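To make the reimage workflow concrete, here's a toy sketch using ordinary files to stand in for a golden image and a workstation disk (the paths and contents are invented for illustration; real shops use proper imaging tools over PXE, not a file copy):

```python
import filecmp
import os
import shutil
import tempfile

# Hypothetical stand-ins for the IT department's golden image and a user's disk.
workdir = tempfile.mkdtemp()
golden = os.path.join(workdir, "golden.img")
disk = os.path.join(workdir, "workstation.img")

# The golden image is built once, from a known-good install.
with open(golden, "wb") as f:
    f.write(b"OS + standard software, known versions\n")

# A user 'borks' their machine...
with open(disk, "wb") as f:
    f.write(b"OS + mystery toolbars + who knows what\n")

# ...and the trainee-level fix is simply to lay the golden image back down.
shutil.copyfile(golden, disk)
print("reimaged OK:", filecmp.cmp(golden, disk, shallow=False))
```

The point is that the fix is idempotent and needs no diagnosis, which is why it's cheap enough to be the default.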
While I doubt that many will deploy a new OS (like Windows 7) on older kit (mainly because of the licensing costs), it is refreshing to know that it probably would work on any relatively modern PC. Of course, many businesses could extend the lifetime of their ageing kit and maximise ROI by either making the machines thin clients, or putting a fee-free OS on them for the sectors of their business that can work with Firefox, Java applications and OpenOffice.
... gopher and archie, as well as uunet, usenet and many other net-news and email services as early users of Internet connectivity. These all pre-dated NCSA Mosaic. Ahhh. Plain text protocols. How trusting we were.
Nooo, I didn't have a coat, or wait, maybe I did... I get so forgetful now.
This is the reason why current BT master phone sockets include a "service jack". Behind the front plate is an internal phone plug attached to the plate and a socket attached to the base. All extension wiring is attached to the front plate, so by removing the front plate from the master socket you disconnect all extension cabling and other devices in the house, leaving the circuit with a single phone outlet that BT can be reasonably certain is not compromised by customer-supplied wiring (it's actually against your contract to mess with the wires on the BT side of the master socket).
It's not difficult, and it is perfectly safe provided that nobody has wired mains into your house telephone wiring. Ring signal voltage is only about 50V at limited current, so even in extreme cases it should not cause a problem, even for people with heart conditions.
I think that it is also in your contract with BT that they may ask you to do this.
I know many people regard PCs as appliances, but in reality they are not, and there should be no 230V mains outside of the power supply, so there are no real safety concerns about taking the side panel off a computer. Of course, some manufacturers actually dissuade you from taking the covers off by putting tamper stickers on the case (personal experience with the now-defunct Time, Fujitsu-Siemens and even an eMachine [bought from Morgan Computers, not PC World, I hasten to add]). This raises different concerns about allowing people to upgrade their systems.
It's switched off on my wife's computer. She is an unwilling novice, and a bit of a Luddite as well. She does not remember what I tell her about computers from one day to the next, mainly because she just doesn't care.
All I hear from her is "I've put my CD in the drive, and it's not working" when she puts one of her craft CDs in the system.
God knows how many times I've told her, but the instructions on the CD cover tell her that it will autorun, and she trusts that more than she trusts me. It's driving me crazy.
This type of software is written for people who will never care about how computers work, and uses every trick in the book (and some daft ones as well) to try to make sure that the computer is just an appliance. I can't even install the software on the hard disk, because the STUPID and SIMPLISTIC copy protection system KNOWS that it will ALWAYS run from drive D, and has hard-coded paths scattered throughout the software. Of course, nobody partitions their hard disks, do they!
I admit that I use HomePlug standard devices in my house, because the wiring is already present, and being an older house with thick walls, built over three storeys, I have significant WiFi dead spots even with wireless range extenders (repeaters). Likewise, running Cat 5/5e/6 around the house to all of the kids' bedrooms would be an expensive option, even if I were to use PDS-like infrastructure; not that I would want powered switches on every floor of the house, for economy reasons.
I am worried about what this article is saying, because I try to be a good neighbour, but had not considered that I would be polluting the EM spectrum to the degree implied. I trusted that the manufacturers would abide by the standards, that the standards were reasonable, and that OFCOM would police them.
If two of these three trusts cannot be relied on, then I am worried that I might be affecting my neighbours, at least one of whom has a large pole antenna, presumably for ham or shortwave radio. Should I go around all of my neighbours asking whether I should stop using powerline Ethernet, or should I sit back and wait to see whether I get a knock on the door about these devices?
I would hate to have to dump the devices, because they are just too useful.
P.S. All of my devices are Netgear 14 or 85Mb/s devices, not BT supplied devices.
You've obviously never worked anywhere that Sarbanes-Oxley or Basel II requirements have been applied (like *ANY* Western financial organisation). If you follow these as specified, you have to implement a segregation of authority, meaning that your system administrators cannot subvert the security logs (at least, not without leaving a secure-from-them audit trail). There should be a different part of the organisation, with no ability to change the systems but with authority over the logs, whose job it is to make sure that the sysadmins do not step over the mark.
The problem in a nutshell is that, yes, the sysadmins can do pretty much anything within their control, but this should be subject to audit. Allowing sysadmins to peek at other users' passwords enables them to do things as other users while bypassing any audit trail that points back to *THEM*, leaving a false trail pointing at someone else.
Key loggers and password leaking backdoors should not be able to be installed without again leaving some evidence of the fact.
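To illustrate what "secure-from-them" can mean in practice, one common building block is a hash-chained log: each record commits to everything before it, so an admin quietly rewriting history breaks the chain. A toy sketch (the record format is invented for illustration; real deployments also ship records off-host to the separate audit function):

```python
import hashlib
import json

def append(log, record):
    """Append a record, chaining it to the digest of the previous entry."""
    prev = log[-1]["digest"] if log else "0" * 64
    digest = hashlib.sha256(
        (prev + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    log.append({"record": record, "prev": prev, "digest": digest})

def verify(log):
    """Recompute the whole chain; any edited or deleted entry breaks it."""
    prev = "0" * 64
    for entry in log:
        digest = hashlib.sha256(
            (prev + json.dumps(entry["record"], sort_keys=True)).encode()
        ).hexdigest()
        if digest != entry["digest"] or entry["prev"] != prev:
            return False
        prev = digest
    return True

log = []
append(log, {"user": "sysadmin1", "action": "su - appuser"})
append(log, {"user": "sysadmin1", "action": "edit /etc/passwd"})
print("intact:", verify(log))

# A sysadmin trying to pin an action on someone else is now detectable.
log[0]["record"]["user"] = "someone_else"
print("after tampering:", verify(log))
```

The detection only helps, of course, if the people running the verification are not the same people running the boxes, which is exactly the segregation of authority point.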
As SarBox is mandatory for US financial organisations, does this mean that SQL Server should now be added to the banned software list, or at least relegated to non-customer and non-financial database use?
This behaviour from a major IT provider is just inexcusable.
I was at the point of leaving Virgin because of the FUP bandwidth limits, which they hit me with every two weeks or so (there are five family members in the house, using a total of 1GB+ per day - no, it's not P2P, just OS fixes, internet voice chat, gaming, iPlayer, Sky Anytime PC, iTunes, the Amazon MP3 store, YouTube etc.)
But they've not complained or limited me in the last three months, and the performance has improved at peak times (I can actually use iPlayer in the evening!)
I guess they must either have lost some other customers, or increased their local bandwidth. As a result, I'm currently quite happy.
I don't use their support unless I really have to, however, as it appears I know more about their network than the people in their call centres.
I know genetic UNIX well. I do not know the Linux kernel as well, but I do know this.
Userland processes run in a virtual address space controlled by the kernel that does not match physical memory addresses.
The memory-mapping registers that control a process's virtual address space can only be manipulated with the processor's supervisor bit set (which should only be set when running kernel code after a system call, or in a kernel thread), so a process cannot map new bits of physical memory into its address space without the kernel's involvement. This is a FUNDAMENTAL security requirement that is well understood by kernel writers (and, incidentally, a reason why the first versions of Windows on Intel processors prior to the 80286+80287 or 80386 were fundamentally insecure).
Page zero of a userland process does not necessarily map to physical page zero. It depends on the hardware architecture of the system, the way the process's virtual address space is set up by the kernel, and whether you have a separate set of supervisor-mode memory-mapping registers.
All the UNIX variants I have used that do map physical page zero to virtual page zero in a userland process ALWAYS write-protect this page, and most hide it so it can't even be read. It is normal to only map page zero if the system call interface does not provide a mechanism to change the memory-mapping registers during the transition to system mode (IBM's original RS/6000 processors could not do this; the result was that the first piece of kernel code actually had to execute in the non-privileged process's address space before it had the chance to change the memory-mapping registers, so some kernel code had to appear in the process's address space).
So, my question is: how do you write to physical page zero to exploit this problem without already having escalated privileges? Maybe someone with some real Linux kernel experience can explain.
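Partially answering myself: the classic userland route on Linux was to mmap a page at virtual address zero, fill it with attacker code, then trigger a kernel NULL-pointer dereference; the vm.mmap_min_addr sysctl now refuses such low mappings to unprivileged processes. A quick probe via ctypes (the numeric constants are the Linux/x86-64 values from <sys/mman.h> and are assumptions elsewhere - on other platforms the call should simply fail):

```python
import ctypes
import os

# Linux x86-64 constants; check <sys/mman.h> on your own system.
PROT_READ, PROT_WRITE = 0x1, 0x2
MAP_PRIVATE, MAP_FIXED, MAP_ANONYMOUS = 0x02, 0x10, 0x20
MAP_FAILED = ctypes.c_void_p(-1).value  # (void *)-1 as an unsigned word

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

# Ask the kernel for one writable page at virtual address zero.
addr = libc.mmap(0, 4096, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS, -1, 0)

if addr == MAP_FAILED:
    # Unprivileged processes normally land here (EPERM on Linux,
    # courtesy of the vm.mmap_min_addr restriction).
    print("kernel refused page zero:", os.strerror(ctypes.get_errno()))
else:
    # ctypes reports a NULL result as None, i.e. the mapping landed at 0.
    # Only expected with CAP_SYS_RAWIO or vm.mmap_min_addr set to 0.
    print("mapped page zero")
```

So the exploit does not need pre-existing privileges where mmap_min_addr is zero or the process is suitably privileged; the sysctl is what closes the window for everyone else.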
I'm off to /usr/src on my laptop to have a dig. If I find anything relevant, I'll post again.
It is axiomatic that a high-capacity battery holds a lot of energy when charged. It is also a fact that high capacity == highly reactive chemicals in the battery (after all, it is a chemical reaction that produces the energy).
This means that pretty much all devices with batteries are potentially dangerous (if you don't believe me, try shorting a couple of good alkaline AA cells connected in series, and see how long you can hold them!). And you will get even more spectacular results with NiCads. It all depends on the internal resistance of the battery: the lower the resistance, the faster the battery can be made to liberate its total capacity, and as we know from school physics, V=I*R but P=I*I*R, so the lower the resistance, the higher the generated current and thus the higher the rate of release. Power means heat, heat means liberated gas that can cause explosions, and heat that can cause burns.
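Putting rough numbers on that school physics (the voltage and internal resistance figures below are ballpark assumptions for illustration, not measurements):

```python
# Short-circuit figures for two alkaline AA cells in series (illustrative).
V = 3.0   # volts: 2 x 1.5 V cells
R = 0.3   # ohms: assumed total internal resistance of the pack

I = V / R        # Ohm's law gives the short-circuit current
P = I * I * R    # power dissipated inside the cells (equivalently V*V/R)
print(f"I = {I:.1f} A, P = {P:.1f} W")   # I = 10.0 A, P = 30.0 W

# Halve the internal resistance (NiCd territory) and the dissipation doubles,
# which is why the lower-resistance chemistry gives the more spectacular show.
R2 = 0.15
print(f"P = {(V / R2) ** 2 * R2:.1f} W")  # P = 60.0 W
```

Thirty-odd watts dumped inside two AA-sized cells is plenty to boil the electrolyte, which is where the gas and the burns come from.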
Bearing in mind how much energy is required to move a car, I would hate to see what would happen to a Lithium Polymer battery car in a fiery crash with a petrol vehicle. Nor would I want to be a fireman trying to tackle an electrical fire on the motorway.
I claim that this is either a Chemistry or Physics teacher and this is a Pedantic Thermodynamic and/or Electrical Power Equation Nazi Alert! .... Come on, it is Friday afternoon.
Biting the hand that feeds IT © 1998–2019