1391 posts • joined Friday 15th June 2007 09:17 GMT
Does anybody still use...
...RS-232 terminals, or even TTY emulators, to manage routers? Surely SNMP, bespoke applications and web-based access rule nowadays. Can you even buy asynchronous terminals any more?
Oh well, must be the recycled stuff from the bin that they managed to re-use and sell back again.
I need my coat, it's raining here!
I'm not saying it can't happen. It is possible to engineer root access to a Linux or UNIX system that is managed remotely as long as there is a single vulnerability, even a non-root one. BUT, using root as infrequently as possible rather than having admin rights all the time (the typical Windows user) must be more secure, even if it is only by degree.
All the time you have the concept of escalated privilege to perform some function, you have the possibility of it being abused. This will NEVER completely go away, regardless of the OS, until computers are so locked down that you cannot change anything. So make it so that you use the escalated privilege as little as possible. No version of Windows I have come across has taken this line, with the result that too many people HAVE to run as admin for their apps to even work correctly.
So even though this development is worrying, I am still slightly smug, but cautiously so, and with some respect for the skill of the people writing the rootkits. They are MUCH cleverer than most of us who merely comment on the effects of their work. Pity about the script kiddies, however. But you cannot control information the way you can a physical object. Once it's out in the wild, it's out, whether by design (Open Source) or by accident (leak). The vector makes no real difference.
Not sure about your comment about the paged ROM area being faster.
The Beeb did use fast (for its time) RAM, but I'm fairly certain that the ROM area was slowed down to 1MHz, while the processor clock was 2MHz (although this may only have been the early systems with the OS and BASIC in EEPROM). The speed was required for the RAM because the CPU and display ULA had interleaved access to the memory, so that both the display hardware and the processor could access the memory at (their) full speed without slowing down the other.
Where the memory situation was improved was by bank-switching the display memory over the language ROM (8000-BFFF hex) and OS ROM (C000-FFFF) areas. Acorn did this with the BBC B+ and Master 128 with the shadow screen, but it introduced compatibility problems with programs that did not use the OS routines for writing to the display. I think that Acorn copied the idea from either a Solidisk or Watford hardware add-on for the original Beeb.
But these systems never really reached the same popularity level as the original BBC B. Probably their time had simply passed. I still think that there needs to be an education system as accessible as the Beeb for our schools. PCs just do not engage the same degree of enthusiasm in kids or teachers.
One of my favourite subjects
Fast, flexible, fantastic.
Was blown away by an 8-bit micro doing 3-D hidden-line-removal wireframe graphics close enough to real-time to be usable for a game (Elite).
Not only a good teaching machine, but well made, with a consistent OS, and brilliantly documented. Modular, vectored OS calls, overlaid sideways ROMs. The way all home PCs should have been made.
The only real criticism was that it had too little memory. When using modes 0-2, 20K of the 32K was used for the screen, with 3.5K used for various stacks, buffers and character maps when using the cassette filing system, and an extra 2.75K used if using the Acorn disc filing system (DFS). Woe betide you if you also had Econet (NFS - though not the Sun offering), which took another 1K. That left you with about 5K for your program. You soon learned to turn off the filing systems that were not in use.
And if you used ADFS, you lost even more. On a normal Beeb, you only really did this if you were running an Econet II fileserver, and you needed a 6502 second processor for that (yes, the Beeb could be networked even before it was popular to do so).
Terms to trigger nostalgia: PAGE, RAMTOP, OSCLI, VDU, Fred, Jim, Sheila, OS 0.9E, OS 1.2, BREAK, Escape, Tube, 1MHz bus, Ferranti ULA, Teletext graphics, Attacker, Snapper, Panic Attack, VIEW, and...
"A plastic flower in a Grecian Urn, Goodbye Peter, now press RETURN"
One thing that needs reinforcing is that unless you have a security hole that allows a non-privileged user across the security divide, it is just NOT POSSIBLE to install a rootkit on a properly run Linux system. Rootkits, by their very nature, need to alter/add code to the kernel, libraries or modules that are used to run the system. And this needs root permissions. This is why it is very important to make sure that you do AS LITTLE AS POSSIBLE as root on a UNIX-like OS.
There are a number of well-known ways to try to subvert a user currently logged on as root, but a reasonably savvy sysadmin should be able to avoid these (you know: don't browse the internet as root, check that your path does not allow commands from the current directory, make sure that there are no world-writable executable script files, don't read mail as root, keep firm control of the permissions on directories in root's path - all the simple stuff).
If a rootkit cannot be installed, it cannot compromise your system, nor can it get access to SSH keys other than the user it is running as.
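A couple of those precautions can be automated. Here is a rough sketch of the sort of checks I mean, assuming a POSIX shell; it is an illustration, not a hardening tool.

```shell
#!/bin/sh
# Sketch: automated versions of two of the root-hygiene checks above.

# Flag '.' or empty entries in a PATH value -- either lets a planted binary
# in the current directory shadow a real command.
check_path() {
    case ":$1:" in
        *:.:*|*::*) echo "UNSAFE" ;;
        *)          echo "ok" ;;
    esac
}

# List world-writable regular files in a directory (run this over every
# directory on root's PATH).
world_writable() {
    find "$1" -maxdepth 1 -type f -perm -0002 -print
}

check_path "/usr/sbin:/usr/bin:.:/sbin"   # prints UNSAFE (contains '.')
check_path "/usr/sbin:/usr/bin:/sbin"     # prints ok
world_writable /usr/bin                   # should print nothing on a sane box
```

Running something like this from root's login profile costs nothing and catches the obvious slips before they matter.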
Please note that I am not saying Linux is totally secure, there have been, and will be in the future, code and design defects which could allow a system to be compromised. I firmly believe, however, that the open source model allows such things to be identified and fixed much more rapidly than a closed source model. Couple this with an effective notification and patch delivery system, and Linux just is more secure.
Contrast this with Windows, where many people by default use an account with Admin privileges, or with the security notifications turned off. Asking for trouble, as far as I can tell.
But the amazing thing is that the UNIX/Linux security and source model is decades old (I've been using UNIX for 30 years, and the Bell Labs UNIX V6 and V7 code, and all the BSD code, used to be available for inspection and modification to the academic community and others for at least that long). And Microsoft (who, in fact, have been UNIX licensees for at least 24 years - they did the original Xenix port to various architectures, before spinning off the original SCO) just don't seem to be able to learn.
There are things that can be done
Once you can get access to a system, the whole security picture changes.
There is no system in existence that will prevent apparently authorised users from doing some damage, but the degree of damage is what is important. Where Linux benefits is from a strong divide between normal and privileged access. Sure, if your private key AND ITS PASSPHRASE are compromised, then someone can get access to your system as you. But this is just your non-privileged account, isn't it?
Of course, if lazy admins directly access root using SSH, or have passphraseless SSH keys, or have sudo rules that allow them to cross the security divide without further confirmation, or store both private and public keys on their boundary systems, or use the same private key throughout their whole environment, then these fools deserve to get their systems compromised.
So here are the rules.
- Use a non-privileged account for initial access to any system
- Use su or sudo to obtain root access, but with additional authentication steps
- Don't use passphraseless SSH keys, unless you tie down what can be run (see SSH documentation).
- If possible, use hardware based authentication to secure private keys
- Guard your private keys like your life depends on them
- Don't store private keys on systems that do not need them
- Make sure that the permissions on your .ssh directory only allow your ID to see the private keys (I know ssh does some checks, but 0600 is best on files, and 0700 is best on directories)
- Use different keys in different parts of your organisation
- Consider using passwords with SSH (with strong change and strength rules) rather than SSH keys for very critical systems (really)
- Be careful about storing your private keys on shared Windows systems, or systems that have remote users with administrator access (consider portaputty and store the keys on an encrypted USB key).
- If you are really paranoid, regularly change your private and public keys on all your access boxes (please note it is NOT enough just to change the passphrase!)
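Two of these rules can be shown concretely. The sketch below runs against a scratch directory (so it is safe to try anywhere, rather than touching your real ~/.ssh), and the backup script path and key text in the authorized_keys line are made-up placeholders.

```shell
# Rule: only your ID may see the private keys -- 0700 on the directory,
# 0600 on the files. Demonstrated on a scratch copy of a .ssh directory.
demo="$(mktemp -d)/dot-ssh"
mkdir -p "$demo"
touch "$demo/id_rsa" "$demo/authorized_keys"
chmod 700 "$demo"
chmod 600 "$demo/id_rsa" "$demo/authorized_keys"

# Rule: tie down a passphraseless key. A line like this in authorized_keys on
# the TARGET host limits the key to a single command, with no terminal and no
# forwarding (the script path and key body are placeholders, not real values):
cat >> "$demo/authorized_keys" <<'EOF'
command="/usr/local/bin/nightly-backup.sh",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... backup@somewhere
EOF
```

The `command=` and `no-*` options are standard OpenSSH authorized_keys restrictions; even if such a key is stolen, all the thief can do is run that one command.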
If you follow these, then even if one of your private keys is stolen, then the amount of damage can be limited. As always, you can run a potentially secure system in a non-secure way. Security is only as strong as the weakest link, and this is often the sysadmins!
Oh, and by the way, if you want to see a file that cannot be seen using a hacked ls, try "echo *" or "find /etc -print". Or maybe use filename completion in the shell. This is UNIX (or close to it), so there is nearly always more than one way to do something.
"BT are the only ISP who can serve my house".
You sure? Normally, if BT can serve an address, they will also offer the service through BT Wholesale. This normally allows other ISPs to provide service, even though you are using BT "last mile" infrastructure. This is even the case if the ISP is not able to install equipment at the exchange "because of space or power restrictions".
I know that Virgin Media (spit, hold out Holy Cross for protection) used to offer no-minimum-period ADSL contracts, although having checked the current small print, it looks like they now have a 12-month minimum contract.
@John and others
I've been doing 3 mobile on Ubuntu (6.06 and 8.04) for about 6 months, and can confirm some of your problems, but they can be worked around.
When used with the supplied Windows software, 3 Mobile hard-codes the IP addresses of the DNS servers. If you can get the HSDPA modem set up as a managed network, hard-code the 3 DNS servers to be put in when you start the managed network, and make sure that the option to use the provided DNS servers is off. You can use Locations to configure different sets of DNS servers and default routes if you also use your system on other networks.
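For anyone doing this by hand rather than through the network manager GUI, the end result is just pinned nameserver entries. A sketch (the addresses below are placeholders, not 3's actual servers - substitute the ones the Windows software configures):

```
# /etc/resolv.conf -- pinned DNS for the 3G connection (placeholder addresses).
# The equivalent in NetworkManager is filling in the DNS field on the IPv4 tab
# and telling it to ignore the provider-supplied servers.
nameserver 10.0.0.1
nameserver 10.0.0.2
```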
The throughput is probably being throttled by the USB TTY modules, which have an effective limit of about 60KB/s. Google for info on the hacked airprime modules and the Huawei modem. This allows reads and writes in units of more than one character at a time, and allows a higher peak rate. I'm using the ZTE modem, which is another kettle of fish: it requires re-compiling airprime to put the USB IDs in the code. Then all you need to do is put the right udev rules in to load the airprime module rather than the USB TTY module.
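The udev part can look something like this. It is a sketch: the product ID is an example (check yours with `lsusb`; 19d2 is ZTE's vendor ID, but devices vary), and you still need the rebuilt airprime module with your IDs compiled in as described above.

```
# /etc/udev/rules.d/99-zte-modem.rules -- load airprime for the modem instead
# of letting the generic USB serial driver claim it (example product ID).
SUBSYSTEM=="usb", ATTR{idVendor}=="19d2", ATTR{idProduct}=="0001", RUN+="/sbin/modprobe airprime"
```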
I regularly get more than 100Kb/S, but have noticed that
@Steven Raith about YouTube
Unfortunately, a lot of an older system's resources (CPU/memory) are taken up by the large animated looping Flash adverts that YouTube now carries (ironically from Crucial - or maybe that is by design!). My EeePC 701 used to play YouTube well, even at higher quality, but now stutters along. The newer releases of Firefox also appear to place a load on a system with this type of advert.
If you can find or make an embedded link for the video so that the adverts do not show, slower PCs can still work quite well. Alternatively, try full screen (I know this sounds silly). Or download the videos and watch them offline.
I am happily running Ubuntu Hardy Heron on my EeePC.
Securing data is not genetic engineering...
... sorry, rocket science is too simple now.
Here are a number of measures which SHOULD be made compulsory wherever government-held information is used.
- Put a robust RFID chip as an integral part of each official USB Flash drive.
- Put shoplifter-type security (or even make it prevent operation of the turnstiles) on all exits in secure facilities.
- Do not use generic RFID tags; track specific tags (to stop someone identifying a secure USB device as the holder walks around a shopping centre).
- Have Official USB flash drives tracked, and holders made responsible for their loss.
- Do not allow official flash drives to be held for extended periods.
- Have a specific process to allow tracked USB flash drives to be removed from secure sites.
- Change the USB ID on the official drives so that they do NOT appear as generic storage devices, making them more difficult to read on ordinary PCs.
- Put the required driver on all systems required to use the official stick, and have it use automatic strong encryption as the data is accessed.
- Don't allow the specific driver to be installed on non-official PCs.
- Regularly rotate the keys on the specific driver and flash drives (this can be done with the flash drives by making holders regularly check the drives in).
- Clean all data from checked in flash drives when they are checked in to prevent people from using them as a backup mechanism.
- Ban the use of personal USB flash drives (or the use of phones or watches, or whatever else provides this type of function) from secure sites as part of policy.
- Disable the USB storage device handling drivers in all systems that can access private data to prevent non-tracked USB flash drives being used (I know this is difficult, but it should not be impossible, even if it means you have to put PS/2 keyboard and mouse ports back into PCs).
- Enforce the already existing GSI Security requirements for all government held data.
I'm not saying that this will make our data totally secure, but it would be a step in the right direction. It would prevent casual examination of misplaced devices. It would not stop a concerted attempt to steal data, but what would?
Very little of this is particularly complex or expensive, as most of the barrier security and procedures already exist in secure government locations.
BTW, this counts as prior art in the unlikely event that I am the first person to put all of these ideas together.
Who in the world...
... wants a 'laptop' weighing 4.1kg? Surely it defeats the object of being portable.
If watching Blu-ray is your primary requirement for a laptop then this may be the device for you, but I'd book the body-building course now.
Having recently watched the Sailor episode where the hapless Buccaneer pilot took about 10 attempts to land on the Audacious-class Ark Royal (the one scrapped in the early eighties), it is clear that deck landings are always fraught with problems.
I don't see why an F-35B would not be able to just go to full thrust, possibly bounce, and get back up to flying speed before running out of deck. The ski jump will not get in the way (at least in the CTOL design of CVF), because the aircraft will be landing on the angled part of the flight deck, and this will always have to be clear for a non-vertical landing.
I take it you don't work in IT then.
If you do, then I hope you don't have a site disaster, because you will lose (really lose, not 'share') all of the data that should have been backed up OFFSITE.
It's all a matter of control and process rather than location.
Can a DNA DB be kept safe?
Do you think that whatever agency keeps the DB can prevent leaks? Because I'm sure that insurance companies, amongst others, would love to be able to screen health insurance applications against illnesses with a genetic component.
Also, think what scandals a complete paternity map of the UK population might show! What would the tabloids pay for such information?
I agree about the high taxes that the energy companies pay, but when demand causes the price to rise, who actually ends up with the extra money?
Obviously, everyone who takes a percentage cut will take a slice (including the taxman), but has the energy become more expensive to produce as a result of the extra demand? Not if they offset their own fuel costs. Are the wage bills immediately higher? No. Are the extraction licence fees more expensive? Not immediately.
I conclude, then, that as a result of the high demand, the energy companies do get a windfall, as most of the extra money goes to them. This can only be a benefit to their bottom line. But should they be taxed more heavily? Probably not, as the taxman already gets a cut of the sale of the energy, and THEN takes a cut of the overall profits in corporation tax. So the governments win without an extra windfall tax.
It's really just governments trying to fill the gaping holes in their budgets caused by their policies, or trying to score votes.
Thanks Mike and Chris
I think I can understand programming, having learned to program on 80x25 ASCII terminals. After that, anything seems like a luxury.
On the keyboard front, as I look at my Thinkpad T30 (probably one of the best laptop keyboards around), which admittedly does not have a numeric pad, the 15.1" diagonal 4:3 screen allows sufficient width for most of the keys to be full-sized, and I'm used to the way that IBM places the cursor and other extra keys (which is actually reasonably close to a full-size keyboard). This results in a laptop which is only marginally larger than an A4 pad.
Most of my time is spent writing documentation, and I like to see a whole A4 page at a time. This is why vertical size is important to me. I also use multiple terminal sessions for sysadmin, and can choose various font sizes to get 2 or 4 windows on a 4:3 screen at a resolution of 1024x768 without having to resort to a magnifying glass. I'm sure I could cope with 1440x900 (more vertical space than my 1024x768), but I would prefer 1440x1050 (a real Thinkpad resolution) with the screen filling the lid. Even more pixels!
Still not convinced.
Of course, maybe the extra horizontal space is actually required for the extra bumph Microsoft have put in Aero!
Screen aspect ratio
Can someone please explain why the computer industry is so keen on widescreen laptops? The only real reason I can see is to watch DVDs, but as the horizontal resolution of DVDs is a maximum of 720 pixels, I can cope with only using the middle two-thirds of my existing 4:3 screen to watch them.
I cannot for the life of me see why you would want either a bulkier laptop, fewer vertical pixels, or smaller pixels for a business laptop that you carry with you all the time.
If anything, I would like *more* vertical pixels. Please, someone, enlighten me, because I'm mightily pissed off every time I wander anywhere that is selling laptops now.
No, not necessarily 45 degrees. Do a least-squares regression (or similar), and then all of the cards below the line show better than the average, and those above are worse. The further a card is from the line, the more extreme the performance vs. price is from the 'average'. But the line could be at any angle, depending on the scales you choose.
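The fit itself is a few lines of awk. This is a sketch over made-up (price, fps) points; the sign of each residual tells you which side of the fitted line a card falls on (which side counts as "better" depends on which variable you put on which axis).

```shell
# Least-squares fit of performance against price, plus per-card residuals.
# The four (price, fps) pairs are invented data for the example.
printf '%s\n' '100 40' '200 90' '300 110' '400 170' |
awk '{ n++; sx += $1; sy += $2; sxx += $1*$1; sxy += $1*$2; x[n] = $1; y[n] = $2 }
END {
    slope     = (n*sxy - sx*sy) / (n*sxx - sx*sx)
    intercept = (sy - slope*sx) / n
    printf "fit: perf = %.3f * price + %.1f\n", slope, intercept
    for (i = 1; i <= n; i++)
        printf "price %d: residual %+.1f\n", x[i], y[i] - (slope*x[i] + intercept)
}'
```

For these points the fitted line comes out as perf = 0.410 * price + 0.0, with residuals -1.0, +8.0, -13.0 and +6.0, so the 200-quid card sits furthest above the line and the 300-quid one furthest below it.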
See and be seen.
Come on. I don't think that they have said that it is invisible in all directions. If you could make it so that light from *behind* gets bent around to the front but light from in front gets absorbed totally (i.e. no reflections), then you should be able to see in front, and not be seen from the front. Of course this would not be full invisibility (a la thermoptic camouflage, to all of us GITS fans).
Anyway, if you bent 75% of the light, and prevented the reflection of the other 25% (by absorbing it, which could be in your eyes), you probably would be able to see well enough, and would be fairly well hidden (there would still be a 'dull' spot). The problem would be linearity: making sure that the diverted light rays followed the same path they would if they were not diverted. And even then, there would probably be a detectable phase shift of the light due to the longer path!
Full invisibility is still sci-fi, and will be for some time.
BT retail or wholesale
It's funny. I am a Virgin Media ADSL customer. The service comes via BT Wholesale. What is being described is exactly the situation I see, and have been blaming on Virgin. If BT are applying this policy to their wholesale customers (i.e. you buy your ADSL service from another ISP, but the ADSL link to your local exchange is run by BT), then I have been unjustly accusing Virgin. At almost precisely 23:00, the overall download speed jumps up from around 50-60KB/s (as measured by my firewall traffic monitors) to 200-600KB/s (and sometimes faster). I would suggest that it was people going to bed, if it were not for the abrupt change at such an obvious time. And I see a progressive slowdown in the morning as well.
My house's traffic (I don't call it mine, because we are a switched-on house of six internet users, with more than one computer/network device per household member) consists of Steam, WoW, Fantasy Star Unlimited (and other) games, Wii and DS internet-connected games, Skype, mail, torrents (Linux distros - honest), tunnelled services through SSH (inbound and outbound, including SMB file access and printing), BBC iPlayer, Sky Anytime, YouTube and other Flash video sites, internet radio, system updates (Ubuntu, Mac, and Windows), system updates for the purposes of my business (AIX and related fixes from IBM Fix Central), VPN access to my clients' systems, and oh, I nearly forgot, some web browsing.
So the vast majority of my traffic will be legal and justified use (I can't always keep track of exactly what everyone is doing), and much of it will NOT be done over port 80. I suspect that many people actually have far more non-port-80 traffic than they think. So if they are shaping all my traffic because of my torrent downloads, I think I have a right to be upset.
I bought my 701 on the day it became available, and I have used it nearly every day since to supplement my IBM Thinkpad, for a whole host of different things.
For me, the primary thing is the size. It is still so small compared to almost everything on the market. I don't think I would have bought it if it was larger, or if it were more expensive.
I have dumped Xandros, however, as I kept getting the system into a state where it wouldn't boot, because there is a strange problem with the UnionFS committing transient files to the read-only copy so that you cannot recover the disk space. I'm sure that there must be a config problem there somewhere.
I cannot believe that the larger/more expensive models will actually have the same WOW factor as the original, and that is what sold it.
Not sure that it was the network bandwidth that killed diskless workstations from Sun, IBM, Apollo, Whitechapel et al. After all, you normally only boot a system once a day (paging excepted).
The reason why diskless was appealing was that disk prices then were high. I remember being quoted over £500 to add a 60MB disk to a Sun 3/50. When it became cheap to put a disk into a diskless system, all the cost advantages went away, and you were left with all the bandwidth costs and no advantages. Add to that the increasing concern about the security of NFS, and suddenly things began to look scary.
As a sysadmin, however, the fact that you had total control of a diskless system (like implementing patches once for all your diskless clients) was very attractive. And it gave you a really good reason to say No! to users wanting root access to the system on their desk. Also, remember that all the systems looked EXACTLY THE SAME, so if a desktop system blew up, you told the user to switch desks, or dropped another one on the desk, and the user would not know the difference. Streets ahead of Roaming Profiles.
I'm really a little sad that no-one has resurrected the idea using a virtualisation technology, although I believe that it fits the UNIX model better than MS's current offerings.
I'm sure that you could have a kernel stored in flash that would check on boot to see whether it needed to update itself, and then attach all its resources from LOCAL servers. Sounds like an interesting Linux-based research project, although I'm sure that some of the old X terminals used to do something similar.
Mine is looking very tattered after having been worn for so long!
Random number generator
Come on, all of you. All you need is a Bambleweeny 57 Sub-Meson Brain attached to an atomic vector plotter suspended in a strong Brownian motion producer, say, a really hot cup of tea! (RIP Douglas)
I had an experience with an educational computer-controlled robot arm that used IR sensors to make optical shaft encoders for the motors (it was a really good design of arm that did not use stepper motors, as was the rage at the time, but proper electric motors, so was much faster and more impressive, and with six separate independent movements). It worked really well, but unfortunately the IR emitter/detectors were covered in translucent plastic, which, when used in direct sunlight, caused ALL of the active motors to run to the end-stops of their respective movements. The whole arm contorted and dumped itself off the bench, which led to red faces and a difficult-to-justify repair bill!
What I want to know...
... is how many of the machines that were surveyed are older than 18 months.
Most businesses will not upgrade the OS on an existing PC. It makes no sense, for two reasons: one being that the system is already partway through its lifetime (and asset depreciation), so why spend new money to replace the OS when the existing OS still works; the other being that the machine will probably be less productive with Vista than it is currently with XP. Add to this the fact that buying a new system means it comes automatically with Vista, and may well be cheaper in real terms than the system it replaces, and you will realise that very few businesses will do anything other than move to Vista when they replace the PC.
So the real question should be how many of the business systems deployed since Vista hit the market are still running XP. Anybody any idea?
@ all the ACs
There really ARE legal P2P uses. Linux distros are the one I use, but I'm sure that one of the major IT manufacturers proposed a P2P method to distribute fixes, and the BBC dabbled with it for the original iPlayer.
But for all you anonymous 'experts' out there, how do you 'ban' P2P? You can stop Grokster, Kazaa, eDonkey, Overnet, BitTorrent, LimeWire (does it still exist?) along with all the *CURRENT* P2P applications, but hey, TCP/IP, which the net runs on, allows point-to-point data stream connections between two machines. All you have to do is come up with another P2P protocol which has not been seen yet, and you have got around the filter. Or is there a magic piece of technology that I don't know about that can look at a random data packet and go "Ha, this is part of a P2P stream. Quash"?
And if the P2P designers really wanted to be clever, it would be possible to devise a UDP/IP protocol using stateless connections, with out-of-order packets routed via multiple hosts using different ports, possibly with each packet encoded differently. Block that!
It becomes a technology war, with the side with the larger number of cleverer deep-hackers winning. I would place my money on the P2P designers, quite honestly, as these people work without financial reward (the other side needs salaried people). And if it is decided that such applications become illegal to write, then you end up with a locked-down Internet, where a new technology like the World Wide Web (as it was when it was new) can never happen again. I think people forget what the Internet was like using FTP, Archie and Gopher. There *will* be new killer apps that will change the Internet overnight that we cannot yet imagine.
I'm sure Microsoft, Google et al. would love it if governments gave them the decision-making power over what we can run on the Internet. Think of the revenue-generating power that they would be handed.
And the AC who believes that it can be filtered at source just does not understand how P2P and computers in general work. What constitutes the start of a transmission, if you are getting parts of the work from a dozen different P2P systems around the Internet? P2P is NOT a client-server model (the clue is in the name: Peer-to-Peer).
Goodbye, freedom. Stop the planet, I want to get off.
You obviously have never watched an effective teacher teach a class using computers, even if you appear to be in education.
It starts with instructions about how to start the app, read from a crib sheet (or often the teacher or teaching assistant setting the programs up before the class starts), and then continues with using the app, which is probably OS-agnostic. What use is detailed Windows knowledge in this type of class?
In the UNIX/Linux world, it is possible to set up specific user accounts that just launch the required application. So the instructions become "At the login prompt, type the name of the application, and watch it launch". No specific user IDs per child, you can lock the ID down so that even if the kids find a way to break out, they can do no damage, and the teachers DO NOT NEED TO KNOW ABOUT THE SPECIFICS OF THE COMPUTER.
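The mechanism is nothing exotic: the account's login shell IS the application. A sketch, with made-up names (the app path, the wrapper name and the account name are all examples); the useradd step needs root, so it is only shown as a comment here.

```shell
# Sketch: a classroom login whose "shell" just launches one teaching app.
# Written to a scratch directory so this is safe to run as a demonstration.
bindir="$(mktemp -d)"

# The wrapper that becomes the account's login shell: start the app, and when
# it exits the session ends -- nothing else on the machine is reachable.
cat > "$bindir/mathgame-session" <<'EOF'
#!/bin/sh
exec /opt/teaching/mathgame   # hypothetical application path
EOF
chmod 755 "$bindir/mathgame-session"

# As root, the shared classroom account would then be created with the
# wrapper installed somewhere permanent and set as the shell (not run here):
#   useradd --create-home --shell /usr/local/bin/mathgame-session mathgame
```

Because `exec` replaces the shell with the application, there is no shell left to break out into when the program exits; the kids just get logged out.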
Your points about supportability only count if your support staff are only trained in Windows. Why should they not learn about Linux? It's not like your average teacher who uses Windows XP Home or Vista Basic at home is likely to be able to support a network of Windows systems without additional training. And unlike your average IT professional, they don't CARE about whether it is Windows or not, just that it works like the manual and procedures say it should. They are TEACHERS, for goodness sake!
It is possible to lock Linux down so that it will never change until some deliberate action changes it. Try doing that on XP, or even Vista, where some program or other will require Admin rights, and this is likely to open vectors for system corruption in the classroom. If you are really that concerned, you can effectively make your Linux PC a thin client, or even both thin and fat, determined by which user you log in as.
And as for applications, UNIX/Linux programs work from network shares much better than Windows ones, and have done since Sun said that "The network is the computer" in the 1980s. There are LOTS of people who understand how to write applications for UNIX-like OSes that can pick up all of their code and configs from relative or non-specific paths, making the way that the shares and mounts are accessed less important. This means that the apps DO NOT EVEN HAVE TO BE INSTALLED ON THE PCs. Just mount the share or remote filesystem read-only during system boot, and go.
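The "mount it read-only at boot" part is a single fstab line. A sketch, with placeholder names (the server name and both paths are examples):

```
# /etc/fstab -- applications exported once from a central server, mounted
# read-only on every classroom client at boot (placeholder names).
appserver:/export/apps  /opt/apps  nfs  ro,nosuid,hard  0  0
```

With `ro` the clients cannot corrupt the apps, and `nosuid` stops anything on the share being used to escalate privileges on the client.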
And the best teaching software is bespoke, or at least written specifically to support the subject. The BBC Micro, rest its silicon chips, had huge amounts of subject-led programs available, often written by the very people who used it. When schools started installing IBM compatibles, many, many teachers found that there were too few subject-led programs available, and the PC was too complex and had too few development tools available to allow the teachers themselves to write the simple but specific programs they needed. This may have changed now, but there was a generation of teachers who cut their teeth on Beebs but felt that the new computers in their school were inaccessible or of limited use for their subjects.
I've seen too much "Computer" teaching ending up being teaching particular packages, often Windows ones. Computers SHOULD be used as a tool to support other subjects, not as an end to themselves, except in ICT classes. And these should teach more about HOW computers work, rather than just how to use them.
I've now left the education field, but I have three children at various levels of education, and nothing much seems to have changed since I was there. I have heard of schools which have embraced open systems very successfully, and only use Windows for a few packages for which there are no representative alternatives, but in most schools the Windows momentum is difficult to deflect.
Where's the PalmOS version?
Call me a bigot, but I won't buy anything for myself running any type of MS Windows unless I have no option. And I'll think long and hard about it even then.
Roll on the Linux version with a PalmOS frontend and a Dragonball or ARM emulator, but hurry, my 650 is beginning to go west.
Don't know how many people remember Elite on the 32K BBC B, but it was a revelation when it first came out. Real-time hidden line removal on an 8-bit micro running at 2MHz with not a GPU in sight! When I first saw it I was amazed, as I had been playing around on the BEEB in assembler to do 3-D wireframe, and I could only get simple objects (cubes mainly and other regular objects) without hidden line removal running at 2-3 frames a second. But I think that the main problem was that I was using the OS line-drawing primitives, whereas Elite used a quick and very dirty algorithm.
I would love to know how they did the hidden line stuff so fast on a limited system. They weren't even using colour switching to hide the drawing (Elite ran split screen, with the top 3/4 running effectively in mode 4 [actually a hybrid mode 3] with 1 bit per pixel == 2 colours, and the bottom running in mode 5 with 2 bits per pixel == 4 colours), except when you were using a 6502 second processor, when it ran in mode 1 all the time. Really used the available hardware to its best.
This Power7 monster IBM is proposing sounds like you will need serious communication skills to get the best from it. Makes the p4 and p5 stuff I am working on at the moment look a bit lame.
I don't think that anything in the Ubuntu package manager will actually uninstall modules that you have built, but what it will do is update the kernel and not rebuild the modules that you have added. New kernel versions mean new module directories which will not contain your modules. The old ones will still be there in the /lib/modules/<version> directory, and if you boot the relevant kernel (ever wondered what all those extra entries in the GRUB boot screen were?), they will still work.
I do agree that this is difficult behaviour to get to grips with. I put 8.04 on my main laptop (previously running Dapper 6.06) the weekend after the full release, and have had at least 4 minor kernel upgrades since, which have meant that I have had to re-compile (or at least re-copy) the aironet module that I use for my Three 3G network dongle to speed up network access (it's patched with the USB id of the dongle).
Provided that the kernel update is a minor release (the 4th number of the version number), there is an extremely good chance that your module will work without recompiling. Alternatively, you can lock the kernel and kernel-module packages so that they will not be upgraded, but this means that they will not get any patches. Fire up the Synaptic package manager from the System->Administration menu, and press <F1> to get some help.
To locate where the module you want is installed, assuming you know the name of the module, you can use "find /lib/modules -name '*mod-name*' -print" (where mod-name is the name of your module). You can then identify the version of the kernel with "cat /proc/version", and work out from this where to copy the module. Please note that this is all command line stuff, and is not a full procedure, but with the correct amount of applied thought, you should be able to work it out.
Sorry, I know that this is not a tech-assistance forum. I'll try to just keep to comment in future.
AMSTRAD and VIGLEN
Sir Alan bought Viglen out when they got into financial difficulties sometime in the '90s as I recall. All of the BBC stuff was pre-Sugar, although it was the BBC stuff that made them famous. I still have a working 40/80 switchable double sided drive that I bought back in 1983. Bare TEAC drive with a plastic sheath case, plastic back plate, and 2 cables with the correct ends on.
Viglen became a reputable supplier of reasonable PCs to business after they moved on from the BBC stuff. I was surprised when they had one of the first 486-DX systems reviewed in the UK.
Amstrad used to make real hi-fi separates before the card-boxy things. I had an IC2000 amp and IC3000 tuner which were paper-covered chipboard and plastic, but the metal chassis and electronics weren't actually that bad. Beef up the power supply with a large electrolytic capacitor to knock out the hum and you had 25 watts RMS per channel which could drive significant amounts of current.
Following that, they had metal-cased separate amps, tuners and cassette decks, in silver and black, but I thought they looked a bit tacky.
They also did a strange turntable, which looked a little like a Rega Planar (wooden plinth, speed change by moving the belt by hand, external belt drive), but had a strange three-armed turntable with hexagonal pads on the ends of the arms that did not support the LP at all. I wondered what one would have sounded like with a Rega glass platter sitting on it.
You really need an Old Fart icon here!
The Steam games are very difficult to pirate, as they are tied to single use product activation keys. It is possible to install them on more than one computer, but if you are connecting to the Steam servers for multi-player games, then you have to log on, and each activation key is registered against a single sign-on ID. It won't allow you to register a key against two accounts.
If you try to set up a LAN game with the same copy on two PC's, again they will tell you, and the second one won't start.
Pi is Pi Gordon?
It will be equal to itself (this is axiomatic), but it is NOT 3.14159, although you could make this statement true by saying it is 3.14159 rounded to 5 decimal places, or to 6 significant figures.
Pi is an irrational number (i.e. it cannot be represented as a fraction, and consequently its decimal expansion neither terminates nor repeats), so it cannot be completely accurately represented on paper or in a computer.
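You can see the rounding at work with the shell's printf (a trivial sketch; the constant is just pi typed in to 14 decimal places):

```shell
# Pi rounded to 5 decimal places reads 3.14159, but asking for more
# places simply reveals more digits -- there is no exact finite form
printf '%.5f\n'  3.14159265358979
printf '%.10f\n' 3.14159265358979
```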
But, back to the story. All of you who state that it is impossible to have a completely secure OS are generalising. It should be possible to make a completely secure OS, but the cost of doing it makes the feat impractical. But UNIX-like OSes have a distinct advantage over pre-Vista versions of Windows, because the security model that has existed in UNIX-like OSes for over 30 years expects that most work is done as a non-privileged user without access to large parts of the system.
Even a patchy webserver can be made to run as a non-privileged user, with read-only data, so the system as a whole is unlikely to be compromised.
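Apache's own configuration illustrates the point: the parent process binds the port as root, then the worker processes run under a dedicated non-privileged account. The directives are standard Apache httpd ones; the account name varies by distro:

```
# httpd.conf excerpt -- workers drop privileges to a dedicated user
# ('www-data' on Debian-style systems; 'apache' on Red Hat-style ones)
User  www-data
Group www-data
```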
Of course, if you have a means to administer/patch the OS, social engineering can ALWAYS be used to compromise the system. I'm not saying that these OS's are completely secure, but they have fundamental advantages.
If you were to have a system with no mechanism to patch the OS, and the OS was stored in ROM and could not be changed, and there was no way to re-vector OS calls, and you were not able to run any code that was not shipped with the OS, and you made the system functionally frozen, and you put an encrypted filesystem in place, encrypted by a physical dongle, then it is unlikely that anybody would break in. But this would be more like an appliance than a general-purpose computer. But maybe that is what is needed by the majority of current users.
Putting any ease-of-use feature in an OS (although you could argue that the user interface is separate from the OS proper) puts a system at risk. Obviously, any remote desktop tool has the scope to be a way into a system, and having a general-purpose scripting language could also make a system vulnerable.
Think's ain't what they used to be!
I've had Thinkpads for 10 years or so, mainly bought reconditioned, and everything after the T23 is flimsy. I still have a 10+ year old 380XD running as a firewall.
Granted, bits always fell off eventually, but none had ever let me down by not working on site until I got a T30 (the first model built in the Far East, I believe). The T41s and T42s appeared a bit better, but I do not like the T60s at all, especially the widescreen ones, which is why I haven't bought one!
I really don't know what I will get next. Never mind. My current machine (again a T30, because the disk and DVD-Writer could be just swapped over) runs Ubuntu 8.04 quite well enough for the moment, as long as the one remaining working memory socket stays soldered to the MB. Won't be upgrading the Windows partition to Vista, though.
How about this true story
About 15 years ago, I was working in a major UNIX vendor's support centre, and took a call about the colours being wrong on the screen. After going through all of the X colour (color?) maps and everything else I could think of, we found that red was coming out blue, and vice versa. Green was OK. In desperation, I suggested that the customer unplug the monitor from the computer, and plug it back in.
After some noises from the end of the phone, an amazed customer came back on the line, saying that when he removed the plug, it was incredibly stiff, and he found that it had been plugged in upside down! Quite how that could have happened by accident is beyond me.
P.S. it was not a 15-pin high-density D-shell VGA connector; it was a D-shell with three mini-coax connectors in it, one each for red, green, and blue, with sync on green, like below.
\ o o o /
When plugged in upside down, the mini-coax plugs connected, but the surrounding D-Shell must have been well and truly bent out of shape!
It's funny, all the collaborative things you mention were pretty much invented by IBM. The document sharing, mail, and shared calendar features were first seen in the PROFS, or NOSS, internal office system that IBM used. At the time, it was all done on mainframes and 3270 terminals, but it eventually was ported to OS/2. I'm talking about a product (the mainframe version) that existed BEFORE PCs were actually made.
It's funny that it took a long time for some of the features to appear in Lotus Notes, which was supposed to replace PROFS. But I think that most things now appear in Domino, when used with an up-to-date Notes client. The problem is that most people see Notes as just a quirky mail client, rather than the revolutionary collaborative tool and application platform that it actually was and is. But I will admit that Notes used to wind me up when the replication I asked for didn't happen, leaving all my outgoing mail stuck on my Thinkpad.
I was in IBM 12 years ago, and at that time, it was OS/2 and Lotus SmartSuite that came loaded automatically on any Wintel system. If you wanted Windows and Office, you had to have a really good business case, and Office on OS/2 was not really well supported (probably due to Microsoft using secret API calls when running on Windows).
When OS/2 fell out of grace at IBM, there was a time when SmartSuite on Windows was tried, but as most of IBM's customers were Office users, document exchange became a problem (there were SmartSuite filters to open and write Office formats, but they were not included by default, and were not 100% effective). It became a straight choice for users between SmartSuite and Office, and Office won, as in the rest of the world.
IBM then tried to make SmartSuite (and Lotus Notes client, the Email part of Notes) more popular with a giveaway program on magazine cover disks, but that did not work either, so the package died, albeit a slow, lingering death.
So for about 10 years, IBM has been using exclusively Office, buying corporate licenses at whatever cost Microsoft felt like charging them.
If IBM can make even some of their own users give up Office, so that a smaller license fee needs paying, then they can only gain. And with ODF being a hot topic at the moment, it gives the possibility of some free news space. Not sure how the targeted users will react, however.
I avoided Office, using SmartSuite after I left IBM, and switched to StarOffice and then OpenOffice when I decided to use Linux as my primary OS (I'm a UNIX consultant). And now, when I have to use Office as part of my work on client-provided systems, you cannot imagine how annoying and difficult I find it. The lack of any common sense in things like font handling and styles when cutting and pasting between documents, everything moving around when new releases come out, and some very strange behaviour when trying to adjust complex numbered lists just astound me. I could list at least 50 things that I cannot stand. Quite how the software passes ergonomic testing escapes me.
So I think that IBMers should embrace a move from Office. Let's break Microsoft's monopoly. Of course, I would actually like to move back to Memorandum Macros and Troff, or possibly LaTeX for documents, and who needs spreadsheets anyway!
... that Vista on new hardware designed for it is stable and capable, but unless your old system REALLY rocked, it is probably better off with XP.
I do not want to enter a flame war. If you like Vista, stick with it. If you don't, use XP (or Linux). It really is horses for courses.
But it would be nice if Microsoft would continue to produce security patches for XP for all of the people who have perfectly capable XP systems that will not take Vista, or who do not want to pay for what is probably an unwanted upgrade.
I would hate to think that people started discarding perfectly serviceable systems because their bank or some other organisation they deal with decides that XP is not secure enough once it goes out of support. Think of the damage to the environment from all the plastic, heavy metals and CRT screens.
They cannot even be used with their Microsoft OS if they are donated to charity, as the Windows EULA does not allow it.
Maybe there is an opening for offering recycled kit with Ubuntu loaded on it at budget prices.
Surely, even 5% should be too high for new products. That's 1 in 20 items faulty.
Is it really the case that it is regarded as costing less to ship a faulty device from China, and dispose of it in Europe, than testing it before it is shipped? They sure as hell don't rework and fix faulty devices in Europe unless they are high cost items.
Or maybe the shipping causes the damage. So much for the value of the blister pack and expanded polystyrene.
Something not right
I don't know. These bigger EeePCs just do not look right after the 701. I wish that they had produced a model with a screen with more pixels, without the bezel, but in the same case.
I can cope very well with the keyboard on a 701, but the screen just does not have enough space, even for some of the default menus.
I have a couple of XP systems that have games installed and run by the kids.
There is a significant problem with the security model, particularly with XP Home running on NTFS, where you need admin rights to install DLLs and other config files on the system, and the permissions are then set so that you need admin rights to read/change those files, and to write any save files. It does not affect FAT32, because that filesystem lacks the extended access controls that give you the security (and problems) that running as a non-administrative user provides.
The additional problem with XP Home is that you do not have the extended policy editors for users and groups, or to manipulate the file attributes on NTFS. I suspect that even if XP Pro was being used by the majority of users, they would not want to get involved with this type of administration anyway.
It is possible to do some of the work with cacls from a command prompt, but it is very hard work. I have not found any way with XP Home (without installing additional software) to manipulate the user and group policies at all.
My current solution is to create an additional admin user, and then hide it from the login screen (with a registry key change). The kids can then do a right-click, and then a "run as" to this user for the games.
This is still insecure for a number of reasons. They already know that they can use this account to run any command with "run as", and the system is still as vulnerable to security flaws within a game, but it is a half-way house.
Unfortunately, it does not appear to work 100% of the time. I tried using it to install "Blockland", which is a game that allows you to create a first-person role-playing game in a world built out of something similar to Lego, and it would appear that there is an access-rights inheritance feature (read: problem) that I don't know enough about to fix. It installed OK when run directly from an admin login, so I did not pursue it.
I must admit that this type of problem scares me, especially if similar issues exist in SELinux (I normally have it disabled), but I guess that I am just resisting change to a system I understand well. Role-based authorisation is definitely the way forward, but it is just so difficult to accept this type of change.
@AC about cat - if you are still reading
This is UNIX we are talking about, almost all things are possible, although I suspect that your ksh loop may well run slower than cat.
I take your point about 'command' and 'program', sloppy thinking on my part. But that sloppy thinking runs through the entire UNIX history. Check your Version 7, System V or BSD or AIX or any other documentation, and you will see that 'cat' appears in the "Commands" section of the manual (run "info cat" on a GNU Linux system, and see the heading. Section 1 "User Commands")
Interestingly, your one-liner does not work exactly as written on AIX (a genetic UNIX), as its echo does not have a -e flag. Still, you probably don't want that flag if you are trying to emulate cat. I have used echo like this in anger, when nothing but the shell was available (booting an RS/6000 off the three recovery floppy disks to fix a problem, before CD-ROM drives were in every system).
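For completeness, here is one way to fake cat in nothing but the shell (my own sketch, not necessarily the one-liner under discussion) — and being line-at-a-time it is indeed much slower than the real thing:

```shell
# A pure-shell 'cat' substitute: copies stdin to stdout line by line.
# POSIX sh/ksh/bash compatible; note that a final line with no
# trailing newline would be dropped by this naive version.
shcat() {
    while IFS= read -r line; do
        printf '%s\n' "$line"
    done
}

printf 'first line\nsecond line\n' > /tmp/shcat-demo.txt
shcat < /tmp/shcat-demo.txt
```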
I was not really ranting, I was trying to put a bit of perspective on the comments, from a historical point of view. I'll bet you would find a need to complain if cat was really not there on a distro.
Sorry, I did miss the lighthearted comment. Still, just a bit of fun between power users, eh!
Myself, I try to stick to a System V subset (vanilla, or what), mainly because it is likely to work on almost all UNIX from the last 20 years. When you have used as many flavours as I have, it's the only way.
Yes, I've been around and yes, I am still making a living out of UNIX and Linux, so don't feel I'm a complete dinosaur yet. And I am also open enough to use my name in comments (sorry, could not resist the dig).
Wireless and Linux
I can get Hermes, Orinoco, Prism, Atheros and Centrino (ipw2200) chipsets working out of the box with any Linux that supports the GNOME Network Manager (which I have installed on Ubuntu 6.06 - it's in the repository). I can get the Ralink and derived chipsets working for WEP without too much trouble, but it takes some effort to get WPA working, which most people will not be able to sort out themselves.
Where there is a weakness is in the WPA supplicant support. Atheros and Centrino chipsets with Gnome Network manager will do it, and in a reasonably friendly way.
I am using a fairly backward Ubuntu release, so I suspect that it will be a little easier in later releases. I know that the normal network system admin tool in the menu does not work with WPA at all in Ubuntu 6.06.
Where the problem lies is that with a card intended for Windows, the user gets the nice little install CD, which takes away from them all the hassle of deciding which chipset is being used.
Modern Linux distributions probably have the ability to drive almost all of the chipsets in use out of the box, and also have the NDIS wrappers as a fallback, but you need to be able to identify which chipset is in use to make useful decisions.
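The usual starting point for that identification is lspci (or lsusb for USB devices). Since the output differs on every machine, the sample line below is made up just to show what you are grepping for:

```shell
# Real usage would be:  lspci | grep -i 'network controller'
# Simulated here with a canned sample line:
cat > /tmp/lspci-sample.txt <<'EOF'
02:00.0 Network controller: Atheros Communications, Inc. AR5212 802.11abg NIC
EOF
grep -i 'network controller' /tmp/lspci-sample.txt
```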
If manufacturers provided the details of the internal workings of the card (basically the chipset details), or even gave the same degree of care to installing their products on different Linux distros, as they do on different Windows releases, then I'm sure that there would be less discontent amongst non-hardcore Linux users.
I know that this is hampered by the plethora of different distributions out there (see my earlier comments), but it should not be rocket science.
An additional complication is that if you go into your local PC World (assuming it is still open after Thursday) and ask for a Wireless PC-Card using the Atheros chipset, you will get a blank look from the assistants, as they will understand "Wireless" and may understand "PC-Card" (but you might have to call it a PCMCIA card), but Atheros might as well be a word in Greek (actually, it probably is).
And it is complicated by manufacturers who have multiple different products, with the same product ID, using completely different chipsets (if you are lucky, you may get a v2 or v3 added to the product ID on the card itself, but not normally on the outside of the box).
If you definitely want to get wireless working, I suggest that you pick up one of the Linux magazines (-Format or -User) and look for adverts from suppliers who will guarantee to supply a card that will work with Linux, or keep to the Intel Centrino wireless chipset that fortunately is in most laptops with Pentium processors.
If your laptop uses mini-PCI cards (under a cover, normally on the bottom of the laptop) for wireless expansion, then there are many people selling Intel wireless cards on eBay for IBM Thinkpads (2915ABG) that will probably work. That's what I am using, and it works very nicely indeed.
Who needs cat?
I know that this is absolutely geeky, but cat is a command (like ls, find, dd, ed etc.) which has been in UNIX since its inception. I have been using it since the 1976/77 Bell Labs Version 6 release for the PDP-11. Long before ksh and bash (in fact, the Version 6 shell was *really* primitive, only able to use single-character shell variables, for example).
It actually does a lot more than you think. Look up the -u flag, and with a couple of stty settings, you can make a usable, if very basic, terminal application (one cat in the background, one in the foreground).
Try doing a "cat *.log > logfiles.summary" using your ksh one-liner.
How about "ssh machine cat remotefile > localfile" for a simple file copy.
Also, cat has an absolutely minuscule memory footprint (the binary is just over 16KB on the Linux system I'm using).
It is one of the fundamental Lego-style building blocks that make UNIX so powerful. Whilst it is true that other tools are around that can do the same job, you cannot remove it, because of compatibility with old scripts. And you can guarantee it is there all the time on any UNIX variant (try running a script that starts "#!/bin/ksh" on a vanilla Linux system). And it is in every UNIX standard from SVR2 (SVID, anybody?) and POSIX 1003.1 to whatever the current X/Open UNIX standard is (2006?).
Remember that the UNIX ethos is "small, efficient tools, used together in pipelines". Even things like Perl are anathema to UNIX purists, because they do everything in one tool.
I think you need to see a real UNIX command-line power user at work. I have literally blown people's minds by doing a task in a single pipeline of cut, awk, sed, sort, comm, join etc. that they thought would take hours of work using Excel or Oracle.
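A tiny made-up illustration of the style (file names and contents invented for the demo) — two keyed 'tables' merged and tidied in one pipeline:

```shell
# Two files keyed on the first field; join matches the keys,
# awk drops the key column, sort tidies the result.
printf '1 alice\n2 bob\n3 carol\n' > /tmp/users.txt
printf '1 london\n3 paris\n'       > /tmp/sites.txt
join /tmp/users.txt /tmp/sites.txt | awk '{print $2, $3}' | sort
# -> alice london
# -> carol paris
```

Note that join expects both inputs sorted on the join field, which they are here.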
Mine is the one with the treasured edition of "Lions' annotated Version 6 UNIX kernel" in the pocket.
Proud to be celebrating 30 years of using UNIX!
@AC and others
I used to be a committed RedHat user for my always-with-me workhorse laptop(s), from 4.1 through 9.1 (or was it 2), and when Fedora came along, I got fed up with the speed at which Fedora changed. You just could not use a Fedora release for more than about 9 months and still expect the repositories to remain for that package that needed a library you had not yet installed.
Also, when you update, you pick up a new kernel, and all of the modules that you had compiled need frigging or recompiling (my current bugbear is the DVB-T TV adapter I use).
I switched to Ubuntu 6.06 LTS mainly because I liked the support that they promised, and have delivered. Also the repositories are extensive, and are maintained.
Here I am again, two years later, and I can remain on Dapper if I want to (for quite a while), but I am finding that it is taking longer and longer for new things to be back-ported, and I have had problems getting Compiz/Beryl (or whatever the merged package is called) working with GLX or the binary drivers from ATI.
I am going to go to 8.04 (LTS again) for the same reasons as before, and I am removing the last remains of the RedHat 9 from my trusty Thinkpad T30 (the disk has moved/been cloned several times, keeping the machine the same, just on different hardware - ain't Linux good).
I wish there was an upgrade from 6.06 to 8.04, but I guess that one re-install every two years is not too much to put up with, especially as I choose to keep my home directory on a separate partition.
I may give Fedora 9 a try on USB stick, just to see how things have changed, but I think that Ubuntu is still my preferred choice. This is mainly because I use my laptop as a tool, not as an end in itself. I just do not have the time to be fiddling all the time.
I know that this is petty, but I feel that we absolutely need ONE dominant Linux distro, so that we can achieve enough market penetration to make software writers take note. Ubuntu is STILL the best candidate for this as far as I can see, because of its ease-of-use, good support, and extensive device support.
If the Fedora community want to come up with a long-term release strategy, then I think that they could move into this space. But most non-computerate users will generally keep the OS a system came delivered with for the lifetime of the machine, and if they have to perform a major upgrade, most will discard the machine and buy a new one. This means that we need distributions with an effective lifetime of several years to get the needed penetration.
Length of cable supported depends on the SCSI variant being used.
I used Fast-Wide Differential SCSI-2 cables that long and yes, they were in spec, and yes, they worked. LVD SCSI-3 allows even longer cables, I believe.
I seem to remember that SCSI-1 allowed a maximum length of 3 metres from terminator-to-terminator, but on many early midrange systems, the external SCSI port was on the same bus as the internal devices, and once you measured the internal cables that ran to all the drive bays, you often had less than 1 metre available for external devices.
The biggest problem was that all the different SCSI variants used different connectors and terminators, and even when you used the same variant, manufacturers often had their own (often proprietary) connectors (IBM, hang your head in shame!)
I hope that some of the smaller Currys stores survive, although this is unlikely, as they are about the only electrical retailer left on the high street (I know there is the Euronics consortium of retailers, but somehow, they are just not the same). Sometimes, you just have to pop out and buy a toaster/hoover bag/unusual battery in an emergency, and if I have to trek 2x25+ miles to the nearest large town, the only one that will benefit is HM Treasury, due to the fuel duty. I know I won't.
I'm with most of you on PC World, however. I have often had to bite my tongue if I am ever in one of the stores listening to advice being given to customers. I once had an open row with the supposed network expert about the benefits of operating a proper firewall on a dedicated PC vs. the built-in inflexible app that is in most routers. Eventually, I had to play the "I'm an IT and network consultant, and I know what I'm talking about" card just to shut him up.
Tux, because he makes UNIX-like OSs available to all.
I loved all of the PDP-11/34s I used. Of course they were not as reliable as more modern machines (or even 11/03s and 11/44s), but then they were built out of 74LS-series TTL on a wire-wrap backplane with literally thousands of individually wrapped pins. If I remember correctly, the CPU was on five boards, with the (optional) FPU on another two. Add DL11 or DZ11 terminal controllers, RK11 or RP11 disk controllers, and TM11 tape controllers, and you had a lot to go wrong.
I suspect that all of the Prime, DG, IBM, Univac, Perkin-Elmer and HP systems of the same time frame had similar problem rates, especially as they were not rated as data-centre-only machines, and would quite often be found sitting in closed offices or large cupboards, often with no air conditioning.
It was quite normal for the engineers to visit two or three times a month, and we had planned preventative maintenance visits every quarter.
But the PDP-11 instruction set was incredibly regular (I used to be able to disassemble it while reading it), and it was the system most universities first got UNIX on. It had some quirks (16-bit processor addressing mapped to 18- or 22-bit memory addressing using segment registers [like, but much, much better than, what Intel later put into the 80286], the Unibus map, separate I&D space on higher-end models). OK, the 11/34 had to fit the kernel into 56K (8K was reserved to address the UNIBUS), but with the Keele overlay mods to the UNIX V7 kernel, together with the Calgary device buffer modifications, we were able to support 16 concurrent terminal sessions on what was, on paper, little more powerful than an IBM PC/AT.
It was a ground-breaking architecture that should go down as one of the classics, along with IBM 360, Motorola 68000 and MIPS-1.
Happy days. I'll get my snorkel parka as I leave.
Chasing the money
A lot of you commenting on a scientific gravy train obviously don't know how scientific grants are awarded.
If you are a research scientist in a UK educational or Government-sponsored science establishment, you must enter a funding circus to get money for your projects. This works by you outlining a proposal for the research you want to carry out, together with the resources required. This then enters the evaluation process run by the purse-string holders (UK Government, science councils, EU funding organisations etc.). Inevitably, the total of all of the proposals would cost more money than is available (just look at the current UK physics crisis), so a choice must be made.
The evaluation panels are made up of other scientists with reputations (see later), but often also contain civil servants, or even Government Ministers. They look at the proposals and see which ones they are prepared to fund. As there is politics involved, there is an agenda to the approvals.
If there is a political desire to prove man-made climate change, the panel can choose to only approve the research that is likely to show that this is the case.
So as a scientist, if you want to keep working (because a research scientist without funding is unemployed - really, they are), you make your proposal sound like it will appeal to the panel. So if climate change is in vogue, you include an element of it in every proposal.
The result is funded research which starts with a bias. And without a research project, a scientist does not publish creditable papers, does not get a reputation, and is not engaged in peer review, one of the underlying fundamentals of the science establishment. Once all of the scientists gaining reputations in climate study come from the same pro-climate-change background, the whole scientific process gets skewed, and doubters are just ignored as having no reputation.
If there was more funding available, then it is more likely that balanced research would be carried out, but at the moment, the only people wanting to fund research against manmade climate change are the energy companies, particularly the oil companies. This research is discounted by the government sponsored Scientists and Journalists as being biased by commercial pressures.
More money + less Government intervention = more balanced research. Until this happens, we must be prepared to be a little sceptical of the results. We ABSOLUTELY NEED correctly weighted counter-arguments to allow the findings to be believable.
Please do not get me wrong. I believe in climate change, but as a natural result of causes we do not yet understand properly (and may never, as suggested by the research of the recently deceased Edward Lorenz), one of which could well be human. Climate change has been going on for much longer than the human race has been around, and will continue until the Earth is cold and dead.
I am a product of the UK Science education system to degree level, and have taught in one such establishment too, so please pass me the tatty corduroy jacket, the one with the leather elbow patches.
Like many modern filesystems, NTFS has the concept of complete blocks, and partial blocks.
Sequential writes will result in complete blocks being used for all but the last block. In order to maximise disk space, the remaining bit at the end of the file is written to a partial block, leaving the rest of the block containing the partial block available for other partial blocks. Confusingly, these partial blocks are called fragments. I don't know about the NTFS code in XP and Vista, but with other OSs the circumstances in which fragments are promoted to full blocks are fairly rare under normal operation, so over time the number of full blocks split into fragments will increase.
When reading a file that has been extended many times, you end up with blocks in the middle of the file that, instead of being stored as whole blocks, are held in multiple fragments. Each fragment needs a complete block read, so a single block divided into four fragments (for example) needs four reads instead of one, probably with four seeks as well.
When you defrag a filesystem, these fragments are promoted to whole blocks (by effectively performing a sequential re-write of the whole file), significantly increasing read performance.
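To make the read-amplification point concrete, here is a toy Python sketch (my own illustration, not how NTFS actually lays out metadata): a file is modelled as a list of extents, and every extent, whether a whole block or a fragment sharing a block with other files, costs one block read.

```python
def reads_needed(extents):
    """Each extent costs one block read, whether it is a whole
    block or a fragment sharing its block with other files."""
    return len(extents)

# A 16 KiB file stored contiguously: four whole-block extents.
contiguous = ["block", "block", "block", "block"]

# The same file after many small appends: two of the middle blocks
# have each been split into three fragments, so the same data now
# needs more reads (and, on a spinning disk, more seeks).
fragmented = ["block", "frag", "frag", "frag",
              "frag", "frag", "frag", "block"]

print(reads_needed(contiguous))   # 4 reads
print(reads_needed(fragmented))   # 8 reads for the same data

# A defrag rewrites the file sequentially, promoting the fragments
# back to whole blocks and restoring the four-read case.
defragged = ["block"] * 4
print(reads_needed(defragged))    # 4 reads again
```

The numbers are made up; the point is only that fragment count, not file size, drives the read cost.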
You also find that filesystems run at over 90-95% full end up with a significant amount of the free space held in fragments, with few full blocks available. Certain types of filesystem operation just will not work with fragments (operations that try to write whole blocks, such as those used by databases). This affects a number of UNIX variants as well.
So long as the OS treats the SSD as a disk, using the same code as for spinning disks, the same problems will occur, so you will need to defrag it just like an ordinary disk. Why should it be any different? What may happen is that the performance of a fragmented solid-state disk may not degrade as much as that of a spinning disk, as I would guess that a seek on an SSD is almost a no-op.
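As a back-of-envelope check on that guess, here is a small cost model with assumed (not measured) timings: roughly 9 ms average seek for a spinning disk, a near-zero "seek" for flash, and the same per-block transfer time for both.

```python
# Assumed illustrative timings, not benchmarks.
HDD_SEEK_MS = 9.0    # typical average seek on a desktop spinning disk
SSD_SEEK_MS = 0.05   # random access on flash is almost free
READ_MS = 0.1        # per-block transfer time, same for both devices

def total_time_ms(num_reads, seek_ms):
    """Cost of reading one logical block stored as num_reads extents,
    assuming every extent needs its own seek plus a block transfer."""
    return num_reads * (seek_ms + READ_MS)

# One logical block split into four fragments: 4 reads instead of 1.
hdd_penalty = total_time_ms(4, HDD_SEEK_MS) - total_time_ms(1, HDD_SEEK_MS)
ssd_penalty = total_time_ms(4, SSD_SEEK_MS) - total_time_ms(1, SSD_SEEK_MS)

print(round(hdd_penalty, 2))  # extra milliseconds paid on the spinning disk
print(round(ssd_penalty, 2))  # extra milliseconds paid on the SSD
```

With these figures the relative slowdown is 4x on both devices, but the absolute penalty on the SSD is tens of microseconds rather than tens of milliseconds, which is why fragmentation hurts flash far less in practice.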