Length of cable supported depends on the SCSI variant being used.
I used Fast-Wide differential SCSI-2 cables that long, and yes, they were in spec, and yes, they worked. LVD SCSI-3 allows even longer cables, I believe.
I seem to remember that SCSI-1 allowed a maximum length of 3 metres from terminator-to-terminator, but on many early midrange systems, the external SCSI port was on the same bus as the internal devices, and once you measured the internal cables that ran to all the drive bays, you often had less than 1 metre available for external devices.
The biggest problem was that all the different SCSI variants used different connectors and terminators, and even when you used the same variant, manufacturers often had their own (often proprietary) connectors (IBM, hang your head in shame!).
I hope that some of the smaller Currys stores survive, although this is unlikely, as they are about the only electrical retailer left on the high street (I know there is the Euronics consortium of retailers, but somehow, they are just not the same). Sometimes, you just have to pop out and buy a toaster/hoover bag/unusual battery in an emergency, and if I have to trek 2x25+ miles to the nearest large town, the only one who will benefit is HM Treasury, due to the fuel duty. I know I won't.
I'm with most of you on PC World, however. I have often had to bite my tongue when in one of the stores listening to advice being given to customers. I once had an open row with the supposed network expert about the benefits of operating a proper firewall on a dedicated PC vs. the built-in, inflexible one in most routers. Eventually, I had to play the "I'm an IT and network consultant, and I know what I'm talking about" card just to shut him up.
Tux, because he makes UNIX-like OSs available to all.
I loved all of the PDP 11/34s I used. Of course they were not as reliable as more modern machines (or even 11/03s and 11/44s), but then they were built out of 74LS-series TTL on a wire-wrap backplane with literally thousands of individually wrapped pins. If I remember correctly, the CPU was on five boards, with the (optional) FPU on another two. Add DL or DZ-11 terminal controllers, RK or RP-11 disk controllers, and MT-11 tape controllers, and you had a lot to go wrong.
I suspect that all of the Prime, DG, IBM, Univac, Perkin Elmer and HP systems of the same time frame had similar problem rates, especially as they were not rated as data-centre-only machines, and would quite often be found sitting in closed offices or large cupboards, often with no air-conditioning.
It was quite normal for the engineers to visit two or three times a month, and we had planned preventative maintenance visits every quarter.
But the PDP 11 instruction set was incredibly regular (I used to be able to disassemble it while reading it), and it was the system that most universities first got UNIX on. It had some quirks (16-bit processor addressing mapped to 18- or 22-bit memory addressing using segment registers [like, but much, much better than, what Intel later put into the 80286], the Unibus Map, separate I&D space on higher-end models). OK, the 11/34 had to fit the kernel into 56K (8K was reserved to address the UNIBUS), but with the Keele Overlay Mods. to the UNIX V7 kernel, together with the Calgary device buffer modifications, we were able to support 16 concurrent terminal sessions on what was, on paper, little more powerful than an IBM PC/AT.
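For anyone who never had to do it, here's a minimal C sketch (the register value and addresses are invented, and the details simplified, so this is an illustration rather than DEC's actual mechanism) of how those segment/page registers relocated a 16-bit virtual address into the larger physical space:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical sketch of PDP-11 style memory mapping: a 16-bit virtual
 * address splits into a 3-bit page number and a 13-bit offset, and one of
 * eight page address registers relocates that 8K page into the (18- or
 * 22-bit) physical space. Register contents here are made up. */
static uint32_t par[8];                 /* page address registers, in 64-byte units */

static uint32_t map_address(uint16_t vaddr)
{
    unsigned page   = vaddr >> 13;      /* top 3 bits select the 8K page */
    unsigned offset = vaddr & 0x1FFF;   /* low 13 bits are the offset    */
    return ((uint32_t)par[page] << 6) + offset;
}

int main(void)
{
    par[2] = 0x0400;                    /* made-up relocation value */
    printf("virtual 0x4000 -> physical 0x%06lX\n",
           (unsigned long)map_address(0x4000));
    return 0;
}
```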
It was a ground-breaking architecture that should go down as one of the classics, along with IBM 360, Motorola 68000 and MIPS-1.
Happy days. I'll get my Snorkel Parka as I leave.
Chasing the money
A lot of you commenting on a scientific gravy train obviously don't know how scientific grants are awarded.
If you are a research scientist in a UK educational or Government-sponsored science establishment, you must enter a funding circus to get money for your projects. This works by you outlining a proposal for the research you want to carry out, together with the resources required. This then enters the evaluation process run by the purse-string holders (UK Government, science councils, EU funding organisations, etc.) Inevitably, the total of all of the proposals would cost more money than is available (just look at the current UK physics crisis), so a choice must be made.
The evaluation panels are made up of other scientists with reputations (see later), but often also contain civil servants, or even Government Ministers. They look at the proposals and see which ones they are prepared to fund. As there is politics involved, there is an agenda to the approvals.
If there is a political desire to prove man-made climate change, the panel can choose to only approve the research that is likely to show that this is the case.
So as a scientist, if you want to keep working (because a research scientist without funding is unemployed - really, they are), you make your proposal sound like it will appeal to the panel. So if climate change is in vogue, you include an element of it in every proposal.
The result is funded research which starts with a bias. And without a research project, a scientist does not publish credible papers, does not get a reputation, and is not engaged in peer review, one of the underlying fundamentals of the science establishment. Once all of the scientists gaining reputations in climate study come from the same pro-climate-change background, the whole scientific process gets skewed, and doubters are just ignored as having no reputation.
If there was more funding available, then it is more likely that balanced research would be carried out, but at the moment, the only people wanting to fund research against manmade climate change are the energy companies, particularly the oil companies. This research is discounted by the government sponsored Scientists and Journalists as being biased by commercial pressures.
More money + less Government intervention = more balanced research. Until this happens, we must be prepared to be a little sceptical of the results. We ABSOLUTELY NEED correctly weighted counter-arguments to allow the findings to be believable.
Please do not get me wrong. I believe in climate change, but as a natural result of causes we do not yet understand properly (and may never, as suggested by the research of the recently deceased Edward Lorenz), one of which could well be human. Climate change has been going on for much longer than the human race has been around, and will continue until the Earth is cold and dead.
I am a product of the UK Science education system to degree level, and have taught in one such establishment too, so please pass me the tatty corduroy jacket, the one with the leather elbow patches.
Like many modern filesystems, NTFS has the concept of complete blocks, and partial blocks.
Sequential writes will result in complete blocks being used for all but the last block. In order to maximise the disk space, the remaining bit at the end of the file is written to a partial block, leaving the rest of the block containing the partial block available for other partial blocks. Confusingly, these partial blocks are called fragments. I don't know about the NTFS code in XP and Vista, but with other OSs, the circumstances in which fragments are promoted to full blocks are fairly rare under normal operation, so over time the number of full blocks split into fragments will increase.
When reading a file that has been extended many times, you end up with blocks in the middle of the file that, instead of being stored as whole blocks, are in multiple fragments. Each fragment needs a complete block read, so a single block divided into four fragments (for example) needs four reads instead of one, probably associated with four seeks as well.
When you defrag a filesystem, these fragments will be promoted to whole blocks (by effectively performing a sequential re-write of the whole file), significantly increasing performance.
You also find that filesystems that are run over 90-95% full end up with a significant amount of the free space being in fragments, with few full blocks available. Certain types of filesystem operations just will not work with fragments (operations that try to write whole blocks, like those used by databases, for example). This also affects a number of UNIX variants as well.
So long as the OS thinks of the SSD as a disk, using the same code as for spinning disks, the same problems will happen, so you will need to defrag it just like an ordinary disk. Why should it be any different? What may happen is that the performance of a fragmented solid-state disk may not degrade as much as a spinning disk's, as I would guess that a seek on an SSD is almost a no-op.
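As a back-of-the-envelope illustration (all the numbers here are made up), this is roughly how the read count grows once blocks in the middle of a file have been split into fragments:

```c
#include <stdio.h>

/* Rough sketch with invented numbers: compare the block reads needed for
 * a file whose middle blocks have been split into fragments against the
 * same file stored entirely in whole blocks. Each fragment costs a full
 * block read (and probably a seek). */
int main(void)
{
    int whole_blocks    = 100;  /* blocks if the file were all whole blocks */
    int split_blocks    = 20;   /* of those, blocks now held as fragments   */
    int frags_per_block = 4;    /* e.g. four fragments per split block      */

    int reads_clean      = whole_blocks;
    int reads_fragmented = (whole_blocks - split_blocks)
                         + split_blocks * frags_per_block;

    printf("contiguous file: %d block reads\n", reads_clean);
    printf("fragmented file: %d block reads (plus the extra seeks)\n",
           reads_fragmented);
    return 0;
}
```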
Come on, can you think that a security 'expert' that goes into an organisation, and just comes up with a 'nothing to look at here' is going to be trusted?
They HAVE to find something to justify their own existence, even if it is that you have to video everybody everywhere. The better you (and the previous expert consultants) are at the job in hand, the more trivial the next vulnerabilities become. And because they are just trying to find one or two things, they will stop once they have these one or two. Of course, this assumes that all the basics are covered.
It's when they start complaining that the screens can be read over the video links that they asked for, and whether the CCTV wires are Tempest compliant or could be intercepted between the camera and the monitoring station, that you really have to worry.
My view is: let a couple of minor but visible, easily fixable holes be found. Take the resultant report, fix them in no time flat, and everyone will be happy. You will get a 'Found something, had it fixed, everything OK now' report, and they will go away happy knowing that they have done the job. You will then not have to fix the trivial new vulnerabilities that they have not had to find.
I think the BOFH would agree to this plan. Either that, or there will be some more mysterious accidents with lift doors opening at the wrong time!
Maybe you ought to put pressure on your favourite game and device manufacturers to support Linux, rather than asking Linux to work like Windows. My IBM Thinkpad works flawlessly with a straight off-the-CD Ubuntu install, including the trackpad, the wireless and the display adapter. The only thing I have not tried is the modem, but who uses that nowadays anyway?
The problem with much of the Linux software that needs to match the kernel version is that the developers did not understand the correct methods for making their software kernel-version independent. As long as you remain in a major branch (like 2.6), it is possible to make your modules version independent.
Even if a module is compiled against a particular kernel minor version, it is often possible to copy the module into the correct location for any new kernel that you install. I admit that this is not something that is done automatically when you install a new kernel, but it's not that difficult either. If you have compiled the module, and kept the build directory, try doing a "make install" to see whether that will install it in the correct location.
Unfortunately, Nvidia do not appear to be able to do this with their 3D module, something that almost everybody trying to get compiz running on a system with an Nvidia card will fall over on.
All disks have bad blocks, you just don't see them because they are mapped out. If you have a drive that cannot do that automatically, you may like to try re-mapping all the bad blocks using a suitable utility (normally provided by the drive manufacturer, but you could try mhdd). Of course, back your data up before trying to rescue the disk.
Once the badblock map is written, new bad blocks are normally caused by head contact with the platters, so don't jog your computer.
All of the IBM Ultrastar disks (and other IBM server disks back to Redwings) I have used have automatically re-mapped bad blocks. I'm not saying that I have never had to replace Ultrastar disks, but it has generally been because of electronics or motor or actuator failure. What I have found is that they mostly fail when they are stopped or started. Keeping them running 24x7, I have seen them run for literally years at a time (in the case of years without stopping, they were Spitfire 1GB SCSI disks - and the OS was AIX).
The Deskstar 'click-of-death' problems were a problem with the voice-coil motor failing to perform the head preload during power up. The click was the head being moved to the end-stops. Deskstars were not the only disks with this type of problem. If you google click-of-death, you will find that other manufacturers have had similar problems in the past.
The underlying story is that you can have it cheap, big, or reliable. Currently cheap and big appear to be more important (to us!) than reliable. And the other thing is that the more expensive server members of a disk family are probably worth the money.
ISPs own fault
If the ISPs sell bandwidth they cannot deliver, who is to blame?
I know it may mean higher prices for us all, but I would much prefer to pay more to buy a service that delivers what I have been sold, than get a service that is unusable for much of the day.
Why should the BBC, or ITV, or Channel 4 or Channel 5, or Sky, or YouTube or its clones (who all have video-on-demand services) have to pay for anything except the bandwidth between them and their ISP?
The ISPs are asking for an unworkable charging model. The only thing that might make the BBC situation slightly different is that the high-demand material may be slightly more predictable than that of some of the other content providers.
Is this what you want?
I can understand having a guitar that is always in tune whenever you pick it up, but to correct the tuning of what you are playing?
How will it cope with bending a note, or slides, or playing with a bottleneck?
You might as well put it through a post-instrument DSP dynamic tuning corrector between the axe and the amp.
Government thinking (possibly an oxymoron?)
Of course, the Government could provide a state-sponsored email system, and force everybody to use that for all email....
....no wait. You would have to prevent use of out-of-nation email servers. OK, let's block SMTP and have a block list for foreign webmail servers. The ISPs can do this for us without cost if we mandate it by legislation...
...hang on, we then need to block tunneled and anonymised connections. OK also block anything that is encrypted....
...but that will block SSL. Never mind, Phorm will work so much better if SSL is not used. And once the Interwebnet tubes are unencrypted, we can filter content from abroad, and we won't have to worry about terrorists picking up bomb plans from foreign subversive sites.
Hell, let's just ban the Interwebnet. But wait, aren't we trying to push down costs by using it for tax and other government systems....
...and so on.
Anybody for a Police State?
Check the Ts & Cs
I think that if you look at the terms and conditions of most online banking services, you will find that they have a list of known and supported OS/Browser combinations, and I would be surprised if any Linux platform is listed. This gives them an immediate get-out from most Linux users.
My primary bank would like me to install agent software on my machine (at least the last time I looked) to access their online banking system. Of course, this is Windows-based.
And the AC who was talking about Linux viruses has obviously not taken into account how short the Wikipedia page about Linux viruses actually is, nor has he looked at the viruses listed. Many of them are old definitions, some are for products not involved with browsing, and virtually none of them will cross the user/system boundary unless you are stupid enough to be running the vector as a privileged user (root).
Of course, Linux is just as vulnerable to social engineering (i.e. phishing) attacks, but that is because the user is being targeted, not the OS or browser. In theory, it is possible to install anti-phishing plugins in Firefox, but such defences are only as good as the block database that is being referenced.
I'm just waiting for the banks to insist on content filters being mandatory for their services. When that happens, the simple port-filter firewalls implemented by most routers (and Linux iptables/ipchains firewalls) will not satisfy their requirements, and we will be further beholden to Microsoft.
Oops, not Boca Raton
Not Boca Raton, that was PC, PS/2 and OS/2. I meant Rochester, of course.
And I missed out OS/2 on PPC, which was also seen as possible to run on the same merged PPC platform, with a common microkernel and OS personality layers put on top for OS/400, AIX and OS/2.
A long time coming
I remember having a presentation from an IBM bod from Montpellier describing a merged architecture about 20 years ago (I wonder if the non-disclosure agreement is still enforceable?). This used a unified backplane with common components, into which you plugged the relevant processor card, scheduled by a hardcoded VM implementation that on reflection looked like the current hypervisor. It used common memory between all processors, with I/O performed through the VM. The project was at that time called Prism, a term that has been used more recently just in the mainframe world for a hardcoded VM implementation (maybe that is a spin-off from the same research project).
I also remember about 15 years ago when it was announced that the Boca Raton people had taken the PowerPC roadmap, and inserted the ppc 615 (I think) processor to run OS/400, extended with additional instructions to assist the running of that OS (and in the process, I understand, rescued the floundering PowerPC family, because Austin were having difficulty getting the ppc 620 (the first 64 bit member on the roadmap) running). Everyone in IBM was talking about merged product lines again then.
The smaller mainframes have long used microcoded 801 (the IBM RISC chip before RIOS and PowerPC) and PowerPC cores in such systems as the 9371 and air-cooled small zSeries systems. I wonder if the full unification of the product lines is still on someone's roadmap? I'm sure that I was in a machine room some time in the last couple of years, and had difficulty differentiating a p670 and a small mainframe that was close to it on the floor.
In reality, a lot of the memory, disks, tape drives and I/O cards have been almost identical between the AS/400-RS/6000 and [pi]Series for many years (back to when the RS/6000 was launched), with the only real differences being the controller microcode, the feature numbers and the price!
Is it really 1024x768? This is not a wide screen resolution, but one often used for 4x3 aspect laptop screens. If you were to keep the same aspect ratio, it would be 1024x614. 1280x660 would also be about right.
@Sid re. SMP vs.MPP
MPP = Massively Parallel Processing
With an SMP box, there is a single OS image that schedules applications across the processors. If you write threaded code, then most SMP implementations will schedule threads on separate processors without you having to write code that explicitly takes into account the fact that there are multiple processors.
With MPP, there are multiple OS images in the cluster, and you have to write to an API that will allow different units of work to be placed on different systems. This means you have to make the application much more aware of the shape of the cluster. This also means that if not written carefully, you may not get better performance by adding additional nodes into the cluster.
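To illustrate the SMP side of that (a minimal sketch under my own assumptions, not anyone's production code): the program just starts threads, and the single OS image decides which processor each one runs on; nothing in the code knows the shape of the machine.

```c
#include <pthread.h>
#include <stdio.h>

/* Minimal SMP sketch: create a few threads and let the single OS image
 * place them on whatever processors it likes. No cluster-aware API, no
 * knowledge of the machine's shape. The thread count is arbitrary. */
#define NTHREADS 4

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld doing its share of the work\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```

Under MPP, by contrast, the equivalent work units would have to be handed to an explicit cluster API, which is where the extra application awareness comes in.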
Unfortunately, too many IBM SP/2 implementations were not really parallel-processing clusters, more like lan-in-a-can systems (goodness, where did I dredge that term up from?).
But what Google does is a quantum leap up from what SP/2s were capable of, and is much more like Mare Nostrum and Blue Gene/L.
No better machine
The BEEB was clearly the most useful teaching computer, possibly of all time.
It was accessible to people who were only prepared to learn Basic, and also to those who were prepared to use assembler. You could teach structured programming on it without any modification, but it also had languages like Forth, Pascal, Logo, and LISP available. Although the networking was rudimentary (and fantastically insecure), it allowed network file and print servers to be set up very easily and cheaply (proto-Ethernet CSMA/CD for PCs at the time came in at hundreds of pounds per PC, plus the fileserver). Although it did not run Visicalc or Wordstar (the business apps of the time), it was still possible to use View or WordWise, and ViewSheet or ViewBase, to teach office concepts. And it was possible to have the apps in ROM for instant startup.
I ran a lab of 16 BBCs to teach computer appreciation, and we had a network with a fileserver (and a 10MB hard disk!), robot arms, cameras, graphics tablets, speech synths, speech recognition units, touch screens, pen plotters, mice and more. This was around 1983. Show me another machine of that time that could do all of this. And all for a cost of less than £25K (which included building custom furniture).
I wish that schools still used systems that empowered their staff to develop custom-written software to teach their students. Nope. Only PCs.
I know many people (me included) who were prepared to pay for one of these machines at home. A classic.
I think if you read the CERTs, you will find that a large number of the Linux vulnerabilities are theoretical, unexploited problems that have been identified by examination of the code. Do you really think that the buffer-overrun security problems were all discovered by experimentation? Many of these problems have not even got example exploit code published.
So, which do you trust more? The code that has been examined and found to have possible theoretical problems (which are fixed reeeal quick), or the code that has definite exploits published, and may not get patched for months? Just imagine how many problems would likely be found in Windows if the code were open, given how many are discovered by experimentation.
Please don't just count the exploits, examine them in detail, and you then won't compare apples and oranges.
Minehead mourns the loss of one of its famous sons
What I always liked about his writing was that it was science fiction grounded in science fact. Unlike many other authors, all of his innovations seemed to be possible given one or two advances.
Who will provide the realistic grand visions now?
OK, so I will still have to use plastic bags to put my rubbish in (I line all of the bins in my house, one per room, with supermarket carrier bags), and also to sort my recycling into (cans, glass etc.) but instead of re-cycling the bags from the supermarket, I will have to buy them instead.
I will probably still use a similar number of bags, and these will still probably end up in landfill. But guess who will benefit. The supermarkets. Instead of giving me bags, they will now be able to SELL me them. A cost item becomes one generating profit.
And there is another environmental downside. Currently supermarket bags degrade over time in landfill, but the polythene bin bags used to replace them probably won't. I would also like to know what happens to the bag-for-life bags once the supermarkets have swapped them for new ones. Are they recycled? What are the energy costs of the recycling process vs. the costs of making disposable bags?
All I am saying is that nothing is simple or obvious.
I'm not a climatologist, but I have to say that I believe that the current published science of climate change is skewed towards proving that we are to blame.
The way this works is that research money has to be justified in advance by the researcher before the research starts. So researcher A asks for money to find why the Polar Bear population is reducing. Researcher B asks for money to research how man-made global warming is affecting Polar Bear populations.
Faced with the titles of these two research projects, the politicians (who ultimately hold the purse strings in most western states) decide that the latter one is in keeping with their green agenda, so it has political value as well as scientific value. So, the research starts off with a biased premise, skewing the perception when the results are presented out of context. It's like saying that all scientists who are looking for man-made climate change agree that their research concludes it is happening. What a surprise that they have found what they were looking for. Hey, they've justified their funding. This is the IPCC to a tee, and dissenting voices are shouted down.
Now, please don't get me wrong. Climate change is happening, and we are contributing to it in many different ways. But from what I have gleaned, we are at the end of an ice age, the amount of geological change is reducing, affecting the long-term carbon cycle (look it up), and the Earth may be returning to a more 'normal' (in geological time frames) temperature after about two million years of cold. This would probably happen even if we were to stop producing carbon dioxide tomorrow. I reference the BBC series Earth Story (ep. 6) to help support this claim.
I agree that we should reduce fossil fuel use, not to reduce carbon in the atmosphere, but because they are precious resources which will never be replaced naturally in any useful (to us) timeframe.
I'll just don my asbestos coat.
The main styling problem is actually a power thing. DAB radios are power-hungry, which equates to large batteries, which leads to large sets. It does not really matter how you disguise it, it will look retro. I have a Pure Elan RV40, which does not look like a '50s radio, but is large (and has two speakers!)
I also use headphone-only DAB radios (one branded KISS picked up in a catalogue clearance shop, and a Roberts Robi iPod attachment), and only get a few hours' listening on either one. This is just a fact of life. I live with it. I regard it as an acceptable price for the diversity I cannot get on FM.
I would like to ask how people would package radios in a way that was not retro. Can anybody point to a stylish modern FM radio? I will then be able to point to a Roberts or Pure device that looks similar.
For goodness sake...
How many times do we have to have this same argument: Windows vs. Mac vs. Linux?
There is no perfect solution to the problem as long as you have mechanisms to make the use of a system easier. Easier on the surface == complex under the covers. It does not matter if it is the sudo model that is in OSX or Linux, the role-based security model of Vista or the "let's just do it" model of XP running as administrator. The basic problem still exists: you need to do something out of the ordinary, and you either trust it, or ask some form of question.
In every case, unless the user is really on the ball, there is always the chance that something nasty could get through. The UNIX model (different from popular Linux distros) of putting the code in your own non-privileged space is about the only robust model there is, as you are very unlikely in a properly run system to import anything that will affect anyone other than yourself. That's not to say that a 'bot or a trojan will not get through, but other users of the system are unlikely to be compromised. I am deliberately ignoring the lack of binary compatibility, which is not what I am arguing about.
Of course, this means that everyone who wants to use a particular browser extension or version of Java will have to install it themselves, and it is possible for things to be run when you are not logged in (just put it in cron), but this is quite easy to spot.
So, let's just agree that it is a knotty problem, accept that different OSs do it differently, and leave it at that.
I really don't think that ARPA (as it was then) was spec'ing a worldwide commercial network.
Its research was in self-healing communication networks useful for military communications where many parts of the 'net might be taken out. This would be (and is) used on closed encrypted networks with no public access.
Also, don't tar the original research into packet switching with the poor implementation that plagues many applications now on the Internet.
Of course, there were weaknesses in the original design, such as the DoS SYN attacks or man-in-the-middle data capture attacks that are possible, but the security layers that leak so badly are definitely above the one provided by the basic ARPA design.
If you look at the original suite of applications that were demonstrations of the work (telnet, tenex, ftp, mail), they were useful, and people used them, even if they were basically insecure. The world was a simpler place, and generally the networks they were used on were internal to single organisations. Even then, firewalls were mooted (the first firewall I was aware of pre-dated the Web by several years).
The concept of the World Wide Web (which is just a service running on the Internet) was NOT part of the ARPA research.
The fact that we are still using it, warts and all, justifies the strength of its original design, and it is only likely to be replaced by a derivative work (IPv6).
Was your mainframe running DOS?
What - DOS/360 maybe? But that would imply an IBM 360/370 mainframe (although I guess that it would run under VM on newer kit), much older than 1989.
Where is the icon for an old IT fart? A crumpled suit would probably do.
If only half of the effort went in to defending DAB as has gone into these comments, it would be alive and well!! On some of the comments from others, here is what I think.
If you cannot advance the clock, set the alarm an hour earlier (Duh!)
DAB has DECODING delays in the receiver (listen to two different DAB radios at the same time, and hear the time signals at different times). This makes it impossible to correct by broadcasting it early. The same is true for digital TV vs. analogue TV.
DAB is as good as the aerial. Good aerial == strong signal == no errors or dropouts.
DAB radios have quite a lot of computing processing power, which is power-hungry. I'm sure the person who complained about power consumption would really like to go back to listening to AM on crystal sets, which can be made to work WITHOUT A BATTERY! If battery life worries you, get rechargeable batteries.
Planet Rock plays music A LOT of people like (including me). But it won't suit everybody. And listen to something other than the rock blocks. I hear new-to-me stuff all the time.
Much of the BBC 7 content was recorded in mono (and some on acetate disks, not tape!), so stereo is not required for all of the material.
I now notice hiss on FM much more than before I listened to DAB.
FM and AM will never die, as they are the official emergency information route for national emergencies, merely because they need less infrastructure to broadcast and receive (can you imagine what the EMP from a nuke would do to every satellite receiver?).
GCAP have lost the plot, and are just chasing as much money as they can get.
There. Take that. My coat has the Roberts Robi hanging from the pocket.
Supporting Steve George
I really appreciate Steve George making it clear what Ubuntu and Canonical's stance is all about. And before you read on, I am just an Ubuntu user, and have no links with Canonical at all.
All of you who think that a company like Canonical can put the resources into making Ubuntu the first real open alternative to Microsoft without being able to leverage a return can only think that money grows on trees.
They have a service based business model, and these services will include bespoke tools. The GNU Public License does not prevent such software from being written to run on top of Linux, nor does it prevent these tools from using, say, the libraries that ship in a Linux distro. This means that these tools can remain closed, and provide a commercial advantage. Canonical does not HAVE to put EVERYTHING they work on back into Ubuntu (provided that it is their own work that they are selling and not modified GPL'd code). That's what the GPL allows.
I applaud Canonical for putting back into the open as much as they do, and for sponsoring Ubuntu development, but they do have to become an economically viable company at some point. As long as they keep to their principles, what is wrong with that?
Where Canonical can benefit is by making these tools and services good enough for people to want to use them. Making sure that Ubuntu is adopted as widely as possible means that they have a larger potential client base. But what makes them different is that they are not shoving their software or services down people's throats. Ubuntu users have chosen it because it is good, it is free, and it does not come with strings attached (Red Hat and Novell/SUSE take note). People can pay for support if they want or need it, but there is no stigma to using the software, downloading the patches, and not paying anything, if that is what they choose to do.
All I can hope is that enough people want services to enable Canonical to achieve their goal.
@Dave re pointers
If you can take a dump of the entire memory, then time is not a problem for mining data. Of course you would not be able to break in to the machine in a hurry, but that is only one possibility.
And I believe that my point still stands. If the kernel can find the information, then so can another tool specifically written to follow the same evidence trail. Once you know the rules, you can code an analytical tool to apply them. All you need is a device like an EeePC (but with a firewire port) with tools intelligent enough to recognise the OS in question, and apply the relevant rules. A serious hacker will have a toolkit with the rules built in, ready and waiting. They're in, in seconds.
My guess is that those people who think it is too hard have never delved under the covers of a real OS to understand how they work. And I know I am being a pedant, but I do not see the difference in this context between 'abstraction' and 'obfuscation'.
Useful comment, but not completely valid. The OS always has to be able to find this information, so it has pointers that can themselves be found (paging tables with known base addresses etc.) All you have done is added an extra level of abstraction, which may deter some people, but not those with serious knowledge, or access to clever tools. Of course, this may make OSs with their source code visible more vulnerable.
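As a toy illustration of that evidence-trail point (the signature, layout and dump below are all invented), a tool only has to scan the captured image for a known structure and follow the offsets stored in it, just as the kernel's own code would:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Toy sketch of following an evidence trail in a memory dump: scan for a
 * known (invented) structure signature, then follow the offset stored next
 * to it to reach the interesting data. Nothing here is a real OS layout. */
int main(void)
{
    uint8_t dump[64] = { 0 };

    /* Fake a captured memory image containing our pretend structure. */
    memcpy(dump + 16, "KEYS", 4);   /* structure signature             */
    dump[20] = 40;                  /* offset to the data it points at */
    memcpy(dump + 40, "secret", 6);

    for (size_t i = 0; i + 5 < sizeof dump; i++) {
        if (memcmp(dump + i, "KEYS", 4) == 0) {
            unsigned off = dump[i + 4];
            printf("found structure at %zu, data at offset %u: %.6s\n",
                   i, off, (const char *)(dump + off));
            break;
        }
    }
    return 0;
}
```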
Busmastering DMA controllers the problem?
I must admit that I am years behind the times, but when I studied DMA controllers in detail, the OS programmed the memory-mapping registers on most architectures to limit the DMA controller's access to just the memory that it needed. This was before the advent of busmastering controllers, but I cannot see that not limiting the memory region, or allowing the controller to access the memory management registers, can ever have been a good idea.
In the normal scheme of things, a DMA write operation needs the controller to know where it is safe to write the information, even if it is taking control of the bus in an unsolicited manner. Of course, read operations are not as critical, but again, for a DMA controller to do anything useful, it is necessary for it to be told where to look.
As a result, allowing the controller carte blanche over the memory map of the system should never really be necessary. Surely this means that the DMA access for firewire must be a mis-feature at the very least, even if it is not a flaw in the design. Or is it really a problem with the northbridge memory controller in a PC?
Maybe someone can enlighten me about why you would want to be able to allow a DMA controller full access to the memory, except to allow a box to be owned in this manner.
BTW, this is also an old story. Apparently the technique was presented at Ruxcon in 2006.
A certain large supermarket...
... was seen selling a 'complete' system (screen, keyboard etc.) with Vista Basic installed that, on inspection, did not actually have sound capabilities listed on the external packaging, and no speakers included.
I'll bet that this had a motherboard with sound built-in, that did not have Vista drivers available for the chipset used.
I pity the people who bought these systems. I'll bet they did not expect to not get sound!
I wondered what I was doing wrong. I was just working from the original aircrack manual pages.
Of course (having looked around a bit), it would help if my laptop did not have an Intel 2200BG chipset. I know I'm wimping out here, but for a quick investigation, I am not going to go down the route of re-compiling the ipw2200 modules. Oh well, looks like I need the backtrack2 livecd.
Have you tried to crack WEP?
It is important that you cannot algorithmically guess a WEP password, because even 64-bit WEP is enough to deter casual bandwidth-stealers.
I made an attempt to crack a 64 bit WEP key on one of my wireless routers recently, just to see how long it would take.
I used airmon, airodump and aircrack, and read that I would need something over 200,000 packets before aircrack would be guaranteed to recover the key. I found that it was not the power of the machine running the crack, but the amount of traffic on the network which determined the amount of time to crack the key.
After running the whole weekend, I had nowhere near enough packets with just surfing running on the network (I admit it was a quiet, but not idle network), so I suspect that most war-drivers will not bother to hang around to attempt to crack your 64 bit WEP unless you are a big-time P2P user, or throw large media files around your wireless network.
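For what it's worth, a rough back-of-the-envelope calculation (the traffic rates are my guesses; the 200,000-packet figure is the one quoted above) of how long that passive capture takes at various rates:

```c
#include <stdio.h>

/* Rough sketch: time needed to passively collect ~200,000 data packets at
 * a few assumed traffic rates. The rates are invented for illustration;
 * the packet count is the figure quoted above. */
int main(void)
{
    const double packets_needed = 200000.0;
    const double rates[] = { 5.0, 50.0, 500.0 };   /* packets per second */

    for (int i = 0; i < 3; i++)
        printf("at %3.0f pkt/s: about %.1f hours\n",
               rates[i], packets_needed / rates[i] / 3600.0);
    return 0;
}
```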
Of course, the 15 year old h4x0r or script-kiddie in the next road, trying to get porn without their parents knowing might be a different matter.
Not sure whether it will work in US
There are several very Japanese concepts in Akira that will not easily translate into the US as-is. There are two ways round this. Either New Manhattan society will be made to look like post-apocalypse Neo-Tokyo, or they will change the story.
Which is it likely to be? Hmmmmm.
I've always wondered why the human race (or at least the British population) has not degraded.
If you look at the demographics of which part of society is having the most kids, in Western society, you will find that the best educated, highest earning portion of the society is the one having the least number of children.
If you go down to the Chav end, they are having the most (this is by observation, not statistics, but my gut feeling is that it is true).
So in theory, assuming that ability and education follow down the generations (educated people are more likely to make sure that their children are educated than non-educated people), why has the population of these societies not ended up at the chav end of the spectrum?
Oh. Maybe it has. Hence dumbing down everywhere. And here is another example. Paris! (OK, not so good as the Hiltons are slightly rich, but as good a reason for the icon as any!)
It is almost certain that the plane won't fly again.
Aircraft bodies are designed to cope with the stress of taking off, flying, and normal landing. A mushy landing resulting in undercarriage collapse and wing damage is likely to stress elements of the plane beyond their tolerances, leading to structural defects in many of the strength elements of the airframe. Damage will have been done to the wing fabric and possibly the roots, undercarriage, engine pods, and body (where it touched the ground).
If it were to fly again, there would need to be tests performed on all of the major structural elements to prove that they were not compromised. This would probably cost more than replacing the plane. In addition, it would need new wings and engines (which is possible, but expensive). For an example, see how much it has cost to return Vulcan XH558 to flying condition, and that was a much simpler aircraft.
In addition, the investigation will be probing all of the wiring systems and control systems, so these would need to be re-worked. If you have ever seen how much wire there is in a modern plane, and at what point in the construction it is put in, you would realise that it cannot be replaced. Hell, car companies don't like replacing the wiring loom in a car!
It is likely that most of the relevant parts of the plane will be kept until some time after the investigation is closed, and if they are ever released, re-usable parts will enter the spares pool of BA or Boeing (after being bought back from the insurers). The remaining airframe will probably become an engineering, fire, or evacuation training mule.
BA will not suffer, as the planes are insured.
£1387 per month...
... but how many people want/need to download 2.5TB in a month, which is approximately how much you can do with 100% of an 8Mb/s line?
I pay a premium for my line at this speed, and I would like to be able to get the speed I pay for, but only in bursts, and not necessarily during the peak hours. I clearly don't get it.
Looking at my firewall traffic graphs, it looks like during busy times I average about 40KB/s, which equates to about 320Kb/s, which is something like 24 times slower than my theoretical maximum. And the peak (averaged out over 15-minute periods) is only 125KB/s, which equates to about 1Mb/s. Still 8 times less than I pay for.
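For anyone who wants to check the arithmetic, a trivial sketch of where the ~2.5TB-a-month figure and the KB/s-to-Kb/s conversions come from:

```c
#include <stdio.h>

/* Back-of-the-envelope check of the figures above: what an 8Mb/s line can
 * shift in a 30-day month if run flat out, plus the byte-to-bit
 * conversions for the observed averages. */
int main(void)
{
    double line_mbps    = 8.0;                /* megabits per second */
    double seconds      = 30.0 * 24 * 3600;   /* one month           */
    double tb_per_month = line_mbps / 8.0 * seconds / 1e6;  /* MB/s * s, in TB */

    printf("8Mb/s flat out for a month: about %.1f TB\n", tb_per_month);
    printf("40KB/s is roughly %.0f Kb/s; 125KB/s is roughly %.0f Kb/s (about 1Mb/s)\n",
           40.0 * 8, 125.0 * 8);
    return 0;
}
```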
And now, it is very suspicious that my incoming SSH sessions hang within seconds of me starting them, which looks like VM is doing something antisocial with my traffic.
I admit that the majority of torrents are probably copyright material (even Linux distros are copyrighted, but the licence allows free distribution), and much of that will be illegal use. But those "anonymous cowards" who claim it should be completely banned need lining up and shooting. Banning torrents outright would only mean that other tools would be used. And those people who *DO* use BitTorrent for legitimate purposes (and for what it was originally used for before being hijacked) would be the victims.
And iPlayer downloads (which are P2P) legitimise its use for *SOME* commercial material.
It will become a constant technological war between the RIAA and MPAA, through the ISPs, to stop illegal distribution of copyright material. Let's separate this from the bandwidth/contention debate, and encourage the ISPs to sell more realistic services that charge for use. Cap according to purchased allowance, encourage heavy users to take out higher-tier packages, and invest the money in ensuring that we get what we pay for!
Can anybody post links to credible information about traffic use statistics at any representative point on the 'net?
Looks top heavy
I have trouble with my Eee on my lap. Because you are not on a flat surface, the feet at the back do not keep it at an angle, and the screen makes it fall backwards.
This thing looks like most of the weight is in the screen, which would be even worse! Still, at less than $100 it would be a bargain!
@AC - Capped at peak hours
I think that VM are one of the most communicative of the UK ISPs. If you want to find out about their AUP, then sign in to their "Customer Zone" and get details of your package. On that page will be a link to their AUP. Follow that, and you will find a link about their traffic shaping. Alternatively, look through their FAQ on broadband.
If you believe it verbatim, then it is only the top 5% of users during peak hours who get traffic managed. Their peak hours are defined as 16:00 to 24:00.
Having praised them on their openness, I find myself sceptical about what they say actually being what they do. My 8Mb/s line rarely goes over 25% of its potential bandwidth for any traffic. My DSL router actually gives me a connection speed of about 7.4Mb/s. This could be a result of the upload speed of the site I am receiving from, but I have recently got an HSDPA modem which often downloads faster than my landline, even when only in 3G mode (my home is not in a "Turbo" enabled region). I suspect heavy contention, but can I get them to respond to mails asking what the contention ratio is?
Still, I could change, and I haven't. That must say something.
@Captain Jamie - Virgin Media support
Now you've hit a raw nerve. I have been a Virgin.Net customer for many years, and in general, have found the service fairly reliable (although this could be because I am not a cable customer, just ADSL), but their support STINKS.
I have a bare-wire ADSL install to my own DSL router, to a Smoothwall firewall and then to a mixed wired and wireless network of Linux, MS Windows, Macs and even an AIX system (yes, this is at my home). I use a mixture of webmail (for convenience) and fetchmail to pick up my mail from their mail server.
If I mention any one of these components in a support call, they just turn off and don't even bother to understand the questions.
I'm currently having capacity problems (I'm not getting even 10% of my paid-for 8Mb/s bandwidth), but am currently at the "have you turned off your computer and router and turned them back on" stage.
I took the rash action of sending them traceroute and ping timings to each of the hops, to show where the most likely bottleneck was, but I think I must have blown the recipient's poor little mind, because I never heard anything back! They also took a very dismissive attitude when I complained that my always-on link was being dropped several times a day, which screwed up my dynamic DNS entries. I now have to manually force my entry several times a day just so I have it working for some of the day.
One of these days, I will get so sick I will change, but I can mostly work without their help, so just get by.
My wife says that I should leave now. My coat is the one with all the network cable and micro-filters filling the pockets.
Not just americans
The BBC has a ban on it. I remember listening to Front Row on Radio 4 one evening at about 21:30, and there was a woman artist describing a live art work that involved a woman sitting with her legs open, wearing little clothing, and the artist used the word (sorry, I'm not afraid of the word, I'm just posting through a content screen which may identify the word and take action) to describe aspects of the work, and Mark Lawson nearly tripped over himself with a reprimand to the guest and an apology to the listeners.
Laughed, I could have crashed the car!!
I think that there were a number of audience complaints received by the BBC.
Education for education's sake
We are in danger of using a very broad brush to tar the whole of the Computing education system.
Commenters have mentioned everything from boot-camps to degree level qualifications, and most of them have been negative comments.
There is a VAST difference between a tick-all-the-right-boxes test after a five day course, and a four year sandwich degree with yearly exams. And there is variation in these as well.
I was a product of the University system 25+ years ago, and still value much of what I learnt. I have since taught, both in Further education, and on get-them-through-the-test courses. Both can be valuable, but for different reasons. In many cases, it is HOW they are taught rather than what is taught that is important, and encouraging an enquiring mind is the desired result, and the certificate is a by-product.
I applaud this 16-year-old, as he clearly has the right stuff to go much further, and be a really useful person. He even acknowledges that he has much to learn, something that many people I know in the industry have not learned after 20 years.
I do believe, however, that the learn-by-rote certification method leaves much to be desired, and I value experience over certificates nearly every time. But those of us who say that experience is always better may not actually have learned enough to know our own limitations.
Viva good education, however it is delivered!
People are uneducated and not disposed to learn.
The vast majority of them will just accept it, because they are told it is necessary by someone in a position of apparent responsibility. We (the commenters of El Reg) do not represent the majority of the uninformed sheep that make up the voting public, and thus are the tail trying to wag the dog, and even we argue about such things!
Paris, because she represents the dumb masses.