This has always been the case with both ADSL and Cable. The upload speed is a small fraction of the download speed, and they have never claimed otherwise. That's what the "A (Asymmetric)" in ADSL stands for, and I'm sure it is described in the T&Cs for cable customers as well.
If you really want good upload speed, might I suggest that you invest in a leased line, but I think you will be shocked by the price.
When you get to these speeds
you have to look at all parts of the link between you and where your data is coming from. Just because you can receive 50Mb/s does not mean that the other end can or will send it at that speed, or that the shared inter-ISP links are not congested. Try measuring against a speed-test service hosted by your own ISP, and compare.
I think it's about time that people started looking at this in a holistic manner, and stopped expecting the Internet as a whole to have infinite bandwidth.
Your computer may well be able to work in French, but did it teach French to you?
I think that not even the most masochistic person would attempt to use the Windows message catalogue to learn a foreign language!
Still, point taken.
@James Picket - I disagree
I believe that one is a superset of the other. Training IS a type of education (in the broadest sense), but education is a much broader field than training.
I presume that you are comparing Training as in what-you-need-to-do-a-particular-job, with Education as in what-schools-and-colleges-provide.
But even with these definitions, you often find colleges offering vocational training, at least in the UK. When I worked in a UK Polytechnic (sadly a type of institution that no longer exists here), we were often approached by industry to provide training courses in particular subjects and fields. It seemed more natural at that time than approaching a commercial training organisation, and it provided much-needed cash to the Poly to help provide a better all-round service.
Maybe I am looking through rose-tinted glasses, but I don't think so.
Once upon a time
the man pages were really the documentation. I'm talking Bell Labs UNIX version 7.
If the man pages were not enough, and there was nothing in the "UNIX Papers" documents (that were shipped as n/troff source files almost complete on every V7 tape), then you could resort to the source (which was also shipped, at least to educational customers).
I get tired nowadays of typing "man something" and being given a stub man page that suggests I type "info something", which gives no more information. (I actually do not understand why info is supposed to be better than man; I always trip over the key bindings, even though I am an emacs user.)
Now I know that something like sendmail or perl cannot be described in a <10 page man page, and that large packages like Open/Libre Office deserve their own books, but I really miss getting comprehensive documentation of at least the usage of a command through man.
@Myself re: documentation under GPL
As I found out when I looked, what you need for documentation, and I presume training material, is the GNU Free Documentation License, or GNU FDL. Should really learn the lesson of checking before posting rhetorical questions.
The argument about Open Source documentation and training material illustrates a circular problem, that of who actually pays to develop the documentation and training.
Training material costs money to develop, so people to do this sort of thing for a living expect to get something back to make it worth their while. Open source organisations can build a business model around this, but they have to get paid for the training and consultancy they provide. Free software does not mean free training.
Microsoft, on the other hand, can divert money paid to them by current customers into developing the training material to lock in the next generation of customers, who will then fund the next generation ad nauseam. And because they have an effective monopoly, they can browbeat the education departments with 'Of course your students need Microsoft Office skills, after all EVERYBODY uses our software'
I'm not saying that this is any different to other programs given to education by vendors, except that Microsoft can use this to reinforce their monopoly paid for by their already locked in customers, in a way that nobody else can.
Of course, in a perfect world, educational institutions would write their own material around open software, and in the spirit of the Open Source movement, contribute that material for other people to use without cost (can you publish training material under GPL, I wonder?)
Unfortunately, this is not an ideal world.
was a useful stop-gap until Acorn delivered my BEEB (ordered on the first day of them accepting orders).
It didn't crash, as I took a Tandy keyboard, re-wired it (or actually re-painted the tracks on the membrane with silver paint) and connected it through a long ribbon cable, adding a power switch to the keyboard. Once I did this, it became quite stable, even with a Quicksilver pass-through audio board (with sound modulator to the TV) between the '81 and the RAMpack. Never needed to touch the computer itself. No problems until my homebrew power supply blew its bridge rectifier.
It was also interesting re-mapping the 1K internal RAM to another memory location when the RAMpack was attached, and putting a second 1K of static RAM under the keyboard, connected on the ULA side of the bus isolating resistors, allowing me to change the Z80's I register, which was used as the base address of the character table! Yay, programmable character set and (if you worked hard) bit-mapped graphics.
I actually tried writing a Battlezone variant, but it was just too slow in Slow mode.
Funny, I've just been called a hacker for a completely different reason by my colleague in the next desk. I wonder why.
It's worse than that...
What if he wants to add another Micro Channel card to the system? It is necessary to have the reference disk to add the ADF file for the new adapter!
As do I (almost no MS software in sight on MY, rather than the rest of my family's systems), but think how different it would have been if that had happened in 1982!
Some of us even remember when UNIX was new, and all microcomputers were 8 bit and minicomputers were 16 bit (and some mainframes had 24 or even 36 bit wordlengths).
Seriously, if (Intergalactic) Digital Research had persuaded everyone that MP/M (the multitasking variant of CP/M) was the OS to use, then desktop PCs would have been able to multitask from the word go. And that would have really changed history. But remember, even CP/M was not original, being superficially a rip-off of RT11 on DEC PDP11s, even down to the device naming and PIP.
Still would have preferred a UNIX derivative on the desktop, even if it had to be crippled by needing to run from floppy disks (UNIX V6 on the PDP11 circa 1975 - kernel less than 56K, ran [just about] on systems with 128K memory, able to multitask, multiuser by default with a recognisable permissions system). Definitely doable, although reliable multitasking without page-level memory protection would have been a challenge.
The highest system on the list that looks like it is predominantly funded by a commercial organisation is the EDF research system at #37. You may like to suggest that this is partly funded by the French government (like #76), in which case it would probably be #78, which is just described as "IT service provider, Germany".
So yes, if you had the money, and the will to run the infrastructure, you could buy one, and HP, IBM or Cray would be delighted to sell you one. But to be honest, what would you use it for? Even the Intel ones are not suited to running Crysis.
On reflection, if you go back 8 or 9 years, you can see a number of commercial organisations with systems in the top 100, including banks and telecoms providers. But this was when you could estimate the power of a system by adding the component parts, not by proving that it could run such jobs. The bank I used to work for had their SP/2 AIX server farms listed, because they were clusters. They were never actually used for any HPC type workload ever.
I will put my cards on the table
and openly admit that I do not play poker, either face-to-face or on-line, but it seems to me that in order for some people to keep winning, and for the house to take its cut, someone obviously has to keep losing, and losing a lot.
To me, this indicates that there is a significant group of mugs, self-renewing as each wave loses all their money and is replaced by others suckered in by the advertising that seems to be EVERYWHERE.
So there are people who keep playing because they are better than the newbies, and there are people who run bots that can do the same. It sounds like neither of these two groups has anything to complain about, as they will probably at least break even; otherwise they would stop.
So how many here admit that they've regularly lost money? I'm sure that if they do it will have been as an 'experiment' or 'just to try it out', not admitting that they're the mugs.
I think that the whole industry is unethical, and should have greater safeguards for the uninformed. But the late-night interactive TV game shows, where the odds are so obscured that you can't tell what the payback is, are still allowed, so I can't see on-line poker being stopped. I just feel sorry for the victims.
If you've ever used a DAB radio on non-rechargeable batteries, you will know how expensive they are to run....
I guess using AAs would allow you to put rechargeables in, but would you be able to charge them in situ?
Blah blah blah sound no good blah blah blah
It depends on the bitrate of the channel and the strength of the signal in your area. The lack of hiss is well worth it and, to be truthful, most people listening in kitchens, sheds, cars etc. probably would not be able to tell the difference between 128kb/s MP2 and 256kb/s MP3. It's the dropout rate that is so bad.
Classic FM and Radio 3 (160kb/s and 192kb/s respectively) sound great with a good signal, a good receiver and quiet conditions. Unfortunately, I don't live in a good signal area, and even though I have a Pure Highway specifically for the car, I can only get any DAB channels for about a third of my commute to work. But then, even FM drops out in one part of my journey.
DAB is a flawed service, I admit, but it is worth keeping unless it is replaced with something better, but even then many will whinge about having to buy new receivers (like me, I have 5 DAB radios).
If you ever see references to Dragonball processors, as used in PalmPilots, then these are low-power 68000 processors.
I'm sure I was poking about inside some consumer device (it may have been a Freeview TV box) recently, and came across a 68K based SoC being used as a micro-controller, which probably means that they are still being made.
The 68000 family should be regarded as one of the classic processor designs, alongside the IBM 370, the PDP/11, the MIPS-1 and possibly the 6502. Beats the hell out of the mire that Intel processors have become. Some people might also say that the NS32032 processor and maybe the ARM-1 should also be included in this list.
This is a first step
If someone can work out how to drive the thing, that information would be very useful to someone who might want to make a work-a-like, and deprive Microsoft of hardware sales. I'm not saying that Microsoft is correct in what it is doing, but the reason why they are doing it is not really that hard to see.
Of course, MS would be able to prevent importation to countries with valid patents, but that would not stop imports from China via Ebay or the like.
If you look now, you can see non-licensed Wiimote-a-likes, and they are cheaper than the Nintendo originals. The same would happen for Kinect.
I thought that there were several precedents set for reverse engineering. Cases involving garage door openers and inkjet cartridges spring to mind, and I believe that they all went against the companies attempting to maintain their monopolies.
The original Xbox really was PC components. The 360 is quite a different beast, using PowerPC derived processors. I'm sure that it is running a Windows variant under the covers, but this is more likely to be WinCE than XP.
The rest of the hardware is pretty much generic, but what do you expect? Memory is memory, disks are disks, and even the wireless and controllers will be using off-the-shelf chips. This is the same with PS3s, Wiis and even Macs.
Shame on you
The BBC version has to be the definitive version. Especially if played with a 6502 2nd processor and a Bitstik.
It just shows...
...that people ignore things that they don't understand.
And if he had started buying things on Amazon from their accounts, they would have thrown their hands in the air and run around crazily, blaming Amazon, Starbucks, their ISP, the Government and everyone else but themselves.
I'm beginning to think that the Internet is too dangerous to let Joe Public loose on it! Maybe we need an Internet driving test before allowing them to connect.
As soon as
Fedora has an equivalent of an LTS release, I'll consider switching.
I just can't rework my primary systems each time a new release comes out. I don't have the time.
Mind you, my confidence in Ubuntu has been severely dented recently. I still haven't switched to Lucid, because I cannot get suspend/resume or sound working well enough on my trusty Thinkpad T30 (things that just worked without problem on Hardy), and my netbook, running Jaunty, tells me that there are no further updates for that release. Still, I'll put the netbook remix of Meerkat on that, just to see what it is like.
Don't think so,
and I would be interested to know whether they run at all!
I seem to remember that they weren't the most reliable of devices when they were current! And that strange offset flip up screen and fixed keyboard.
I think that the term used for these and similar devices was 'luggable computers'.
We all know there are problems with X. We know that the abstraction between client and server does not suit all types of application and can be an apparent performance barrier.
BUT (and this is a big but)
If you know X, and work in an environment with many networked systems all of which understand X, then the benefits of the abstraction are HUGE. Don't suggest that VNC can fill the gap, because unless you have a big fat network, the performance is crap compared to properly written X apps. If you've not worked in such an environment, you may not know this, but your view risks throwing the baby out with the bath water.
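To give a concrete feel for what that abstraction buys you, here is a minimal sketch (the hostname is made up, and it assumes X11 forwarding is enabled in the remote sshd):

  ssh -X devbox.example.com xterm    # remote xterm, drawn by your local X server
  ssh -X devbox.example.com xclock   # ditto for any other X app

Only X protocol requests cross the wire, which is why well-written X apps remain usable on links where VNC's screen-scraping crawls.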
One of the problems, as I see it, is that because font handling in X was based on the fonts the server knew about, rather than the fonts a client wanted to use, some applications appeared to display poorly. To get round this, both KDE and Gnome introduced models in which font glyphs were effectively loaded by each client when the client started, often multiple times.
This increased the X server memory footprint and client startup time no end, and almost completely broke the efficient font model that X had (and still has!). In my view, the best way of handling non-standard fonts is to have a font server somewhere (either locally or on the network) and have a mechanism for font-picky applications to add the FreeType or Type1 scalable fonts to that server, either for the duration of the run, or permanently.
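For what it's worth, the plumbing for that already exists; a sketch, assuming an xfs font server is running on the default port 7100 of a made-up host:

  xset +fp tcp/fontserv.example.com:7100   # add the font server to the X server's font path
  xset fp rehash                           # make the X server re-read the font path
  xset q                                   # check the new font path element is listed

A font-picky application could do the equivalent through the corresponding Xlib calls for the duration of its run.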
Similarly, the way that some applications treat pixmaps (and Java is one of the worst culprits, wanting to do its own graphics abstraction) means that X performance is much worse than it needs to be for such apps.
X.org is making what I think is a very sensible move to OpenGL based rendering, especially if it has network abstraction built in (I've not checked). This should allow good 3D performance, and as we can see allows the gloss to be added. What we need now is a well recognised, resource controlled window manager and application framework. Whether this is Gnome or KDE, or something completely different remains to be seen, but introducing a new default must be backed up by allowing users who want to remain with what they like to do so.
I actually cannot stand the netbook remixes, even on systems with small screens. They are just too prescriptive, and get in the way unless all you want to do is what they provide. I use Gnome on my EeePC 701 (800x480), and there are only a few occasions when windows fall off the screen, so I don't see the need for the netbook remix.
"Hooked up to a Beeb"
As long as the two channels were in phase and not skewed, and you could do without the motor control.
I had no end of trouble with stereo players and the Beeb. Ended up making a cable that would only connect the left or the right channel, but never both. Always recommended a good mono tape recorder to other people.
Beebs were remarkably tolerant of the tape speed. There was a tape deck you could buy that had an adjustable speed controller on the motor. You could speed it up by nearly 10% before you had any loading issues. Really made a difference for the longer games. Some game manufacturers advertised faster loading speeds by actually recording with a slower tape drive before duplication, so they were faster on normal players.
Ahhh. Gone are the days of *OPT 1,2 followed by *LOAD and then by swapping the tapes and a *SAVE with the correct parameters to copy tape games.
Someone has actually got a BBC micro user guide on the net at "http://central.kaserver5.org/Kasoft/Typeset/BBC/Intro.html". Bizarre, but welcome. I didn't have to risk opening my decrepit and fragile copy in order to refresh my memory of the *OPT numbers for extended info for CFS.
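For anyone who never did it, the copy went something like this (from memory, so treat it as a sketch; the addresses and length are illustrative, and the real values come from the extended info that *OPT 1,2 makes the CFS print during the load):

  *OPT 1,2
  *LOAD "" 3000
  (swap to a blank tape, press RECORD and PLAY)
  *SAVE "GAME" 3000+1A00 1F00 1900

where &1A00 is the length, &1F00 the exec address and &1900 the original reload address, all read off the loading messages.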
Tranny radio listening was a very popular communal pastime
Especially at 12:45 on Tuesday lunchtimes, when the BBC chart was announced. That ages me!
And yes, radio did promote conversation in kids. See how it is portrayed in "The Boat that Rocked" to get a feel for '60s and '70s radio culture. Because of the cost of record players and radios, music listening was a communal pastime. You would take your new LP around to a mate's house to listen to, rather than giving them a copy, as happens now. A household would normally have one TV, one record player and a couple of battery-powered radios at most.
The revolution started with the "Compact Cassette", which allowed you to tape your mates' records, and continued through the Walkman era as the cassette decks in Hi-Fis got better. In some respects, this paused a bit when CDs first came along, as they were originally a read-only medium.
I still remember the stir that rare-earth magnets caused when they appeared in headphones, the other revolution of the time. They allowed music to actually sound passable at low voltages, compared to the crap piezoelectric earphones supplied with transistor radios or the bulky, current-hungry cans used on Hi-Fi systems.
@AC re: "authoriser" . That's not the point
The Computer Misuse Act 1990 defines what is criminal: it is legislation passed by Parliament.
An EULA falls into the category of a License (End User License Agreement), so is contract law rather than criminal law.
License agreements can be (and often are) challenged in the courts, and can be deemed unreasonable. I'm sure that, if I looked hard enough, I could find a precedent where just what you have described has been judged unreasonable, but you have to be careful about the jurisdiction of the court system looking at the case.
Even if a bad EULA were found reasonable, the penalty for infringing it would be financial rather than custodial, and may not even be enforceable (for example, if the EULA is judged in Texas, which is often where these things are tested, and you are in the UK, then so long as you didn't visit the US, it is unlikely that any action could be taken).
BTW. If you are a Windows user, stop and really read the conditions on the Microsoft EULA that you almost certainly agreed to when you 'accepted' (whatever that means for a pre-installed system) them without checking. I think you will be surprised (and maybe a bit frightened) by what you've signed up for!
Anybody any data
on how long PCs with SSD storage last before requiring the SSD to be replaced?
I know that flash memory is getting better, but even with data shuffling and sparing, I expect to see SSD needing to be replaced before a spinning disk. Any chance of such devices lasting 6 years of daily use?
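In the meantime, one way to keep an eye on wear yourself; a sketch, assuming the drive exposes wear data through SMART and you have smartmontools installed:

  smartctl -A /dev/sda   # dump the vendor attributes; look for wear-levelling / erase-count fields
  smartctl -H /dev/sda   # the drive's own overall health verdict

Attribute names vary wildly between vendors, so the output needs interpreting with care.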
Upward and downward. It depends
on whether the wing spars running through the fuselage are connected to each other and stiff. If they do not flex or kink, then the fuselage will hang from the wing in flight; but if they flex, or are not connected together, then the inboard ends of the spars will move in the opposite direction to the wing tips, pivoting around the points where the wings enter the fuselage.
From what I can see, there is a join in the spars on the mid-line of the fuselage. This could potentially be a weak point, possibly allowing the spar to kink at the join. Adding bracing to prevent this happening looks like it is a very good idea. Would it not have been an idea to stagger the joins on the individual spars? Or have additional uncut straws to reinforce it so that the joins did not occur at the same point? Oh well, too late now!
I would worry about the weight distribution. I think it looks like it will be tail heavy. The design looks like the sort of thing you would use for a powered plane, which has significant weight at the front (the engine). Do you get the opportunity to see if it will glide before attaching it to the balloon?
I have been really upset by the tricks Google are using to make sure that you have your data connection turned on all the time. I must check whether there is an outgoing firewall app in the Android Marketplace.
I'm surprised that some critical comments got through. Mine, which was posted before any of these, was rejected, even though it said nothing worse than many that got through. I wonder how many from other people were rejected as well.
I still want a reason to be given when a comment is rejected. I know that this is a moderated forum, but I was not abusive or insulting, the grammar and spelling were at least fair to good, and I was commenting on the technical accuracy of the article, although I did accuse it of being a barely disguised advert. But so did the first comment that got through.
Could we at least know for sure that the comments are not moderated by the author of the article? That would at least give us confidence that critical comments have a fair chance to get through.
"fitted to the operational carrier"
How is this going to work? I'm sure that removing and refitting will require both carriers to be out of service at the same time!
Does he think that it's like a child seat in a car: unstrap, move, and strap back in? If he does, I think he should go on board a carrier when a catapult is operating, and feel how much the ship is affected when a heavy jet is launched. It takes time to make the heavy-duty attachments necessary to keep a plane-flinger safe and posing no risk to the ship, aircraft and people.
In the process
In case you hadn't noticed, his sysadmin blogs are already being carried as articles. Stirred up some interesting comments as well.
...I always put Simon down as an ale drinker.
Two mentions of that hideous brew called 'lager' in a single episode. I suppose I can forgive the last one, if it was an instrument of financial torture used on the PFY.
Wasted so much time
trying to get things to work, and have just given up.
I could, on the odd occasion, get audio to work, but I have never managed to get my Buffalo Linkstation Live, which is supposed to be DLNA compliant, to actually serve any video, even video encoded in one of the supposedly supported formats. This was to a number of clients, including an Xbox 360, Windows clients, and open source clients.
The whole concept of specifying the containers and codecs required by the standard is just a cynical attempt at building obsolescence into consumer electronic devices to guarantee future sales! It sucks, and anybody who says otherwise is either a marketing shill, or just does not understand.
Nowadays, where possible, I stream stdout to stdin using SSH as the transport and mplayer as the player. It hasn't got the gloss of a nice GUI, but it just works anywhere you have knocked a known port through the network. You don't even need to share anything. And I get to avoid running the hacker-friendly UPnP protocol, which will advertise the complete capabilities of the systems on your network to anybody who can snoop it.
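The basic shape of the trick is just this (host and path are examples; it assumes mplayer is installed locally and the file is readable at the far end):

  ssh user@mediabox 'cat /data/video/film.avi' | mplayer -cache 8192 -

The trailing '-' tells mplayer to play from stdin, and the cache smooths out network jitter.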
Had exactly the same
Only with PowerPoint. Teacher would not even allow the presentation to be shown, even though the presentation style they had been taught was so simple (background and basic titles and bullet points only), it could have been done using ANYTHING!
This makes Office Student and Home edition the only piece of Microsoft software that I have purchased that did not come bundled with a computer for around 10 years! (I do not copy licensed software).
Not that easy
In order to hijack a range of IP addresses, you have to subvert a core ISP, or find some way of injecting false BGP (or whatever they use nowadays) information into the wider network. You have to be trusted, and at particular points in the network, for your BGP information to be believed by your neighbours.
While I am not saying this is impossible, it is so fundamental in the operation of the Internet as a whole that if this is compromised, the operation of the whole Internet is at risk.
To El. Reg. To see whether an IP address is where you think it is, you can try to use traceroute (oh, sorry, tracert for windows users) to see where the packets appear to go. While it is not a sure-fire thing (traceroute can be blocked easily, and some routers do not respond), you may get sufficient clues from the names of the routers that have DNS entries to guess at the routing of the packets. If this does not work, you might try a ping -R (UNIX/Linux only?) to get the return path of the packets.
There are probably many better tools, but Dig (although I still use nslookup), traceroute, ping, netcat, telnet, nmap, wireshark and other tools such as nessus should all be in the metaphorical toolbox of people who want to diagnose network problems.
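To make that concrete (192.0.2.10 is just a documentation address; substitute the one you are suspicious of):

  traceroute 192.0.2.10   # tracert on Windows; router names often give geography away
  dig -x 192.0.2.10       # reverse DNS lookup; nslookup does much the same
  ping -R 192.0.2.10      # record-route option, where the routers honour it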
@AC re: @Me. Yes, I know. Bad me.
Since being able to directly reply to a comment, I have developed a bad habit of assuming that it will be obvious what I'm replying to. But this is not a threaded forum (thank goodness) so this is not always clear. This is not the first time this has happened, and I'm annoyed with myself for it.
I meant amanfrommars, of course. I may be reading the wrong comment trails, of course, but I don't think I've seen a comment by him for several weeks.
Where is he, anyway?
I've missed him.
"allows you to view the screens of the hosted VMs and switch between them"
It's part of the design, and it requires some external hardware (so this may be enough for you to argue), but the PowerVM hypervisor on IBM Power systems is architected so that, using the HMC or IVM partition, the console screen of any LPAR can be opened from the management console.
The same was true for the daddy of all type 1 hypervisors, IBM VM, which also allows a single master terminal to display the consoles of each of the partitions (IIRC, been some years since I worked with it, and most customers used to want physical consoles anyway).
Yes, I know they are text only, but when you have OSs which do not rely on a GUI to run them, why do you need anything other than a text-mode console? You have X11 and, if you must, virtual consoles over VNC for that.
Where you are getting confused is believing that VMware and Citrix invented the type 1 hypervisor. They did not.
Of course, anyone in the know realises that there is really no such thing as a "Bare Metal" hypervisor because, as has already been pointed out, VM is an operating system (it used to be actually booted from disk!) and PowerVM is actually a locked-down (and not even particularly cut-down) Linux kernel in flash storage. But hey! It's convenient for the vendors to sell the hypervisor as a firmware "black box" that the customer needs to know nothing about, particularly when the security people come snooping.
For bog's sake. It's easy (although costly).
Zone your network using firewalls. Wireless access appears in one zone, which does NOT have any critical servers in it. Employ a capable network engineer or two, and let them build a working relationship with the security people.
Control the keys using the strongest authentication all your official devices can use, preferably based on something like RADIUS. Regularly change any PSKs that you are forced to use, and only circulate the changed keys to people with registered devices.
Query all devices using a device checker probe (something as simple as nmap or wireshark should be able to get most devices) and track down any unauthorized devices. Scan for unauthorized wireless networks in the vicinity, and attempt to identify whether it is the coffee shop downstairs, or a rogue access-point in the building (I'm serious, it happened somewhere I worked!). Make sure that all laptops physically attached to the wired network have wireless services turned off (including 3G 'dongles' and Bluetooth). Run regular security scans on laptops to check that this is the case.
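As a sketch of what that probing looks like (the subnet is an example; older nmap spells -sn as -sP):

  nmap -sn 192.168.0.0/24   # ping sweep: list everything answering in the wireless zone
  nmap -O 192.168.0.23      # OS-fingerprint anything that shouldn't be there (needs root)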
Put simple services (like printing and possibly mail access) within the DMZ. Allow devices on the DMZ controlled access to the Internet, and then back in to your corporate gateways exactly as if they were coming in from the Internet. Knock specific holes, controlled by the strongest access control you have, through the inward-looking firewall for any apps that absolutely have to be accessed from mobile devices. Argue the case for blocking every single one, until you have been convinced that it is necessary and that appropriate controls are in place.
Review these holes regularly, and have strong procedures to track leavers and joiners. Ban, with the strongest penalties, the sharing of IDs and the revealing of PSKs to non-authorized users. Lock services to specific IDs using strong authentication, preferably with one-shot password devices.
Be prepared to use VPN for any really critical services, especially those containing private or critical data. Select your approved devices carefully, to make sure that they meet all the security requirements. If there are vulnerabilities known on your mobile device of choice, make sure you have appropriate AV software deployed and updated.
If you are paranoid, consider using glass coatings on the windows to control the leakage of the WiFi signal out of the building, but if you are that worried, you should probably not use wireless services at all. Work out how far your wireless networks spread outside of your controlled space, using normal devices and focused antenna as well. Show the controlling managers this, and demonstrate it as well.
And above all, if you value your business, JUST DON'T USE WIRELESS SERVICES. This should include wireless keyboards, and any future wireless USB technology. If the MD objects, put a reasoned argument that the very business itself is at risk if the network is compromised. And if you are over-ruled, either be prepared to give in, lodging an "I Told You So" letter somewhere in the business, or to resign on principle.
It is clear that the "Block everything, then allow only what's essential" principle applies here.
"cannot afford enough jets for the two ships"
The intention is that only one ship would be at sea at any one time, so why the need for aircraft for both? If, as expected, the delivery of the carriers is staggered, then once the Prince of Wales has finished its acceptance trials, Queen Elizabeth will be ready for its first R&R and minor refit.
Going by how the old Audacious-class Ark was run, the aircraft would fly off to a land-based airfield when the ship returned to its home port, and would only join again once the ship was back at sea and had passed its sea-worthiness trials.
You would need more than one ship's worth, but less than two, to account for aircraft maintenance cycles.
Oops, silly me.
I meant to say CDC SMD (Storage Module Device) drives, not SMB. How memory fades.
@Ocular Sinister. Experience tells me otherwise
When Dapper Drake (6.06) was the LTS release, by the time Hardy Heron (8.04) came along, many of the packages in the repository were functionally stable. This meant that you may get bug fixes, but you would probably not get a bump of the version.
If you were adventurous, you could add the 'backports' repository to the list of subscriptions, and get a select few packages at the same level as a more recent Ubuntu release.
As a result, even though dapper was still 'supported', it began to be very difficult to put .deb files from the Debian repository onto Dapper, because the prerequisite libraries would not be present. Ditto compiling up stuff from source.
Hardy does not appear to be quite so prone to this, now Lucid is available, but you can see it starting to happen, especially with third-party software like the BBC iPlayer.
I'm sure that if you joined the Ubuntu developer community, offering to make the backports repository more complete, you would be welcomed with open arms. But until then, the current developer community will be more interested in putting recent versions of the packages into the latest-and-greatest releases, not into the older ones. I myself would love to do this, but personal commitments do not leave me with the time to do it at the moment.
It's a shame, as I believe that ordinary users would be best served putting a LTS release on their systems and leaving it there for the lifetime of the system.
Strangely enough, I did a Windows XP to Windows 7 upgrade recently (on one of my kids' gaming rigs), and it was much easier than I expected, at least using a second disk and a parallel Windows 7 retail install to make a dual-booting system. I do not think I had to re-license anything. All the programs installed on the XP drive were identified and recognised, and ran without problems. These were mostly games, but did include Office.
Microsoft must be doing something right!
In the 70's, you could never mount / read-only. The ability to do this only came about when Sun implemented their diskless model, where all of the files that would be modified on the / partition (often the files in /etc such as passwd, utmp, wtmp, and mtab) were moved into /var, specifically so that / could be mounted read-only on diskless workstations. I'm a bit vague about Sun timelines (I was working with PDP11s and Bell Labs versions of UNIX at the time), but I would guess that this happened around 1983, a few years after Sun was set up, with the release of the Sun2 workstations.
In this model, / and /usr were remote read-only mounts, /var was a remote read/write mount specifically for that workstation, and /home was a read/write shared mount for user files.
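Expressed as the client's mount table, the model looked something like this (a sketch; the server name and export paths are invented):

  server:/export/root/client01   /      nfs  ro  0 0
  server:/export/exec/sun2       /usr   nfs  ro  0 0
  server:/export/var/client01    /var   nfs  rw  0 0
  server:/export/home            /home  nfs  rw  0 0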
@AC re. Sensible compromises
Drive letters were antiquated when MS used them in DOS 1.1!
UNIX already had a fully hierarchical filesystem years before Bill went to see IBM.
The concept of filesystems on separate partitions really goes back to the original Bell Labs V6 and V7 code for PDP11s, where the partition sizes were hardcoded into the device driver for RP disks (no on-disk partition tables there!), and where the smaller RK disks were barely large enough for / or /usr.
Each device could have a maximum of 8 partitions defined, and the definition of the partitions had to work with all drives of that type present in the system. IIRC, it was normal practice to make one partition span the whole device, two more cover half of the device each, and a further four cover a quarter of the device each. It was, of course, stupid to try to mount overlapping filesystems, or to use the wrong minor device, but this model gave flexibility.
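Pictorially, a common carve-up of one drive looked something like this (the minor device numbering is illustrative):

  minor 0:   |--------------- whole drive ---------------|
  minor 1:   |----- first half -----|
  minor 2:                          |---- second half ----|
  minor 3-6: |- 1/4 -| |- 1/4 -|    |- 1/4 -|  |- 1/4 -|

You mounted whichever set of minor devices gave the layout you wanted, and never two that overlapped.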
My old Systime 5000E (a PDP11/34E in Systime covers circa 1982) had 2x32MB CDC SMB disks, with a controller hacked to look like an RP controller with RP03 drives, with overlapping partitions of 1x32MB, 2x16MB and 4x8MB. I had / on 1 8MB partition (formatted to use just 6MB, with the last 2 MB used as swap space), /usr on another 8MB partition, and then used the remaining 16MB as a /user filesystem, which was equivalent to /home on a Linux or more modern UNIX system. There was no /var or /opt at that time, as Sun were only just thinking about diskless systems. A second drive had a single 32MB partition for the /ingres filesystem (which actually had the whole of the BSD 2.6 [for which I sadly do not have a copy of the tape] unpacked in it), and which contained the Ingres database code, and all of the defined databases.
It was the only real way to manage such systems. If you are really interested in knowing what was involved in setting up ancient UNIX systems, I suggest that you start here http://minnie.tuhs.org/PUPS/Setup/v7_setup.html, and then browse the rest of the UNIX Heritage Society's site.
BTW, I started on Version 6, although I have put the link in for Version 7 as that is regarded as the point where UNIX really started to fragment.
I prefer two partitions (but I am a UNIX sysadmin)
It's swings and roundabouts. I tend to use a separate home partition so that I can dry-run a new release while keeping access to all of my files in both releases.
Unfortunately, this is not a perfect solution, as quite often, the configuration files for all of the dozens of apps and utilities change between major releases. You often watch informational messages about configuration files being 'converted' to a new version, and find that it no longer works with the old OS. This broke the sound on my Thinkpad between 6.06 (Dapper) and 8.04 (Hardy) (both LTS releases).
I've never been satisfied that 10.04 is ready to switch to, because there are sound, display and suspend problems, so I am still running Hardy. One day, I will boot Lucid, update everything, and all my problems will be over, but I'm not holding my breath, and I don't want to switch away from LTS releases for my main systems.
Non-nuclear carriers are stupid and escorts are required
The problem with the current strategy is that the gas-turbine powered carriers have such a restricted range that they cannot really go anywhere without Replenishment At Sea (RAS or whatever it is called now). And if you reduce the escort fleet to enough to cover the operational carrier plus some in refit, what the hell is going to protect the tankers that are necessary to provide the fuel? How to cripple the British Navy? Sink all the RFAs.
And Lewis is assuming that all of the Navy is operating in the same place at the same time. What is needed to protect HMS Ocean, which has helicopters but no fixed-wing aircraft? Its speed is only about 20 knots IIRC, so it could be subject to attack by a conventional sub with some dash capability. It is possible that this ship may be deployed separately from the carrier.
We need a capable and relatively sizeable escort fleet, although possibly with slightly different capabilities, to provide some degree of flexibility. If all we can do is deploy the fleet in a fixed configuration as Lewis is suggesting, then it fixes the way it operates for the next 30 years or so. Not exactly the forward looking attitude Lewis says he is suggesting.
I agree that the Type 45s are about as relevant as the Type 82 of 40 years ago, of which all but one ship was cancelled. But the batch 3 Type 22 frigates proved themselves to be immensely flexible vessels, so a combination of these (with something like Aster), plus some ocean-going, gunboat-sized anti-pirate and fishery protection vessels, possibly large enough to operate a single simple helicopter like the old Wasp, is necessary. Equip them with some relatively heavy, rapid-reaction 30-60mm weapons to dissuade fast-boat pirates and some towed-array sonar, and fit a capable containerised AA weapon. Something not dissimilar to the Swedish upgraded Stockholm class.
And the radar picket vessels that Lewis talks about must be able to defend themselves, so they must be multi-purpose vessels. As they operate many miles from the fleet, they must have some anti-submarine detection capability, even if their main purpose is radar early warning. This assumes that they are necessary at all. The only real reason we had Type 42s (like Sheffield) for this role was that the through-deck cruisers (what we now call the current generation of [Harrier] carriers) were not large enough to operate any AWACS aircraft.
I've got one
and it is a decent enough phone which was a free upgrade from Orange on the monthly plan I pay for. I wanted an HTC Desire, but was not prepared to pay the £150 they wanted as an upgrade charge (I'm tight-fisted, I know).
The only serious problems with it that I can really point to are that 400x240 is really too small a resolution for browsing, although pinch-to-zoom does make it a bit more bearable, and that I had to change the settings on the GPS receiver in order to make it work at all.
Other minor niggles specific to the phone are that it requires a strong WiFi signal, and its WiFi performance is significantly worse than that of any of the laptops I have in the house. Also, the battery indicator is inaccurate: the other day it dropped from 50% to 15% (the low-battery warning point) in about 10 minutes while browsing the Android Market over WiFi. Two days' moderate use with data and WiFi exhausts the battery.
Finger marks are very obvious, and I end up polishing them off every time I use it.
Most of the Orange apps require connections via data services and will not work over WiFi.
I have other problems with it, but they are mostly Android-related. My previous phone was a Palm Treo 650, and the reliance on data services that Android imposes for pretty much everything bugs me. I had only a small data allowance on my phone plan, which I blew in about 10 minutes when I first got the phone. I ended up changing the APN just to stop it connecting, and then virtually nothing worked. Most apps that checked the state of the data connection would hang for 30-40 seconds when starting. My Palm had complete control of the data connection, and apps would start up just as quickly whether or not data services were enabled. As I said, this is an Android issue.
I find the array of settings that Android and Samsung offer difficult to navigate, but I guess that is inevitable when phones get so complex.
Other than that, it works as a phone, the audio and video performance is acceptable to good, and although it is slower than a Desire, you get used to it. I've yet to find an app that does not work. It'll do for a while.
I still miss the simplicity of the Treo, however.