
* Posts by Peter Gathercole

1734 posts • joined 15 Jun 2007

First fondleslab found in 1970s kids TV sci-fi gem

Peter Gathercole
Silver badge

@Ugotta

Seriously, a clipboard with buttons! No. I'm sure it was meant to be electronic.

0
0
Peter Gathercole
Silver badge
Happy

That's uncanny, although form-follows-function makes it inevitable

I used to watch The Tomorrow People; it was one of my favourite shows.

They did not need 3G or any cellular technology, because they had an alien mentor from a more advanced planet (the Trig) who provided access to a limited amount of advanced technology to augment the still-developing telepathic abilities of the group. They regularly used remote data-gathering and communication devices, so this proto-iPad probably did not need 3G, and may have used a near-field technology, as it was only used in their lab.

In addition, they had an advanced AI called TIM who coordinated all of the technology, although it was not portrayed as techno-magic, and there were definite limits to what they could do.

On a related note, wasn't the device Captain Kirk was forever using in ST-TOS, often handed to him by Yeoman Rand (gotta love those uniforms), an electronic one? I know it used a stylus, so it was probably more like a Newton than an iPad, but still.

4
0

Microsoft reinfects Chrome with closed video codec

Peter Gathercole
Silver badge
Stop

@Surprised, I'm not

The decoder license is only part of the picture. Look at the MPEG-LA license with regard to using the coder part for commercial content. There is a fee payment due on each item encoded for commercial use, and IIRC, it's not pennies.

I'm fairly certain that things like TV-on-demand, adverts, commercial presentations, pornography and even free videos on an ad-supported site (think Facebook and YouTube) can be considered as a commercial use, so there is much money to be extracted from content providers. In order for the providers to consider H.264 over a free codec, they have to see almost a clean sheet of browsers supporting it at no cost to the end users. Otherwise they will not see it as an expense worth paying.

It is interesting that WinXP is being avoided. Probably trying to provide more leverage to get end users to fork out for another Windows license.

I would love to see a system for recycling transferable XP Retail licenses from scrapped systems, rather than them disappearing into the smelter with the system case. Anybody any ideas about setting something like this up before they all disappear?

0
0

Sky loses pub footy case

Peter Gathercole
Silver badge

@lou

It all depends on the satellite. There are several different satellite systems in geosynchronous orbit, including Astra, Eurobird, Hotbird and Eutelsat amongst many others. Not all of their transmission footprints are the same; some are better in southern Europe, and some in the north/north west. For example, most of the working Astra 1 and 2 satellites cover most of northern Europe, but not southern Italy or Greece. Chances are the pub was using one of the other satellites which cover southern Europe, some of which are not encrypted and can be picked up using equipment with the right dish, even though it is not intended for a particular region.

It really is an issue with the content provider and the broadcasting company, as has been pointed out in other comments. Sky cannot prevent someone buying equipment to point at an out-of-region marginal satellite, even if it does overlap with their paid-for service.

1
0

Apple clips publishers' wings

Peter Gathercole
Silver badge
Unhappy

Unacceptable control

At what point will iFans finally understand that what Apple are doing is unethical? Surely we must be close to, or past, that tipping point.

21
2

Virgin Media kills 20Mb broadband service

Peter Gathercole
Silver badge

Clarity required in the article

Please remember that Virgin Media offer broadband services via ADSL as well as their cable infrastructure. This report is just for cable customers.

1
0

Scotland bans smut. What smut? Won't say

Peter Gathercole
Silver badge

Consenting adults

Is it the consent, or the adult part that you think protects you from prosecution?

God forbid that you engage in sex with a consenting partner aged 17 years and 11 months, and take some pictures, because although the act is legal, the pictures aren't!

2
0

The cost of not deduping

Peter Gathercole
Silver badge

There is a trust issue in the examples in the article

"electronic copies of their HR contract and pension scheme guide"

If there were just a single copy of this type of information, then employees would have trouble making sure that what they agreed to when they started a job was the same as the current single copy. You would end up needing an audit trail for changes to your single copy, with its own complexity and storage issues. People keep their own copies as a point-in-time reference, to guard against change in the primary copy.

It's funny. Lotus Notes, which was originally sold as a document management system, was selling this "store a pointer, not the document" idea 15 or more years ago. This aspect of its use has fallen by the wayside, but I'm sure that it is still in there somewhere. Mind you, using Notes as the glue for a mobile workforce (as IBM does) requires multiple copies to be stored on the hard disks of all of the laptops anyway, so the de-duplication benefits are negated.

Another thing is that you don't want to de-dupe your backup, at least not completely. You must have multiple copies on different media, because media fails. Enterprise backup tools such as TSM work using an incremental-forever model, meaning that by default only one copy of each version of a file is backed up, but they then have specific techniques to force more than one copy of a file to be kept on separate media.

I know I am a born sceptic, but I must admit to being unconvinced by the block-level de-duplication that is being pushed by vendors. Maybe I have just not seen the right studies, or maybe I'm already de-duplicating a lot of what I do using techniques like incremental-forever backups and single-system image deployment.

Maybe I'm just a neo-luddite. Who knows.

1
0

India's cheap-as-chips delayed by cash spat

Peter Gathercole
Silver badge

£20 does seem a bit optimistic

but if you can build a basic mobile phone, with rechargeable battery, screen and keyboard, and sell it (presumably with some profit) in the UK for under £30 (as seen at http://www.reghardware.com/2010/12/03/ten_essential_cheap_voice_phones/), then why should you think it is impossible? Also look at the 7" Android epad and apad devices that are selling on eBay, presumably at a profit to the supply chain, for less than £80 at the moment.

I know that the phone companies probably make a bit of a loss on the phones, hoping to recoup it from the phone charges, but I cannot believe they subsidise it by a significant amount.

I doubt that the Indian government is going to insist on a device capable of running Windows 7 with 3D high performance graphics, a multi-megapixel display and a terabyte hard disk, and they will probably drive the profit element down as far as possible. If they are assembled in India, they may also be able to fudge or hide the labour costs, in the same way that they subsidise the railways.

What do you actually need for basic web surfing, email and a bit of text processing? Probably a <500MHz ARM, 256MB memory, 2GB backing store, a basic keyboard and mouse (assuming you are not going to use a touch screen) and a display of 640x480 or so. If the web surfing is intended to be government information (voting, census, tax etc) then they can control the web content and thus the requirements of the display.

So, impossible at £30? Maybe not.

0
0

Bot attacks Linux and Mac but can't lock down its booty

Peter Gathercole
Silver badge

...and more

There are many more places than just the .bashrc (assuming you're using bash, of course; I prefer the AT&T software toolbox ksh myself). Both KDE and Gnome (and most other X11 window managers as well) have user startup directories and rc files to allow attacks on systems accessed with a GUI, and you would, of course, have the normal PATH and LD_LIBRARY_PATH attack vectors that could be used to subvert commands that people use all the time, and there are many more.

Linux is not immune from attack, it's just that an attack needs to do more to really pwn it. For instance, if a user has iptables configured to control inbound and outbound traffic on a Linux system (assuming that the user does not run everything as root), you would have to trick the user into sudo-ing a command, or otherwise obtain escalated privileges, to alter the configuration or turn it off, unlike most Windows systems.

There is no such thing as a totally secure OS, it's just more difficult to mess with Linux.

The OSX statistics in the article are a surprise, however.

5
0

Ubuntu - yes, Ubuntu - poised for mobile melee

Peter Gathercole
Silver badge
Happy

Flash on Linux

I know that this is not a help forum, but I recently found out something revealing with regard to Flash in Firefox on Linux.

I was puzzled by the fact that, on the same hardware, Flash appeared to run much faster on a new Ubuntu install than on a system that had been upgraded. The same was true on a new install using a previous home filesystem.

I found out that there appears to be a Firefox quirk left over from a previous way of installing Flash, which ended up installing a shared object called libflashplayer.so in the ~/.mozilla/plugins directory, which overrides the version of Flash installed system-wide. This meant that even though I had Flash 10.something installed, the properties shown in a Flash window showed 9.something. I even found that renaming the file to libflashplayer.so.save in the plugins directory still caused it to be picked up.

This screwed up BBC iPlayer and many other sites that checked the version of Flash installed, and it had puzzled me for a long time.

Deleting the file completely suddenly made Flash work sooooo much better. My daily-use system is a ThinkPad T30 (2GHz Mobile Pentium 4) running Hardy Heron at the moment (I'm still having problems with KMS, suspend and the ATI 7500 Mobility graphics adapter on Lucid, and I only want to use LTS releases), and even this is able to make a passable attempt at most YouTube videos now.

I have another dual-core Pentium E2200 system, which I think is clocked at 2.2GHz, running Lucid Lynx, and that manages fullscreen Flash without problems after similar treatment.

I think that everybody who experiences slow Flash in Firefox may want to check whether they have something similar.
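A quick way to check is sketched below (Python for brevity, assuming the per-user plugin directory is ~/.mozilla/plugins as described above); it just reports any left-over libflashplayer.so that could shadow the system-wide copy.

```python
#!/usr/bin/env python3
# Minimal sketch: report any per-user Flash plugin that could shadow the
# system-wide one. The ~/.mozilla/plugins path is the conventional location
# described above; adjust it if your profile lives elsewhere.
from pathlib import Path

plugin_dir = Path.home() / ".mozilla" / "plugins"

if not plugin_dir.is_dir():
    print("No per-user plugin directory - nothing can shadow the system Flash.")
else:
    stale = sorted(plugin_dir.glob("libflashplayer.so*"))
    if stale:
        print("Possible stale per-user Flash plugin(s):")
        for f in stale:
            print("  ", f)
        print("Consider deleting them (not just renaming) and restarting Firefox.")
    else:
        print("No libflashplayer.so found in", plugin_dir)
```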

7
0

No court order against PlayStation hackers for now

Peter Gathercole
Silver badge

Re: Withdrawal of service

Yes, but Sony would almost certainly have put a "Terms and conditions can change, see the web site at ..." clause in the agreement, making it the customer's responsibility to make sure that they were still in compliance.

0
0
Peter Gathercole
Silver badge
Unhappy

Tom 7

It is quite clear in the Sky agreement. You do indeed own the box once you have passed the 1 year initial agreement. But without a Sky subscription, no matter which box you have, all you can do with it is watch free-to-air channels as they are broadcast in standard definition (with the possible exception of the HD BBC channels).

I recently had my Sky subscription dropped because my bank had cancelled the direct debit (long story), and I could not see Sky 1, Sky 2, Living, Sky movies or even Dave (which is available free-to-air), or any of the kids or documentary channels. Can't remember if the BBC HD channels worked. Just what you would get if you took the Sky card out (although the message on the encrypted channels was a bit more polite). What's even worse, I could not see anything stored on the hard-disk, even if it was from a Free-To-Air channel.

This makes the Sky HD box useless as a recorder without a subscription, even for FTA channels.

What is more interesting, as part of any 'upgrade' the Sky installer will probably want to take YOUR old box away. I'm not sure if this is in the upgrade agreement or not (I've not done one; I got my Sky HD box off eBay). This means that if you later want Sky Multiroom, you end up paying for another Sky box to replace the one you previously had!

I'm wondering whether Sky's right to restrict the recording function for FTA channels on Sky+ boxes without a subscription could be challenged. Anybody any thoughts?

0
0

Police reject Labour MP's call for Bristol-wide DNA test

Peter Gathercole
Silver badge
Dead Vulture

"...or happened to be female"

Presumably, if they already have a sample to compare with, they must already know that it was a man. It is quite simple to tell whether a suitable DNA sample comes from a man or a woman by the presence or absence of a Y chromosome.

Flame warning - Of course, if there was already a national DNA register......

0
3

Seagate sees big drive capacity jump coming

Peter Gathercole
Silver badge
Unhappy

Longevity of data

is really what worries me. We've had a relatively golden age for the last 15 years or so, where any media that you could write to is probably still readable now. I can certainly still read CDs that I burned in the '90s, although that probably depends on whether they have been left in direct sunlight.

I recently had a requirement to read some 'spinning rust' from systems I ran in the same time frame (one was a 1.1GB Quantum disk, another was a ~860MB Seagate disk), and the data was still there, still readable.

I have, however, found that older media, particularly floppies (5.25in and 3.5in) from the same period or older, are very much more of a problem, with an almost 50% failure rate among those that I tried (including a whole set of precious and irreplaceable BBC Micro disks).

I worry about how long an archived MLC or even SLC flash drive will remain readable after being put on the shelf. I have already had various flash cards fail on me, which is not a problem for cards used for transient information (holding photos until they are loaded onto a computer, or copies of ripped music or video that is also held elsewhere). Throw them away, buy another one, and reload the data if applicable.

But this would be more of a problem if flash was being used as the primary repository of the information.

I'm not sure that disk is the correct solution either (especially as old style SCSI is pretty much dead, and EIDE interfaces disappear from new systems, replaced by SATA, and SAS and other serial technologies), but I predict that it will be more useful for archive storage than Flash memory. I'm purposefully ignoring tape, as this is now far too expensive for ordinary people, even though the remaining tape and drive manufacturers have a roadmap for data longevity.

Looks like we are fated to continue to re-write our important data forever as we move away from media with any significant lifetime. I think we will look back to days when books were on paper, photographs were on film, and music was on vinyl, all with a lifetime measured in decades, with some nostalgia.

1
0

Intel touts 'Sandy Bridge' video chops

Peter Gathercole
Silver badge

@Matt

I know that you wouldn't necessarily put one of these in every system. I was wondering, if the encryption keys are in the Sandy Bridge processor, and those keys were mandated by the content provider's encryption, how you would use another processor/graphics card combo.

It's probably not actually going to happen, I was seeing whether anybody would bite to start a discussion.

0
0
Peter Gathercole
Silver badge

Intel Insider?

So the processor will run a service to allow streaming of video content without the OS being involved? Because this is what the article appears to say!

I think that this is more likely to be media encryption keys locked in the processor, so those nasty Open Source people can't hack them to allow the content to be 'stolen' while it passes through the OS layers. This would enable the media to be encrypted all the way from the server on the Internet to the graphics hardware, and thence on to the display device. Sounds like Intel and the content providers got tired of waiting for the TCG to deliver bare-metal-to-application system trust, so have bypassed the whole OS stack, and large parts of the system hardware.

This does pose the question of what happens if you want to use better graphics hardware than Intel provide. Still, I'm just speculating here.

1
0

Who will rid me of these obsolete PCs?

Peter Gathercole
Silver badge

Unexpected results

I'll try to dig out a reference, but in the last year, one of the UK magazines (or it might have been the UK PC World online magazine) did some testing and found that putting overspec'd power supplies in systems actually reduced the power consumption. So, if you had a system requiring 450W, putting in an 800W power supply resulted in less power used than a 500W power supply in the same system. They published the measured consumption figures, and these showed a considerable difference.

It was reasoned that a power supply is most efficient towards the middle of its rated capacity, and efficiency falls off as you reach the limit. In addition, the power supply is more likely to continue to cope as it ages.
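As a back-of-envelope illustration of that reasoning (the efficiency figures below are assumptions for the sake of the arithmetic, not the magazine's measurements):

```python
# Assumed efficiencies: better in the middle of the rating, worse near the limit.
dc_load_w = 450.0            # what the system actually draws from the PSU

eff_500w_near_limit = 0.78   # 450 W on a 500 W supply (90% of rating)
eff_800w_mid_range = 0.87    # 450 W on an 800 W supply (56% of rating)

wall_500 = dc_load_w / eff_500w_near_limit   # power drawn at the wall
wall_800 = dc_load_w / eff_800w_mid_range

print(f"500 W supply: ~{wall_500:.0f} W at the wall")
print(f"800 W supply: ~{wall_800:.0f} W at the wall")
print(f"Saving:       ~{wall_500 - wall_800:.0f} W")
```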

My 24x7 firewall, which is currently an AMD K6-II (remember those?) clocked at 550MHz, only consumes about 85W measured with an in-line consumption meter, so older kit really does consume less, and can use less than a 100W filament light bulb (and my 2GHz P4 T30 ThinkPad only uses about 45W, even when charging at the same time as it is running). My kids' recent gaming rigs draw more like 500W, though.

Don't think I would like to use the K6 system as my workstation, however.

1
0
Peter Gathercole
Silver badge

I can only comment on the UK

and this inverse exponential is how I was told to run the residual value of my asset register by two different accountants.

0
0
Peter Gathercole
Silver badge

Only in an LLU area

Even though I would qualify due to my package, I could not switch to Sky broadband, as it is not available where I live. I can buy the paid service from Sky, but this is delivered using BT Wholesale, just like every other provider's in the area.

0
0
Peter Gathercole
Silver badge
Unhappy

Windows license

You also have very restrictive conditions on the Windows license. Unless you can pass on all the documentation and original media, the Windows EULA does not allow you to transfer the license. And what good to lower-income households are computers without Windows? (Yes, I am a Linux advocate, but I am realistic enough to know that most people currently don't want Linux, unfortunately.)

I'm sure that this is often conveniently overlooked, but any organization involved in re-deploying old systems will not risk crossing Microsoft and their lawyers, and will avoid most old kit unless they are putting Linux on it.

1
0

Small biz calls for end date on enhanced 17.5% VAT

Peter Gathercole
Silver badge
Stop

@JimC, it's not quite on profit, but Value Add

You've pretty much repeated what I said, but at no point is it actually related to any profit (in the tax sense) you might make. VAT is related to sales and purchases, which I do admit are INDIRECTLY related to profits. In any case, your VAT-registered entity is not paying, its customers are, and you are acting as the tax collector.

If you have ever filled in a VAT form, the calculation goes (this is simplified, because I am not considering cross-border VAT) as follows:

Net VAT = VAT charged on sales minus VAT paid on purchases

They also want to know your gross sales for the period and the value of any purchases, but these do not take any part in the calculation, they're just there to allow some form of sanity and fraud checking.

It's that simple, and if you think it through, you as an entity registered for VAT are not paying any VAT at all on your purchases, as you offset it against the VAT you charge (it's offset at the point of paying it to HMRC, not actually paid and claimed back). Your customers (who might also be VAT registered and passing it on) are paying it to you, and this recurses down until you reach someone who is NOT VAT registered, and they end up actually paying without any way of offsetting it.
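A worked example of that calculation, with made-up figures and a 17.5% rate (ignoring cross-border complications):

```python
VAT_RATE = 0.175

sales_net = 40_000.00        # net sales in the quarter (made-up figure)
purchases_net = 15_000.00    # net VAT-able purchases in the quarter

vat_charged_on_sales = sales_net * VAT_RATE        # collected from customers
vat_paid_on_purchases = purchases_net * VAT_RATE   # paid out to suppliers

net_vat_due = vat_charged_on_sales - vat_paid_on_purchases

print(f"VAT charged on sales:  {vat_charged_on_sales:9.2f}")   # 7000.00
print(f"VAT paid on purchases: {vat_paid_on_purchases:9.2f}")  # 2625.00
print(f"Net VAT due to HMRC:   {net_vat_due:9.2f}")            # 4375.00
# Everything handed over came from customers; the VAT paid on purchases is
# simply offset, so the registered business is broadly neutral.
```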

When I was involved in running a company, I regarded it as a financially neutral operation, although I did begrudge the paperwork. The only thing that was not neutral is that, if you paid your VAT quarterly, you could stuff the money in an interest-generating account until you needed to pay it to HMRC.

The point I was trying to make is that you don't actually have to be a very big business to have to be registered for VAT, and if you are, it is neutral to you, although not to your customers. £50,000 might appear a lot, but if you have four staff being paid full time at national minimum wage, you must be a good way towards the threshold just to be able to pay them. Any form of shop or pub almost certainly has more than £50,000 annual turnover, as this equates to less than £1,000 per week.

As a result, your competitors are probably also registered for VAT, and in the same boat. It's only if you are competing against someone VERY small (under £50,000 PA turnover) that you will be at a disadvantage.

0
0
Peter Gathercole
Silver badge

Only affects very small businesses

probably with a turnover of £50,000 or less. Over that amount, businesses have to register for VAT, and charge it on their services, and are allowed to claim back the VAT they pay on any business purchases against what they collect (yes, businesses acting as tax collectors for the government).

The main thing it will do, however, is make their prices higher, as they will have to increase the amount of VAT they charge, but this will also be the same for their competitors. It will also force them to update the processes they use to work out the VAT. This should not be too hard, as they have had practice at changing the VAT rate over the last two years.

The FSB do a good job on behalf of small businesses, but this time I believe that they are saying something just so that they are not silent on the matter.

1
1

Garmin tells iPhone users where to go

Peter Gathercole
Silver badge

May be free

but if I could still get updates (which are no longer available, the Treo having been dropped as a supported device), I would rather use the TomTom Navigator 6 that I had running on my Palm Treo 650 with an external Bluetooth GPS device than Google Navigator on my Samsung Galaxy (the Treo really was a smartphone in its time).

The problem is that the Google app does not provide enough information with regard to speed, time to destination and distance to destination. All I appear to get is time to destination and distance to the next change in navigation (i.e. junction), and even this appears quite arbitrary when in the country.

For instance, on the A396, there is a tight left turn in Exebridge which is counted as a change in navigation, so I can tell how far it is to that even though it is the same road, rather than when I reach the end of the A396. Not clever. And I still find it annoying that it calls the road things like the "A three thousand three hundred and ninety one", rather than a-three-three-nine-one. Reading the street names is clever, though.

I also miss setting the journey up in advance, rather than getting in the car and waiting for it to work out where it is before entering the destination. One time I was in the outskirts of a city, and had to detour due to a closed road, and it did not re-plan the route until I had managed to nearly get to my destination.

Never mind, hopefully Google will update it sometime to make it more usable.

0
1

Ford cars get draconian parental controls

Peter Gathercole
Silver badge

Ford Popular

The follow-on vehicle was a 650cc Ford Angular - sorry, Anglia, immortalised by being one of the first police panda cars.

This could not go much above 60 either, and radios were really a luxury add-on, as was sound-proofing of the engine compartment.

My first car was a second- or third-hand top-of-the-range 1976 Vauxhall shove-it - sorry, Chevette GLS, which had semi-alloy wheels (steel rims, alloy centres), wide(r) tyres, velour seats, sound-proofing, body styling trim and (shock) a heated rear window all as standard, but no radio. A decent stereo radio-cassette was one of the first things I fitted, though, even though I had to dismantle half of the dashboard to get it in and drill a hole for the aerial in the wing.

It would do 80 downhill with a following wind, though!

0
0

Novell's Microsoft patent sale referred to regulators

Peter Gathercole
Silver badge
Unhappy

...keep to yourself

The reason why consortia get involved with this type of thing is to prevent exactly what the OSI are trying to do.

By definition, a consortium cannot be a monopoly (which implies only ONE controlling interest), so the monopoly legislation in western countries cannot apply.

It is possible that you might be able to prove a cartel, but not at this stage of the proceedings, as cartels are normally challenged at the point where they fix prices or control access to a resource.

I first noticed this type of thing with the cross-licensing of IP in the TCPA (now the TCG), done to prevent monopoly regulators from looking too closely at the end-to-end DRM in the Trusted Platform, which threatens FOSS on the very computing systems we use.

I suspect that you could probably quote the MPEG-LA and H.264 as another example of consortia controlling a technology to avoid allegations of monopoly.

1
0

UK.gov relaxes patent application process

Peter Gathercole
Silver badge
Stop

I initially thought this,

but I changed my mind when I considered what the article actually said.

It is not eliminating the need to provide searches, which would be disastrous, but allowing an application to the EPO to automatically use the previous searches provided for the initial UK patent application, without needing to re-submit those searches to the EPO.

This will reduce the paperwork, and thus the cost to the applicant, without seriously reducing the protection. This would appear to me to be quite sensible.

The only downside I can see is that the UK searches will not have been against the EPO records, but I guess that it would still be necessary to perform those.

0
0

Ubuntu Wayland: Shuttleworth's post-Mac makeover

Peter Gathercole
Silver badge
Boffin

@ricegf2 - Posts after my own heart

I could not agree more with what you are saying.

Some people in this comment trail have been saying that the names of the UNIX/Linux filesystems are cryptic. This is not the case, as they all have meaning, although like all things UNIX, the meaning may have been lost a little in the abbreviation. I will attempt to shed some light on this, although this will look more like an essay than a comment. Please bear with me.

Starting with Bell Labs UNIX distributions up to Version/Edition 7, circa 1976-1982.

/ or root was the top-level filesystem, and originally had enough of the system to allow it to boot, so /bin contained all of the binaries (bin - binaries, geddit?) necessary to get the system up to the point where it could mount the other filesystems. It included the directories /lib and /etc, which I will mention in more detail later.

/usr was a filesystem, and originally contained all of the files users would use in addition to what was in /, including /usr/bin, which contained binaries for programs used by users. On very early UNIX systems, user home directories were normally present under this directory.

/tmp is exactly what it says it is, a world writeable space for temporary files that will be cleaned up (normally) automatically, often at system boot.

/users was a filesystem used by convention adopted by some Universities as an alternative for holding the home directories of the users.

/lib and /usr/lib were directories used to store library files. The convention was very much like /bin and /usr/bin, with /lib used for libraries required to boot the system, and /usr/lib for other libraries. Remember that at this time, all binaries were compiled statically, as there were no dynamic libraries or run-time linking/binding.

/etc quite literally stands for ETCetera, a location for other files, often configuration and system-wide files (like passwd, wtmp, gettydefs etc. (geddit?)) that did not merit their own filesystem. With all configuration files, there was normally a hierarchy, where a program would use environment variables as the first location for options, then files stored in the user's home directory, and then the system-wide config files stored in the relevant etc directory (more on this below).

/dev was a directory that contained the device entries (UNIX always treats devices as files, and this is where all devices were referenced). Most files in this directory are what are referred to as "special files", and are used to access devices through their device driver code (indexed with Major and Minor device numbers) using an extended form of the normal UNIX filesystem semantics.

/mnt was a generic mount point used as a convenient point to mount other filesystems. It was normally empty on early UNIXes.

When BSD (the add-on tape to Version 6/7, and also the complete Interdata32 and VAX releases) came along (around 1978-1980), the following filesystems were normally added.

/u01, /u02 ..... Directories to allow the home directories of users to be spread across several filesystems and ultimately disk spindles (this was by convention).

/usr/tmp A directory sometimes overmounted with a filesystem used as an alternative to /tmp for many user related applications (e.g. vi).

I think that /sbin and /usr/sbin (System BINaries, I believe) also appeared around this time, as locations for utilities that were only needed by system administrators, and thus could be excluded by the path and directory permissions from non-privileged users.

Things remained like this until UNIX became more networked with the appearance of network capable UNIXes, particularly SunOS. When diskless workstations arrived around 1983, the filesystems got shaken up a bit.

/ and /usr became read-only (at least on diskless systems)

/var was introduced to hold VARiable data (a meaningful name again), and had much of the configuration data from the normal locations in /etc moved into places like /var/etc, with symlinks (introduced in BSD with the BSD Fast Filesystem) allowing the files to be referenced from their normal location. /usr/tmp became a link to /var/tmp.

/home was introduced and caught on in most UNIX implementations as the place where all home directories would be located.

/export was used as a location to hold system-specific filesystems to be mounted over the network (read on to find out what this means).

/usr/share was also introduced to hold read-only non-executable files, mainly documentation.

About this time the following were also adopted by convention.

/opt started appearing as a location for OPTional software, often acquired as source and compiled locally.

/usr/local and /local often became the location of locally written software.

In most cases for /var, /opt and /usr/local, it was normal to duplicate the bin, etc and lib convention of locating binaries, system-wide (as opposed to user-local) configuration files and libraries, so for example a tool in /opt/bin normally had its system-wide configuration files stored in /opt/etc, and any specific library files in /opt/lib. Consistent and simple.
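As a small illustration of that lookup hierarchy (environment variable first, then a dot-file in the user's home directory, then the system-wide etc directory alongside the binaries), here is a minimal sketch; "mytool" and its file names are hypothetical.

```python
import os
from pathlib import Path

def find_config(tool="mytool"):
    """Return the first configuration file found, using the conventional
    lookup order described above."""
    candidates = []

    # 1. An environment variable wins outright.
    env = os.environ.get(f"{tool.upper()}_CONF")
    if env:
        candidates.append(Path(env))

    # 2. A per-user dot-file in the home directory.
    candidates.append(Path.home() / f".{tool}rc")

    # 3. The system-wide etc directory alongside the binaries (/opt/bin -> /opt/etc).
    candidates.append(Path("/opt/etc") / f"{tool}.conf")

    for path in candidates:
        if path.is_file():
            return path
    return None  # fall back to built-in defaults

print("Using configuration from:", find_config() or "built-in defaults")
```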

The benefit of re-organising the filesystems into read-only and read-write filesystems was so that a diskless environment could be set up with most of the system related filesystems (/ and /usr in particular) stored on a server, and mounted (normally with NFS) by any diskless client of the right architecture in the environment. Different architecture systems could be served in a heterogeneous environment by having / and /usr for each architecture served from different directories on the server, which could be a different architecture from the clients (like Sun3 and Sparc servers).

/var also became mounted across the network, but each diskless system had their own copy, stored in /export/var on the server, so that things like system names, network settings and the like could be kept distinct for each system.

/usr/share was naturally shared read-only across all of the systems, even of different architectures, as it did not contain binaries.

This meant that you effectively had a single system image for all similar systems in the environment. This enabled system administrators to roll out updates by building new copies of / and /usr on the server, and tweaking the mount points to upgrade the entire environment at the next reboot. Adding a system meant setting up the var directory for the system below /exports, adding the bootp information, connecting it to the network, and powering it up.

And by holding the users home directories in mountable directories, it enabled a user's home directory to be available on all systems in the environment. Sun really meant it when they said "The Network IS the Computer". Every system effectively became the same as far as the users were concerned, so there was no such thing as a Personal Computer or Workstation. They could log on on any system, and as an extension, could remotely log on across the network to special servers that may have had expensive licensed software or particular devices or resources (like faster processors or more memory), using X11 to bring the session back to the workstation they were using, and have their environment present on those systems as well.

As you can see, this was how it was pretty much before Windows even existed.

Linux adopted much of this, but the Linux newcomers, often having grown up with Windows before switching to Linux, have seriously muddied the water. Unfortunately, many of them have not learned the UNIX way of doing things, so have never understood it, and have seriously broken some of the concepts. They don't understand why / and /usr were read-only, so ended up putting configuration files in /etc, rather than in /var with symlinks. They have introduced things like .kde, .kde2, .gnome, and .gnome2 as additional places for config data. And putting the RPM and deb databases in /usr/lib was just plain stupid, as it makes it no longer possible to make /usr read-only. They have mostly made default installations have a single huge root filesystem encompassing /usr and /var and /tmp (mostly because of the limited partitioning available on DOS/Windows-partitioned disks). They have even stuck some system-wide configuration files away from the accepted UNIX locations.

So I'm afraid that from a UNIX user's perspective, although many of the Linux people attempt to do the 'right thing', they are working from what was a working model, broken by their Linux peers. Still, it's better than Windows, and is still fixable with the right level of knowledge.

I could go on. I've not mentioned /proc, /devfs, /usbfs or any of the udev or dbus special filesystems, or how /mnt has changed and /media has appeared, nor have I considered multiple users, user and group permissions, NIS, and mount permissions on remote filesystems, but it's time to call it a day. I hope it enlightened some of you.

I have written this from memory, based on personal experience of Bell Labs UNIX V6/7 with the BSD 2.3 and 2.6 add-on tapes, BSD 4.1, 4.2 and 4.3, AT&T SVR2, 3 and 4, SunOS 2, 3, 4 and 5 (Solaris), Digital/Tru64 UNIX, IBM AIX and various Linuxes (mainly Red Hat and Ubuntu), along with many other UNIX and Linux variants, mostly forgotten. I may have mixed some things up, and different commercial vendors introduced some things in different ways and at different times, but I believe that it is broadly correct, IMHO.

1
0
Peter Gathercole
Silver badge

Re. "X was a horrible project"

I agree that X was designed for a different environment than personal computers running a GUI on the same system, but to brand it a "horrible project" just goes too far.

Because of its origins (in academia), it would be fair to say that X10 and X11, particularly the client side, were among the first "Open Source" projects (along with the original UNIX contributed software products, many of which pre-date GNU). As such, X helped define the model that enabled other open source initiatives to get off the ground. But it suffered teething problems like all new methods, particularly when it got orphaned as the original academic projects fell by the wayside.

What happened with XFree86 and X.org was messy, but ultimately necessary to wrest control back from a number of diverging proprietary implementations by the major UNIX vendors (X11 never did form part of the UNIX standards). I don't fully understand your comment about reducing bloat, unless you mean modularising the graphics display support so you only have to load what you need, rather than building specific binaries for each display type, but that is just a matter of the number of display types that needed to be supported. X11R5 and X11R6 were actually lightweight, even by the standards of X.org.

But I have said this before, and I will say it again. If you don't understand what X11 is actually capable of, then you run the risk of throwing the baby out with the bath water. It would be perfectly possible to keep X11 as the underlying display method, and replace GNOME as the window manager (much as Compiz does, and does quite well). This is one of its major strengths, and would keep us die-hard X11 proponents happy. If you use one of the local communication methods (particularly shared memory) you need not necessarily have a huge communication or memory overhead, especially if you expose something like OpenGL at the client/server interface. It's higher than having the display managed as a single monolithic entity, but I don't believe that any of the major platforms do that. There is always an abstraction between the display and the various components.

Having tried Unity and the 10.10 netbook edition for several weeks on my EeePC 701, surely one of the targeted systems (small display, slow processor), I eventually decided that it was COMPLETELY UNUSABLE at this level of system. The rotating icons on the side of the screen were too slow, and the one you needed was never visible, leading to incredible frustration as you scrolled through the available options, trying to decode what the icons actually mean while they fly up and down the screen. It appeared very difficult to customise, and I begrudged the screen space it occupied. My frustration knew virtually no bounds, and it's lucky that the 701 did not fly across the room (note to self - check out anger management courses) on several occasions.

I reverted to GNOME (by re-installing the normal desktop distro), and my 701 is now usable again, and indeed quick enough to be used for most purposes including watching video.

I know I am set in my ways, but I can do almost everything soooo much faster in the old way. I fail to see that adding gloss at the cost of reduced usability and speed helps anybody apart from the people easily dazzled by bling. To put this in context, I also find the interface on my PalmOS Treo much easier to live with than Android on my most recent phone.

I'll crawl back under my rock now, but if Unity becomes the main interface for Ubuntu, I will be switching to the inevitable Gnubuntu distribution, or even away from Ubuntu completely.

5
0

Electric forcefield space sailing-ship tech gets EU funding

Peter Gathercole
Silver badge

I can't see what prevents

the wires just folding up in front of the payload. I read that the whole thing spins, so centripetal force will keep the wires extended, but unless the force is very even, surely the pseudo-disk will start to precess as soon as the force becomes uneven (such as when tacking), and as the wires will not be rigid (at 25 microns, they could not be), they will just get wrapped up.

I suppose that you could say that the electric field is what is being 'struck', not the wires themselves, but I think that the small push would be transmitted back to the closest wire deforming it away from the disk.

In addition, spinning the construct would be an interesting exercise, as you would have to take into account conservation of angular momentum, and spin relatively fast when starting to deploy it and slow down as it extends outward, again, because the wires are so thin they cannot be rigid. And twisting it to tack...

The mathematics is beyond me (at least, without getting the text books out), so this is just a gut feeling.

0
0

Google revives ‘network computer’ with dual-OS assault on MS

Peter Gathercole
Silver badge

@poohbear. Before that, even.

You really need to look at X terminals from the likes of NCD and Tektronix (and Digital, HP and IBM as well) in the late '80s and early '90s. These were really thin clients using X11 as the display model.

AT&T Blit terminals (5620, 630 and 730/740) may also fit the bill, from about 1983. You might also argue that Sun diskless workstations (circa 1982/3) were actually thin clients, but that may be taking things a bit far.

2
0

UK government looks for 500MHz spectrum

Peter Gathercole
Silver badge
Coat

This caught my eye,

and I immediately thought what a ZX Spectrum running at 500MHz (as opposed to 3.75 MHz) would be able to do.......

I'll get my jacket myself.

6
0

Apple patents glasses-free, multi-viewer 3D

Peter Gathercole
Silver badge

@stucs201 re: one step forwards.

I'm not sure what you are getting at. I was responding to the comments that stated that some people get headaches, which they do, and I was trying to explain why that was.

I don't buy into 3DTV or other 3D displays of any type at the moment or for the foreseeable future, but that is my opinion, and I don't try to force my opinions on anybody. I may discuss them, especially with people who I believe have not seen all sides of a problem, but that is what dialogue and conversation is all about.

0
0
Peter Gathercole
Silver badge

@Vic

Nothing I have said indicates that stereoscopic vision is not a major part of depth perception, and I don't think that anybody would think that I said anything different.

It's just not the only thing that matters. You can ignore that the other effects exist if you want to, but that would not alter the fact that they do exist.

I did look it up before posting. Maybe you would like to look up the monocular cues Accommodation and Blurring, both of which are real, documented features of depth perception, along with Motion Parallax. Wikipedia will appear top of the search list, but it is not the only reference for this on the 'net.

The universe is vast and complex. Anybody believing that we fully understand any part of it is either a fool, or deluding themselves.

1
1
Peter Gathercole
Silver badge
Stop

@Vic

Stereoscopic images are only part of the whole picture (pun intended).

There are two other features of depth perception that are also important. One is the act of focusing the eye for the correct distance, and the other is that each eye moves to focus the object of interest on the center of the retina, where there are more light receptors.

If you just use separate images beamed into each eye, you can have the correct image for an object close to you. The brain says that it should be focusing close, but in actual fact you need to focus on the screen further away. Ditto the depth-related parallax issue of the two eyes. These are what cause people to have headaches.

If you want a demonstration, try the following.

Hold one hand about 6" in front of your right eye. Hold the other at arm's length in front of your left eye. Try to focus on both hands at the same time. Don't do this for too long, or else you will get dizzy.

Then find someone you know quite well, hold your index finger about 2' from their face and ask them to look directly at it. Watch their eyes, and move your finger to about 6" from their face. You will see that they go slightly 'cross-eyed'.

Stereoscopic vision, together with both of these other effects, is required for the brain to correctly determine depth. If you only get one of them, some people can ignore the fact that the others are missing, and some can't.

Oh, and one more thing. With proper depth perception, moving your head will alter the image due to parallax (again), but current 3D TVs cannot do this. If the eyes are tracked correctly, this system *could* be able to do it, but I agree with most people that this is unlikely using any tech we are likely to see soon.

2
1

97% of INTERNET NOW FULL UP, warn IPv4 shepherd boys

Peter Gathercole
Silver badge

@Anon 16

Currently, most ISPs share a pool of IP addresses between their users, assuming that they will not all be online at the same time. This allows you to have a unique IP address for all the systems behind your NAT connection for the time you are connected. If you have this, then using a dynamic DNS service will work to make your systems locatable, and port redirection will allow multiple inbound sessions to be directed to different servers behind your NAT system.

Unfortunately, the world is moving to always-connected devices, so this model is breaking down.

When DNS was first designed, it included the possibility of hosting well-known services (WKS) in a map that could be queried. This was to allow you to provide information such as port numbers for particular services. Since that time, everyone has got used to fixed port numbers for things like http (80), https (443), ssh (22) and the like, so WKS has been ignored.

Using fixed port numbers makes it difficult to NAT several people's services to a single IP address on the network, as they may all want port 80, for example.

If the dynamic WKS support of DNS were used to hold port numbers, or something like SUN RPC (portmap) were rolled out onto the internet for inbound services using port redirection, then it would be possible to use the 16-bit port number together with a single IP address to stave off the inevitable exhaustion of available IPv4 addresses, but it would require people to be much more knowledgeable about port usage, and some changes to certain services so they do not rely on fixed port numbers.
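DNS SRV records already carry a host and port per service, so as a stand-in for the WKS idea, a client-side lookup might look something like this minimal sketch (it needs the third-party dnspython package, and the service and domain names are placeholders):

```python
import dns.resolver  # pip install dnspython

def locate_service(service="_http._tcp", domain="example.com"):
    """Ask DNS which host and port offer a service, instead of assuming a
    fixed well-known port."""
    answers = dns.resolver.resolve(f"{service}.{domain}", "SRV")
    return [(str(rr.target).rstrip("."), rr.port) for rr in answers]

for host, port in locate_service():
    print(f"connect to {host}:{port}")
```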

It would also make firewalls a lot more difficult to write, but you would only expose the services you needed anyway, so maybe this would not be so much of an issue.

1
0

Olympians threaten ICANN with lawsuit

Peter Gathercole
Silver badge

@Blofeld's Cat

Because if they get it as a reserved name, pretty much any address with Olympic or Olympiad will be protected in the TLDs run by ICANN, not just the ones they choose to register. Just think of the near infinite number of combinations they would have to register in order to protect them.

Not that I agree with the IOC. I do not think that they should have preferential treatment, especially when they do their damnedest to control all media during the games.

1
0

The Mac that saved Apple (and Steve Jobs)

Peter Gathercole
Silver badge
Stop

@Christian Berger re: 56k

It's all comparative.

If you had moved up from 1200/75b/s, through V.22bis, V.32 and V.32bis, then V.90 was fast.

If you were using it for commercial use, then it was almost certainly the upload speed that was your issue, as it was asymmetric and the upload channel was a fraction of the download speed. IIRC, if you did V.90 modem to V.90 modem directly, you could only get 33.6kb/s anyway. You needed something like a DS0 setup, which could directly inject digital signals into the phone system, to give you the 56k download speed to end-users.

Most home users mostly downloaded data, so this was not a big issue.

Don't compare your 20Mb/s ADSL line, or even channel-bonded ISDN with what home users had available at the time, because ISDN was far too expensive for home users to consider, even the 'reduced-cost' Home Highway that BT tried to sell.

At this time, nobody did large mail attachments or video, and you left P2P running for hours or days if you were using it. Web sites were still mainly HTML, with only fairly small GIF images. You also did not have Flash video adverts or Java or JavaScript apps at all. Most pages were fairly static, and eminently cacheable, so we got what we thought was a good service at the time.

I ran my whole household (several computers with thin-wire Ethernet, and then wireless as it became available - we're a techie household) on a dial-on-demand 56K modem for several years, until BT got round to upgrading our exchange to ADSL.

8
0

Attachmate gobbles up Novell for $2.2bn

Peter Gathercole
Silver badge

We will just have to disagree.

I don't think I ever said that there was any AT&T code that was not properly licensed or in the public domain in Linux.

That said, the SCO vs. IBM case was never dismissed. It has not actually been ruled on, and there are still claims of copyright infringement; rather, it has been deferred until the Novell vs. SCO issue has been resolved. That is likely to be dismissed unresolved, as SCO have filed for bankruptcy. It would be better if the IBM case came alive again and was ruled on, rather than being dismissed because SCO cannot continue to support it. If it is not resolved, it can always be re-opened by whoever ends up with that part of SCO, although I would hope it would not be.

The tort, as you put it, does not have to be so blatant as I made it. Remember, this is FUD we're talking about, not firm claims. It only has to sow doubt in an uninformed procurement officer or financial director (as most of them are, in IT matters at least) for them to ask for assurances from anyone who proposes a Linux solution. If Microsoft actually had the copyrights, and they said "Of course, we own the copyrights to UNIX upon which Linux was modelled, and there is unresolved litigation about whether Linux infringes these copyrights", then that would be completely truthful, and could not be contested as long as the cases had not been resolved.

I can envisage a situation where Free software could be excluded from playing commercial media by patent and license restrictions. Read what Ross Anderson said about TPM, which doesn't appear to be happening at the moment, thank goodness, but is not dead yet. Your BSD or Linux box might be great for the HTTP/XML/Flash we have at the moment, but if H.264 were to become the dominant CODEC rather than WebM, without a benefactor like Mark Shuttleworth to pay for the license for you, you may not be able to buy a system with Linux and an H.264 codec pre-installed on it. General acceptance as a home PC OS without being able to watch video - not a hope. This may be a patent issue, as you pointed out, but they are all hurdles for Linux to overcome.

0
0
Peter Gathercole
Silver badge
WTF?

@Vic - You seriously don't believe they want to and won't try?

In case you hadn't noticed, Microsoft have been doing a really good job of keeping Linux off the desktop. It is an uphill struggle to get traction with users in the home, education and business markets. Even with the likes of Ubuntu and its derivatives, it's just not happening.

If they can go into any procurement with organisations by starting with "Of course, you know that we MAY still sue users of Linux for use of OUR copyrighted material", then to the uninformed procurement officers this will equate to "Better stay clear of Linux, just in case". Financial officers have been known to be sacked for taking decisions that include known financial risks, so they tend to be cautious.

I don't see this as being illegal under current unfair trading practices legislation, but it would be seriously unethical, and unfortunately probably effective.

Without commercial organisations being able to build a business around Linux, because of an unfair poisoning of its reputation, some of the major contributors could fall by the wayside. Without the likes of SuSE (which I believe will be wound down as a Linux developer anyway), Red Hat and Canonical, Linux will revert to a hobbyists' OS, with some embedded systems maintained by the likes of Oracle and IBM. Any serious chance of being an accepted desktop OS will disappear.

Also, imagine how much "Get the facts" could have been enhanced by claims of UNIX copyright ownership.

With the potential of a Linux desktop reduced, Microsoft could then capitalise on their near monopoly position (along with a like-minded Apple), and engage on leveraging maximum return from their tied-in user base who have nowhere else to go.

Let me ask you. Do you think that you would pay, say, £100 a year per PC just for the right to use it? Microsoft have often stated that they would like to move to a subscription model for their software, and have also moved in positive ways to exclude (think DRM) Linux from the equation.

Of course, if you or your company have the resources to build, maintain and market a good Linux distribution, and keep it up to date with all the DRM and media licensing issues, then this won't be a problem, will it?

1
1
Peter Gathercole
Silver badge

@Vic

OK, maybe it would be the defining moment for Linux. Maybe I am just being a Luddite by today's standards because I think of myself as a UNIX specialist, with a (strong) sideline in Linux.

But think about this. If MS do get access to the UNIX copyright, then many claims the Linux community have about MS including GPL code from the Linux tree evaporate in a puff of smoke (at least for stuff in the TCP/IP stack and other code that overlaps with the kernel), as MS could claim that they have replaced the GPL code with the UNIX equivalent.

With regard to SCO, a lot of their case started unravelling precisely because Novell had maintained ownership of the copyright. In this light, Novell would have been in a much better position to try to sue IBM than SCO, not that it would have tried. IBM's SVR2 source license is pretty much bulletproof for AIX. Not sure about the one they got from Sequent. If Microsoft now have that copyright, they might think differently, though, and might waive the royalties for SCO and give them the copyright to kickstart the case. The fact that the case still exists causes enough FUD.

Other things MS could do:

Start increasing any recurring license fees for UNIX licensees.

Stop granting any more UNIX source code licenses (OK, I suppose you might ask how many people who have not already got one actually want one).

Don't actually sue anybody over UNIX IP, but make statements about potential contamination to increase FUD. After all, that is what they have been doing for years.

@AC re. flawed analysis - You seem to suggest that SUN and Microsoft were in a pact. This was not the case. SUN's UNIX source license for Solaris was always bulletproof, especially as they were a co-developer of SVR4 with AT&T. What they offered was indemnity for licensees of Java and other technologies by paying for various licenses from SCO to cover IP in new products that may not have been covered by GPL and their original UNIX license. This was not SUN financing the SCO case directly, this was standard business practice.

People tend to cast SCO as a completely evil company, but you should really see them as a company that thought they owned some IP, with a suitable business model for licensing it, but who then stepped over the line in a serious way and lost the plot and did not know when to stop. This did not alter the legitimate business that they had, which is what SUN bought into.

@jonathanb - The fact that SCO publish their own Linux has little bearing on what Microsoft might do with the UNIX copyright.

@another AC - Oracle have no problem with forking Linux and keeping it in-house (as long as they keep to the GPL and other appropriate licenses). They probably no longer care about what happens to Solaris; this was not what they bought SUN for. So they are probably not worried about what MS do, as long as there is no litigation.

Maybe I'm just mournful of a passing age, and that this may be one of the last nails in the coffin for Genetic UNIX. I would have preferred IBM (originally a UNIX baddie, but reformed for 15 years or so) or Redhat to have bought that part of the business.

0
0
Peter Gathercole
Silver badge

@Bugs - I'll try this again

If you can say this with sincerity, you don't understand what the implications would be. I hope that you have a big wallet, because you are going to need one to keep paying to use your computer if MS can kill alternative operating systems. It would be like printing money to them.

Now Apple appear to be in the cashing-in-on-their-user-base game as well, you can't even regard Windows as a monopoly.

To the moderator. I think I've removed everything that you could possibly have objected to, so will you accept it this time? Thanks.

2
0
Peter Gathercole
Silver badge

@Mage

Yes, that was when Microsoft thought PCs were only good for the desktop, and commissioned a UNIX variant to act as the office hub. Microsoft didn't write it, however (what a surprise), but bought it from the Santa Cruz Operation, then a small development company.

2
0
Peter Gathercole
Silver badge
Flame

Just what will CPTN Holdings get?

CPTN Holdings LLC, which was organized by Microsoft, appears to be getting some intellectual property rights as part of this deal.

What might Novell have that would interest Microsoft?

Hmmmmmm. Maybe the copyright to the UNIX code that they did not get by backing SCO (I know it's conjecture, but Microsoft were an investor in Baystar when Baystar offered capital to SCO. Look it up, if you don't know). And, of course, they may take ownership of SuSE.

This could be verrrrrry bad news for the Open Source movement, if it means MS get a dagger to dangle over the head of small open source companies, in the same way that SCO tried.

Sounds like another round of FUD coming.

With SUN out of the way, it would leave IBM and Red Hat to try to defend the shattered remnants of the UNIX legacy. I'm seriously worried.

12
0

Unarmed Royal Navy T45 destroyer breaks down mid-Atlantic

Peter Gathercole
Silver badge
Unhappy

Sadly

Of the main cast, only Leslie Philips, Heather Chasen and Judy Cornwell are still alive. Richard Caldicot, Jon Pertwee, Tenniel Evans, Dennis Price, Ronnie Barker, Chic Murray, and Michael Bates, who all played a variety of parts, are no longer with us.

Epic days of British radio comedy, that my teenage children love listening to.

0
0

Most coders have sleep problems, need 'hygiene and care'

Peter Gathercole
Silver badge
FAIL

Wow. 91 coders in a single company.

Such a statistically relevant sample.

7
0

Meltdown ahoy!: Net king returns to save the interwebs

Peter Gathercole
Silver badge

He outlines the problem

and the general solution, but not something that could work at the detail level, IMHO.

The drawback of his solution is how you actually address the content that you want. If you look at the way P2P networks (which have been mentioned several times in the comments) work, you need to have some form of global directory service that can index content in some way so that it can be found.

Just how on earth do you do this? Google is currently the best general way of identifying named content (for, say, torrents) and just look at how big the infrastructure for this is.

If you are trying to de-dup the index (using an in-vogue term), you are going to have to have some form of hash, together with a fuzzy index of the human-readable information to allow the content to be found. I'm sure it sounds interesting, but I cannot see it happening.
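The hash half of such an index is the easy part; here is a minimal sketch of content-addressing (identical content always maps to the same key, so duplicates collapse to a single entry):

```python
import hashlib

store = {}  # content key -> bytes, a stand-in for a distributed store

def put(content: bytes) -> str:
    key = hashlib.sha256(content).hexdigest()
    store.setdefault(key, content)   # a duplicate adds nothing new
    return key

k1 = put(b"the same film, byte for byte")
k2 = put(b"the same film, byte for byte")
assert k1 == k2 and len(store) == 1
print("content key:", k1)
# The fuzzy, human-readable side of the index is the genuinely hard part,
# and is not attempted here.
```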

Anyway, this could be added as a content-rich application enhancement over IP, using something like multicast, especially if IPv6 is adopted.

0
0

Stoke Council avoids fine over lost childcare data on USB stick farce

Peter Gathercole
Silver badge
FAIL

Don't suggest fining just the person who lost the stick

but also consider the IT/management team responsible for allowing unencrypted devices to be used, especially if the use is codified in a documented procedure.

It must be drummed in at all levels that this type of information stored on portable storage has to be encrypted.

3
0

Virgin demands ISPs end broadband speed 'con'

Peter Gathercole
Silver badge

@Alex Walsh

This has always been the case with both ADSL and Cable. The upload speed is a small fraction of the download speed, and they have never claimed otherwise. That's what the "A (Asymmetric)" in ADSL stands for, and I'm sure it is described in the T&Cs for cable customers as well.

If you really want good upload speed, might I suggest that you invest in a leased line, but I think you will be shocked by the price.

0
0