
* Posts by Peter Gathercole

1817 posts • joined 15 Jun 2007

Who will rid me of these obsolete PCs?

Peter Gathercole
Silver badge

Unexpected results

I'll try to dig out a reference, but in the last year, one of the UK magazines (or it might have been the UK PC World online magazine) did some testing and found that putting overspec'd power supplies in systems actually reduced the power consumption. So, if you had a system requiring 450W, putting in an 800W power supply resulted in less power used than a 500W power supply in the same system. They published the measured consumption figures, and these showed a considerable difference.

It was reasoned that a power supply is most efficient towards the middle of its rated capacity, and efficiency falls off as you reach the limit. In addition, the power supply is more likely to continue to cope even as it ages.
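
As a rough, purely illustrative sum of why that can happen (the efficiency figures below are guesses, not the magazine's measurements), compare the wall draw of a heavily loaded supply against a lightly loaded one:

# Illustrative only: hypothetical efficiencies for the same 450W DC load,
# not the figures the magazine measured.
dc_load_w = 450

psus = {
    "500W unit near its limit (assumed 75% efficient)": 0.75,
    "800W unit around mid-load (assumed 85% efficient)": 0.85,
}

for name, efficiency in psus.items():
    wall_draw_w = dc_load_w / efficiency  # AC power drawn from the mains
    print(f"{name}: {wall_draw_w:.0f}W at the wall")

On those made-up numbers the bigger supply draws roughly 70W less for the same load, which is the shape of the result they reported.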

I measured my 24x7 firewall, which is currently an AMD K6-II (remember those?) clocked at 550MHz, with an in-line consumption meter, and it only draws about 85W, so older kit really does consume less and can use less than a 100W filament light bulb (and my 2GHz P4 T30 Thinkpad only uses about 45W even when charging at the same time as it is running). My kids' recent gaming rigs draw more like 500W, though.

Don't think I would like to use the K6 system as my workstation, however.

1
0
Peter Gathercole
Silver badge

I can only comment on the UK

and this inverse exponential is how two different accountants told me to run the residual value of my asset register.
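
A minimal sketch of that kind of declining-balance write-down, with a made-up 30% annual rate:

# Hypothetical declining-balance depreciation: each year the asset keeps
# a fixed fraction of the previous year's residual value.
def residual_value(cost, annual_rate, years):
    return cost * (1 - annual_rate) ** years

for year in range(6):
    print(year, round(residual_value(1000.0, 0.30, year), 2))

The residual value never quite reaches zero, it just tails off, which is the inverse exponential in question.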

0
0
Peter Gathercole
Silver badge

Only in an LLU area

Even though I would qualify due to my package, SKY broadband is not available where I live. I can buy the paid service from SKY, but this is delivered using BT Wholesale just like every other provider's in the area.

0
0
Peter Gathercole
Silver badge
Unhappy

Windows license

You also have very restrictive conditions on the Windows license. Unless you can pass on all the documentation and original media, the Windows EULA does not allow you to transfer the Windows license. And what good are computers without Windows to lower-income households (yes, I am a Linux advocate, but I am realistic enough to know that most people currently don't want Linux, unfortunately)?

I'm sure that this is often conveniently overlooked, but any organization involved in re-deploying old systems will not risk crossing Microsoft and their lawyers, and will avoid most old kit unless they are putting Linux on it.

1
0

Small biz calls for end date on enhanced 17.5% VAT

Peter Gathercole
Silver badge
Stop

@JimC, it's not quite on profit, but Value Add

You've pretty much repeated what I said, but at no point is it actually related to any profit (in the tax sense) you might make. VAT is related to sales and purchases, which I do admit are INDIRECTLY related to profits. In any case, a VAT-registered entity is not paying; its customers are, and it is acting as the tax collector.

If you have ever filled in a VAT form, the calculation goes (this is simplified, because I am not considering cross-border VAT) as follows:

Net VAT = VAT charged on sales minus VAT paid on purchases

They also want to know your gross sales for the period and the value of any purchases, but these do not take any part in the calculation, they're just there to allow some form of sanity and fraud checking.

It's that simple, and if you think it through, you as an entity registered for VAT are not paying any VAT at all on your purchases, as you offset it against the VAT you charge (it's offset at the point of paying it to HMRC, not actually paid and claimed back). Your customers (who might also be VAT registered and passing it on) are paying it to you, and this recurses down until you reach someone who is NOT VAT registered, and they end up actually paying without any way of offsetting it.
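
A minimal sketch of the quarterly sum described above (the rate and figures are made up for illustration):

# Simplified VAT return, ignoring cross-border trade: a registered business
# only hands over the difference between VAT charged and VAT paid.
VAT_RATE = 0.175  # 17.5%, purely for illustration

sales_net = 20000.0      # net value of sales in the quarter
purchases_net = 8000.0   # net value of purchases in the quarter

vat_charged_on_sales = sales_net * VAT_RATE
vat_paid_on_purchases = purchases_net * VAT_RATE
net_vat_due_to_hmrc = vat_charged_on_sales - vat_paid_on_purchases

print(f"VAT charged on sales:    {vat_charged_on_sales:8.2f}")
print(f"VAT offset on purchases: {vat_paid_on_purchases:8.2f}")
print(f"Net VAT due to HMRC:     {net_vat_due_to_hmrc:8.2f}")

The business itself is out of pocket by nothing; the 2,100 handed to HMRC was collected from its customers in the first place.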

When I was involved in running a company, I regarded it as a financially neutral operation, although I did begrudge the paperwork. The only thing that was not neutral was that if you paid your VAT quarterly, you could stuff the money in an interest-generating account until you needed to pay it to HMRC.

The point I was trying to make is that you don't actually have to be a very big business to have to be registered for VAT, and if you are, it is neutral to you, although not to your customers. £50,000 might appear a lot, but if you have four staff being paid full time at national minimum wage, then you must be a good way towards the threshold just to be able to pay them. Any form of shop or pub almost certainly has more than £50,000 annual turnover, as this equates to less than £1,000 per week.

As a result, your competitors are probably also registered for VAT, and in the same boat. It's only if you are competing against someone VERY small (under £50,000 PA turnover) that you will be at a disadvantage.

0
0
Peter Gathercole
Silver badge

Only affects very small businesses

probably with a turnover of £50,000 or less. Over that amount, businesses have to register for VAT, and charge it on their services, and are allowed to claim back the VAT they pay on any business purchases against what they collect (yes, businesses acting as tax collectors for the government).

The only thing it will do, however, is make their prices higher, as they will have to increase the amount of VAT they charge, but this will also be the same for their competitors. It will also force them to update the processes they use to work out the VAT. This should not be too hard, as they have had practice at changing the VAT rate for the last two years.

The FSB do a good job on behalf of small businesses, but this time I believe that they are saying something just so that they are not silent on the matter.

1
1

Garmin tells iPhone users where to go

Peter Gathercole
Silver badge

May be free

but if I could have got updates (which are not available any more, the Treo having been dropped as a supported device), I would rather use TomTom Navigator 6, which I had running on my Palm Treo 650 with an external bluetooth GPS device, than Google Navigator on my Samsung Galaxy (the Treo really was a smartphone in its time).

The problem is that the Google app does not provide enough information with regard to speed, time to destination and distance to destination. All I appear to get is time to destination and distance to the next change in navigation (i.e. junction), and even this appears quite arbitrary when in the country.

For instance, on the A396, there is a tight left turn in Exebridge which is counted as a change in navigation, so I can tell how far it is to that even though it is the same road, rather than when I reach the end of the A396. Not clever. And I still find it annoying that it calls the road things like the "A three thousand three hundred and ninety one", rather than a-three-three-nine-one. Reading the street names is clever, though.

I also miss setting the journey up in advance, rather than getting in the car and waiting for it to work out where it is before entering the destination. One time I was in the outskirts of a city, and had to detour due to a closed road, and it did not re-plan the route until I had managed to nearly get to my destination.

Never mind, hopefully Google will update it sometime to make it more usable.

0
1

Ford cars get draconian parental controls

Peter Gathercole
Silver badge

Ford Popular

The follow on vehicle was a 650cc Ford Angular - sorry, Anglia, immortalised by being one of the first Police PANDA cars.

This could not go much above 60 either, and radios were really a luxury add-on, as was sound-proofing of the engine compartment.

My first car was a second or third hand top-of-the-range 1976 Vauxhall shove-it - sorry, Chevette GLS which had semi-alloy wheels (steel rims, alloy centres), wide(r) tyres, velour seats, sound-proofing, body styling trim and (shock) a heated rear window all as standard, but no radio. A decent stereo radio-cassette was one of the first things I fitted, though, even though I had to dismantle half of the dashboard to get it in and drill a hole for the aerial in the wing.

It would do 80 downhill with a following wind, though!

0
0

Novell's Microsoft patent sale referred to regulators

Peter Gathercole
Silver badge
Unhappy

...keep to yourself

The reason why consortia get involved with this type of thing is to prevent exactly what the OSI are trying to do.

By definition, a consortium cannot be a monopoly (which implies only ONE controlling interest), so the monopoly legislation in western countries cannot apply.

It is possible that you might be able to prove a cartel, but not at this stage of the proceedings, as cartels are normally challenged at the point where they fix prices or control access to a resource.

I first noticed this type of thing with the cross-licensing of IP in the TCPA (now the TCG), used to stop monopoly regulators looking too closely at the end-to-end DRM in the Trusted Platform, which threatens FOSS on the very computing systems we use.

I suspect that you could probably quote the MPEG-LA and H.264 as another example of consortia controlling a technology to avoid allegations of monopoly.

1
0

UK.gov relaxes patent application process

Peter Gathercole
Silver badge
Stop

I initially thought this,

but I changed my mind when I considered what the article actually said.

It is not eliminating the need to provide searches, which would be disastrous, but allowing the EPO application to automatically use the searches that were provided for the initial UK patent application, without needing to re-submit them to the EPO.

This will reduce the paperwork, and thus the cost to the applicant, without seriously reducing the protection. This would appear to me to be quite sensible.

The only downside I can see is that the UK searches will not have been against the EPO records, but I guess that it would still be necessary to perform those.

0
0

Ubuntu Wayland: Shuttleworth's post-Mac makeover

Peter Gathercole
Silver badge
Boffin

@ricegf2 - Posts after my own heart

I could not agree more with what you are saying.

Some people in this comment trail have been saying that the names of the UNIX/Linux filesystems are cryptic. This is not the case, as they all have meaning, although like all things UNIX, the meaning may have been lost a little in the abbreviation. I will attempt to shed some light on this, although this will look more like an essay than a comment. Please bear with me.

Starting with Bell Labs UNIX distributions up to Version/Edition 7, circa 1976-1982.

/ or root was the top level filesystem, and originally had enough of the system to allow it to boot (so /bin contained all of the binaries (bin - binaries, geddit) necessary to get the system up to the point where it could mount the other filesystems). It included the directories /lib and /etc, which I will mention in more detail later.

/usr was a filesystem, and originally contained all of the files users would use in addition to what was in /, including /usr/bin which contained binaries for programs used by users. On very early UNIX systems, user home directories were normally present under this directory.

/tmp is exactly what it says it is, a world writeable space for temporary files that will be cleaned up (normally) automatically, often at system boot.

/users was a filesystem adopted by convention at some universities as an alternative location for holding the home directories of the users.

/lib and /usr/lib were directories used to store library files. The convention was very much like /bin and /usr/bin, with /lib used for libraries required to boot the system, and /usr/lib for other libraries. Remember that at this time, all binaries were compiled statically, as there were no dynamic libraries or run-time linking/binding.

/etc quite literally stands for ETCetera, a location for other files, often configuration and system wide files (like passwd, wtmp, gettydefs etc. (geddit?)) that did not merit their own filesystem. With all configuration files, there was normally a hierarchy, where a program would use environment variables as the first location for options, then files stored in the users home directory, and then the system-wide config files stored in the relevant etc directory (more on this below).

/dev was a directory that contained the device entries (UNIX always treats devices as files, and this is where all devices were referenced). Most files in this directory are what are referred to as "special files", and are used to access devices through their device driver code (indexed with Major and Minor device numbers) using an extended form of the normal UNIX filesystem semantics.
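
By way of illustration (the output format varies between UNIXes and over the years, and this is roughly what a modern Linux shows rather than a V7 listing):

$ ls -l /dev/null /dev/sda
crw-rw-rw- 1 root root 1, 3 Jan  1 12:00 /dev/null
brw-rw---- 1 root disk 8, 0 Jan  1 12:00 /dev/sda

The leading 'c' or 'b' marks a character or block special file, and the "1, 3" and "8, 0" are the major and minor device numbers the kernel uses to select the driver and the instance it manages.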

/mnt was a generic mount point used as a convenient point to mount other filesystems. It was normally empty on early UNIXes.

When BSD (the add-on tape to Version 6/7, and also the complete Interdata32 and VAX releases) came along (around 1978-1980), the following filesystems were normally added.

/u01, /u02 ..... Directories to allow the home directories of users to be spread across several filesystems and ultimately disk spindles (this was by convention).

/usr/tmp A directory sometimes overmounted with a filesystem used as an alternative to /tmp for many user related applications (e.g. vi).

I think that /sbin and /usr/sbin (System BINaries, I believe) also appeared around this time, as locations for utilities that were only needed by system administrators, and thus could be excluded by the path and directory permissions from non-privileged users.

Things remained like this until UNIX became more networked with the appearance of network capable UNIXes, particularly SunOS. When diskless workstations arrived around 1983, the filesystems got shaken up a bit.

/ and /usr became read-only (at least on diskless systems)

/var was introduced to hold VARiable data (a meaningful name again), and had much of the configuration data from the normal locations in /etc moved into places like /var/etc, with symlinks (introduced in BSD with the BSD Fast Filesystem) allowing the files to be referenced from their normal location. /usr/tmp became a link to /var/tmp.

/home was introduced and caught on in most UNIX implementations as the place where all home directories would be located.

/export was used as a location to hold system-specific filesystems to be mounted over the network (read on to find out what this means).

/usr/share was also introduced to hold read-only non-executable files, mainly documentation.

About this time the following were also adopted by convention.

/opt started appearing as a location for OPTional software, often acquired as source and compiled locally.

/usr/local and /local often became the location of locally written software.

In most cases for /var, /opt, /usr/local, it was normal to duplicate the bin, etc and lib convention of locating binaries and system-wide (as opposed to user-local) configuration files and libraries, so for example a tool in /opt/bin normally had its system-wide configuration files stored in /opt/etc, and any specific library files in /opt/lib. Consistent and simple.

The benefit of re-organising the filesystems into read-only and read-write filesystems was so that a diskless environment could be set up with most of the system related filesystems (/ and /usr in particular) stored on a server, and mounted (normally with NFS) by any diskless client of the right architecture in the environment. Different architecture systems could be served in a heterogeneous environment by having / and /usr for each architecture served from different directories on the server, which could be a different architecture from the clients (like Sun3 and Sparc servers).

/var also became mounted across the network, but each diskless system had its own copy, stored in /export/var on the server, so that things like system names, network settings and the like could be kept distinct for each system.
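
A hypothetical client fstab for that sort of setup might have looked something like this (server name, paths and architecture directory are invented for illustration, and the exact mechanics differed between vendors):

server:/export/root/sun4      /      nfs  ro  0 0
server:/export/usr/sun4       /usr   nfs  ro  0 0
server:/export/var/client1    /var   nfs  rw  0 0
server:/export/home           /home  nfs  rw  0 0

Here / and /usr are the shared, read-only, per-architecture images, /var is the writable per-client area, and /home follows the user around the network.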

/usr/share was naturally shared read-only across all of the systems, even of different architectures, as it did not contain binaries.

This meant that you effectively had a single system image for all similar systems in the environment. This enabled system administrators to roll out updates by building new copies of / and /usr on the server, and tweaking the mount points to upgrade the entire environment at the next reboot. Adding a system meant setting up the var directory for the system below /export, adding the bootp information, connecting it to the network, and powering it up.

And by holding the users home directories in mountable directories, it enabled a user's home directory to be available on all systems in the environment. Sun really meant it when they said "The Network IS the Computer". Every system effectively became the same as far as the users were concerned, so there was no such thing as a Personal Computer or Workstation. They could log on on any system, and as an extension, could remotely log on across the network to special servers that may have had expensive licensed software or particular devices or resources (like faster processors or more memory), using X11 to bring the session back to the workstation they were using, and have their environment present on those systems as well.

As you can see, this was how it was pretty much before Windows even existed.

Linux adopted much of this, but the Linux new-comers, often having grown up with Windows before switching to Linux, have seriously muddied the water. Unfortunately, many of them have not learned the UNIX way of doing things, so have never understood it, and have seriously broken some of the concepts. They don't understand why / and /usr were read-only, so ended up putting configuration files in /etc, rather than in /var with symlinks. They have introduced things like .kde, .kde2, .gnome, and .gnome2 as additional places for config data. And putting the RPM and deb databases in /usr/lib was just plain stupid, as it makes it no longer possible to make /usr read-only. They have mostly made default installations have a single huge root filesystem encompassing /usr and /var and /tmp (mostly because of the limited partitioning available on DOS/Windows partitioned disks). They have even stuck some system-wide configuration files away from the accepted UNIX locations.

So I'm afraid that from a UNIX user's perspective, although many of the Linux people attempt to do the 'right thing', they are working from what was a working model, broken by their Linux peers. Still, it's better than Windows, and is still fixable with the right level of knowledge.

I could go on. I've not mentioned /proc, /devfs, /usbfs or any of the udev or dbus special filesystems, or how /mnt has changed and /media appeared, nor have I considered multiple users, user and group permissions, NIS, and mount permissions on remote filesystems, but it's time to call it a day. I hope it enlightened some of you.

I have written this from memory, based on personal experience of Bell Labs UNIX V6/7 with BSD 2.3 and 2.6 add-on tapes, BSD 4.1, 4.2 and 4.3, AT&T SVR2, 3 and 4, SunOS 2, 3, 4 and 5 (Solaris), Digital/Tru64 UNIX, IBM AIX and various Linuxes (mainly Red Hat and Ubuntu), along with many other UNIX and Linux variants, mostly forgotten. I may have mixed some things up, and different commercial vendors introduced some things in different ways and at different times, but I believe that it is broadly correct, IMHO.

1
0
Peter Gathercole
Silver badge

Re. "X was a horrible project"

I agree that X was designed for a different environment than personal computers running a GUI on the same system, but to brand it a "horrible project" just goes too far.

Because of its origins (in academia), it would be fair to say that X10 and X11, particularly the client side, were among the first "Open Source" projects (along with the original UNIX contributed software products - many of which pre-date GNU). As such, it helped define the model that enabled other open source initiatives to get off the ground. But it suffered teething problems like all new methods, particularly when it got orphaned as the original academic projects fell by the wayside.

What happened with XFree86 and X.org was messy, but ultimately necessary to wrest control back from a number of diverging proprietary implementations by the major UNIX vendors (X11 never did form part of the UNIX standards). I don't fully understand your comment about reducing bloat, unless you mean modularising the graphic display support so you only have to load what you need, rather than building specific binaries for each display type, but that is just a matter of the number of display types that needed to be supported. X11R5 and X11R6 were actually lightweight, even by the standards of X.org.

But I have said this before, and I will say it again. If you don't understand what X11 is actually capable of, then you run the risk of throwing the baby out with the bath water. It would be perfectly possible to keep X11 as the underlying display method, and replace GNOME as a window manager (much as Compiz does, and does quite well). This is one of its major strengths, and would keep us die-hard X11 proponents happy. If you use one of the local communication methods (particularly shared memory) you need not necessarily have a huge communication or memory overhead, especially if you expose something like OpenGL at the client/server interface. The overhead is higher than having the display managed as a monolithic single entity, but I don't believe that any of the major platforms do that. There is always an abstraction between the display and the various components.

Having tried Unity and the 10.10 netbook edition on my EeePC 701 (surely one of the targeted systems: small display, slow processor) for several weeks, I eventually decided that it was COMPLETELY UNUSABLE at this level of system. The rotating icons on the side of the screen were too slow, and the one you needed was never visible, leading to incredible frustration as you scrolled through the available options, trying to decode what the icons actually meant while they flew up and down the screen. It appeared very difficult to customise, and I begrudged the screen space it occupied. My frustration knew virtually no bounds, and it's lucky that the 701 did not fly across the room on several occasions (note to self - check out anger management courses).

I reverted to GNOME (by re-installing the normal desktop distro), and my 701 is now usable again, and indeed quick enough to be used for most purposes including watching video.

I know I am set in my ways, but I can do almost everything soooo much faster in the old way. I fail to see that adding gloss at the cost of reduced usability and speed helps anybody apart from the people easily dazzled by bling. To put this in context, I also find the interface on my PalmOS Treo much easier to live with than Android on my most recent phone.

I'll crawl back under my rock now, but if Unity becomes the main interface for Ubuntu, I will be switching to the inevitable Gnubuntu distribution, or even away from Ubuntu completely.

5
0

Electric forcefield space sailing-ship tech gets EU funding

Peter Gathercole
Silver badge

I can't see what prevents

the wires just folding up in front of the payload. I read that the whole thing spins, so centripetal force will keep the wires extended, but unless the force is very even, surely the pseudo-disk will start to precess as soon as the force becomes uneven (such as when tacking), and as the wires will not be rigid (at 25 microns, they could not be), they will just get wrapped up.

I suppose that you could say that the electric field is what is being 'struck', not the wires themselves, but I think that the small push would be transmitted back to the closest wire deforming it away from the disk.

In addition, spinning the construct would be an interesting exercise, as you would have to take into account conservation of angular momentum, and spin relatively fast when starting to deploy it and slow down as it extends outward, again, because the wires are so thin they cannot be rigid. And twisting it to tack...

The mathematics is beyond me (at least, without getting the text books out), so this is just a gut feel.
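
For what it's worth, a back-of-the-envelope version of the angular-momentum point, treating each wire element as a point mass and ignoring the tether dynamics entirely:

L = I·ω = constant,  I ≈ Σ m·r²,  so ω ∝ 1/r² as the wires pay out

which is why it would be spun up fast at the start of deployment and end up turning much more slowly at full extension.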

0
0

Google revives ‘network computer’ with dual-OS assault on MS

Peter Gathercole
Silver badge

@poohbear. Before that, even.

You really need to look at X Terminals from the likes of NCD and Tektronix (and Digital, HP and IBM as well) in the late '80s and early '90s. These were really thin clients using X11 as the display model.

AT&T Blit terminals (5620, 630 and 730/740) may also fit the bill from about 1983. You might also argue that Sun Diskless Workstations (circa 1982/3) were actually thin clients, but that may be taking things a bit far.

2
0

UK government looks for 500MHz spectrum

Peter Gathercole
Silver badge
Coat

This caught my eye,

and I immediately thought what a ZX Spectrum running at 500MHz (as opposed to 3.5MHz) would be able to do...

I'll get my jacket myself.

6
0

Apple patents glasses-free, multi-viewer 3D

Peter Gathercole
Silver badge

@stucs201 re: one step forwards.

I'm not sure what you are getting at. I was responding to the comments that stated that some people get headaches, which they do, and I was trying to explain why that was.

I don't buy into 3DTV or other 3D displays of any type at the moment or the foreseeable future, but that is my opinion, and I don't try to force my opinions on anybody. I may discuss them especially with people who I believe have not seen all the sides of a problem, but that is what dialogue and conversation is all about.

0
0
Peter Gathercole
Silver badge

@Vic

Nothing I have said indicates that stereoscopic vision is not a major part of depth perception, and I don't think that anybody would think that I said anything different.

It's just not the only thing that matters. You can ignore that the other effects exist if you want to, but that would not alter the fact that they do exist.

I did look it up before posting. Maybe you would like to look up the monocular cues Accommodation, and Blurring, both of which are real documented features of depth perception, along with Motion Parallax. Wikipedia will appear top of the search list, but it is not the only reference for this on the 'net.

The universe is vast and complex. Anybody believing that we fully understand any part of it is either a fool, or deluding themselves.

1
1
Peter Gathercole
Silver badge
Stop

@Vic

Stereoscopic images are only part of the whole picture (pun intended).

There are two other features of depth perception that are also important. One is the act of focusing the eye for the correct distance, and the other is that each eye moves to focus the object of interest on the center of the retina, where there are more light receptors.

If you just use separate images beamed into each eye, you can have the correct image for an object close to you. The brain says that it should be focusing close, but in actual fact you need to focus on the screen further away. Ditto the depth-related parallax issue of the two eyes. These mismatches are what cause people to have headaches.

If you want a demonstration, try the following.

Hold one hand about 6" in front of your right eye. Hold the other at arm's length in front of your left eye. Try to focus on both hands at the same time. Don't do this for too long, or else you will get dizzy.

Then find someone you know quite well, hold your index finger about 2' from their face and ask them to look directly at it. Watch their eyes, and move your finger to about 6" from their face. You will see that they go slightly 'cross-eyed'.

Stereoscopic vision and both of these other effects are required for the brain to correctly determine depth. If you only get one of them, some people can ignore the fact that the others are missing, and some can't.

Oh, and one more thing. With proper depth perception, moving your head will alter the image due to parallax (again), but the current 3D TV cannot do this. If the eyes are tracked correctly, this system *could* be able to do it, but I agree with most people that this is unlikely using any tech. we are likely to see soon.

2
1

97% of INTERNET NOW FULL UP, warn IPv4 shepherd boys

Peter Gathercole
Silver badge

@Anon 16

Currently, most ISPs share a pool of IP addresses between their users, assuming that they will not all be online at the same time. This allows you to have a unique IP address for all the systems behind your NAT connection for the time you are connected. If you have this, then using a dynamic DNS service will work to make your systems locatable, and port re-direction will allow multiple inbound sessions to be directed to different servers behind your NAT system.

Unfortunately, the world is moving to always-connected devices, so this model is breaking down.

When DNS was first designed, it included the possibility of publishing well-known services (WKS) in a map that could be queried. This was to allow you to provide information such as port numbers for particular services. Since that time, everyone has got used to fixed port numbers for things like http (80), https (443), ssh (22) and the like, so WKS has been ignored.

Using fixed port numbers makes it difficult to NAT several people's services to a single IP address on the network, as they may all want port 80, for example.

If the dynamic WKS support of DNS was used to hold port numbers, or something like SUN RPC (portmap) was rolled out onto the internet for inbound services using port redirection, then it would be possible to use the 16-bit port number together with a single IP address to stave off the inevitable exhaustion of available IPv4 addresses, but it would require people to be much more knowledgeable about port usage, and some changes to certain services to not rely on fixed port numbers.
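
As a rough illustration, the SRV record that DNS grew later does carry exactly this kind of port information; a zone entry might look something like the following (names and ports invented for illustration):

_http._tcp.customer-one.example.com.  3600 IN SRV 10 60 8081 shared-host.example.com.
_http._tcp.customer-two.example.com.  3600 IN SRV 10 60 8082 shared-host.example.com.

A client that honoured the record (priority 10, weight 60, then the port and target) could reach two different customers' web servers on different ports behind the same shared address, which is the sort of indirection the scheme above would need everywhere.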

It would also make firewalls a lot more difficult to write, but you would only expose the services you needed anyway, so maybe this would not be so much of an issue.

1
0

Olympians threaten ICANN with lawsuit

Peter Gathercole
Silver badge

@Blofeld's Cat

Because if they get it as a reserved name, pretty much any address with Olympic or Olympiad will be protected in the TLDs run by ICANN, not just the ones they choose to register. Just think of the near infinite number of combinations they would have to register in order to protect them.

Not that I agree with the IOC. I do not think that they should have preferential treatment, especially when they do their damnedest to control all media during the games.

1
0

The Mac that saved Apple (and Steve Jobs)

Peter Gathercole
Silver badge
Stop

@Christian Berger re: 56k

It's all comparative.

If you had moved up from 1200/75b/s, through V.22bis, V.32 and V.32bis, then V.90 was fast.

If you were using it for commercial use, then it was almost certainly the upload speed that was your issue, as it was asymmetric and the upload channel was a fraction of the download speed. IIRC, if you did V.90 modem to V.90 modem directly, you could only get 33.6kb/s anyway. You needed something like a DS0 setup, which could directly inject digital signals into the phone system, to give you the 56k download speed to end-users.

Most home users mostly downloaded data, so this was not a big issue.

Don't compare your 20Mb/s ADSL line, or even channel-bonded ISDN with what home users had available at the time, because ISDN was far too expensive for home users to consider, even the 'reduced-cost' Home Highway that BT tried to sell.

At this time, nobody did large mail attachments or video, and you left P2P running for hours or days if you were using it. Web sites were still mainly HTML, with only fairly small GIF images. You also did not have Flash video adverts or Java or JavaScript apps at all. Most pages were fairly static, and eminently cachable, so we got what we thought was a good service at the time.

I ran my whole household (several computers with thin-wire Ethernet, and then wireless as it became available - we're a techie household) on a dial-on-demand 56K modem for several years, until BT got round to upgrading our exchange to ADSL.

8
0

Attachmate gobbles up Novell for $2.2bn

Peter Gathercole
Silver badge

We will just have to disagree.

I don't think I ever said that there was any AT&T code that was not properly licensed or in the public domain in Linux.

That said, the SCO vs. IBM case was never dismissed. It has not actually been ruled on, and there still are claims of copyright infringement; rather, it has been deferred until the Novell vs. SCO issue has been resolved. That is likely to be dismissed unresolved as SCO have filed for bankruptcy. It would be better if the IBM case came alive again and was ruled on, rather than being dismissed because SCO cannot continue to support it. If it is not resolved, it can always be re-opened by whoever ends up with that part of SCO, although I would hope it would not be.

The tort, as you put it, does not have to be so blatant as I made it. Remember, this is FUD we're talking about, not firm claims. It only has to sow doubt in an uninformed procurement officer or financial director (as most of them are, in IT matters at least) for them to ask for assurances from anyone who proposes a Linux solution. If Microsoft actually had the copyrights, and they said "Of course, we own the copyrights to UNIX upon which Linux was modelled, and there is unresolved litigation about whether Linux infringes these copyrights", then that would be completely truthful, and could not be contested as long as the cases had not been resolved.

I can envisage a situation where Free software could be excluded from playing commercial media by patent and license restrictions. Read what Ross Anderson said about TPM, which doesn't appear to be happening at the moment thank goodness, but is not dead yet. Your BSD or Linux box might be great for the HTTP/XML/Flash that we have at the moment, but if H.264 were to become the dominant CODEC rather than WebM, without a benefactor like Mark Shuttleworth to pay for the license for you, you may not be able to buy a system with Linux and an H.264 codec pre-installed on it. General acceptance as a home PC OS without being able to watch video - not a hope. This may be a patent issue, as you pointed out, but they are all hurdles for Linux to overcome.

0
0
Peter Gathercole
Silver badge
WTF?

@Vic - You seriously don't believe they want to and won't try?

In case you hadn't noticed, Microsoft have been doing a really good job of keeping Linux off the desktop. It is an uphill struggle to get traction with users in the home, education and business markets. Even with the likes of Ubuntu and it's derivatives, it's just not happening.

If they can go in to any procurement with organisations by starting with "Of course, you know that we MAY still sue users of Linux for use of OUR copyrighted material", then to the uninformed procurement officers, this will equate to "Better stay clear of Linux, just in case". Financial officers have been known to be sacked for taking decisions that include known financial risks, so they tend to be cautious.

I don't see this as being illegal under current unfair trading practices legislation, but it would be seriously unethical, and unfortunately probably effective.

Without commercial organisations being able to build a business around Linux because of an unfair poisoning of its reputation, some of the major contributors could fall by the wayside. Without the likes of SuSE (which I believe will be wound down as a Linux developer anyway), Red Hat and Canonical, Linux will revert to a hobbyist OS, with some embedded systems maintained by the likes of Oracle and IBM. Any serious chance of being an accepted desktop OS will disappear.

Also, imagine how much "Get the facts" could have been enhanced by claims of UNIX copyright ownership.

With the potential of a Linux desktop reduced, Microsoft could then capitalise on their near monopoly position (along with a like-minded Apple), and engage on leveraging maximum return from their tied-in user base who have nowhere else to go.

Let me ask you. Do you think that you would pay, say, £100 a year per PC just for the right to use it? Microsoft have often stated that they would like to move to a subscription model for their software, and have also moved in positive ways to exclude (think DRM) Linux from the equation.

Of course, if you or your company have the resources to build, maintain and market a good Linux distribution, and keep it up to date with all the DRM and media licensing issues, then this won't be a problem, will it?

1
1
Peter Gathercole
Silver badge

@Vic

OK, maybe it would be the defining moment for Linux. Maybe I am just being a Luddite by today's standards because I think of myself as a UNIX specialist, with a (strong) sideline in Linux.

But think about this. If MS do get access to the UNIX copyright, then many claims the Linux community have about MS including GPL code from the Linux tree evaporate in a puff of smoke (at least for stuff in the TCP/IP stack and other code that overlaps with the kernel), as MS could claim that they have replaced the GPL code with the UNIX equivalent.

With regard to SCO, a lot of their case started unravelling precisely because Novell had maintained ownership of the copyright. In this light, Novell would have been in a much better position to try to sue IBM than SCO, not that it would have tried. IBM's SVR2 source license is pretty much bulletproof for AIX. Not sure about the one they got from Sequent. If Microsoft now have that copyright, they might think differently, though, and might waive the royalties for SCO and give them the copyright to kickstart the case. The fact that the case still exists causes enough FUD.

Other things MS could do:

Start increasing any repeating license code fees for UNIX licensees.

Stop granting any more UNIX source code licenses (OK, I suppose you might say how many people actually want one that have not already got one)

Don't actually start suing anybody about UNIX IP, but they make statements about potential contamination to increase FUD. After all, that is what they have been doing for years.

@AC re. flawed analysis - You seem to suggest that SUN and Microsoft were in a pact. This was not the case. SUN's UNIX source license for Solaris was always bulletproof, especially as they were a co-developer of SVR4 with AT&T. What they offered was indemnity for licensees of Java and other technologies by paying for various licenses from SCO to cover IP in new products that may not have been covered by GPL and their original UNIX license. This was not SUN financing the SCO case directly, this was standard business practice.

People tend to cast SCO as a completely evil company, but you should really see them as a company that thought they owned some IP, with a suitable business model for licensing it, but who then stepped over the line in a serious way and lost the plot and did not know when to stop. This did not alter the legitimate business that they had, which is what SUN bought into.

@jonathanb - The fact that SCO publish their own Linux has little bearing on what Microsoft might do with the UNIX copyright.

@another AC - Oracle have no problem with forking Linux and keeping it in-house (as long as they keep to the GPL and other appropriate licenses). They probably no longer care about what happens to Solaris; this was not what they bought SUN for. So they are probably not worried what MS do as long as there is no litigation.

Maybe I'm just mournful of a passing age, and that this may be one of the last nails in the coffin for Genetic UNIX. I would have preferred IBM (originally a UNIX baddie, but reformed for 15 years or so) or Redhat to have bought that part of the business.

0
0
Peter Gathercole
Silver badge

@Bugs - I'll try this again

If you can say this with sincerity, you don't understand what the implications would be. I hope that you have a big wallet, because you are going to need one to keep paying to use your computer if MS can kill alternative operating systems. It would be like printing money to them.

Now that Apple appear to be in the cashing-in-on-their-user-base game as well, you can't even regard Windows as a monopoly.

To the moderator. I think I've removed everything that you could possibly have objected to, so will you accept it this time? Thanks.

2
0
Peter Gathercole
Silver badge

@Mage

Yes, that was when Microsoft thought PCs were only good for the desktop, and commissioned a UNIX variant to act as the office hub. Microsoft didn't write it, however (what a surprise), but bought it from the Santa Cruz Operation, then a small development company.

2
0
Peter Gathercole
Silver badge
Flame

Just what will CPTN Holdings get?

CPTN Holdings LLC, which was organized by Microsoft, appears to be getting some intellectual property rights as part of this deal.

What might Novell have that would interest Microsoft?

Hmmmmmm. Maybe the copyright to the UNIX code that they did not get by backing SCO (I know it's conjecture, but Microsoft were an investor in Baystar when Baystar offered capital to SCO. Look it up, if you don't know). And, of course, they may take ownership of SuSE.

This could be verrrrrry bad news for the Open Source movement, if it means MS get a dagger to dangle over the head of small open source companies, in the same way that SCO tried.

Sounds like another round of FUD coming.

With SUN out of the way, it would leave IBM and Redhat to try to defend the shattered remnants of the UNIX legacy. I'm seriously worried.

12
0

Unarmed Royal Navy T45 destroyer breaks down mid-Atlantic

Peter Gathercole
Silver badge
Unhappy

Sadly

Of the main cast, only Leslie Phillips, Heather Chasen and Judy Cornwell are still alive. Richard Caldicot, Jon Pertwee, Tenniel Evans, Dennis Price, Ronnie Barker, Chic Murray, and Michael Bates, who all played a variety of parts, are no longer with us.

Epic days of British radio comedy, which my teenage children love listening to.

0
0

Most coders have sleep problems, need 'hygiene and care'

Peter Gathercole
Silver badge
FAIL

Wow. 91 coders in a single company.

Such a statistically relevant sample.

7
0

Meltdown ahoy!: Net king returns to save the interwebs

Peter Gathercole
Silver badge

He outlines the problem

and the general solution, but not something that could work at the detail level, IMHO.

The drawback of his solution is how you actually address the content that you want. If you look at the way P2P networks work (they have been mentioned several times in the comments), you need to have some form of global directory service that can index content in some way so that it can be found.

Just how on earth do you do this? Google is currently the best general way of identifying named content (for, say, torrents) and just look at how big the infrastructure for this is.

If you are trying to de-dup the index (using an in-vogue term), you are going to have to have some form of hash, together with a fuzzy index of the human readable information to allow the content to be found. I'm sure it sounds interesting, but I cannot see it happening.
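
A minimal sketch of the 'some form of hash' half, using a plain SHA-256 digest as the content identifier (this is only the naming side of the problem; the fuzzy human-readable index and distributing it are the hard parts):

import hashlib

def content_id(data: bytes) -> str:
    # The same bytes always map to the same identifier, wherever they happen
    # to be stored, which is what would let caches and peers de-duplicate.
    return hashlib.sha256(data).hexdigest()

print(content_id(b"some named content"))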

Anyway, this could be added as a content-rich application enhancement over IP, using something like multicast, especially if IPv6 is adopted.

0
0

Stoke Council avoids fine over lost childcare data on USB stick farce

Peter Gathercole
Silver badge
FAIL

Don't suggest fining just the person who lost the stick

but also consider the IT/management team responsible for allowing unencrypted devices to be used, especially if the use is codified in a documented procedure.

It must be drummed in at all levels that this type of information stored on portable storage has to be encrypted.

3
0

Virgin demands ISPs end broadband speed 'con'

Peter Gathercole
Silver badge

@Alex Walsh

This has always been the case with both ADSL and Cable. The upload speed is a small fraction of the download speed, and they have never claimed otherwise. That's what the "A (Asymmetric)" in ADSL stands for, and I'm sure it is described in the T&Cs for cable customers as well.

If you really want good upload speed, might I suggest that you invest in a leased line, but I think you will be shocked by the price.

0
0
Peter Gathercole
Silver badge
FAIL

When you get to these speeds

you have to look at all parts of the link between you and where your data is coming from. Just because you can receive 50Mb/S does not mean that the other end can or will send it at that speed, or that the shared inter-ISP links are not congested. Try selecting a service from your ISP, and measuring that.

I think it's about time that people started looking at this in a holistic manner, and stopped expecting the Internet as a whole to have infinite bandwidth.

3
0

North Carolina to raise army of Microsofties

Peter Gathercole
Silver badge

@Neil Gardiner

Your computer may well be able to work in French, but did it teach French to you?

I think that not even the most masochistic person would attempt to use the Windows message catalogue to learn a foreign language!

Still, point taken.

0
0
Peter Gathercole
Silver badge

@James Picket - I disagree

I believe that one is a superset of the other. Training IS a type of education (in the broadest sense), but education is a much broader field than training.

I presume that you are comparing Training as in what-you-need-to-do-a-particular-job, with Education as in what-schools-and-colleges-provide.

But even with these definitions, you often find colleges offering vocational training, at least in the UK. When I worked in a UK Polytechnic (sadly a type of institution that no longer exists here), we were often approached by industry to provide training courses for particular subjects and fields. It seemed more natural at that time than approaching a commercial training organisation, and provided much needed cash to the Poly to help provide a better all round service.

Maybe I am looking through rose-tinted glasses, but I don't think so.

0
0
Peter Gathercole
Silver badge

Once upon a time

the man pages were really the documentation. I'm talking Bell Labs UNIX version 7.

If the man pages were not enough, and there was nothing in the "UNIX Papers" documents (that were shipped as n/troff source files almost complete on every V7 tape), then you could resort to the source (which was also shipped, at least to educational customers).

I get tired nowadays of typing "man something", and being given a stub man page that suggests I type "info something", which gives no more information (I actually do not understand why info is supposed to be better than man, I always trip over the key bindings, even though I am an emacs user).

Now I know that something like sendmail or perl cannot be described in a <10 page man page, and that large packages like Open/Libre Office deserve their own books, but I really miss getting comprehensive documentation of at least the usage of a command through man.

2
0
Peter Gathercole
Silver badge

@Myself re: documentation under GPL

As I found out when I looked, what you need for documentation, and I presume training material, is the GNU Free Documentation License, or GNU FDL. I should really learn the lesson of checking before posting rhetorical questions.

0
0
Peter Gathercole
Silver badge
Unhappy

@Blarkon

The argument about Open Source documentation and training material illustrates a circular problem, that of who actually pays to develop the documentation and training.

Training material costs money to develop, so people who do this sort of thing for a living expect to get something back to make it worth their while. Open source organisations can build a business model around this, but they have to get paid for the training and consultancy they provide. Free software does not mean free training.

Microsoft, on the other hand, can divert money paid to them by current customers into developing the training material to lock in the next generation of customers, who will then fund the next generation ad nauseam. And because they have an effective monopoly, they can browbeat the education departments with 'Of course your students need Microsoft Office skills, after all EVERYBODY uses our software'

I'm not saying that this is any different to other programs given to education by vendors, except that Microsoft can use this to reinforce their monopoly paid for by their already locked in customers, in a way that nobody else can.

Of course, in a perfect world, educational institutions would write their own material around open software, and in the spirit of the Open Source movement, contribute that material for other people to use without cost (can you publish training material under GPL, I wonder?)

Unfortunately, this is not an ideal world.

1
0

Royal Wedding: Prince Charles is a ZX81, Wills is an iPad

Peter Gathercole
Silver badge
Boffin

My ZX81

was a useful stop-gap until Acorn delivered my BEEB (ordered on the first day of them accepting orders).

It didn't crash, as I took a Tandy keyboard, re-wired it (or actually re-painted the tracks on the membrane with silver paint) and connected it through a long ribbon cable, adding a power switch to the keyboard. Once I did this, it became quite stable, even with a Quicksilver pass-through audio board (with sound modulator to the TV) between the '81 and the RAMpack. Never needed to touch the computer itself. No problems until my homebrew power supply blew its bridge rectifier.

It was also interesting re-mapping the 1K internal RAM to another memory location when the RAMpack was attached, and putting a second 1K of static RAM under the keyboard connected on the ULA side of the bus isolating resistors, allowing me to change the IR register, which was used as the base address of the character table! Yay, programmable character set and (if you worked hard) bit-mapped graphics.

I actually tried writing a Battlezone variant, but it was just too slow in Slow mode.

Funny, I've just been called a hacker for a completely different reason by my colleague in the next desk. I wonder why.

3
0

Man cracks open floppy disk, inserts USB Flash drive

Peter Gathercole
Silver badge
Stop

It's worse than that...

What if he wants to add another micro-channel card to the system? It is necessary to have the reference disk to add the ADF file for the new adapter!

1
0

So did Windows Phone 7 'bomb in US'?

Peter Gathercole
Silver badge
Linux

@Lars

As do I (almost no MS software in sight on MY, rather than the rest of my family's systems), but think how different it would have been if that had happened in 1982!

0
0
Peter Gathercole
Silver badge

@Roger Jenkins

Some of us even remember when UNIX was new, and all microcomputers were 8 bit and minicomputers were 16 bit (and some mainframes had 24 or even 36 bit wordlengths).

Seriously, if (Intergalactic) Digital Research had persuaded everyone that MP/M (the multitasking variant of CP/M) was the OS to use, then desktop PCs would have been able to multitask from the word go. And that would have really changed history. But remember, even CP/M was not original and was superficially a rip-off of RT-11 on DEC PDP-11s, even down to the device naming and PIP.

Still would have preferred a UNIX derivative on the desktop, even if it had to be crippled by needing to run from floppy disks (UNIX V6 on the PDP-11 circa 1975 - kernel less than 56K, ran [just about] on systems with 128K memory, able to multitask, multiuser by default with a recognisable permissions system). Definitely doable, although reliable multitasking without page-level memory protection would have been a challenge.

6
0

Top 500 supers: China rides GPUs to world domination

Peter Gathercole
Silver badge

@jdx

The highest system on the list that looks like it is predominantly funded by a commercial organisation is the EDF research system at #37. You may like to suggest that this is partly funded by the French government (like #76), in which case it would probably be #78, which is just described as "IT service provider, Germany".

So yes, if you had the money, and the will to run the infrastructure, you could buy one and HP, IBM or Cray would be delighted to sell you one, but to be honest, what would you use it for! Even the Intel ones are not suited to run Crysis.

On reflection, if you go back 8 or 9 years, you can see a number of commercial organisations with systems in the top 100, including banks and telecoms providers. But this was when you could estimate the power of a system by adding the component parts, not by proving that it could run such jobs. The bank I used to work for had their SP/2 AIX server farms listed, because they were clusters. They were never actually used for any HPC type workload ever.

0
0

Robot wars break out on poker sites

Peter Gathercole
Silver badge

I will put my cards on the table

and openly admit that I do not play poker, either face-to-face or on-line, but it seems to me that in order for some people to keep winning, and for the house to take its cut, it is obvious that someone has to keep losing, and losing a lot.

To me, this indicates that there is a significant group of mugs, self-renewing as each wave loses all their money and is replaced by others suckered in by the advertising that is EVERYWHERE, it seems.

So there are people who keep playing because they are better than the newbies, and there are people who run bots that can do the same. Sounds like neither of these two groups has anything to complain about, as they will probably break even; otherwise they would stop.

So how many here admit that they've regularly lost money? I'm sure that if they do it will have been as an 'experiment' or 'just to try it out', not admitting that they're the mugs.

I think that the whole industry is unethical, and should contain greater safeguards for the uninformed. But the late night TV interactive game shows are allowed, where the odds are so obscured that you can't tell what the payback is, so I can't see on-line poker being stopped. I just feel sorry for the victims.

1
0

Pure One Mi portable DAB/FM radio

Peter Gathercole
Silver badge

@stucs201

If you've ever used a DAB radio on non-rechargeable batteries, you will know how expensive they are to run....

I guess using AAs would allow you to put rechargeables in, but would you be able to charge them in situ?

0
0
Peter Gathercole
Silver badge
Unhappy

Blah blah blah sound no good blah blah blah

It depends on the bitrate of the channel and the strength of the signal in your area. The lack of hiss is well worth it, and to be truthful, most people listening in kitchens, sheds, cars etc. probably would not be able to tell the difference between 128kb/s MP2 and 256kb/s MP3. It's the dropout rate that is so bad.

Classic FM and Radio 3 (160kb/s and 192kb/s respectively) sound great in a good signal area, with a good receiver and quiet conditions. Unfortunately, I don't live in a good signal area, and even though I have a Pure Highway specifically for cars, I can only get any DAB channels for about a third of my commute to work. But then, even FM drops out in one part of my journey.

DAB is a flawed service, I admit, but it is worth keeping unless it is replaced with something better, but even then many will whinge about having to buy new receivers (like me, I have 5 DAB radios).

1
0

The forgotten, fat generation of Mac Portables

Peter Gathercole
Silver badge
Happy

68000

If you ever see references to Dragonball processors, as used in PalmPilots, then these are low-power 68000 processors.

I'm sure I was poking about inside some consumer device (it may have been a Freeview TV box) recently, and came across a 68K based SoC being used as a micro-controller, which probably means that they are still being made.

The 68000 family should be regarded as one of the classic processor designs, alongside the IBM 370, the PDP-11, the MIPS-1 and possibly the 6502. Beats the hell out of the mire that Intel processors have become. Some people might also say that the NS32032 processor and maybe the ARM-1 should also be included in this list.

5
0

Hacker unshackles Kinect from Xbox

Peter Gathercole
Silver badge
Alert

This is a first step

If someone can work out how to drive the thing, that information would be very useful to someone who might want to make a work-a-like, and deprive Microsoft of hardware sales. I'm not saying that Microsoft is correct in what it is doing, but the reason why they are doing it is not really that hard to see.

Of course, MS would be able to prevent importation to countries with valid patents, but that would not stop imports from China via Ebay or the like.

If you look now, you can see non-licensed Wiimote-a-likes, and they are cheaper than the Nintendo originals. The same would happen for Kinect.

I thought that there were several precedents set for reverse engineering. Cases involving garage openers and inkjet cartridges spring to mind, and I believe that they all went against the companies attempting to maintain their monopolies.

2
0