1701 posts • joined 15 Jun 2007
Who down-voted me? I thought this was quite innocuous!
I will admit it would have been better if it was a Sinclair Z88, but it was close enough.
I don't know
I spent £6 on an Amstrad NC100 recently just as a curio. It was about as cheap as the horse brasses my wife bought at the same car boot, and I believe a better buy (and it still works, surprisingly!)
Sometimes it's worth squandering a little money just to own a piece of interesting history.
The ZDNet article is dated 2003. Eight years on, AIX is still IBM's main POSIX OS, and is unlikely to be completely replaced as long as paying customers want it and IBM makes a profit. And I believe that IBM is making quite healthy profits from their microelectronics division, which produces almost all of the Power(tm) processors that the world uses.
I don't doubt that the proprietary UNIX systems market will continue to shrink, and I don't think that any of the remaining players (IBM, Oracle and especially HP) are really interested in putting large amounts of money into further developing genetic UNIXes (although they are all pretty well developed as they are).
Linux still has a way to go (IMHO) before it can match HP-UX, Solaris and AIX in overall manageability. I keep expecting to see some major announcement from a large vendor about their Linux distro being as good as the proprietary UNIXes, but I have yet to see it. I am beginning to think that people like Red Hat and SuSE (Attachmate?) still have a small-system mentality, because of the current in-vogue push for virtualised smaller OS instances running on big systems. Or maybe the fact that Linux is an Open Source OS means that there just is not the money in it to make that final push into the critical systems market that UNIX currently occupies.
I'm sure I don't know. All I know is that I prefer UNIX (and Linux) to the Microsoft alternative, but maybe I'm just getting old.
BTW. I've often wondered whether there is a degree of jealousy in your comments. Generally, UNIX jobs are still better paid than Linux ones. Are you just wishing for proprietary UNIX to disappear to bring UNIX people down to the level of Linux wages? Just speculating.
@John Riddoch re. Out of Order
Power 6 didn't do out-of-order execution, but Power 5 did, as does Power 7, so Alison has some justification for saying it's not new.
IBM just had to learn what Intel did with the Pentium 4: that high clock speeds and deep pipelines are not the answer to overall throughput. That and power consumption issues resulted in Power 7 being a very different processor from Power 6.
The reason why Power 6 did not do out-of-order execution was (as far as I am aware) a result of IBM pushing the clock-speed.
I am glad to see that there is still someone other than IBM investing in non-Intel processors. The world will be much more boring if/when x86 becomes the only show in town.
It will be interesting to see how independent comprehensive benchmarks show these systems vs. Power 7 and Power 7+ and the current crop of HP systems, not just the cherry-picked "World Record" results that Oracle put in the announcement. Not that Oracle are doing anything different from all the other hardware manufacturers in their marketing spiel.
@Field Marshal etc.
Yeh, yeh. Very droll.
I was trying to exclude the daft things students do at University.
But seriously, COBOL is quite definitely a commercial language, and is not at all suited to scientific work. It's missing lots and lots of things you take for granted in any better-suited language. There is only one language (apart from the out-and-out weird ones for specific purposes) that I can think of that is less suited, and that is RPG!
On the subject of 6502 assembler, I'm sure if you looked hard you may still find a BBC Micro or two buried in the depths of some lab somewhere. BBC BASIC was written in 6502 assembler originally, and people did lots of interesting things in that, so 6502 assembler by proxy.
I always assumed
that relativity predicted that you could not travel at the speed of light, because it would imply infinite mass and thus infinite energy.
But if there was a discontinuous way of jumping over the speed of light without actually accelerating through it, I believe that the equations could still hold, although I suspect that it would require a completely new branch of physics to explain the discontinuous speed jump in the first place, and also some strange concepts like negative mass.
I'm expecting serious physicists to rip this suggestion to shreds (I got no further than Principal Physics in my General Science degree thirty years ago - equivalent to the second year of a normal Physics degree), and I'm expecting to be thumbed down, but it will be interesting to see what is said!
@Gav - I think that COBAL is so secure
because you are the only person who knows about it!
I suspect that you mean COBOL, and if there is any modern (post-millennium) serious scientific application (I will not accept financial software as falling in this category, even if it is for scientific establishments) written in COBOL, I'll eat my copy of K&R.
What it says is that 55% of Android users will definitely switch to another Android device. That does not automatically mean that 45% will definitely move from Android! There are no figures for "maybe" or "don't know", or even "I'll see what's out there when I'm ready".
It also looks like the 55% is Android customers who will stay with Android, but definitely switch vendor. That may not include Android users who actually decide to stay with their vendor. Including that figure may change the overall picture for Android.
When it comes to generic OSes, brand loyalty is not so significant. Most knowledgeable people, assuming that Android is much the same across vendors, will compare battery life, features, or reviews. For customers locked in with Apple and RIM, the only way they can maintain their user experience is to stick with the brand.
I probably will not stick with Samsung, but I will definitely be getting an Android phone, unless, that is, a WebOS device comes my way at a knock-down price.
But this is all surveys and statistics anyway, and you know what they say about those....
If it comes to court (in the US), and Apple offers to license the patents for a reasonable (or even a generous) amount, and pay the damages, I think that VIA would have to accept that as settlement.
True, they could get an injunction and try to keep it going for as long as possible, but the US courts are unlikely to allow any injunction to persist if what the court deems a reasonable offer to settle has been made, and there are rules about how you value what such a settlement can be.
I'm all for Apple being hoist by their own petard, but I don't think a single case like this is likely to change their behaviour.
Hmm. Not sure they could, even if they wanted to.
It depends on where VIA shares are listed, and whether the company maintains control over a majority of their own shares.
If they were listed on NASDAQ (I've just checked, and they are on the Taiwan stock exchange) and did not hold a majority shareholding in themselves, then it would be possible, but if a majority of the shares are not being traded, there is no way that Apple can force the VIA board to sell in a hostile takeover.
And I don't think that a US court could force the winner to sell themselves as part of a settlement (this would be completely stupid), and if the shares are listed outside the US, then the only pressure Apple could put on VIA is commercial and other patent lawsuits.
VIA appears to be part of the Formosa Plastics Group, so they may be difficult to challenge anyway.
Apple has been playing a very dangerous game, and I think they are about to find this out.
I heard this at around the same time
while I was working for an AT&T and Philips joint venture that was selling fibre-optic kit to BT.
But consider the radio-plays. Each time a record that is still in copyright gets airplay, it earns a play-fee for the copyright owner, and probably also the artist (this depends on their contract with their record label).
A small proportion of the PRS licence paid by shops, DJs, and other organisations that play music in public places is also distributed to all artists who still have copyright on their works. There will be residual fees if it is included on a compilation, and for use on adverts. There are also ringtones, remixes and samples on modern records, and if I thought hard about it, I could probably think of other uses of recorded music that may generate revenue (ahh, another one - games, although probably not Max Bygraves. And another, YouTube.)
I'm sure that most artists will not get a lot from this, but there will be some revenue, and something is better than nothing!
@Alan. I do not believe that this is correct
I believe that sheet music actually produced however many years ago is copyright-free, but if a new edition of an old work, newly typeset with "significant changes", is published, then this has a copyright of its own.
I quote from the copyright section of the CPDL website, which is a site for choral music in the public-domain, for whom adherence to copyright law is essential. I assume they have done their due-diligence.
"Can modern editions of public-domain music be copyrighted?
In short, the answer is yes. However, generally there has to be significant articstic/editorial content to make an edition copyrightable. There are a spectrum of editions. On one end are editions which are not copyrightable: these include old editions with expired copyright as well as republications of public domain editions which use the original engravings. Editions which are based on public domain music and add no other editorial content probably are not copyrightable. Further along on the spectrum are editions which include editorial explanations, piano reductions, translations and other additions. These aspects are copyrightable; however, if you perform an edition without using these additions, it might be difficult to prove that you have violated copyright law. Nevertheless, you certainly could be sued, and the resulting cost would be great, whether you lost or not. Further along are full-blown arrangements based on public domain works. These are fully copyrightable and can not be copied unless permission is granted by the copyright holder. The problem for the choral director is that most editons of older music fall somewhere in between being uncopyrightable and being fully copywritable. Add in the problem that almost all music today has a copyright notice (whether that notice is valid or not) and it becomes easiest to assume all editied music is copywritten." (sic - this was a cut-and-paste from their web page, I must point out the typos)
If what you have is a recently published exact facsimile copy of a score that was originally published over 70 years ago, then you could be correct, but the music publishers are wise to this, and only have to put an explanatory note, incidental 'clarification' or even 'corrections' onto the copy for it to be covered by a new copyright.
I'm sure that there are national differences in copyright law, but this is what I work to.
99% out of copyright?
What are you referring to with your claim that 99% is copyright-free?
Certainly not recorded music. Records became really affordable and common in the 1950s, with, I guess, the golden years for the recording industry and artists being the '60s and '70s. Almost all of that music is certainly within copyright.
If you are referring to sheet music, then the rules are different, but still most published sheet music will still be in copyright. And it is not only photocopying sheet music that is not allowed. If you take a piece of sheet music (called an imprint), and transcribe it by hand into Sibelius or Lilypond, then this is against copyright law.
You can only transcribe from an out-of-copyright source for it to be legal.
Even for music that was written hundreds of years ago, you are not allowed to transcribe it from a recently published copy, as it is the imprint, not the music that is copyrighted. This is one of the reasons why music publishers keep putting "Revised in XXXX" on the bottom of their music. Just changing the date counts as a revision, and thus renews the imprint. I often think that the errors you find on sheet music are deliberate, so that the publishers can track down exactly what imprint was transcribed onto the 'net.
This is one of the reasons that the Harry Fox organisation was able to take down so many of the guitar tab sites, as they claimed that they were transcribed from copies in copyright.
On a different note, even if someone writes and records their own music, they have automatic copyright ownership, even if they choose not to assert it. Copyright is automatic in most western legal systems.
Mind you, I wonder what happens with the records that were already deemed to have fallen out of copyright, which will now be covered (anything released from 1941 through 1961). Some of this will already have been published as copyright-free. How do you put the genie back in the bottle?
I would guess
that most of the time, people use the WiFi on a smartphone in bursts. What I think you could do is to have a delayed power-down, so that all the time there is a stream of packets, the radio would remain on, but as soon as there is a gap in the flow of inbound packets destined for that device for more than the delay, it would power down the radio. Generally speaking, people will not be running server-type services on a smartphone (OK, I know that there are exceptions, like UPnP and DLNA), but most things will be initiated from the smartphone.
I suspect that for TCP-type services, you could deliberately ignore or even NAK the first packet to force a re-transmit while the radio powers up. This would not work for UDP, ICMP or lower-level protocols, but UDP services normally have some mechanism for handling lost packets, and I doubt that many people use layer 1 services on a smartphone.
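The delayed power-down idea can be sketched as a tiny state machine. This is purely a toy model to illustrate the timing logic (the class name, delay value and timestamps are my own invention, not any real driver API):

```python
# Toy model of inactivity-based radio power gating: the radio stays on
# while packets keep arriving, and is considered powered down once the
# gap since the last packet exceeds a configurable delay.
# Timestamps are plain seconds.

class RadioPowerGate:
    def __init__(self, idle_delay=2.0):
        self.idle_delay = idle_delay   # seconds of silence before power-down
        self.last_packet = None        # time of the most recent packet

    def on_packet(self, now):
        """Record packet activity; this would also wake the radio if off."""
        self.last_packet = now

    def radio_on(self, now):
        """The radio is on iff a packet arrived within the idle delay."""
        if self.last_packet is None:
            return False
        return (now - self.last_packet) <= self.idle_delay
```

With a two-second delay, a packet at t=0 keeps the radio on at t=1.5, but by t=3 it has powered down; the forced re-transmit mentioned above would then arrive as a fresh `on_packet` call that wakes it again.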
I do this with Windows and KDE, but not with Gnome 2. I've done this for years, and also auto-hide it.
I won't disagree about choice, but choice is not what is needed to get non-technical users to use Linux, and lots of non-technical users are what is needed to get the application and content providers to take note of Linux as a viable desktop.
Making it so you have to install non-standard applications in order to make it usable is not going to get you the critical mass of users, and will keep Linux in the hobbyist and technical space with no hope of going mainstream.
Canonical appear to have bet the farm on the new interface, hoping that the non-technical user will see the bling and want it, but quite frankly, unless they get a manufacturer to make it a pre-installed alternative to Windows, users will never see it to want it, and Microsoft will never allow one of their large OEMs to also offer Linux without applying their anti-competitive practices.
I really thought that Ubuntu was the distro that might finally cross over into the mainstream.
I've now completely changed my mind, and I will be looking for a new distro.
What's changed my mind? Not the radical change in user experience, not the continual churn of new applications for commonly used things like listening to music or watching video, and not Canonical ignoring their loyal user-base but going for the 'new' (although all of these things are annoyances).
It's actually the way Canonical has split the established user-base into "I don't like it" and "I think it's the bee's knees" camps over Unity. What they've done is effectively alienate a considerable part of the people who (like myself) were strong advocates for, and encouraged the use of, Ubuntu among users of other OSes. Unfortunately, the most valuable advocates are probably the people with the most experience of Linux and Ubuntu, and they are the most likely to be the ones upset.
I don't actually mind there being another UI. I don't mind them switching default apps. What I do mind is the "do it our way or not at all" approach of removing the old way of doing things. I feel it's almost as if they are deliberately making a statement of disinterest in some of their most loyal users.
I have recently been unpleasantly reminded about how unresponsive Canonical can be. I know that they have limited resources, and also rely on knowledgeable community members, but I don't like how fast things change in the normal release process, and how quickly problems are swept under the carpet. I keep to LTS releases, because making significant changes on a regular basis to my daily use machine is not of interest to me. I have been using Hardy since about 6 months after its release, and I was suddenly informed that Google were stopping builds of Chromium for 8.04, because it had moved out of support.
They were right. As a desktop release, Hardy dropped out of support in about May this year.
Why was I still using Hardy? Well, in Lucid (10.04), Canonical imposed KMS (although to be fair, it was part of the kernel), and completely broke suspend and resume support for ATI Mobility graphics adapters even though it worked flawlessly in 8.04, broke Composite rendering support (for Compiz), and also crippled Xv performance for video playback. Despite several defects raised by users of Thinkpads and Dell laptops, the calls languished unresolved, and the last suggestions were to upgrade to 10.10, which is *NOT* an LTS release. I spent tens of hours trying to work out why all of these things were broken, before deciding that I could not afford the time to understand enough about KMS to be able to do anything useful, and went back to Hardy.
I've now (mostly) switched to Lucid, but have had to disable KMS (which is a blunt fix) to allow suspend and resume to work, and also turn off Advanced Desktop Effects (which I used to catch people's attention), and switched mplayer and Xine to use a raw X11 frame buffer for rendering video (I've not worked out how to do the same for GStreamer/Totem). If I can't get Composite rendering working, there is basically no chance that I will be able to use Unity on my Thinkpad, even if I wanted to.
So, I will keep the Hardy partition until I've checked that there are no other gotchas from Lucid, and will then look around at my options. Maybe I will use Xfce on Ubuntu, but it was nice, for a while, to be able to use a Linux distribution that just worked without too much fiddling.
@AC re: Horse-shit
Of course, when it comes to social engineering, UAC and a popup sudo are no different, and are both as easy as each other to subvert.
But most users, and I suspect you as well, have probably never used a Linux system where your ID is not only not root, but is also not in the administrator group. It's just not necessary for most personal systems, and without the ability to run sudo or knowledge of the root password, it is very, very difficult for an *ordinary user* to become root or touch system files.
But it's all about trust, as I said in a previous comment. If your trusted system is compromised, then this can propagate throughout a whole environment, even if Active Directory is involved. And Active Directory only protects a system while the group policy is available. Although I do not know, I strongly suspect that if you can get into a Windows system configured to use group policy using an OS weakness, like all systems, it will be possible to *TURN OFF* the requirement for the policy, making it just another Windows system with all of the inherent and widely publicised problems that Windows has.
I have also read that group policy often just turns off the UI to various things. I have found myself that it is sometimes possible to run the CLI utilities on a locked-down Windows system when the group policy prohibits the Windows utility. This makes the security no better than "security by obscurity".
I suspect from your comment of "nothing (and I mean *NOTHING* is more secure than a properly configured AD and correctly-configured clients" (sic) that you have not looked into SELinux or AIX with RBAC, both with Kerberos turned on, which both implement service- and object-based tokenised remote authentication that is very similar to the Active Directory support of Windows. In fact, Active Directory is really an extended LDAP directory service with Kerberos authentication (if configured) to access the directory. LDAP and Kerberos were both originally implemented on UNIX.
AIX had a kerberised command authentication system in the SP2 PSSP cluster control package called sysctl over 14 years ago, and UNIX systems that implemented DCE and AFS also had similar features, well before Microsoft implemented Active Directory.
I often comment that the Owner-Group-World access model in UNIX-like OSs is one of their weaker features. But where this simple model scores is that it is easy to understand, and a well implemented simple security model can be much more secure than a poorly implemented complex model. You probably have never had the opportunity to try to break out of a well implemented Linux system where you are an ordinary user, but I assure you that it is possible to make a system perfectly usable while being very, very difficult to break into.
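Part of why the Owner-Group-World model is easy to understand is that it is only nine bits. As a quick sketch (my own illustration, not from any particular manual page), here is how an octal mode decomposes into the three rwx triplets:

```python
def describe_mode(mode):
    """Break a UNIX permission mode into owner/group/world rwx strings."""
    out = {}
    for who, shift in (("owner", 6), ("group", 3), ("world", 0)):
        bits = (mode >> shift) & 0o7           # three bits per class
        out[who] = (("r" if bits & 4 else "-") +
                    ("w" if bits & 2 else "-") +
                    ("x" if bits & 1 else "-"))
    return out
```

So a mode of 0o640 comes out as owner `rw-`, group `r--`, world `---`, which is the whole model for a file in one line of `ls -l` output.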
Most ways that UNIX-like systems are compromised involve the wet-ware that administers the system, and I think that is exactly what has happened at linux.org, and could just as easily happen to a Windows system, even with AD configured.
Firstly, my word! what a provocative tag you have.
Now, regarding "an eye-opener for *nix people"
The problem here is that even quite technical users can be short-sighted when it comes to security. I know any number of very technically able people who regard security as a barrier to work, and quite often do very dangerous things to "work around the imposition of anti-productive security measures".
All the time this mindset persists with people who should know better, we will have the potential for this type of problem.
As a widely used example, ssh is a wonderful tool in the right hands, but allow people who can't be bothered to read the manual, and who use passphrase-less keys and/or distribute a single private key across their entire estate of systems, and you have a disaster waiting to happen. And if some of these people have escalated privileges, or use the same key for their own ID as they do for root, then it is just a case of lighting the blue touchpaper and waiting for the inevitable explosion.
Also, ssh can be used to circumvent many other security systems in ways that range from the constructive to the malicious. This makes it a multi-edged sword that can make magic happen, or can rip carefully thought out security measures to shreds at precisely the same time. How do I know? Because I have used it extensively to do just that (I think constructively, but sysadmins of other systems where I am a mere ordinary user may think differently).
SSH can be abused on many OSes, including pretty much all UNIX and UNIX-like systems (and this includes BSD, for those of you who have been suggesting that as a more secure OS), and there is at least one port of an SSH server for Windows systems as well.
In reality, where you have a mechanism for one system to trust another using whatever means, there is scope for an intrusion on the trusted system to spread to the trusting system. And in the modern environment, where you need to manage hundreds or even thousands of systems from a central location, these trusts are essential. I believe that this is an axiom, and applicable to all OSs.
User training, partitioning of management domains, and insisting on adherence to properly thought out security policies, especially amongst the sysadmins and power users, is the only way to limit the damage of such a compromise.
Even if it is a barrier to productivity.
RISC is not a design, it's a design philosophy
It's not clear that a RISC processor is better at all things than a CISC, even after having been around for 30 years or so.
That's why modern RISC designs like POWER, SPARC, and even the good old ARM processor are having complex instructions added to their ISAs (such as Thumb-2, VFP and NEON) as time goes by.
Increasingly, the difference between an augmented RISC processor and a CISC processor whose frequently used instructions have been engineered to run in small numbers of clock cycles is becoming more and more difficult to see. It now appears to revolve around electrical power rather than computing power.
But it's all irrelevant, really. On a personal computer, unless you do hard-core gaming or real-time media transcoding, you just don't need anything much faster than around a 2GHz processor with some graphic assist. We've just got so used to bloated OS and application code that we accept that ever-faster processors are required without questioning why we need them.
@Nader re: physical borders
That's exactly it. The content producers demand different distribution rights on their content depending on the physical location of the consumer.
If you look at the American TV-on-demand sites you will find that they have negotiated the rights to the content *IN THE US ONLY*. This is normally because other companies have bought the rights for the same content in other countries.
For an example, let's assume that Universal Media Studios make another series of Heroes. They license commercial broadcast in the US to NBC and in the UK to Sky.
If someone in the UK can watch or purchase it from the NBC on-demand service, they might not take out a Sky subscription, causing lost revenue to Sky.
So a condition of the licence that Sky enters into with UMS is that US distributors must restrict online access to people in the US only, and if they don't, you end up with severe lawsuits between all of the companies involved.
The only way that will change is if production and distribution companies take a whole-world view, which is likely to harm choice by making large regional minorities too small to be considered in a whole-world market. There is no perfect solution.
We as consumers must realise that production and distribution companies are commercial enterprises, whose very existence is conditioned on their need to get as much money out of their customers as possible.
I look back with some fondness on the days before the growth-is-essential mantra, when it was enough for a company to ensure its existence, make reasonable but not excessive profits, and provide good employment to their workers and good service to their customers. Maybe my glasses have just taken on a rose-tint.
There aren't half some numpties in the legal systems. If they apply this rule to 'phones, then most smartphone designs will be blocked in Germany.
Anonymous comments (I don't mind titles!)
Can we have two names registered against a mail address?
I am open enough to post many of my comments under my real name (unlike many of you), but I frequently use the anonymous option, normally if I am posting things that may upset my employer, wife, children, the police etc (OK maybe not the wife, she is a technophobe, and does not read the Register, and the police could get a court order if what I have said was against the law).
But I appreciate being able to use an icon with my anonymous posts.
What I may have to do is register a second account with an unrelated name to my real one. If I were allowed to have an alternative "alias" for my account, and be able to select it like I do Anonymous Coward as an alternative, I think that would be really useful.
Now, what has not been used yet, but would be suitably humorous?
Cheap delaying tactics
I suspect that it is much cheaper to file a lawsuit and get a temporary injunction than it is to get one lifted. And if filing it delays a competing product from being launched, it means that you have a longer time to attempt to dominate a market, and reap as much profit as possible.
I can see a scenario where Apple hire newly graduated lawyers on the cheap, and say to them "Here are the arguments, take them and stall in court; it doesn't matter if you don't win, just drag it out as long as possible".
Mind you, I suspect that the Japanese might be prepared to back an oriental company over an American one, especially for technology products where Japan excels, so Apple could have their nose bloodied in court over this one.
I do this often
mainly because finding CDs (and perish the thought, MP3s) of some of my older vinyl is almost impossible.
I leave all of the compression and tone-altering filters out, and only turn the digital scratch filters on if the amount of noise is very bad.
The CDs I produce like this sound very good (to my ears), even using the commodity A-D converters on generic mobos. Even though these cannot do the highest dynamic range, I suspect that my turntable and cartridge combination (good budget equipment - Pro-Ject Debut II with Ortofon OM-5e) is probably more of a limit on the dynamic range than the sound chip in the computer.
I applaud your sentiments and appreciate your actions, but unless and until governments in all countries actually employ people who understand technology and their own patent system, all politicians from any administration will be taking advice from interested parties.
These interested parties are often the people most likely to gain from a strong and all-encompassing patent system, and they have deep pockets so can 'voluntarily contribute' to the process, and will not give unbiased advice. This is especially true in the US, where, to me as an outsider, it often looks as if the government (of all parties) is actually run by big business.
Some of the statements made by the current US administration and echoed by the Europeans sound good, with words like 'reduce administrative costs' and 'reuse patent searches during applications in multiple jurisdictions', but when you look into them, there is no suggestion that pre-grant verification will be any stronger or conducted with any greater rigour; the aim is merely to make the application process easier, leading to still more stupid, unenforceable patents going on the books.
The patent system was designed to protect small inventors. The way it has been corrupted means that it now does exactly the opposite.
@takuhill - from a comment on a previous article
This is a comment I made on a previous article a year ago about General Motors and Tesla, so some of the content may be out-of-context, but it shows some problems with replaceable battery packs.
Someone has to pick up the cost of the loss of capacity after a pack has been recharged a hundred or so times. Leasing makes more sense than owning, as nobody will complain about swapping one that is new for one that is near its end-of-life if they lease it.
You would still have some uncertainty about range, and you would probably have to have some rules about when a battery pack would be retired or reconditioned. Would you make it 90% of original charge capacity, 80%, 50%?
I'm all for this technology, but there are serious wrinkles that need sorting out, not the least of which is the cleanness of the electricity. Also, could the power grid cope with thousands of battery packs drawing tens of amps at the same time? For example, if a battery charging station has 50 packs charging at any time, which draw 30A each while charging, we're talking 1,500 amps, or at 230V, 345kW per station. That's a lot of power. A typical UK house draws about 0.4kW averaged out across the year (according to EDF), so the charging station would put the same load on the grid as 800+ houses.
These figures are rough, based on the Tesla's battery pack, which apparently takes 3.5 hours to charge at 70A at 240V (thanks, Wikipedia), mapped onto something that is more likely to be found in the UK urban environment.
How many petrol stations serve as few as 150 customers in a day (assuming packs take 8 hours at 30A to charge)? And you would have to be pretty certain that the packs could not be nicked for their scrap value. And how large would the station have to be?
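Laying the back-of-the-envelope sums out explicitly (all the inputs are the rough assumptions stated above, not measured data):

```python
# Rough charging-station load estimate, using the figures assumed in the text.
packs_charging = 50      # packs on charge at any one time
amps_per_pack  = 30.0    # assumed charging current per pack
mains_volts    = 230.0   # UK mains voltage
house_avg_kw   = 0.4     # average UK household draw (EDF figure)
charge_hours   = 8       # hours to fully charge one pack at 30A

total_amps = packs_charging * amps_per_pack            # 1,500 A
station_kw = total_amps * mains_volts / 1000.0         # 345 kW
equivalent_houses = station_kw / house_avg_kw          # ~862 houses
packs_per_day = packs_charging * (24 // charge_hours)  # 150 packs/day
```

So 50 chargers at 30A each works out to 345kW, the average load of roughly 860 houses, and with an 8-hour charge each charger can only turn around three packs a day.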
So, interesting ideas, but currently, fossil fuels still rule, as indicated by the icon.
Wow. That would be expensive
because the DRS6000 was launched in 1990! That must have been one hell of a deal getting a prototype system 10 years before the product launch!
In 1980, system memories were measured in single-figure megabytes.
When I went to university in 1978, the IBM 370/168, which was supposed at the time to be the most powerful computer in the UK education system (I'm not sure how accurate that boast was), had a total of 6MB of memory. I can't remember exactly what counted as a lot in 1980, but in about 1984 we paid about £2000 for 1MB of second-hand memory for a PDP-11/34 (before anybody starts: it was in Systime covers, and had the 22-bit addressing feature added by them [not normally an option on a /34], allowing up to 4MB, although we could only afford 2).
Oh well, might as well join in
Thanks, and hope you enjoy whatever it is you are going to.
we have no compelling new technology at the moment. 3D TV has not caught on, and most people either already have HD, don't really care about it, or don't know about it.
What we have here is a down-swing compared to a previous up-swing caused by LCD TVs. People could see that an LCD TV occupies less space for a larger screen, can be wall mounted, and uses less electricity than a CRT, but do many of them care that LED is better than CFL for the back-light? And the current ultra-slim tellies are not that much slimmer than the 2-3 inches of the last generation in the scale of a living room. People are realising that, as long as it works, their two year old telly is still adequate for watching Coronation Street, the Simpsons, or Mythbusters.
Are we, at last, seeing a return to a domestic consumer electronics market that is not dominated by hype and the need for the latest shiny things? I sincerely hope so.
How music sounds
Considering how few people actually bother to sit down and listen to music in a quiet room arranged around the audio system, does it really matter that the audio quality wasn't listened to?
I would love to have some of the iPod generation(s) listen to a decent audio setup playing uncompressed sources, and I also think it would be a revelation to many of them to actually hear some unadulterated live music (not what you get in a rave or night club!)
I'm sure that the majority of people believe that the multi-track recorded, compressed, bass and treble heavy mush that is turned out by today's modern music publishers, and then mashed to death by the distribution method (particularly FM radio stations) is actually how music should sound.
My audio setup is comparatively poor, consisting of best-of-breed budget audio equipment, most of which is over 20 years old, and I still get a Wow from some of my children's friends when they come and actually hear what vinyl on a reasonable turntable through real speakers sounds like.
Back to the article. As soon as you distribute a good audio source across a network to an iPad or similar mobile audio system listened to in a noisy environment, you might as well be using Compact Cassette in an '80s Walkman as far as accurate reproduction of the original material is concerned. Whilst I appreciate the fact that quality audio equipment manufacturers are making an effort in producing this type of kit, much of it is really just for convenience, not audio quality.
This is negotiated into the contract when the work is commissioned.
Modern productions have clauses in the contracts between the BBC (and ITV and Channel 4) and the producing company (which is almost certainly not the broadcaster), and the actors, which specifically allow the content to be available for a limited amount of time on a view-on-demand service such as iPlayer, as well as having repeat rights. This has been the case for most UK produced programs for many years now, but often does not cover foreign produced material (for instance Torchwood Miracle Day, which was NOT on iPlayer when I last checked).
This is also why some programs are available as unlimited podcasts (very liberal contracts, and probably only on things that have little ongoing commercial value, like news coverage and topical documentary programs), and some are only available for a limited amount of time, where there may be money to be made on pay-for-view or DVD sales.
But archive material is a bit different. You quite often find old programs being repeated on the BBC, both radio and television, which do not find their way onto iPlayer. This is because the original production contracts, and the contracts with the actors, contained clauses for repeat broadcasts, but not for distribution by other means (and this includes DVD, CD and tape for very old series). As these were not things considered when the contracts were drawn up (why should they have been? Nobody thought such things would be possible), the lawyers tread very carefully to avoid the possibility of future royalty lawsuits.
In order to make such material available through things like iPlayer (at least before the copyright expires), it is necessary to get agreement from the production company and all of the actors, or, in the case of a dead actor, representatives of their estate, to allow the material to appear on formats not considered when the original contracts were drawn up.
This can prove very difficult for the older material, which is very unfortunate for us the viewer, preventing some programmes from being available on DVD or on video-on-demand sites.
As an aside, as different countries have different copyright and royalty rules, this won't necessarily be the case for all countries.
Oh well. Thank goodness for YouTube, which appears to have a very liberal attitude towards copyright, at least until challenged.
Z80 DJNZ e - 13 T states if branch taken, or 3.25 microseconds at 4MHz
6502 DEY ; BNE e - 5 clock cycles if branch taken, or 2.5 microseconds at 2MHz
OK, it's one more byte (3 rather than 2), but your assertion that code density == speed is completely wrong when considering 8-bit microprocessors, because there was no overlap in instruction fetching, decoding and execution. The time of any instruction on either a Z80 or a 6502 is exactly what it says, from fetching the instruction and its arguments through to completion. From the end of the last instruction to the end of the next is an absolute time, and is easy to determine.
Many Z80 instructions run to 15-20 T-states, meaning there are situations where it is quicker to run several simple instructions than one complex one, even in Z80 machine code.
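Because there is no pipelining on these parts, the timings can be computed directly as cycles divided by clock; a quick sketch using the cycle counts and clock speeds quoted above (4MHz Z80, 2MHz 6502):

```python
# Absolute instruction timing on non-pipelined 8-bit CPUs:
# time = cycles / clock, nothing more to it.

def instr_time_us(cycles, clock_mhz):
    """Time for an instruction sequence, in microseconds."""
    return cycles / clock_mhz

# DJNZ e, branch taken: 13 T-states on a 4MHz Z80
z80_djnz = instr_time_us(13, 4.0)

# DEY (2 cycles) + BNE e taken (3 cycles) on a 2MHz 6502
m6502_loop = instr_time_us(2 + 3, 2.0)

print(f"Z80 DJNZ: {z80_djnz} us, 6502 DEY;BNE: {m6502_loop} us")
```

Which reproduces the 3.25us vs 2.5us figures above: the denser Z80 encoding loses on wall-clock time.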
@nyelvmark - "20 year head start"
I'm interested in what you are comparing with what.
ARM silicon started appearing at about the same time (give or take a year) as the 80386, and IIRC, ARM systems actually fared remarkably well in benchmarks against the i386, despite being clocked at much lower speeds.
So although Intel had all those years of 8086 development under their belt (which, incidentally, was less than 10 years), as 32-bit architectures you can consider ARM and the first 32-bit x86 processors to be of the same generation, which actually makes the ARM a more 'mature' processor than the 'great leap forward' of the i486.
It never ceases to amaze me
how a company generating quite respectable profits gets blasted for not reaching other people's projected figures for unknown future business.
I'm just waiting for the shares to slide. It just indicates to me how badly broken the Corporate Capitalism economic model actually is.
You obviously don't know what UCAS is for.
although when you think about it, with active-shutter technology it's a simple matter to make each set of glasses see a separate 2D image: open both eyes of one pair of glasses at the same time, alternating with the other pair. It's just a variation on the method of displaying a 3D image.
If they are going to alternate 2 3-D images, that's a bit more difficult, as they would have to display 4 images, and each person's eye would only be seeing an image for 25% of the time. I suspect that this would be detectable as serious flicker by almost anybody, even if the frame rate was adjusted.
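The duty-cycle arithmetic works out like this; a sketch, where the 120Hz panel refresh rate is my own illustrative assumption, not a figure from the article:

```python
# With N distinct images time-multiplexed on one panel, each image (and
# so each open-shutter phase) gets 1/N of the display time, and each eye
# sees updates at panel_hz / N.

def per_image_share(n_images):
    """Fraction of display time each multiplexed image receives."""
    return 1.0 / n_images

def per_eye_rate_hz(panel_hz, n_images):
    """Effective refresh rate seen by each open-shutter phase."""
    return panel_hz / n_images

# Two viewers, each watching a different 2D image: 2 images, 50% each.
print(per_image_share(2), per_eye_rate_hz(120, 2))

# Two viewers, each watching a different 3D image: 4 images
# (2 eyes x 2 viewers), so each eye sees the screen only 25% of the time.
print(per_image_share(4), per_eye_rate_hz(120, 4))
```

On the assumed 120Hz panel, the two-viewer 3D case drops each eye to 30Hz, which is well into flicker territory for most people.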
Re: AC -Yeah... - I'm curious
A-Levels are supposed to be the first step to understand complex subjects in preparation for Higher education. How are you supposed to demonstrate a good understanding of a complex subject with simple questions?
A large number of simple questions may suit subjects at GCSE, but A-Levels are supposed to be Advanced (remember, O-Levels were Ordinary, and A-Levels were Advanced).
I admit that it was over 30 years ago that I took my A-Levels in science subjects, but I remember that at least one paper in every subject required you to analyse a problem and recognise a particular technique to solve it, and then be able to work through that technique to achieve a solution. You could get some marks if you identified the correct technique, but worked it through incorrectly, or even the wrong technique, but applied it competently.
It demonstrated that you had a knowledge of the subject and how to apply that knowledge to a question. It did mean that there was a large element of luck in which questions would come up, but it was expected that you would have a broad enough understanding to field questions from the whole subject.
I have a 17 year old child who is studying vocational subjects, so I won't be able to see what current A-Level papers are like next year, but I shall be interested to see the test papers that my 15 year old is given in a couple of years time.
On the content of each subject, the chances are that my 30+ year old knowledge of the subjects I studied at A-Level almost certainly does not equip me to sit a modern exam, even if I could remember it all. Physics, Chemistry and even Maths have changed significantly in that time.
Give me 6 months of appropriate time to study a modern syllabus, and I would be happy to see how well I would do compared with a modern student.
Yes, but just how long does it take to copy the two files that made up the OS from a floppy, and then prompt for the date and time! Remember that the first IBM PC did not have a real-time clock, or ANYTHING other than a keyboard adapter. Everything else was on a card, including, as far as I can remember, the floppy controller, the display adapter, and the serial and parallel adapters, and they all cost an arm and a leg from IBM. So enterprising third parties produced 'multi-function' adapters that would include a parallel port on the display card, and so on. And there was no plug-and-play, so there was all the hassle of conflicting base addresses and IRQ settings. I'm sooo glad those days are gone.
Anybody else remember the ROM BASIC that the system would drop into if there was no bootable floppy in the drive? If I remember correctly, this persisted in IBM PCs on into the PS/2 line that replaced the PC, although you had to disable booting from the hard disk to get there.
The polytechnic I worked at took a decision in 1982 to install several computing labs full of 5150s. Over the summer we were inundated with the things, with boxes filling all the foyers waiting to be unpacked. Horrible, horrible long-persistence phosphor in the monochrome monitors, and the Poly decided to ditch the one good feature (the keyboard) for a soft-touch silent Cherry keyboard as standard. Ugh.
I never liked them even then. Because they were floppy-only systems, the students had to book the software disks out from a librarian before they could use them, which meant we had fragile 5-1/4in floppies moving around like crazy. We got an agreement through the distributor to allow us to keep the originals safe and issue copies. It was not long before most of the students twigged that they could copy the disks further, and then not bother with the booking system.
I was glad when the first PC-ATs were installed, because then we at least only had to worry about keeping the hard disk clean, and repairing the applications when the students trashed them. Introducing a virus on one of the ATs became one of the most serious offences, and we had to have disinfection sessions to clean the students' own floppies to protect our systems and their work. Mind you, the 1.2MB floppy drives on the ATs caused no end of problems when students tried to write to 360KB floppies in them.
This was waaaaaay before disk cloning was thought about, and everything was done according to the installation process, although one of the labs (not one I worked with) was set up with a low cost (hmmm, relatively low cost, it was still bloody expensive) co-ax CSMA/CD Ethernet alternative called Omninet running at 1Mb/s for file and print sharing.
Interestingly, we had Pick installed on one of the ATs, and Xenix-286 on another.
I still regarded the PCs as poorer teaching tools than the lab of BBC Micros I also ran, and of course 'my' UNIX V7 (and RSX-11M) PDP-11/34e (in Systime covers, with 22-bit addressing and 2MB of memory, and CDC SMD disks to speed it up) was the bee's knees as far as I was concerned, running Ingres to teach relational databases. Knocked Ashton-Tate dBase II (remember that!) into a cocked hat! And it was, of course, far less maintenance work.
The software line-up on the PCs was PC-DOS 1.1 (on the 5150s; the 5157s had PC-DOS 2.1 for hard disk support) with Word 2, Multiplan (the MS spreadsheet before Excel), and dBase II. I couldn't work with Word then, and still find it a traumatic experience now.
We definitely need either a rose-tinted spectacles or an old-fart icon here. I guess I'll just have to use the coat icon. It's the one with the big stretched pockets to hold the 5-1/4 disk box.
@Kirbini - It's amanforMars or one of his clones
This is what he does. You're not really meant to understand it, although there is a message in there somewhere.
There is a school of thought that suggests he writes a comment, translates it to some other language and back using something like Babelfish.
He is a Register treasure!
@King Edward 1
I'm not talking about the extinction of the human species because of applied technology, just trying to put some perspective on what we are doing with regard to relying on ever more complex technological interventions to keep an unreasonable proportion of the population alive.
But more interventions require more resource. I'm sure I heard a discussion on the radio recently which suggested that many countries will be spending significant proportions of their GDP on healthcare within 20 years at current change rates, and the Economist has commissioned a report that presents this as a possibility.
I was actually going to say something about diverse genetic information, particularly the apparently unused parts of the genome, but I was going to put that into the context of the pathogens keeping recessive attack vectors in their genome, although you are right, it runs both ways (but what is the survival advantage of cystic fibrosis, Down's syndrome, Duchenne muscular dystrophy or even short-sightedness!)
My belief is that we will probably never be able to match the natural forces of evolution, although that does not mean that we should stand still. We need to discover replacements for antibiotics, otherwise we could have a new Black Death. MRSA and C.Difficile already provide pointers to this possibility, and TB is already on the way back.
BTW, and this is a bit of a diversion. Removing fire from our tool-chest cannot happen as long as there is organic material in our environment. But motorised transport? Or the technologies that sustain the Internet? We could lose all of those.
Remember that it is still within the span of a single human lifetime that *ALMOST ALL* of what we regard as modern life has come about (OK, the steam engine, and simple internal combustion engine are more like twice, but even 70 years ago, horses were still the primary power on the land). The rate of technical change has been staggering and accelerating. There is a chance that we could be knocked back into a pre-industrial society. It would not take that much, and if there was suddenly a critical shortage of energy (like if there was a cascade failure of the electricity grids caused by a serious EMP overload from sunspot activity [I am not normally a doom and gloom monger, but the chance is there, NASA says so]), we may lose the capability to rebuild the infrastructure, including the power grids themselves. It takes a lot of serious resource, and a long time, to build the number of large high-voltage transformers that might be needed.
We've used all of the easy-access energy and other resources, and if we were pushed too far down, it would be incredibly difficult to climb back up to where we are without opencast coal, iron, or copper ore mining or easy to extract oil.
And don't start talking about solar, wind or wave power. Without an existing technical and transport infrastructure, this cannot be deployed, maintained, or utilized. I challenge you to build a working wind turbine generator (with a reasonable capacity) with just the raw materials you can find within a 10 mile radius of where you are. You are not allowed to cheat by using existing motors or alternators because that is part of the wind-down, not the rebuild of technology.
The result of a breakdown would be chaos, and conflict over resource, and could lead to a new dark age where the remaining resources were controlled by force. It would be impossible to do anything at a national level. In such a world, there would be NO internet, NO national transport system, NO national electricity grid, and the road and rail systems would degenerate remarkably rapidly.
Just think what panic there was in the UK 10 years ago because supplies of petrol and diesel were disrupted. And that happened within a space of just days!
Do you actually remember how useful (or not) personal computers were a mere twenty years ago, before the Internet? Answer: not very. Good for simple games and small data projects. There was a society, however. Computers are vital for our current way of life, not our survival. They just make it easier.
But none of this would mean an automatic extinction of the human species. The genetic sieve would probably cut back in, and maybe, just maybe, inherited intelligence could prevent a fall back to the stone age. But people would start dying of what we now regard as curable diseases merely because the technical interventions were no longer available to keep them alive.
Although many of these apparent breakthroughs are interesting, it is worth noting two things.
Firstly, apply the rule of unintended consequences to the breakthrough. It may take some time to find out what else these substances do, and some of these may be undesirable, meaning that the technique may never come to anything.
Secondly, the real world is rather akin to a battlefield at a genetic level, with an organism's immune system on one side, and the survival mechanisms of an untold number of pathogens on the other. On both sides, the genetic sieve operates.
Even before humans interfere, what we have are the forces of evolution working against each other. If you think about the operation of the genetic sieve on survival, it is necessary to remove the genetically susceptible members of a population to allow the non-susceptible members to survive and procreate. But the same is true on the other side of the battle, and the most obvious example of this is antibiotic-resistant bacteria, where the very few surviving members of a pathogen population after the application of antibiotics become the basis of the following generations. This is exacerbated by over-use of antibiotics, and by courses not being completed, but it will operate eventually anyway.
If we interfere, by allowing susceptible members to survive and pass their susceptibility on to their offspring, we are weakening the population as a whole, and building into the species a reliance on techniques and technology for survival. Just think what would happen if modern medicines became unavailable. I don't think we would quite go back to the dark ages (after all, we do now know how infections spread, and can take physical precautions), but it would not be pleasant.
But even as we are making the sieve less effective on the survival side, we are adding to the sieve on the other. Evolution will eventually allow the pathogens to work around any barriers we put up by making successful members of the pathogen population pass on their success and killing the unsuccessful ones. Life, as has been noted elsewhere, is incredibly persistent, especially at the bacterial level.
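As a toy illustration of that selection pressure (the kill rates and population sizes here are invented for illustration, not measured biology):

```python
# Toy model of the 'genetic sieve' on the pathogen side: antibiotics
# kill almost all susceptible bacteria but spare most resistant ones,
# and the survivors regrow to fill the niche. All rates are illustrative
# assumptions.

def treatment_round(susceptible, resistant, carrying_capacity=1_000_000):
    # Antibiotic course: 99.9% of susceptible die, only 10% of resistant.
    susceptible *= 0.001
    resistant *= 0.9
    # Survivors regrow to carrying capacity in the same proportions.
    total = susceptible + resistant
    if total > 0:
        scale = carrying_capacity / total
        susceptible *= scale
        resistant *= scale
    return susceptible, resistant

# Start with a single resistant cell in a million.
s, r = 999_999.0, 1.0
for _ in range(5):
    s, r = treatment_round(s, r)
print(f"resistant fraction after 5 rounds: {r / (s + r):.3f}")
```

Even starting from one-in-a-million, the resistant strain dominates within a handful of treatment rounds, which is the whole point of the argument above.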
We cannot, and will never, reach a utopia where diseases are eliminated. Evolution will see to that. And the human species really has no guaranteed right to survive over any other!
I think I've still got that copy somewhere. On the cover, it has something that looked like car headlights to give focussed transmission and reception. Strange I should have kept it, because I did not buy PW regularly.
Boy, does my memory work in weird ways!
I think that the vertical lines are actually artefacts of the film processing if that is what was done. This seems entirely reasonable and consistent.
Still, if I got pictures back from the developers with defects like this, I would ask for a set of reprints!
The same browser?
I wouldn't even use the same computer!
@AC. I take exception to the HPC comment
I am involved in running a Top 500 supercomputer site, and it is reliable. So reliable, in fact, that the customer is saying they want to manufacture outages on a certain service so that their users don't get to automatically expect 100% availability.
The main secret as far as I am concerned is the old adage 'if it ain't broke, don't fix it'. Really annoys me when IBM say we *have* to upgrade the software stack to remain in a supported state!
So in answer to the comment, don't tar all services with the same brush.
The R&D version of UNIX was 5.2.5, not 3.2.5. This equated to SVR2 with some AT&T internal developments, including demand paging, enhanced networking (STREAMS [which could have Wollongong TCP/IP modules loaded], RFS), an enhanced multiplexed filesystem (not that I remember exactly what that gave us) and many more I can't remember.