1744 posts • joined 15 Jun 2007
64 bit (in)compatibility with AIX 5.1
I was going to enter a long, essay-type reply to this when I realised that it wasn't worth it.
I was caught by the Oracle 8i to 9i problem (but on Winterhawk SP/2 nodes, not p690s), but I reckoned that this revolved around the lock kernel extension (read: device driver) that Oracle added to AIX. There was an IBM-documented incompatibility here.
Being a cynic, I always thought that this was just a ploy to make customers pay an upgrade charge to Oracle, rather than a real issue with AIX. It would not surprise me if it were perfectly possible for Oracle to have issued a patch for 8i, but this would have had little financial advantage for them.
IBM issued guidance that said that the kernel extension interface was changing, but this sounds very similar to the Adaptec driver issue mentioned. So no real difference from Solaris there.
The rest of the code may well have worked, but would have been completely unsupportable. IBM claimed in the AIX 5.1 readme that the only reason to recompile 64 bit code was to take advantage of the new features of the new release, which is not unreasonable.
Your timelines are wrong.
1. What were you doing in 1990? I doubt that it was working in the UNIX field.
2. Sun did not have a logical volume manager at that time, unless you count Veritas, which was not their product. Sun Disk Suite was a charged-for add-on that was released in 1991. Nor did HP until some time after AIX 3.1 was launched (the HP/UX logical volume manager was derived from the code IBM contributed to OSF, if I remember correctly, and was added to HP/UX 9.0 in about 1992).
3. HP/UX had SAM. Hmm, not good, and not a patch on Smit.
4. I'm sorry, Solaris definitely did not have dynamically loadable AND unloadable device drivers (I'm not talking linking here, I'm talking device drivers) at this time. I was having to sysgen systems to tell them what devices were included in the kernel. And I would ask when you think Solaris was launched, bearing in mind that Solaris 1 was a packaging option including SunOS 4, and was not the label for the entire OS until Solaris 2, which was SVR4-based, in around 1992.
5. SunOS was a BSD-derived OS until Solaris 2, so of course it implemented BSD commands. It had a veneer of SystemV on some commands until the SVR4 initiative, which was not Sun's idea (remember this). The way that you effectively chose which type of commands and environment you used was poor. You may not think this was important, but as you pointed out, this could just be an opinion.
6. I'll give you Starfire. I had forgotten about that system. As I pointed out, however, the single-processor power negated some of the SMP benefits that other vendors had. But I believe that the cost per TPC was not in Sun's favor on these systems. IBM's closest system at the time was probably the S70, but it was a bit later than Starfire. The S80 and p690 (Regatta) closed the performance gaps.
7. OK, find them. I don't remember any, because for system management, Solaris was, and IMHO still is, in the stone age. I don't believe that Sun ever had a decent hardware error handling system, and patch management appears more primitive than AIX's. NFS and automount are better, granted, but even for a hardened CLI user, smit is a great fall-back when you can't remember how to do something you touch once in a blue moon. And dynamically loaded and unloaded device drivers allow you to fix a multitude of problems without needing a reboot. And all of the hot-swap hardware makes management easy.
8. Starfire again. How much were they? All IBM Power4 systems were partitionable except for the very smallest.
9. Yes? IBM's SMT on Power4 implemented two separately scheduled hardware threads on a single CPU with multiple instruction units, so more like SMT than hyperthreading. The two threads were not completely symmetrical, which is why I said sort-of.
10. Yes. And dynamic LPARing does work. Very well, in fact, as do hot-pluggable disks and adapters. Zones is quite different, and I will admit that WPARs were a direct copy of this functionality. I'm not 100% sure how well Zones split the allocation of CPU and I/O bandwidth between the systems. LPAR overheads, mainly memory (but not CPU), are quite high, granted.
11. Starfire again. And again, how much did it cost? And what was available on the smaller systems?
12. Backward compatibility. WTF. I would bet that a 32-bit executable built on AIX 3.1 on RIOS hardware in 1990 has a greater than 70% chance of still running on Power7 under AIX 7 twenty years later. How much more backward compatibility do you want? And once you get to AIX 4.3 and AIX 5.1, it is more like 95% or better. And IBM make it quite clear which features are likely to prevent an application from working. Many software vendors still compile their code on AIX 5.1 or AIX 5.2, knowing that their software will work on all later versions of AIX (an example of this is the AIX Toolbox for Linux Compatibility from IBM, which still proclaims to be compiled on AIX 5.1).
I have VERY RARELY (in fact I can hardly remember the last time) had to recompile a program when switching AIX releases unless I wanted to take advantage of new features of a new processor or compiler.
We both obviously have our own perception of what happened and when, but I still believe that the original statement in the article was wrong.
@El. Reg. Don't agree!
"AIX was always the laggard when it came to commercial-grade Unixes"
You need to qualify this. When AIX 3.1 on the RS/6000 was first launched back in 1990, it was streets ahead of any other commercial UNIX. It had a logical volume manager, an integrated system management utility (remember, this was a time when sysadm ruled the roost for most UNIXes), dynamically loadable device drivers, and was one of the first UNIXes that did a good job of merging the SystemV and BSD flavours of commands and libraries (Sun's way of doing this was less transparent).
With the SP/2 in the mid '90s, IBM moved AIX into high-performance computing (Deep Blue et al.).
In the late 90's, they were up there with 64 bit systems, and had a nearly seamless 32/64 bit strategy that meant that the kernel you booted did not have to match the binary you were running.
For absolutely years, AIX was the leader in the Gartner manageability surveys.
Power4 systems, available in the early 2000s, implemented hardware partitioning. I'm not sure whether HP had this on the Superdome (or whatever it was called at the time), but I remember this being a real marketing differentiator at the time. Power4 also had SMT of a kind.
Power5 systems, available 2004/2005, implemented I/O virtualization, sub-CPU partitioning, and dynamic hardware allocation and de-allocation (this might have been possible on Power4; I can't remember exactly).
IBM were slow on SMP, the initial work being done by Bull with the G/J30s, but when you have systems with single CPUs running as fast as your competitors' SMP boxes, what was the hurry?
The only thing that I believe that Sun had was the containers, and to tell you the truth, I never worked at a customer where this caused a problem.
So tell me: what else were IBM lagging behind their competitors on?
I'm really not sure that you can count nuclear as a 'fossil fuel'. Uranium is dug up from the ground, yes, but it was never part of a living organism, and that is generally what a fossil is.
Remember, coal, oil and natural gas were all plants and marine creatures before they were buried in the ground.
But nuclear cannot, by its nature, be regarded as a renewable fuel. The amount is finite in/on the Earth, and I believe that this is the point you were trying to make. And I agree about nuclear being about the only low-carbon energy source, even if you include cost to build the stations.
Back in the days of Virgin.net, and with a 24x7 flat rate dial up service via Modem, I noticed something worrying. My Smoothwall firewall was reporting a huge number (100's per minute - remember it was dial up) of intrusion attempts on port 135.
I sent Virgin Support a mail pointing out that many of the addresses probing me were from within their own network, and I got a reply saying that it was a problem affecting all ISPs (it was MSBlast in the wild at the time), and that they were taking the issue very seriously, suggesting that I install a software firewall (ignoring my statement that I was using a well-regarded dedicated firewall).
And that was it. Nothing else happened at all. Eventually, the frequency of the attack dropped to a more manageable level, but not due to any obvious action on their part.
So I actually welcome people being warned that their systems may be compromised, although I do agree that in this day and age, a paper letter is probably too little, too late.
Not the afternoon play, but...
The c*nt word was used on air by a female artist (can't remember the name, can't be bothered to look it up) on Front Row, which airs on Radio 4 between 19:15 and 19:45 (well before the watershed), I guess about 5-6 years ago. It was used in relation to an art installation of a particularly sexual nature, IIRC, involving models sitting in provocative postures with no underwear, a la Basic Instinct.
Mark Lawson (I think it was) made a hurried apology, together with a request not to use such language on air to the interviewee. Again, IIRC, the terms vagina and vulva were also used several times, but they were not censured. I think it was more shocking to Mark because it was a woman who said it.
Laugh, I could have crashed the car!
Being pedantic, are we talking mean, mode or median? These are all 'averages', but they have significant differences. In particular, if you can find a job at 12K and one at 62K, then the midpoint of the range would be 37K, regardless of the actual distribution. Consider the following set of numbers:
10 numbers, totalling 30.
The mode is 2, the mean is 3, and the median is 5.5. I would point out that there is one hell of a difference between the mode and the median.
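The spread is easy to demonstrate. As a sketch, here is an illustrative (entirely made-up) set of ten salaries in £K, run through Python's statistics module:

```python
import statistics

# An entirely made-up set of ten salaries (in £K), skewed towards the
# low end the way real salary data tends to be.
salaries = [15, 15, 15, 18, 20, 22, 25, 30, 45, 62]

mode_val = statistics.mode(salaries)      # most common value
median_val = statistics.median(salaries)  # middle of the sorted list
mean_val = statistics.mean(salaries)      # arithmetic average

print(mode_val, median_val, mean_val)     # 15 21.0 26.7
```

With a distribution skewed by a couple of high earners, the mode, median and mean land in quite different places, which is exactly why an unqualified 'average salary' figure tells you very little.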
I do not have the stats, but bearing in mind how many PC first-line support jobs appear to be at the 15-20K level, and how prevalent Windows systems are, I've often wondered about the source of these £37K average figures.
There is one classic '50s or '60s news clip of a union representative who says into camera something like "We will not stop this action until all of our members are being paid at least the average wage", which of course, if applied to all workers (not just his union members, admittedly), would mean that everyone would be paid the same. Still, maths education must be better than that nowadays, mustn't it?
It's a Maths teacher, obviously.
I would have thought that this would have been caught by the Moderatorix.
I'm going to use this as another call to get a reason for rejection added if a comment is rejected (dig, dig.)
Don't get me started.
When trying to fix a problem with a particularly badly sh*gged filesystem late at night, I had the corporate obscenity filters block my mails to and from a vendor support centre because I included a phrase like "I have fsck'd the filesystems, and the problem persists" (BTW, I was in phone contact with them as well, but it's difficult to dictate several K of diagnostic data over the phone!)
I had to wait until the following day for the mails to be released when a real person could check the content. Good thing we worked out why the mails were not getting through.
I had a moan at the people running the mail filter, who said that because it was a commonly used euphemism, especially in spam emails, they had added it to their blocklist. I then checked over a gig of archived spam from my mailbox gathered over several years (don't ask me why I had kept it, I don't know, but I hadn't run out of disk space at that time) and found precisely 2 uses in many thousands of emails. Not such common use, then.
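For illustration, the kind of naive substring blocklist that produces this sort of false positive can be sketched in a few lines. The blocklist entry and matching rule here are my own assumptions, not the vendor's actual filter:

```python
# Assumed blocklist entry; real corporate filters use much larger lists.
BLOCKLIST = {"fsck"}

def is_blocked(message: str) -> bool:
    """Naive substring match: no word boundaries, no context."""
    text = message.lower()
    return any(term in text for term in BLOCKLIST)

print(is_blocked("I have fsck'd the filesystems, and the problem persists"))  # True
print(is_blocked("running a routine disk check"))  # False
```

Because the match is a bare substring test, a legitimate reference to the fsck command trips the filter just as surely as the euphemism would.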
..to providing my bank details on my tax return, not even knowing whether there is a rebate or not. I know it's a trivial issue, because I'm sure that the banks would roll over and provide bank account details to a suitable request from HM Revenue and Customs, but it just feels wrong.
Let them send me a cheque if I am owed money. Don't know what will happen when cheques are withdrawn, but I'll face that one when I come across it.
You're talking chain or band line printers here. I very much doubt that a dot-matrix printer, even a heavy-duty one like a Printronix, would be able to do more than three-part chemical transfer paper.
When I was working with mainframe band printers, we were using multi-part fanfold stationery with interleaved carbon paper (not chemical transfer paper). There was a machine called a splitter, which would split the copies out and wind the carbon paper up for disposal, while leaving the two split copies neatly folded (at least, if the operator threaded it correctly). For three and more part stationery, it had to be put through further times to split each copy off. Interestingly enough, each carbon sheet had a completely legible copy of what was on the page. We also had authorized cheques with a second carbon copy, but this was for audit purposes.
I was once told that the hood on these fast printers was more than just acoustic protection, because if the band or chain broke, it was moving so fast that it would damage the hood as it flew off. Not something I would like to hit me.
Where's the old fart icon.
Two types of ribbon setting for Quietwriter
There was the full-quality setting, which advanced a complete letter at a time, and the draft-quality setting, which actually only moved a fraction of a character position. This meant that you might get gaps in later letters where there was an overlap, but your ribbons lasted many times longer. This would make it much more difficult (though not impossible) to read from the ribbon.
I think that they were different ribbons, but it may have been a lever setting in the printer. I don't think it was a software setting.
These were actually thermal transfer printers rather than impact printers. This is how they managed to be so quiet. Normal whirring from moving the print head and paper, but printing was silent.
Mine only advanced the ribbon for each letter printed (the ribbon was mounted on the print head), not on a per-line basis, although mine was a Quietwriter III or IV and could have been different from Nuke's, so it was not quite as wasteful as he suggested.
Pedantic, I know
but legally, there is a difference between two copies printed at the same time using multi-part stationery, and two copies printed one-after-another. There is no guarantee that the two serially printed sheets are identical, because they could just be one print after another, with the second one slightly different. How would you know unless you minutely compared them?
And yes, I know that the lower copies in a multi-part *could* have been pre-printed, but that is why they come bound together with tear-off sprockets, so that you can tell whether the lower copy has been tampered with.
Plasmas will become increasingly irrelevant as LCD and OLED technologies mature, especially with the LED backlights that are becoming available.
I predict that plasma TVs will be banned for being too energy-inefficient once all filament lightbulbs have been eliminated. Especially if the manufacturers can lobby governments. Soon we will have TVs being replaced every year to meet government carbon emission targets.
Joke. (I hope)
You're not using ham or SW radio, and I guess that you don't listen to the air control bands either. Have you checked MW or LW radio reception, which I always found prone to interference? (You may still find these on steam-powered radios, but a lot of radios don't even receive them nowadays.)
There's LOTS of the EM spectrum in the radio bands, and radio, TV, Bluetooth and WiFi only use a very small fraction. Google (sorry, the URL is too long) for "uk_frequency_allocations_chart.pdf" (warning: PDF), and you will find a very interesting wall-chart of the spectrum use in the UK. Try to find the bands you use, and compare them to the whole.
I must admit that most of this thread has followed the normal Windows/UNIX path, leading to name calling.
It is interesting to be reminded of another OS that in its own way has shaped what we have today.
VAX/VMS is an interesting OS, and in many ways my second favorite OS after UNIX. What you have said of VMS is quite true, but some of the assertions you have made about UNIX are wrong.
As someone who learned UNIX back in the late '70s and then took a spell sysadmin'ing RSX/11M and eventually some VAX/VMS, I agree that the batch and spooling systems on VMS were much better, because DEC had Tops 10 and Tops 20 as a good model to work from. But RSX/11M's batch system was not as good as the UNIX at/batch commands, though that is because RSX/11 was not really a general-purpose OS. In the UK, RSTS/E was the main commercial PDP/11 OS, and very little of that made it into VMS. If you remember that far back, you will find that VMS version 1 was really just a 32-bit port of RSX/11M, complete with non-hierarchical file system and limited Files/11 support.
As for backup and restore, I'm not so sure that BRU and Backup/Restore were hugely better than fbackup, frestore, finc and frec on generic AT&T UNIX systems, but those fell by the wayside.
It is quite clear that Files/11 (which was a layered product on RSX/11) and the VAX filesystem (I know it had a name, but I can't remember it at the moment) suited commercial use for the VAX, including file and record locking, but that does not mean that there was nothing similar in UNIX. UNIX version 7 included a thing called the "multiplexed file system", which allowed you to add all sorts of functionality to the standard file system. But the standard byte-addressed file interface allowed you to implement pretty much any functionality anyway, including arbitrary-sized record structures, and there were add-ons like C-ISAM, available as a library on most UNIX variants (OK, 3rd-party software), which was for a time a near industry standard for UNIX.
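As a sketch of that point: fixed-size records can be layered on a flat, byte-addressed file with nothing more than seek arithmetic. The record length and data below are illustrative, not taken from any particular package:

```python
import io

RECLEN = 64  # assumed fixed record length in bytes

def write_record(f, n, data: bytes):
    """Seek to record n's byte offset and write a null-padded record."""
    f.seek(n * RECLEN)
    f.write(data.ljust(RECLEN, b"\x00"))

def read_record(f, n) -> bytes:
    """Read record n and strip the padding."""
    f.seek(n * RECLEN)
    return f.read(RECLEN).rstrip(b"\x00")

f = io.BytesIO()  # stands in for an ordinary file opened 'r+b'
write_record(f, 5, b"record number five")  # the gap before it is zero-filled
print(read_record(f, 5))  # b'record number five'
```

Libraries like C-ISAM did far more than this (indexing, locking), but the byte-stream model is what made them possible as ordinary user-level code.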
AT&T's UNIX from System 3 onwards also had mandatory file and region locking for files in a filesystem. This was not carried into the BSD variants, as far as I remember, until the SVR4 merged system provided cross-fertilization between the major UNIX variants.
It is interesting that people also overlook the RFS filesystem that came as part of SVR3.2 and later. This was a highly stateful distributed filesystem that implemented 100% of UNIX filesystem semantics, including the mandatory locking protocols. I'm fairly certain that if you came across UNIX from a BSD/SUN route, that you almost certainly never came across this advanced filesystem which again, fell by the wayside.
It is not directly comparable with VAXcluster, which was a groundbreaking way of making your environment more than the sum of the machines in it, but this was an add-on to VMS and, if I remember correctly, quite expensive for commercial use.
VMS was good. Its DCL CLI was very helpful to novice users, utilities like EVE and TPU were very good for university work, and there was a wide variety of vendor-provided applications. It had the demand paging system that other vendors aspired to. But DCL had its own limits. If I remember correctly, in order to get argument processing working for your application, you had to produce a prototype file so DCL could parse the arguments for you, whereas on UNIX the application manages its own arguments.
But I would contend that although it was very suitable for many types of work, ultimately it was not as flexible or as widely deployed as UNIX. Although you could say that the WorldBox MicroVAX II was a microprocessor-based system (I was at the UK site of the world launch event in Harrogate), that there were personal VAXstations, and that there were some very large VAX systems, UNIX appeared on everything from desktop PCs (like the AT&T 3B1, and even PC/ATs if you count Xenix/286 as a UNIX) through to the largest mainframes of the time from the likes of IBM and Amdahl. And I haven't seen an HPC cluster running VMS, as I have with UNICOS (Cray's UNIX) and AIX (IBM's UNIX variant).
Don't get me wrong. I'm not criticizing VMS. As I said, it is one of my favorite OS's. But it is like comparing apples with pears: they are similar, but there are significant differences, which means some people like apples and some pears. I would say that there is probably still a place for VMS, but it has become niche, in the same way that genetic UNIX is going. But UNIX has a direct successor in Linux that will keep the line going.
Maybe you ought to be pressing HP to license VMS with an open license. I think that that is the only way to stop it dying a slow and lingering death.
If you're talking about 'O' levels, then you are talking 20+ years ago, as schools have been teaching GCSEs since then.
One has to ask whether your son actually knows how to write grammatically correct and intelligible English, because unless he knows this, trying to teach him to use Word is pointless.
This would be the domain of the English classes, not ICT.
All those many years ago, I remember in English having to read, comprehend, and write relevant comments on a series of articles, which taught me how to use the language. Even though I was not very good at it, it laid the foundation for all of the wordy subjects (History, Geography) as well as a basis for reports on science experiments.
I think that teaching basic computer use to everybody is a good thing, but there should be a differentiation between this and teaching Computing as an engineering or technology discipline. This way you would be able to separate the mundane 'using a word processor, web browser and multimedia apps' from the interesting 'what is a CPU, how do programs run, how do you write them, and what is involved in networking'. If you did this, then I believe that the kids with a genuine interest could separate the boring from the interesting stuff, hopefully keeping them engaged.
My youngest kids hate(d) the way that ICT is/was taught, but they do actually have a genuine interest in how their computers connect together, and what the basic components are. Indeed, when I built a system from scratch last Christmas, I had a willing audience for almost all of the work I did. Virtually nothing involved in putting the system together and installing the OS was familiar to them, even though they both have studied or are studying ICT at GCSE level. And they are fascinated when I can write a quick program to do something specific, when they cannot see how a spreadsheet, almost the only data tool taught to them, could be applied to the problem.
I must agree that you should have properly trained teachers, at least for GCSE level and up, because having ICT as a second subject will never give the teacher enough background to do more than follow the pre-prepared courses from the syllabus.
I admit to being a little partisan about this, because 25 years ago I taught up to degree level at a UK Polytechnic for a while, and I could see the way that business computing was going at the time.
could well be better. According to another story, a solar plasma aurora storm is due to hit Earth, starting "early in the day on August 4th". I know this is Wednesday, but I believe that the storm will last a while.
If I were in the ISS, and not protected by the bulk of the Earth's magnetic field, I would want to find the most shielded part of the station and chill out for a few days, rather than going on a space walk. I guess that the boffins will have taken this into account.
Anybody got any idea whether the ISS is in a low enough orbit to be mostly protected, or is the space walk scheduled for when it is in the Earth's shadow? (I know some of the remains of the plasma will reach the far side, but it will be less.)
Size of market?
People may say that this perception is because Apple have sold so many iPads, but I wonder exactly how many ThinkPads have been made over the years? I would guess that it also runs into millions.
Data devices into the US
You can just see it, can't you?
Paul Kane rolls up to US Border Control in a hurry to take the key to the "Secure IT Data Centre" in the US. USBC take one look at the smart card and conclude that it might contain terrorist data or pornography.
USBC: Excuse me, Mr Kane, could you give me access to the information on this memory card?
PK: I'm sorry, the contents are encrypted, and are actually a security key for DNS on the Internet.
USBC: A key for the Internet? You're kidding me. Show it.
PK: I'm sorry again, but I cannot do that, because if I release it to you, it may compromise the security of DNSSEC.
USBC: Are you refusing to co-operate, and hand over the keys to unlock the data? I'm afraid we're going to have to take it and give it to our experts in the FBI to confirm there is nothing illicit on this card. We'll get it back to you when we are finished. Oh, by the way, we might damage the data while we are doing it.
A good job the Internet will continue without them!
so long as you have a license for your home. Daft, isn't it? If you don't have a license, then you're committing an offense. Ditto the black and white license for a colour television. And it's a criminal offense, not a civil one, so you get a criminal record if caught.
Apparently, the BBC's figures indicate that several hundred thousand people were found to have no or an incorrect license in 2005-2006, and if you have a black and white license, you can still expect a visit from an enforcement officer, not that they can do much.
My thoughts exactly.
BBC World Service
is funded by the foreign office, not the license fee.
Depends on the water
If you buy Evian or one of the overpriced so-called mineral waters, you may be right about the price, but might I suggest that you look at the Tesco bottled water at about 15p a litre, or tap water which costs a tenth of a penny a litre in the UK (http://www.water.org.uk/home/water-for-health/healthcare-toolkit/did-you-know).
Bottled water will be filtered, sterilized, bottled and transported, so I don't think that there should be that much surprise in the difference in price, although the cost of the water transport system in the UK is non-trivial.
Remember that besides petrol, other products come out of the refining process, all of which have some value to the oil companies, reducing the cost of petrol at the pump. But the cheapest petrol is still about 1000 times the price of tap water if you include the duty, and nearly 500 times the price if you exclude the duty and VAT. Not so cheap, actually.
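As a back-of-envelope check on those ratios, using assumed figures (the petrol price, duty and VAT rate below are illustrative guesses at the era's UK values, not exact numbers):

```python
tap_water = 0.001   # £/litre: about a tenth of a penny
petrol_pump = 1.20  # £/litre at the pump (assumed)
fuel_duty = 0.58    # £/litre fuel duty (assumed)
vat_rate = 0.175    # VAT rate of the era (assumed)

# Strip VAT first (it is charged on base price plus duty), then the duty.
pre_tax = petrol_pump / (1 + vat_rate) - fuel_duty

print(round(petrol_pump / tap_water))  # ~1200: roughly 1000x tap water
print(round(pre_tax / tap_water))      # ~441: roughly 500x tap water
```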
Black and White
So Stuart Leslie Goddard can only see greyscale? (Can't see any evidence of this on the net.) Does this mean that someone is going to continue making black and white televisions just for him? Or does it mean that there is some medical dispensation that allows him to buy colour televisions and only pay for a black and white license?
I could envisage a situation where there was a dispensation, like for the blind, who get a half-price license. It would be better to have a discount on medical grounds than a license for a type of television that will not exist in a few years' time.
But I have a question. Do you only have black and white televisions, or do you have a colour set for which you benefit from a reduced-cost license? If it is the former, then I am interested in where you are going to get your next set from. If it is the latter, it sounds like you personally are benefiting at the license payer's expense (WHY SHOULD *YOU* benefit from a discounted license just because of the unfortunate affliction your wife has?)
So I think my point is still valid, and your wife's situation is the exception.
Has anybody seen a B&W TV with a SCART connector or a built in freeview adapter?
Almost all external Freeview boxes will only work over SCART; only one or two out of all of the ones I have seen actually encode the Freeview signal back over the aerial socket.
And if you watch Sky (whose boxes do encode the picture over the aerial) using only a black and white television, you need your head examining (Sky Freesat users possibly exempted).
As a result, will the black and white television license become an anachronism when the digital switch-over is complete? I can see no real need for it any more.
I was paraphrasing the license
I know that you can justify not having a license, and I think I said that. I am just interested in finding out whether analogue only televisions will still be counted as television receiving equipment.
Web site login
To everyone who has suggested logging in to iPlayer to prove that a license has been paid, how do you prevent "account sharing", whereby someone who pays their license fee gives their number, postcode, password or whatever to the rest of their friends and family.
It doesn't even work if you restrict the number of logins to the site, because I have 5 family members in my household who are all entitled to use iPlayer by the license that I pay.
Sky restrict the number of PCs and Xboxes that can be registered against my SkyPlayer account to just 3, and I find this restrictive (plus, it doesn't work for Linux systems anyway!)
I had just such a conversation with the TV licensing people a few years back. In theory you can charge it from the mains, but if you watch TV with it while it is plugged in, it needs to be covered by a separate license. On battery is fine anywhere as long as you have a license for your home.
I was having the conversation about USB DVB tuners for laptops.
The only get-out is if your workplace has paid for its own TV license.
Strangely enough, the person at the other end of the email conversation was quite belligerent about wanting to know my address or the license number.
It would be interesting....
.... to see how many of the "What's on TV is crap" lobby in the UK actually take a stand, and throw out all their set-top, Sky and Cable boxes, and actually go broadcast free.
Unless they do this, then all of their protestations about the license fee being unjustified are just hypocrisy.
I do know two people who have done this out of principle, so it can be done.
TV Licensing and enforcement
The criteria for enforcing license collection must be changing with the digital switch over.
Up until now, it used to be that if you had equipment that was capable of receiving broadcast television, the TV license enforcement people assumed that you needed a license, and you had to demonstrate to them (on a regular basis) that you didn't use it for broadcast TV in order to escape from their harassment.
The license used to be required for "owning equipment capable of receiving a TV broadcast". After switchover, TVs with only analogue tuners are no longer capable of doing this, so they should be exempt and classed as monitors.
Will TVs that do not have digital tuners (and where no other digital tuner is in use) actually be exempt from requiring a license? I know that it is almost impossible to buy a TV without a digital tuner now, but I would guess that if you just use such a device for DVDs and videos, TV Licensing should stop bothering households that have not purchased digital receiving equipment. (Ever wondered why you have to prove who you are when buying a TV? It's because that information is fed to TV Licensing by the shop! I even had to do this when buying a DVD player not so long ago, even though it was not able to receive TV, and also for a TV signal amplifier, even though that is technically not television receiving equipment.)
Of course, the intent of the TV license now is that it should be required for receiving broadcast video in real time from any source (as indicated in the article), so the wording of the license must have changed, although I have not read it.
We'll probably get it just in time to offset the rise in VAT!
is not fit for any purpose other than locking internet users in to Microsoft platforms.
And before you quote Moonlight back at me, may I suggest that you see how far this lags behind Silverlight, and how many SL sites actually work with the current version of Moonlight.
I admit that on Windows, SL works well, but that is of little interest to me.
wrote the VP8 video codec. When Google bought On2, they combined it with the Matroska container format and the Vorbis audio codec, called the result WebM, and then licensed the whole under several licenses including the BSD license (although Matroska and Vorbis were already published under open licenses).
So it is not true that On2 invented WebM, although it is true that they wrote the video codec in WebM.
It's not necessary to post a reason in the forum, or to send an email. If you have logged in, and are looking at a comments page, then there will be a "My Posts" link on the right-hand side of the top of the page. You can see there whether your posts have been accepted or rejected. For rejected posts, it would be possible, time permitting, to have the post tagged with a reason for rejection.
It also allows you to review all of your previous posts, warts, typos and all! Wish I could edit out some of the howlers I have made.
Even bigger sigh, but...
I was OK with this answer right up to "be big about it, accept it and move on", which annoyed me because I don't want to spend time writing comments that are rejected if I can avoid it.
I was not asking for a reasoned argument, merely something more like "Too long" or "Libellous", or even "I just don't like your tone". I'm sure that you could come up with a list of about a dozen, and then select using icons or changing the "reject" button to a drop-down selection box. Two clicks rather than one, and you must have thought about a comment in order to decide to reject it.
I fully expect to see this in the reject pile in a few minutes.
I understand the reason why comments are moderated. I understand that the moderators are human, and I also understand that they do not share my mindset, outlook, and sometimes just have shitty days, as we all do.
But sometimes, even on re-reading a rejected comment, it is not clear why it has been rejected.
If there were a one word or short phrase that could be added during the rejection, possibly selected from a pre-defined list, it would make it clearer exactly what in the post has irked the moderator.
Acoustic power transmission...
...in submarines that are supposed to be quiet to prevent detection! Seems unlikely.
EeePC and Linux
The problem was the distribution used (Xandros - hardly well known), and the fact that they managed to screw up the UnionFS implementation somehow. Every time you wrote something to your home directory, it somehow managed to use space in the read-only UnionFS base image. When you deleted the file, the space was not reclaimed. The result was that you ran out of space, which you could not fix without re-installing.
I used Ubuntu 8.04 on my 701 for ages, and with a little tweaking for the slightly odd Atheros wireless implementation, it worked well.
I've got 9.04 currently, and am using it to type this without problems.
Of course, the 4GB of internal flash is a squeeze for a full distro: you have to be careful to clean up past kernels and the apt cache, and the processor is a bit slow for Flash video from some sites, but I can work around all of these issues. It's still a useful addition to my available systems.
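For what it's worth, the cleanup I mean is roughly the following. This is a sketch, not anything Eee-specific: it only *lists* kernel packages that could be purged, and leaves the actual (root-requiring) commands as comments, so nothing is removed by accident.

```shell
#!/bin/sh
# Dry run: list installed kernel packages other than the one currently running.
# (The "|| true" keeps this harmless on systems without dpkg or old kernels.)
old_kernels=$(dpkg -l 'linux-image-*' 2>/dev/null | awk '/^ii/ {print $2}' \
              | grep -v "$(uname -r)" || true)
echo "Old kernels that could be purged: ${old_kernels:-none}"

# To actually reclaim the space (needs root), the usual apt commands are:
#   sudo apt-get autoremove --purge   # remove orphaned packages, old kernels
#   sudo apt-get clean                # empty /var/cache/apt/archives
```

On a 4GB drive, emptying the apt archive cache alone can free a few hundred megabytes after a big upgrade.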
The whole stack is actually named...
This indicates the GNU basic tool set, on top of the Linux kernel. This makes it a complete OS, not just a Kernel. Read what RMS (oh, sorry, Richard Stallman) says on the subject at http://www.gnu.org/gnu/linux-and-gnu.html.
Education, education, education.
If you think that the 90's and 00's generations are any better educated, think again.
The rot set in during the mid 70's, when education had to become non-competitive and we started having a one-size-fits-all, so-called comprehensive system that treated everybody as if they were slightly below average.
Removing competition from the brightest led to bored kids, and teaching over the head of the slowest led to disruptive kids.
It's funny that in today's comprehensive system, they have re-introduced streaming.
I was not suggesting 3G for the smart dust, just the listening point that would receive whatever frequency the dust was using, and re-transmit using 3G. The listening point would not have to be dust-sized, just hidden, and possibly outside of the building.
I admit small antenna==high frequencies, and high frequencies generally mean high power, but I'm not talking about driving a signal tens of metres, merely to the next smart dust speck, and then to the next (get it, collaborative networking).
Please actually read what I wrote, because although I am not a chip designer, or a transmission specialist, I believe that I have a reasonable grounding in electronics (it being part of my degree) and computing (ditto), not that you would know that.
Why so futuristic?
OK, design a very small lump of semiconductor that includes:
1. Power gathering components, leeching from leaked magnetic fields around power cables
2. Data transmission components of low power, both consumption and transmission
3. Some form of information gathering components
4. A bit of processing power and a small amount of memory
Arrange these so that they can form a collaborative self healing network that will pass information from one to another.
Scatter them in a building.
Arrange to have a listening point outside (or even inside) the building using a WAN network technology such as 3G to pass the information on.
Three of the four requirements are already met by RFID tags. The only one missing is the data gathering. The collaborative, self-healing network technology is already around, at least in principle (think OLPC, and I'm sure there was a kids' device that used to pass messages in an informal ad-hoc network to other devices in range).
The thing that needs to be improved is the scale. RFID tags are too big currently, and power consumption is still too high, and gathering it still needs too large an antenna to pick up the power.
But these are exactly the problems that more sophisticated technology can fix! If, as predicted, there could be high-efficiency wireless LED lighting using transmitted power, a technology being worked on at the moment (first link found from Google, there are others: http://blogs.pcworld.com/staffblog/archives/004605.html), there could be ample power around for the smart dust. If the dust particles are mere centimetres apart, the power needed to transmit and receive information would be minimal. And applying the level of mask shrinkage from the current memory and processor programs would make the devices smaller and lower power still.
Disguise the devices variously as skin flakes, human hairs (think of the antenna), bugs, biscuit crumbs, scraps of paper, sandwich wrappers or vending machine cups (Hmmm, thermal power!), all of which are larger than dust and mostly unnoticeable (turn your keyboard over and shake it to see what is there), and Bob's your Uncle (or in my case, Bill's my Uncle).
The only problem then remaining would be the data gathering. I'll leave that as a problem for others.
If I am the first to combine these, and it is enough for a patent application, I claim this as Prior Art!
More specific please
"It's easy to set up now, but WELL flakey in the field, sadly, both Ubuntu up to 9.0 and OpenSuse 11 suffer."
You don't say what it suffers from. And in the next sentence you ask if there is a Linux distro that doesn't.
If by flakey (sic) you mean unreliable or prone to crashes, then I would look at your hardware, because I have used Ubuntu LTS releases since 6.06 and have responsibility for many SuSE systems, and find them all incredibly stable (longest to hand, SLES 10.2 - OK, not OpenSuse, at 344 days uptime).
I'm sure we could answer if we knew what your detailed concerns were.
@Beaker's Love Child
Sounds good in principle, but if you are stuck in town, unable to buy a drink or get a taxi because the ATM and card payment network is down and not going to be fixed until Monday, I think you may think differently.
What you are actually wanting is to pass the baton down to the next set of group B workers, and drop out.
Sounds like they've just lacquered the antenna so you don't touch the metal. If this is the case, I wonder what they will do once it starts wearing off?
Oh, sorry, it's not a problem as most users will have upgraded to the iPhone 4GS or iPhone 5 before this happens!
Different point of view
It's not about the web, Facebook et al., it's about a common API to write code to, and to deliver applications.
Providing a framework that is common across all platforms, regardless of the OS, browser, machine type is the Holy Grail for application developers. It costs a lot of money to develop an app to run on multiple architectures.
Unfortunately, it would have been better if this had been integrated into the windowing environment, rather than as a layer that sits on top like the current browser based delivery mechanism. What has happened is a last desperate measure to try to wrest control of the user experience away from Microsoft, Apple, KDE and Gnome developers, by adding an abstraction layer above the windowing environment.
Google realize this, which is why ChromeOS effectively eliminates the windowing environment, and is moving this abstraction layer a couple of rungs down the software ladder.
But, unfortunately again, there is still no consensus. We have different players (Microsoft with .NET, Google with everything they are doing, Facebook etc.) all running in different directions and not talking to each other. The result is that we will be left with the browser, with technologies like Java, Flash and Silverlight as the lowest common denominators (ah - scratch Silverlight, as Microsoft are making it difficult for the Moonlight developers to keep up!). And this will lead us to the same point, just with more layers in the software stack, still with incompatibilities between major offerings.
Of course, the hardware manufacturers and network providers are laughing. As all these extra layers soak up extra CPU cycles, memory and network bandwidth, they see repeat markets for new devices to do effectively the same-old-things. Kerrr-ching!
The web *could* be a valid application deployment method. Google's Application Engine, which allows local caching of applications for when you are off-line, is a very usable technology that reduces the need to be connected. Similar things could be achieved using Lotus Domino (waaaay before Google even existed) and even AFS or DCE/DFS, so application and data caching is not really new ground.
But using a web deployment method does not dictate having an always-on internet feed. It would be perfectly possible for businesses, and even home devices (think NAS or network media devices) serving applications within a closed network, without going out to the Internet. In theory, you could also have an app server installed on the same system where the application will run, using a loopback or similar internalized network. The advantage would be a common deployment method, the disadvantage will be increased inefficiency.
I admit that this fills me with dread. I want to get off the continual grind of new-is-better, with its back-door to my wallet, and I really don't want to get to the point where the ISPs can hold my data and computer usage hostage to whatever they want to charge me. I like the idea of a standalone PC with network usage features where I control the access, the available resources, and how the data is used. The current model of a windowing framework that allows native applications suits me just fine, but I'm no longer a typical computer user, if I ever was.
This whole thing is reaching a "Stop the world, I want to get off!" threshold. I'll go and get my coat ready.
Surely it would be cheaper
I know SCO will die eventually, but I would have thought that by now, SCO would be so worthless (SCOXQ capitalization of 1.08M USD at 5C a share) that Novell, or even IBM could buy the remaining stock for less than their legal costs in a retrial/trial (depending on which lawsuit you are looking at).
Of course, this would depend on the expected outcome of any trial by the appointed Chapter 11 administrator, but I would have thought that 50C or even less in the dollar would be attractive to them. Buying a 51% stake at 50 cents in the dollar would cost about 275K USD, which must be lower than the expected legal costs. Would have to make sure that any debts are ditched, though.
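Taking the post's own figures at face value (the quoted 1.08M USD capitalisation, not fresh market data), the back-of-envelope sum works out as:

```python
# Rough check of the buyout arithmetic, using the figures quoted above.
market_cap = 1.08e6         # SCOXQ capitalisation in USD (at 5 cents a share)
controlling_stake = 0.51    # fraction of the stock needed for control
cents_in_the_dollar = 0.50  # paying half the quoted price

cost = market_cap * controlling_stake * cents_in_the_dollar
print(f"Cost of control: about {cost:,.0f} USD")  # about 275,400 USD
```

Pocket change against the sort of legal bills these cases have already run up.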
Once they have done that, it's a simple matter of closing them for good.
Or is there something in Chapter 11 bankruptcy protection that I don't understand?
Alternatively, we could do it! I've a tenner, anybody else interested?