1701 posts • joined 15 Jun 2007
Interesting slant. By thinking of it like this, you can consider emoticon replacement as an assembler or compiler.
I'm not sure whether a basic assembler pre-dates mark-up systems for printing, but I think that it might, although....
IIRC, special card images used to exist for carriage control in the days of Hollerith cards, pre-dating electronic computers, and dating back to mechanical card sorting machines. Would this count? Or possibly particular sequences of cards in Jacquard looms to weave patterns in cloth. That may be a bit of a stretch, but it is still taking one sequence of symbols and producing a related but different graphic.
OK, my car actually has a basic rotor as well
but it's so old that nobody in their right mind would think to steal it!
Surely you mean 'used to be'
Modern cars have fully electronic ignition systems, so don't have any form of rotor arm at all!
Spark is controlled by thyristors (or are they old hat as well?), and directed to the correct cylinder without mechanical intervention. Timing is taken from a non-moving rotational sensor looking at either one of the camshafts or the crankshaft.
Yeah, what I wrote could be read as me repeating what was in the article, but the aside was the Nurse Chapel bit, which all Trekkers should already know, but many others won't. It's amazing how the lag between posting and the post appearing can make it look like you've not read all of the comments.
The thing is that ST:TOS was made about 45 years ago (it took a few years to reach the UK), which is before many of the commenters here were born. Next Generation, Deep Space 9 and Voyager are much later series with more episodes, have been syndicated far more widely than the original series, and featured the voice of the computer in almost every episode, so I would say that she was much better known as the voice of the computer than as Nurse Chapel.
I understand your sentiment, though.
Has anybody spotted whether Apple have filed any patents with regard to controlling phone functions using voice commands?
I'm certain that if they have, they should be invalid, but we all know the US patent system.
I think that 'Majel' is great as a name for this type of application, especially if they can get a sound-alike to do the voice. Anybody produced an LCARS theme for Android yet?
Interesting aside: Majel Barrett (as she was then) had a recurring role in ST:TOS as Nurse Chapel, as well as being the voice of the computer on board Federation ships circa NCC-1701-D.
See my previous correction about my lax language, and I think that if you had actually read the whole of the post, you would have seen that I do/did know that it is operating receiving equipment to receive broadcast television that is covered by the law.
But a computer with a link to the Internet is also classed as receiving equipment.....
I sympathise over your treatment; it does sound a bit harsh. As I have suggested before, you can't catch the real cheats without also looking at the people who really don't watch TV. I appreciate that they could just take your word, but if that were all it took to avoid investigation (and investigation really just means that someone is evaluating whether you should be purchasing a licence), then the people who decide they aren't going to buy a licence would have no difficulty lying.
No, not everybody, but a significant number.
I know that this is a question that cannot easily be answered, but I wonder how many addresses on the post-code database actually do not watch broadcast television?
Certainly those that are empty, but I would guess (and this is a finger in the air guess with no research) that television watching probably has about a 98% penetration of all addresses (this would allow for ~600,000 residential addresses to not watch TV, which is, I think, probably an over-estimate)
If you apply these figures, then the odds are that a good number of the people who do not buy a licence should. Wikipedia states that 27% of all visits in 2007-8 found that a licence should have been purchased. It's difficult to quantify how accurately the visits are targeted, but that is a significant amount of lost revenue, and a large number of people (possibly up to a million if there were 3.5 million visits, although this number probably includes multiple visits to the same address) who ARE prepared to LIE and break the law.
So the numbers probably justify some of these accusations, even if it is unpleasant. If you are one of the real people who do not watch broadcast TV, put at least some of the blame on those people who do cheat.
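A back-of-the-envelope check on those numbers (the 3.5 million visits figure is the hypothetical figure from my comment above, not an official statistic):

```shell
# Rough check: 27% of a hypothetical 3.5 million enforcement visits
# finding that a licence should have been bought.
visits=3500000
hit_rate_percent=27
evaders=$(( visits * hit_rate_percent / 100 ))
echo "Roughly ${evaders} visits found a licence should have been bought"
# → Roughly 945000 visits found a licence should have been bought
```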
I was talking about
operating the TV as a receiver. I admit I was a bit lax in my language, but if you read my post, I do talk about receiving broadcast TV (which includes simulcasts over the Internet). I was assuming that people would read the whole comment.
The whole question of what can receive TV is a moot one. Some time back, I bought a TV signal amplifier from Tesco for cash. I used the household Tesco Clubcard, which silently provided the address information required and so prevented the checkout assistant from asking me to provide identity information (normally, they take it from the payment card).
Unfortunately, the card is in my wife's name, not mine, but the TV licence is in my name. We do not have anything stupid like different surnames, so the surnames match.
A few months later, my wife got a letter from the TV licensing authority, accusing her of not having a TV licence. I immediately checked, and found that the licence was still valid; I checked with her, and yes, we were still married, and we were both still living at the same address for which the licence was issued.
This raises two questions. Firstly, do they not accept surname and address as proof of living in the same household? And secondly, a TV amplifier is not, strictly speaking, receiving equipment for broadcast TV. I know that it will be used with a TV most of the time, but it is not proof that a TV is being operated. Being a bit of an electronics fix-all, I may have just wanted it for spare parts, or I could have been using it to boost and distribute a signal from a DVD or video player around the house.
I actually bought it for my parents house, because their booster had just failed.
I sent the licensing authority an email, quoting the TV licence number, and never heard anything back again, not even an apology.
On another occasion, Tesco actually would not allow me to buy a £220 TV for cash unless I provided some ID. I'm not sure whether that is legal, although I understand that a shop has the right to not serve anybody if they so wish.
As another slightly humorous incident, a shop actually asked me to fill in an identity form when I purchased for cash a cheap DVD player (no TV receiver in it). Just shows how poorly the message is understood.
Anybody who really chooses not to have any TV receiving equipment can skip over the rest of this post, because I agree that there should be a real opt out from the abusive mail for people who do not watch any broadcast TV.
Now. For everybody who just disagrees with the licence fee - move to another country.
It may be outdated. It may be the wrong model to pay for public broadcast TV. It may be inequitable for people who don't watch BBC content, but it IS the law for owning TV receiving equipment.
It is as much a legal requirement to have a TV licence if you own and operate a TV in the UK as it is to tax a car if you keep it on the road. If you think it is wrong, lobby your MP to have the law changed.
I don't watch as much broadcast BBC as I used to, but I don't begrudge paying the money. It's a hell of a lot less than I pay Sky, and there is so much more real content generated by the BBC than Sky (buying content is not the same as making it or commissioning it).
The BBC generate content that appears on a lot more than the 8 main BBC channels (1, 2, 3, 4, CBBC, CBeebies, BBC Alba and BBC News 24).
I challenge anybody to flip through Sky or Virgin without finding some BBC generated content on the non-BBC channels. And a significant amount of the licence fee actually makes the terrestrial broadcasting system affordable to the other users such as ITV. Remember when OnDigital (later ITV Digital) effectively went bust because it could not generate the required revenue?
Actually, thinking about it, maybe there should be an equivalent of SORN for non-TV owners. Make it an annual declaration, make it a criminal offence to make an incorrect statement, but make it enough to stop the mails. Ask people who buy TV receiving equipment for non-TV receiving purposes to renew their declaration. Back this up with an investigation arm that can get warrants for entry, but put heavy penalties, paid to the innocent party by the investigation arm, on entries that do not find any infringement, and the penalties should come from the profits of the company operating the investigation, not from the fees charged to the BBC to run the collections.
I'm sure that this will not satisfy everybody, as no doubt there will be libertarians who see this as unnecessary in an ideal world, but hey, breaking news, this is not an ideal world, it contains liars and cheats.
Graffiti on capacitive screens
I use a Graffiti app on my Android phone (it's in the Market), and I find it a bit strange.
The problem is that I am (still) completely used to popping the stylus out when writing. Using my finger (especially as I play classical guitar and keep my nails long) is just unnatural.
I find that using one of the capacitive styluses that you can buy is no good either, because they generally have some form of rubber or latex tip that drags on the screen, making accurate swiping difficult.
Every now and then I will get my Treo 650 (with Graffiti Anywhere installed) out and marvel at how natural it feels to use compared to my Galaxy. Of course it has a small screen, an unwanted keyboard, and does not do WiFi or 3G, but it did most of what my Android does now, and did it for the previous five years. And the battery (still) lasts 3-4 days! I just wish that they had produced a TX with a phone built in.
Yes, too true.
Econet security was almost non-existent.
You could have your BEEB as a privileged station, which enabled you to do all sorts of things like peek at the screen of another station on the network, or even remote control other machines. Tremendously useful in a teaching environment.
The only problem was that the only thing that marked your station out as privileged was a single bit in a particular memory location of your machine. It was easy to poke (well, use the ? operator) this byte, and hey presto, your system became privileged. Yes, we know you did it frequently, Gary Partis, wherever you may be.
Unfortunately, being on a privileged station, you could then do all sorts of bad things to the file server (yes, Econet Level 3 allowed you to have a hard disk based fileserver on the network), so we had to warn the lecturers not to keep their assessment marks on the file server.
Putting together the computer appreciation BBC Micro lab we had at Newcastle Poly in the early '80s was one of the most fun things that I ever did in my working life, and as of last year, my BBC Model B with an Issue 3 motherboard, serial number in the 7000s and BBC BASIC 1 in EPROM is still working (it's missing the OSBYTE, OSWORD and OSCLI keywords, amongst other things)
BTW. Anybody know where to get double-sided single density soft sectored 5.25" diskettes from? Mine are shedding oxide, and many are unfortunately unreadable.
Why do I have to put something in the body?
Um, yeah. I got that. I was just being pedantic, as he should have said "playback with Dolby B off" (I've corrected the capitalisation as well), because if you were using Dolby C, you probably also used decent quality metal, chrome or ferrochrome tapes anyway.
Very few commercial tapes were created with Dolby C, Dolby B was the norm.
Dolby C appeared on high end audio systems for users when recording their own material, and as far as I am aware, no record labels actually sold tapes recorded with Dolby C.
IIRC, Dolby B had a fixed expansion above a single frequency, Dolby C used multiple frequencies (two or three) and different levels of expansion for each, and Dolby S used a continuously variable expansion level dependent on the frequency. I can't ever remember seeing home grade equipment with anything better than Dolby C.
It really depends on what you mean by "technically". The C64 had the same speed processor, less available memory, and only really had an advantage in hardware sound and sprite facilities. It did not even look better! OK, it had a 6510 rather than a 65C02, but the differences were not huge, and most software did not take advantage of either the 'missing' instructions re-instated in the 65C02, or the 6510's extra on-chip I/O port.
The IIc had a screen resolution that was more appropriate for real work (rather than games) and a software portfolio that included real business software. It could be seen by affluent people as a 'real' computer, running 'real' programs, rather than a home computer that was only ever going to be used for games. And if you had good tax advice, it would not surprise me if you could get a tax break in the US for buying something like the IIc that could be considered a business machine!
It also had about as long a heritage as a personal computer could have at the time. I suppose that you could say that a C64 was an evolved PET, but in reality, there was nothing that made a C64 able to claim to be an updated PET except for the possibility of using some of the same (expensive) peripherals. The Basic was not really the same, the character set was different, and the memory map and OS entry points were completely different.
I used a European Apple II color (or should that have been colour?), and was always impressed by the card slot capability that allowed the system to be extended in ways that were not imagined by Apple (like the UCSD Pascal system). The IIc only had marginal expansion capability, but included much of what made the II so useful (like disk drives, memory expansion, and I/O ports) in the base system.
There was no way that the C64 could offer many of these features without expensive and often ugly add-ons.
I will not deny that the C64 was successful, but that was entirely down to its relatively low price rather than its technical merit, which made it accessible to a much wider customer base. At the time, even £199 in the UK was a significant purchase, and for a normal working family to try to justify spending £800+ (which was the price of a reasonable 2nd hand car at the time) on something with only intangible benefits was just not going to happen. Most families at the time would pay £50-70 for a ZX81 or £125 for a Spectrum if their kids pestered them long enough.
As a foil to this, I don't believe that IBM even offered the updated PCJr in the UK, because there was just no market for it!
I still keep my original BBC Model B running though!
PDP-11 running UNIX. BBC Micros. Polytechnic. VMS.
I know that all these were quite common in education, but was this Newcastle Poly?
The PDP-11/34 (in SYSTIME 5000E covers) in the Maths department (which taught the computing courses) there was my pride and joy, and I was instrumental in getting a network of BBC Micros installed there. It was a golden period of my working life, and set me up for a life of supporting UNIX systems.
We were in constant tension with the Polytechnic computer service department, who had a passion for Harris systems (strange beasts that were neither super-mini nor mainframe), RML 480Z systems and, a little later, 1st generation IBM PCs. Eventually, they gave up on the Harris systems, and switched their main systems to a VAX 8600 and an 8300 running VMS, and an 8250 running Ultrix.
All seems a long time ago.
"...and dragons raised their heads as one and keened"
She was for many years one of my favourite authors. So much, that I paid full price on launch day for "Dragondrums" IN HARDBACK, in the days when books were published in hardback many months before they appeared in paperback, and were horribly expensive.
I'm trying to think of a suitable quote from something like "The Ship Who Sang" or "Masterharper of Pern", or even from "Dragonsong" (something from the dirge Menolly sang for Petiron would be suitable), but memory fails me.
Love the reference
to 'Buffy - The Musical'.
...all your core products from a single supplier.
Makes it difficult to put together a coherent service, but at least when it comes to networking gear, prevents you from having the same flaw take out an entire layer of your infrastructure.
Japan, powered suits
All of my anime dreams are coming true! Now all we need are the A.D. Police and the Knight Sabres, Patlabor Mobile Police or possibly the Olympus ESWAT teams to pick up the pieces when the powered suits run amok.
Right sentiment, wrong details. According to the Levenez timeline document, Xenix was derived from 32V, which was a 32-bit port of Version/Edition 7 to the DEC VAX. As far as I am aware, Version/Edition 5 did not make it outside of Murray Hill.
Why standardise on BST?
Surely it would really make more sense (sun overhead at midday) if we were to standardise on GMT.
It's only those who want to regularise our time with Europe who really want to go to permanent BST.
It constantly annoys me when people keep saying that the clocks change for Winter to make the mornings lighter.
The clocks change in Summer, for an entirely out-of-date and arbitrary reason.
Ahh yes. The vaunted efficiency argument....
I'm not sure I agree about the vendors preferring the days of separate servers, because the rackmounted server market became very cut-throat, and the vendors were not making much money per server, even if they were selling a lot of them.
What virtualization has allowed is vendors to tell customers that they are justified in replacing perfectly serviceable datacenter servers with years of life with brand new, high margin, expensive servers. For the vendors, high margin small volume is preferable to low margin high volume. That's why IBM's mainframe business is still very profitable.
I'm sure that the vendors can produce spreadsheets and charts to prove that they will save money on power, space, infrastructure and support costs by doing this, but that is what marketing people do. It will be interesting looking back in a few years time, but I'm not sure whether anybody will be publishing figures to see whether the savings were realised.
I was working on introducing virtualized systems six years ago in the UNIX space, and whenever we tried suggesting combining workloads so that the average usage approached 90%+, we always got tripped up by the customers (separate departments in a large UK bank buying computer services from a central IT department) wondering loudly what would happen to their workload if unscheduled peaks in separate workloads coincided. They never liked the fact that in these situations they might get less predictable batch timings than if they paid for guaranteed capacity. The result was that we put hard limits on each of the LPARs, effectively the same as them having systems of fixed size. They could not afford to miss critical deadlines because of uncertainty in job run times.
I admit that this was before it became easy to shuffle partitions live between different physical systems, but it became clear that end customers were not prepared to compromise in order to make more efficient use of the installed capacity.
I'm not involved in such work at the moment, so maybe 'education' or 'marketing presentations' are better at convincing customers nowadays.
I seriously consider the push to virtualisation as being a hardware vendor led campaign in order to justify the purchase of ever larger and expensive single servers.
It would be just as easy to have a server farm mentality using very small footprint, high density, separate address space individual systems with networked shared filesystems. You know, network-booted devices with common OS images. Maybe dozens of them per 2U server, like BlueGene or blade centres, running much simpler OSes than Windows has become.
In fact, if the cost and power consumption is right, why even go down the remote processing route at all? Put ARMs in the display devices (oops, they are already there!) with decent network connectivity (which is already required for RDP/VNC/Citrix) and a lightweight network-based OS, and dispense with the huge server-based processing complex completely. Move back to a file server model with much more modest systems with lots of storage in the data centre.
You would have to be careful about management issues, but I'm not advocating a return to every-PC-has-its-own-OS-and-applications, more like Sun's "The Network IS the Computer" diskless boot model, so that the device on every desk is identical. One fails, replace it with another. All user data is on the fileservers, and the device on your desk is just a way of accessing it.
This is where low power SOC ARM systems can really go, and can probably provide at least 70% of the requirements of the business community with huge cost and energy savings.
I don't like discarding anything that works, and I do like fixing things that break in obvious ways.
Thus, I still have the last 4 TVs that have served as our main one, three of which have been repaired, often on more than one occasion.
The latest is a 32" 1080i HD LCD in the living room (still good enough for Sky HD - don't have or want a bluray player or PS3). The previous one is a 26" HD Ready LCD in the kids play/games room that my mother said should be our dining room. The one before that is a 28" widescreen CRT and is in the room where the HiFi and all the music instruments are (and is used only when the wife is watching boring home improvement programs), and the one before that is a 19" 4x3 CRT in the main bedroom. All the CRTs are physically turned off when not in use, to save power.
The kids have TVs in their rooms (bought as Christmas and Birthday presents), but none are larger than 20", and that is quite large enough.
I just can't stand the thought of perfectly serviceable equipment going to the tip.
@Sean Baggaley 1
"BSD Unix (which really *is* UNIX)" - not in the most pedantic sense.
A lot depends on your definition. BSD (which was originally a series of add-ons and modifications published in source) became a full OS distribution and split from Bell Labs UNIX around version/edition 6/7, and was never re-integrated (although SVR2/3/4 all added BSD features back into the AT&T sourced versions).
As a UNIX pedant, I would say that BSD is *NOT* UNIX. Remember the lawsuit that forced AT&T code to be removed from BSD, leading to 4.4BSD-Lite, FreeBSD and BSD/386? It is difficult to justify the claim that BSD is UNIX.
By comparison, HP-UX, AIX, Xenix, UnixWare/SCO UNIX, IRIX, SINIX and many more were derivatives of AT&T code, and passed UNIX branding tests, so could legally be called UNIX.
I accept that a lot of people who were from outside the Bell Labs/AT&T world may well have seen BSD before any commercial version of UNIX, and may well have referred to BSD as UNIX before AT&T got commercially sensitive about the UNIX brand, but that does not alter the fact that it was a very early fork of UNIX which never gained UNIX branding. I am not arguing that BSD is no good, because clearly it is, but its claim to be UNIX is subject to interpretation.
As far as I am aware, the only BSD variant that passes any of the UNIX compliance test suites is OS X!
What a surprise
that sheep shearers are not required in the winter!
I'll bet they appear on the list again next March.
Having seen some of the specialist sheep shearers from the antipodes, I'm not surprised that they are required. I've never seen anybody work so hard for so long as some of these guys. 8 hours or more with minimal breaks, averaging a sheep every 2-3 minutes or so, often for six days a week.
What they do is take two to three years travelling around the world from country to country as each reaches their spring. They work their bollocks off while spending almost nothing on accommodation. They earn a pile of money, which they then go back home with as the buy-in stake for a farm or ranch.
While I've seen some good UK based shearers, there are not that many, and certainly not enough for the number of sheep in this country.
I dispute the graphic designers, though. My daughter was told that the number of UK graphics students who get jobs in graphic design after completing their course is only a small fraction of the total. More likely, graphics agencies don't want the tedious job of breaking in inexperienced graduates when it is easier to recruit experienced workers from abroad. I cry Shame! on this and all past governments for not encouraging people trying to take the first step on the ladder. There really is no shortage!
I'm sure that the same must be true for software developers, although I do question the value of some of the computer and IT courses that run in this country.
A TTY33 would still work fine if you found one with an RS232 interface (current-loop support is probably a bit esoteric for Linux, but no doubt someone supports a driver and hardware somewhere).
Shame you can't say the same for Gnome 2.3.
But that's only 6 months away
and there is almost no bug fixing going on in that version, even though it is an LTS release. There are lots of unhappy LTS users in the Ubuntu forums, myself included.
In the server release
there are a number of items that will not appear on the backports list for fixes from later releases. This includes all of the desktop items like Firefox and Evolution, and even Google stops producing fixes for Chrome once an LTS release goes out of support (as Hardy did earlier this year).
From my experience, once an LTS release has been out for a year or so, anything that is regarded as a bug rather than a security problem just will not get fixed.
Hmmm, makes me think of secretmail
on UNIX systems 30 years ago. See, nothing's new nowadays.
It has been a convention since UNIX made it outside of Bell Labs, which I can testify to since 1978 when I first used UNIX version/edition 6.
I agree that this does not suit all organisations or even all users in the same organisation, and the flexibility of UNIX allows this *convention* to be controlled where it is necessary. That does not alter the convention, merely the implementation.
Your statement that "Users should NOT install apps" is as blunt as me saying that they should. Neither can completely cover all situations. I also wonder whether you differentiate between locally written tools, and applications from external organisations, and also whether you also differentiate between compiled code and such things as shell scripts or other interpreted code (which actually can be run as long as you can run the interpreter, even if the noexec flag is set!). Do you also prevent shell access or disable aliases and functions?
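As a concrete illustration of that parenthetical point (a minimal sketch; the file path is arbitrary): a script that cannot be executed directly can still be run by handing it to its interpreter, because the interpreter only needs read access to the file:

```shell
# Create a script with no execute permission anywhere in its mode.
cat > /tmp/noexec_demo.sh <<'EOF'
echo "hello from a script with no execute bit"
EOF
chmod 600 /tmp/noexec_demo.sh

# Direct execution is refused by the kernel...
/tmp/noexec_demo.sh 2>/dev/null || echo "direct execution refused"

# ...but the interpreter happily reads and runs it, which is also
# what happens with files on a filesystem mounted with noexec.
sh /tmp/noexec_demo.sh
```

The same applies to `python script.py`, `perl script.pl` and so on, which is why blocking execute permission alone does not block interpreted code.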
Where I currently work, if the users were not allowed to compile and execute code, they could not work. But that is because our users are scientists who are working on creating computing models. There is no one-size-fits-all model for all organisations.
I'm not sure if that statement about 'self-declared admins' was aimed at me. If I am not a UNIX system admin (30 years looking after UNIX systems from many vendors in lots of industries, including writing some of the security standards and many operational procedures at some organisations), then I don't know what I am, or what a UNIX sysadmin should look like.
Believe me, I have been involved in enough hardened UNIX installations to know exactly what you are saying, and the convention stands.
This way lies anarchy. Just imagine if a virus writer found a way to hijack the deployment process. Instant huge botnet. Just as you cannot trust users, you also cannot trust automatic update processes, even if they are signed with a security certificate.
Maybe this shows limitations in the Windows way of storing data. Effectively saying that AppData is the only place a user can write, and that it should not be used for executables, is saying that ordinary, non-privileged users should not be using programs other than what is deployed system-wide.
In an older multi-user model (I'm sure you can guess the one I'm talking about), one of the normal conventions is a bin directory under a user's home directory. User written scripts, locally compiled software and trusted executables from other sources can live there. Add it to the path, and users can then effectively extend the OS to do what they want, rather than being limited by what the system provides. And in a homogeneous networked computing environment, this scales to network computing as well!
Windows has no such convention. Shame MS could not learn from history.
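For anyone who hasn't seen it, the Unix per-user bin convention mentioned above is only a few lines to set up (the `greet` script here is a made-up example, and the PATH line would normally live in ~/.profile):

```shell
# One-off setup: a personal bin directory for scripts and local builds.
mkdir -p "$HOME/bin"

# Drop a trivial tool into it...
cat > "$HOME/bin/greet" <<'EOF'
#!/bin/sh
echo "hello, ${USER:-there}"
EOF
chmod +x "$HOME/bin/greet"

# ...add the directory to the search path...
PATH="$HOME/bin:$PATH"

# ...and the user has effectively extended the system's command set:
greet
```

Because home directories can live on a shared fileserver, the same trick scales to the networked environment described above with no extra work.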
Maybe Ubuntu has lost its shine
Having made a name for itself over the last 6 years or so, I think that people are beginning to realise that the corporate-customer-pay-for-support model that Canonical have been trying to work towards is a difficult one to build a business on.
The move towards the shiny has not helped, having polarized their advocates into those who don't understand the need to change, and those who love it. The former category, IMHO, is the one most likely to suggest Ubuntu in the server space, so in many ways the dis-Unity spat is indirectly doing more damage to their corporate support model than anything else they have done.
I'm sure that some people will remind me that Gnome is still in the repository and can be installed, and that server releases are different from workstation ones, but those people are missing the point about the work necessary to run server type systems.
To survive, Canonical has to approach profitability at some point, because Mark Shuttleworth won't bankroll them forever. If they are losing some of their high up managers, it appears to me that this thought may be occurring within the company as well.
Local time sources
Generally, if you buy a time appliance that provides an NTP stratum 1 source using GPS, MSF or DCF, then they put a high-precision, temperature-controlled clock in the appliance for just the situation where your exterior time source is not available.
Normally these are accurate to <2ms per day drift (this is for the entry level device from Time and Frequency Solutions Ltd. - other NTP time appliances are available), so will take over a year to drift even a second from real time if they lose their external feed. There are better ones if you need more accuracy.
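The holdover arithmetic behind that claim, as a quick sanity check:

```shell
# At a worst-case holdover drift of 2 ms per day, how long until the
# appliance's clock is a full second away from true time?
ms_per_day=2
days=$(( 1000 / ms_per_day ))
echo "One second of drift takes ${days} days (well over a year)"
# → One second of drift takes 500 days (well over a year)
```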
So if you need accurate time, relying on a regular feed from GPS is just not necessary.
Of course, some people might be doing time synchronisation on the cheap, but that is then their problem.
@openminded - Read the license
You've not 'bought' commercial software. Not ever unless it is an IP purchase.
What you've purchased is a license to use a copy of the software, and your custodianship of your copy is only allowed if you stay inside the terms and conditions of the license.
You agreed to this when you opened or installed the software. You gave the right away yourself, as long as it does not conflict with the law where you live.
And why should paying for the use of software entitle you to see the source code? Does buying a toaster entitle you to the complete specs and blueprints for said device, or purchasing a CD entitle you to the sheet music for the songs?
Can we be sure
that these devices were actually TouchPads, and not some counterfeit Chinese copy based on an existing Android device from eBay? Has anyone actually seen one?
Or maybe the Chinese manufacturing plant has just decided to keep making them and shipping them without HP's say-so.
It would be nice if it were the latter, because possibly we may be able to buy them at a price closer to the manufacturing price than they would be from HP.
@ Ian Morrison - You've not understood
This is the information regarding the presentation of local time on the systems, and has absolutely nothing to do with reference times such as UTC. UTC (Coordinated Universal Time) is exactly that. A Universal time.
Almost all data that crosses national boundaries, including financial transactions and air-traffic control information, is measured in UTC or its close cousin GMT (known as Zulu in military circles). Thus, this database will have no effect whatsoever on whether planes fall out of the sky.
This database records the offset from UTC that a particular location in the world works to, and when Daylight Saving Time is applied. It also documents when national governments have changed, or will change, the rules for DST, and when countries have changed, or will change, timezone (I love the comments about Dublin Mean Time at the beginning of the 20th century being 25 minutes and 21 seconds behind London, and the confusion when the Irish government decided to shift to GMT/BST while at the same time also altering when DST cut in. Apparently, the Irish people were very, very confused).
Consequently, for almost all computer systems that use copies of this database, a lack of updates will make almost no difference whatsoever.
The way the document is written makes it look like all time services in the UNIX world will stop.
Well, that ain't going to happen.
Every UNIX system that uses this source (and it is not all UNIXes, even in this world of reduced choice) will have its own copy of the database. This copy will not evaporate. It's still good for at least the next year, and even if it were not, the rules for almost everywhere in the world are not likely to change in a hurry, so it would work fine for 99% of the world without further updates.
Even if every UNIX-like system in the world were to be forced to delete the data in this database, the older SVR2 TZ rules still work, so it is possible to code your own daylight saving time and offset rules.
UNIX and UNIX-like systems should always have their main clock running UTC (almost the same as GMT), and these rules contain the information about what offset local time is from UTC, and when daylight-saving time kicks in and out. This database automates that. But the older methods still work.
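As an illustration of those older rules: a POSIX-style TZ string encodes the offset and the DST transition dates directly, so local time can be derived with no database lookup at all (the zone string below is the standard US Eastern example, nothing AIX-specific):

```shell
# "UTC-5, named EST, switching to EDT from the 2nd Sunday in March
# to the 1st Sunday in November" - encoded entirely in the environment,
# no tzdata file required.
TZ="EST5EDT,M3.2.0,M11.1.0" date

# Compare with a zone that never observes DST:
TZ="UTC0" date
```

Crude compared with the database (no history, no government rule changes), but it works everywhere and it is exactly the fallback the older SVR2-era systems relied on.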
In your world, you would end up with no ad-funded commercial television.
This may not appear such a bad thing until you see what public-funded TV is actually like in most countries, and also how expensive pay-TV would become if it had to be funded completely from the subscription.
I can see a scenario
where Steve was hanging on to life so that he could see how Tim Cook handled a product launch. Having seen that it was not a total disaster, he was content to hand over in the most final of ways, knowing that the markets had accepted the post-Steve Apple.
I am surprised at myself, though. I thought that when Steve finally reached the ultimate purpose of life, I would just accept it. I was actually quite moved, and more than a little tearful, when I heard the news on the radio as I was getting up this morning.
Shows that his influence could touch even this old cynic.
Or even to live there
if you have to do a patent search for every purchase you make to ensure that you are not a user of patented technology without license (see previous comments!).
Oracle on Solaris may be a majority,
I don't have the figures. Maybe you could post references. But I never said that it was mostly on Power/AIX, just that there was a lot of it on Power/AIX.
Everywhere I have worked in the last 15 years, with the exception of my current contract (which runs Oracle on z/OS and Linux on zSeries, not that I have any involvement in those systems), has run Oracle on AIX as its main DB. Quite a lot of it, actually, and they have ended up paying Oracle big bucks (or, in fact, pounds) in license and support fees.
This has been in (large) financial, government, utility and construction organisations.
I have not heard of many applications that were locked to Oracle unless they were Oracle apps (not a big surprise). Oracle may be the recommended or the best supported DB, but any 3rd party application developer would be limiting their market if they were unable to sell into non-Oracle sites. Oracle is not quite a monopoly.
I agree that it would be a costly and disruptive operation, but then so would replacing the servers, changing your management infrastructure and re-training your support staff. Bit of a no-win situation for anybody, so let's hope it does not happen.
I have a feeling
that if Oracle stopped developing their database product, or in fact any of the products they have mopped up, on AIX, especially if they had already shut down Itanium development, there would be one hell of an anti-trust lawsuit in the US.
Plus the fact that they would lose many millions of dollars of revenue from those customers who like AIX and POWER more than Oracle, and who would switch to DB2 rather than to Solaris/Sparc.
The problem here is that many applications are database-agnostic, using ODBC, JDBC and SQL, amongst other abstractions, as the means of using a database, which allows them to switch database products relatively easily.
Many Oracle DB customers get regularly annoyed with them because Oracle never seems able to decide what the licensing model is. Some customers I worked with ended up re-negotiating their Oracle license fees every year, because the way they were calculated changed each year.
LPF, I totally agree
If it is just the VFAT stuff, junk it.
All you lose is the ability to move your flash memory devices from one device to another.
And enough gadgets are not run by an MS operating system for the vendors to agree with each other on another, open-source filesystem properly suited to flash-based devices. Put a Windows user-land (or even kernel-level, if you can get it signed) filesystem driver on the support disk for the device, and bye-bye VFAT, and good riddance.
But I suspect that there may be other patents involved. Boy, we need this list leaked! How are MS keeping it secret? I am still hoping that somebody has the balls to stand up to MS, and allow it to be taken to court (or not, if Microsoft choose to chicken out).
Re ZFS protection
No, it says that it protects from DISK ERRORS, and I said that.
My education is not great, just an ordinary degree. My 30+ years of UNIX, much of it on kit other than IBM's (including Sun, Digital, HP, Data General, Amdahl, AT&T and many others over the years), and a good part of which has involved UNIX source, is more relevant. If you look, I have often said that I am currently contracting for IBM, and that in the past I worked for them. I am no marketing shill, however. I just appreciate features that make the work I do easier. I am as critical about some of the features as anyone else, and I often get very worked up by their support processes. When IBM entered the Open Systems world in 1990, they were regarded as the Big Enemy by many UNIX people, myself included, but I think that they did actually prove themselves.
Recently, whenever I have had to do work on HP/UX and Solaris, I have found that it is less easy than AIX, but that could be because I am more familiar with AIX. I definitely feel that Solaris and HP/UX feel more like traditional UNIX systems than AIX when it comes to management. Maybe a good thing, maybe not.
I don't know what you think I don't know? I am perfectly prepared to admit that I do not know everything, and I can be wrong. But what have I said that was wrong? The ZFS paper is quite clear, as are its conclusions, which I referred to. I know GPFS well enough to know that what I said was correct. I have compiled kernels enough to know what is involved there.
In relation to what I said regarding Fear, Uncertainty and Doubt (yes, I know the term without looking it up in Wikipedia; I have been using it myself for about 20 years), the first word in my sentence was *IF*. I did not say that what Matt and Jesper wrote was FUD, although I definitely would categorize some of what you say as such. In fact, I think I agree with Jesper on almost everything he says in these comments. Very detailed analysis, and worth reading.
I apologise for resorting to ad-hominem arguments. It's always a poor tactic, but sometimes what you say is not thought through, or maybe seen through a filter. I definitely know that what I say is often coloured by my experience, so maybe I should accept that it will be for everyone, but it may be worth you re-examining what you say sometimes. It comes across as very shrill.
You are like one of those kids in the playground who shouts and insults everybody around, and who then claims to be being bullied when one of the people on the receiving end is finally annoyed enough to put you in your place.
Damn, resorting to personal insults again, but you make it so easy!
@kebabburt re: FUD
If Jesper and Matt are spreading FUD, then they are doing it in a way that is less rabid than you. I would be surprised if you are not foaming at the fingertips when you type some of the things you do.
A case in point: how much rewriting do you think is necessary to increase the number of processors managed by an OS? According to you, "it had to be *rewritten* last year, because it could not handle 256".
Well, all that is really required is to change a couple of numbers in the kernel header files, and re-compile the kernel and any tools that reference those headers. In fact, the support was included in a PTF fix for AIX 6.1, not even a new level. Definitely not a re-write; more like a tweak.
As with your other claims, I guess that you have never worked in source at a kernel level for a UNIX, and I would also hazard that you have never had to play with sysgen-ing an older SunOS release. Honestly, the more you say, the less relevant what you say becomes.
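A minimal sketch of the kind of change involved (the header file and constant names here are purely illustrative, not AIX's actual source tree):

```shell
# Stand-in kernel header carrying a compile-time CPU limit.
printf '#define MAXCPUS 256\n' > param.h

# Raising the limit is a one-line edit, followed by rebuilding the
# kernel and anything that includes this header - a tweak, not a rewrite.
sed -i 's/MAXCPUS 256/MAXCPUS 1024/' param.h
cat param.h
# prints: #define MAXCPUS 1024
```

The hard work (lock scaling, scheduler behaviour at high core counts) is done separately and incrementally; the visible "256 to 1024" change really can be as small as this.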
When it comes to new OS features, what do you think Oracle are adding to Solaris 11? Both DTrace and ZFS are old news. How often can you consider them new (both have been around in previous versions of Solaris)? And neither of them is really an extension to the OS; they are what IBM would call 'layered products'.
Unlike Linux and Windows, the remaining UNIXes, and especially AIX IMHO have a mature set of APIs, RAS features and other management processes. I will concede that lack of change may indicate stagnation, but excess change may also indicate immaturity and feature bloat driven by marketing hype. There is middle ground. What new features would you like to see in a UNIX?
On the filesystem front, ZFS moves block checksumming up into the filesystem layer, and provides protection at the file or other disk-object level. Reed-Solomon encoding of data at the filesystem block level effectively does the same in the GPFS de-clustered RAID system. Big deal. And apart from Sun themselves, not everybody believes ZFS is safe. See the paper www.usenix.org/events/fast10/tech/full_papers/zhang.pdf presented at USENIX FAST, which concludes that ZFS may be more tolerant of disk errors, but is not invulnerable to data corruption.
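The principle is simple enough to sketch in a few lines of shell (this is only an illustration of block-level checksumming, not ZFS's actual implementation, which stores the checksum in the parent block pointer rather than beside the data):

```shell
# Write a "block" and record its checksum at write time.
printf 'some block of data' > block.dat
stored=$(sha256sum block.dat | cut -d' ' -f1)

# On read, recompute and compare; a mismatch means silent corruption
# that a plain RAID layer would have passed back to the application.
current=$(sha256sum block.dat | cut -d' ' -f1)
if [ "$stored" = "$current" ]; then echo "block OK"; else echo "checksum mismatch"; fi
# prints: block OK
```

Whether that check lives in the filesystem (ZFS) or below it in the RAID layer (GPFS de-clustered RAID) is the design difference being argued about; the end-to-end protection is much the same.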
There is a fundamental design difference between the T-series Sparc processors and the POWER series of processors, one that is closing from both ends as they converge on the middle ground. The announcement of - what was it, 'heavy thread'? - just shows that the Sparc design is being changed. One of the real problems with the T1 and T2 processors is that they were committed to the lightweight-thread model, which made them excellent for many small processes, but very poor for smaller numbers of large processes. So why is this change an innovation, while IBM putting a larger number of slower cores on a chip is a realisation of a deficiency? I believe that Matt described this far better than I can, elsewhere in this thread.
I think I agree with Jesper's analysis of Larry's announcement claims. They look good, but do not actually stand up to any real scrutiny, as they claim things that other vendors do not bother to benchmark, or cannot with the same levels of code. It's like you claiming to be the fastest person on earth at running from your front door to your living room, while never allowing anybody else into your house to try to beat your time. Surely you can see this?
So, please calm down, and actually read stuff that comes from people other than Oracle's marketing team.
POWER vs. Westmere EX
If AIX ran on Intel, then you would probably have a point, but the market for POWER is "AIX on POWER" and "IBM i on POWER". It's the whole package.
As far as I am aware, IBM is making no effort to (re-)port AIX to an Intel architecture and as far as I am aware will never consider i on that platform (I say re-port, because AIX 5.1L was available on Itanium, although nobody was interested in buying it). Earlier versions of AIX were also available for i386 and i486, but only on IBM PS/2 systems (in the same way that SunOS 4 was available on the Sun 386i system in the late '80s), but that was actually a different code tree (AIX 6 and 7 go back to AIX on the 6150 PC/RT platform, whereas AIX PS/2 came from the IX and AIX/370 mainframe port, originally done by Locus). There were big differences, and the only people who made any attempt to treat them as the same OS were the IBM marketing people.
I'm sure that a port could be done (it's almost all C anyway) but as you've pointed out, if POWER were to be dropped as a platform, I'm sure that IBM would produce an enhanced version of Linux with some form of AIX compatibility/migration tools to try to keep their customer base rather than port AIX. But that is not on the cards at the moment.
Strangely, out-and-out performance is not the primary motivation for large AIX customers to keep buying it. AIX itself and the RAS of POWER systems are, along with the associated support skills that the customers have invested in. If you are involved in a move from VMS and Solaris to Linux, I'm sure you are aware of this. The move from VMS is obvious (where else are you going to go), but I wonder whether the move from Solaris is because of lack-lustre statements of intent from Sun/Oracle with regard to Sparc and Solaris. If that is the case, then this latest release from Oracle is too little too late, at least for you, and I sympathize.
We've discussed this in these forums before
and in my view, it's 6 of one and half-a-dozen of the other. Both are still innovating, but neither are doing as much as they used to.
I believe that GPFS is going places faster than ZFS (the actual rate of change is staggering at the moment, with de-clustered RAID and its deployment in SONAS devices), but I agree that DTrace was a real innovation. And each vendor has copied parts of their virtualisation technologies from the other. IBM tends to concentrate on technologies that are layered on top of AIX and other OSs, rather than extending just AIX. Whether this means that AIX is stagnating is a moot point, but if the OS is mature and does what is needed, why change?
As to the comment about wages, yes, I do not know you, except by the sometimes over-zealous comments you post here. I was just speculating (in a provocative way, I admit) why you are so vocal about shouting down any UNIX technology other than Oracle/Sun's and Intel's processors. I believe that most people would regard many of your comments as being overly negative.
I've said it before, and I am quite prepared to say it again. There is really no one UNIX vendor at the moment who is 'better' at all things. Each has their strengths and weaknesses. I am glad to see Oracle has not totally abandoned their user base, and hope that they will actually continue to put resource into keeping Solaris relevant, in the same way that IBM is with AIX. I was worried when Oracle were so quiet about Solaris and Sparc after the takeover, and actually began to think that they would quietly relegate the technology to legacy status, but happily that appears to not be the case.
I am not an AIX bigot (at least I don't think I am - comments welcome), it is just the OS that I earn my living supporting, and the one that I know best. When I see something I believe is untrue, I will comment on it, and I will point out areas where I believe AIX has relevant technology where other OSs show deficiencies.
I don't go out of my way to try to put down other UNIXes, and I get annoyed at those that do. The remaining market is fragile enough as it is.