But that's only 6 months away
and there is almost no bug fixing going on in that version, even though it is an LTS release. There are lots of unhappy LTS users in the Ubuntu forums, myself included.
there are a number of items that will not appear on the backports list for fixes from later releases. This includes all of the desktop items like Firefox and Evolution, and even Google stops producing fixes for Chrome once an LTS release goes out of support (as Hardy did earlier this year).
From my experience, once an LTS release has been out for a year or so, anything that is regarded as a bug rather than a security problem just will not get fixed.
on UNIX systems 30 years ago. See, nothing's new nowadays.
It has been a convention since UNIX made it outside of Bell Labs, which I can testify to since 1978 when I first used UNIX version/edition 6.
I agree that this does not suit all organisations or even all users in the same organisation, and the flexibility of UNIX allows this *convention* to be controlled where it is necessary. That does not alter the convention, merely the implementation.
Your statement that "Users should NOT install apps" is as blunt as me saying that they should. Neither can completely cover all situations. I also wonder whether you differentiate between locally written tools, and applications from external organisations, and also whether you also differentiate between compiled code and such things as shell scripts or other interpreted code (which actually can be run as long as you can run the interpreter, even if the noexec flag is set!). Do you also prevent shell access or disable aliases and functions?
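For anyone who hasn't seen the noexec trick in action, here is a minimal sketch (in Python, for portability; the script contents are made up): a file with no execute bit at all still runs fine if you hand it to its interpreter, because the kernel only needs to execute /bin/sh, and the script itself is just data being read.

```python
# Sketch: a script that cannot be executed directly (no execute bit,
# as on a noexec mount) still runs when passed to an interpreter,
# because only the interpreter binary needs execute permission.
import os
import subprocess
import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write('echo "hello from a non-executable script"\n')
    path = f.name

os.chmod(path, 0o644)  # read/write only; direct execution would fail

# Running it via the interpreter works regardless of the execute bit.
result = subprocess.run(["sh", path], capture_output=True, text=True)
print(result.stdout.strip())
os.unlink(path)
```

This is exactly why blocking compiled code alone does not lock anything down: anyone who can run sh, perl or python can run their own logic.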
Where I currently work, if the users were not allowed to compile and execute code, they could not work. But that is because our users are scientists who are working on creating computing models. There is no one-size-fits-all model for all organisations.
I'm not sure if that statement about 'self-declared admins' was aimed at me. If I am not a UNIX system admin (30 years looking after UNIX systems from many vendors in lots of industries, including writing some of the security standards and many operational procedures at some organisations), then I don't know what I am, or what a UNIX sysadmin should look like.
Believe me, I have been involved in enough hardened UNIX installations to know exactly what you are saying, and the convention stands.
This way lies anarchy. Just imagine if a virus writer found a way to hijack the deployment process. Instant huge botnet. Just as you cannot trust users, you also cannot trust automatic update processes, even if they are signed with a security certificate.
Maybe this shows limitations in the Windows way of storing data. Saying that AppData is the only place a user can write, and that it should not be used for executables, is effectively saying that ordinary, non-privileged users should not be using programs other than what is deployed system-wide.
In an older multi-user model (I'm sure you can guess the one I'm talking about), one of the normal conventions is a bin directory under a user's home directory. User written scripts, locally compiled software and trusted executables from other sources can live there. Add it to the path, and users can then effectively extend the OS to do what they want, rather than being limited by what the system provides. And in a homogeneous networked computing environment, this scales to network computing as well!
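The mechanism behind the bin-directory convention is nothing more than the shell's PATH search: first hit wins, so prepending $HOME/bin lets a user's own tools extend (or shadow) the system's. A rough sketch of that lookup, with the directory names purely illustrative:

```python
# Minimal sketch of the shell's PATH search: directories are tried in
# order and the first hit wins, so putting $HOME/bin at the front of
# PATH lets user tools extend (or shadow) system-provided ones.
import os

def find_command(name, path_dirs, available):
    """Return the path of the first match for name along path_dirs.
    'available' stands in for the filesystem: dir -> set of commands."""
    for d in path_dirs:
        if name in available.get(d, set()):
            return os.path.join(d, name)
    return None

system = {"/usr/bin": {"ls", "grep"},
          "/home/alice/bin": {"mytool", "grep"}}

# Without ~/bin on the PATH, the user's tool is not found.
print(find_command("mytool", ["/usr/bin"], system))                     # None
# With ~/bin prepended, it is found, and the user's grep wins too.
print(find_command("mytool", ["/home/alice/bin", "/usr/bin"], system))
print(find_command("grep", ["/home/alice/bin", "/usr/bin"], system))
```

In a real shell this is just `export PATH="$HOME/bin:$PATH"` in a login script; the point is that extending the system needs no privilege at all.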
Windows has no such convention. Shame MS could not learn from history.
Ubuntu has made a name for itself over the last 6 years or so, but I think that people are beginning to realise that the corporate-customer-pays-for-support model that Canonical have been trying to work towards is a difficult one to build a business on.
The move towards the shiny has not helped, having polarized their advocates into those who don't understand the need to change, and those who love it. The former category, IMHO, is the one most likely to suggest Ubuntu in the server space, so in many ways the dis-Unity spat is indirectly doing more damage to their corporate support model than anything else they have done.
I'm sure that some people will remind me that Gnome is still in the repository and can be installed, and that server releases are different from workstation ones, but those people are missing the point about the work necessary to run server type systems.
To survive, Canonical has to approach profitability at some point, because Mark Shuttleworth won't bankroll them forever. If they are losing some of their high up managers, it appears to me that this thought may be occurring within the company as well.
Generally, if you buy a time appliance that provides an NTP stratum 1 source using GPS, MSF or DCF, then they put a high-precision temperature-controlled clock in the appliance for just the situation where your external time source is not available.
Normally these are accurate to <2ms per day drift (this is for the entry level device from Time and Frequency Solutions Ltd. - other NTP time appliances are available), so will take over a year to drift even a second from real time if they lose their external feed. There are better ones if you need more accuracy.
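To put numbers on that, simple arithmetic with the 2ms/day figure quoted above:

```python
# At <2 ms of drift per day, how long until the holdover clock is a
# whole second out from real time?
drift_ms_per_day = 2
days_per_second_of_drift = 1000 / drift_ms_per_day  # 1 s = 1000 ms

print(days_per_second_of_drift)           # 500.0 days
print(days_per_second_of_drift / 365.25)  # roughly 1.4 years
```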
So if you need accurate time, relying on a regular feed from GPS is just not necessary.
Of course, some people might be doing time synchronisation on the cheap, but that is then their problem.
You've not 'bought' commercial software. Not ever unless it is an IP purchase.
What you've purchased is a license to use a copy of the software, and your custodianship of your copy is only allowed if you stay inside the terms and conditions of the license.
You agreed to this when you opened or installed the software. You gave the right away yourself, as long as it does not conflict with the law where you live.
And why should paying for the use of software entitle you to see the source code? Does buying a toaster entitle you to the complete specs and blueprints for said device, or purchasing a CD entitle you to the sheet music for the songs?
that these devices were actually TouchPads, and not some counterfeit Chinese copy based on an existing Android device, bought from eBay? Has anyone actually seen one?
Or maybe the Chinese manufacturing plant has just decided to keep making them and shipping them without HP's say-so.
It would be nice if it were the latter, because then we might be able to buy them at a price closer to the manufacturing cost than they would be from HP.
This is the information regarding the presentation of local time on the systems, and has absolutely nothing to do with reference times such as UTC. UTC (Coordinated Universal Time) is exactly that. A Universal time.
Almost all data that crosses national boundaries, including financial transactions and air-traffic control information, is measured in UTC or its close cousin GMT (or Zulu, as it is known in military circles). Thus, this database will have no effect whatsoever on whether planes will fall out of the sky.
This database says what offset from UTC a particular location in the world works to, and when Daylight Saving Time is going to be applied. It also documents when national governments have changed and will change the rules for DST, and when countries have changed and will change timezone (I love the comments about Dublin time at the beginning of the 20th century being 25 minutes and a number of seconds behind London, and the confusion about when the Irish government decided to shift to GMT/BST at the same time as also altering when DST cut in. Apparently, the Irish people were very, very confused).
Consequently, for almost all computer systems that use copies of this database, a lack of updates will make almost no difference whatsoever.
The way the document is written makes it look like all time services in the UNIX world will stop.
Well, that ain't going to happen.
Every UNIX system that uses this source (and it is not all UNIXes, even in this world of reduced choice) will have its own copy of the database. This copy will not evaporate. It's still good for at least the next year, and even if it were not, the rules for almost everywhere in the world are not likely to change in a hurry, so it would work fine for 99% of the world without further updates.
Even if every UNIX-like system in the world were to be forced to delete the data in this database, the older SVR2 TZ rules still work, so it is possible to code your own daylight saving time and offset rules.
UNIX and UNIX-like systems should always have their main clock running UTC (almost the same as GMT), and these rules contain information about what offset the local time is from UTC, and when daylight-saving time kicks in and out. This database automates this. But the older methods still work.
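Those older rules are just POSIX TZ strings, which encode the offset and DST transitions with no database at all. A sketch using the current UK rule (this needs a Unix-like system, as time.tzset is Unix-only):

```python
# A POSIX TZ rule encodes offset and DST transitions with no database:
# GMT (UTC+0) normally, BST (UTC+1) from the last Sunday in March at
# 01:00 to the last Sunday in October at 02:00.
import os
import time

os.environ["TZ"] = "GMT0BST,M3.5.0/1,M10.5.0/2"
time.tzset()  # Unix-only: re-read the TZ environment variable

print(time.tzname)    # ('GMT', 'BST')
print(time.timezone)  # 0 seconds west of UTC for standard time
```

So even a system stripped of tzdata can still present correct local time, just with the rules hard-coded by hand rather than looked up.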
In your world, you would end up with no ad-funded commercial television.
This may not appear such a bad thing until you see what public-funded TV is actually like in most countries, and also how expensive pay-TV would become if it had to be funded completely from the subscription.
where Steve was hanging on to life so that he could see how Tim Cook handled a product launch. Having seen that it was not a total disaster, he was content to hand over in the most final of ways, knowing that the markets had accepted the post-Steve Apple.
I am surprised at myself, though. I thought that when Steve finally reached the ultimate purpose of life, I would just accept it. I was actually quite moved, and more than a little tearful, when I heard the news on the radio as I was getting up this morning.
Shows that his influence could touch even this old cynic.
if you have to do a patent search for every purchase you make to ensure that you are not a user of patented technology without license (see previous comments!).
I don't have the figures. Maybe you could post references. But I never said that it was mostly on Power/AIX, just that there was a lot of it on Power/AIX.
Everywhere I have worked in the last 15 years, with the exception of my current contract (who run Oracle on zOS and Linux on zSeries, not that I have any involvement in those systems), has run Oracle on AIX as their main DB. Quite a lot of it actually, and they ended up paying Oracle big bucks (or, in fact, pounds) in licence and support fees.
This has been in (large) financial, government, utility and construction organisations.
I have not heard of many applications that were locked to Oracle unless they were Oracle apps (not a big surprise). Oracle may be the recommended or the best supported DB, but any 3rd party application developer would be limiting their market if they were unable to sell into non-Oracle sites. Oracle is not quite a monopoly.
I agree that it would be a costly and disruptive operation, but then so would replacing the servers, changing your management infrastructure and re-training your support staff. Bit of a no-win situation for anybody, so let's hope it does not happen.
that if Oracle stopped developing their database product, or in fact any of the products they have mopped up, on AIX, especially if they had already shut down Itanium development, there would be one hell of an anti-trust lawsuit in the US.
Plus the fact that they would lose many millions of dollars of revenue from those customers who like AIX and Power more than Oracle, and would switch to DB/2 rather than to Solaris/Sparc.
The problem here is that many applications are database agnostic, using ODBC and JDBC and SQL amongst other abstractions as the means of using a database, which allows them to switch database products relatively easily.
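Python's DB-API makes the same point as ODBC/JDBC: code written against the generic interface barely notices which engine is underneath. A sketch using sqlite3 purely as a stand-in driver (an Oracle or DB2 driver module exposes the same connect/cursor/execute surface):

```python
# Database-agnostic sketch: only the connect() call names a specific
# engine; everything below it is the generic DB-API surface that
# ODBC/JDBC-style abstractions also provide.
import sqlite3

conn = sqlite3.connect(":memory:")  # swap in another driver's connect() here
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER, owner TEXT)")
cur.execute("INSERT INTO accounts VALUES (?, ?)", (1, "example"))
cur.execute("SELECT owner FROM accounts WHERE id = ?", (1,))
owner = cur.fetchone()[0]
print(owner)
conn.close()
```

In practice even this is not seamless: drivers differ in parameter style and SQL dialect, which is why switching databases is only "relatively" easy rather than free.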
Many Oracle DB customers get regularly annoyed with them because Oracle can't seem to decide what the licensing model is. Some customers I worked with ended up re-negotiating their Oracle licence fees every year, because the way they were worked out changed each year.
If it is just the VFAT stuff, junk it.
All you lose is the ability to move your flash memory devices from one device to another.
And enough gadgets are not run by an MS operating system that the vendors could agree with each other on another, Open Source filesystem, properly suited to flash-based devices. Put a Windows user-land (or even kernel-level, if you can get it signed) filesystem driver on the support disk for the device, and bye-bye VFAT and good riddance.
But I suspect that there may be other patents involved. Boy, we need this list leaked! How are MS keeping it secret? I am still hoping that somebody has the balls to stand up to MS, and allow it to be taken to court (or not, if Microsoft choose to chicken out).
No, it says that it protects from DISK ERRORS, and I said that.
My education is not great, just an ordinary degree. My 30+ years of UNIX, much of which is on kit other than IBM (including Sun, Digital, HP, Data General, Amdahl, AT&T and many others over the years), and a good part of which has involved UNIX source, is more relevant, although if you look, I have often said that I am currently contracting for IBM, and that in the past I worked for them. I am no marketing shill, however. I just appreciate features that make the work I do easier. I am as critical about some of the features as anyone else, and I often get very worked up by their support processes. When IBM entered the Open Systems world in 1990, they were regarded as the Big Enemy by many UNIX people, myself included, but I think that they did actually prove themselves.
Recently, whenever I have had to do work on HP/UX and Solaris, I have found that it is less easy than AIX, but that could be because I am more familiar with AIX. I definitely feel that Solaris and HP/UX feel more like traditional UNIX systems than AIX when it comes to management. Maybe a good thing, maybe not.
I don't know what you think I don't know? I am perfectly prepared to admit that I do not know everything, and I can be wrong. But what have I said that was wrong? The ZFS paper is quite clear, as are its conclusions, which I referred to. I know GPFS well enough to know that what I said was correct. I have compiled kernels enough to know what is involved there.
In relation to what I said regarding Fear, Uncertainty and Doubt (yes, I know that without looking it up in Wikipedia, I have been using the term myself for about 20 years), the first word in my sentence was *IF*. I did not acknowledge that what Matt and Jesper say was FUD, although I definitely would categorize some of what you say as such. In fact, I think I agree with Jesper on almost everything he says on these comments. Very detailed analysis, and worth reading.
I apologise for resorting to ad-hominem arguments. It's always a poor tactic, but sometimes what you say is not thought through, or maybe seen through a filter. I definitely know that what I say is often coloured by my experience, so maybe I should accept that it will be for everyone, but it may be worth you re-examining what you say sometimes. It comes across as very shrill.
You are like one of those kids in the playground who shout and insult everybody around, and who then claim that they are being bullied when one of the people on the receiving end is finally annoyed enough to put you in your place.
Damn, resorting to personal insults again, but you make it so easy!
If Jesper and Matt are spreading FUD, then they are doing it in a way that is less rabid than you. I would be surprised if you are not foaming at the fingertips when you type some of the things you do.
A case in point. How much re-writing do you think is necessary to increase the number of processors managed by an OS? According to you, "it had to be *rewritten* last year, because it could not handle 256".
Well. All that is really required is to change a couple of numbers in the kernel header files, and re-compile the kernel and any tools that reference those headers. In fact, the support was included in a PTF fix for AIX 6.1, not even a new level. Definitely not a re-write, more like a tweak.
As with other shortcomings, I guess that you have never worked on the source of a UNIX at kernel level, and I would also hazard that you never had to play with sysgen-ing an older SunOS release. Honestly, the more you say, the less relevant what you say becomes.
When it comes to new OS features, what do you think Oracle are adding to Solaris 11? Both DTrace and ZFS are old news. How often can you consider them new, given that both have been around in previous versions of Solaris? And neither of them is really an extension to the OS. They are what IBM would call 'layered products'.
Unlike Linux and Windows, the remaining UNIXes, and especially AIX IMHO have a mature set of APIs, RAS features and other management processes. I will concede that lack of change may indicate stagnation, but excess change may also indicate immaturity and feature bloat driven by marketing hype. There is middle ground. What new features would you like to see in a UNIX?
On the filesystem front, ZFS moves the disk hashing up into the filesystem layer, and produces protection at the file or other disk object level. Reed-Solomon encoding of data at the filesystem block level effectively does the same in the GPFS de-clustered raid system. Big deal. And apart from Sun themselves, not everybody believes ZFS is safe. See this paper www.usenix.org/events/fast10/tech/full_papers/zhang.pdf that was presented at Usenix, which concludes that ZFS may be more tolerant of disk errors, but is not invulnerable to data corruption.
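For readers who haven't met the idea, "moving the hashing up into the filesystem layer" just means storing a checksum alongside each block and verifying it on every read, so corruption anywhere in the stack below gets detected. A toy sketch of the principle (nothing like ZFS's or GPFS's real on-disk formats):

```python
# Toy end-to-end checksumming: store a hash with each block at write
# time, verify it on read. This detects corruption introduced anywhere
# below the filesystem layer (disk, controller, cabling...).
import hashlib

def write_block(store, addr, data):
    store[addr] = (data, hashlib.sha256(data).hexdigest())

def read_block(store, addr):
    data, checksum = store[addr]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError(f"checksum mismatch in block {addr}")
    return data

disk = {}
write_block(disk, 0, b"important data")
print(read_block(disk, 0))  # round-trips cleanly

# Simulate silent corruption under the filesystem's feet:
disk[0] = (b"imp0rtant data", disk[0][1])
try:
    read_block(disk, 0)
except IOError as e:
    print("detected:", e)
```

Note that this only *detects* corruption; repairing it needs redundancy as well, which is where the RAID-style encoding in both products comes in.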
There is a fundamental design difference between the T series Sparc processors and the Power series of processors. One that is closing from both ends, and again they are converging on the middle ground. The announcement of (what was it, heavy thread?) just shows that the Sparc design is being changed. One of the real problems with the T1 and T2 processors is that they were committed to the lightweight thread model, which made them excellent for many small processes, but very poor for smaller numbers of large processes. Why is this change an innovation, while IBM putting in a larger number of slower cores is a realisation of a deficiency? I believe that Matt described this far better than I can, elsewhere in this thread.
I think I agree with Jesper's analysis of Larry's announcement claims. They look good, but do not actually stand up to any real scrutiny, as they claim things that other vendors do not bother to benchmark, or cannot benchmark with the same levels of code. It's like you saying that you are the fastest person on earth at running from your front door to your living room, but never allowing anybody else into your house to try and beat your time. Surely you can see this?
So, please calm down, and actually read stuff that comes from people other than Oracle's marketing team.
If AIX ran on Intel, then you would probably have a point, but the market for POWER is "AIX on POWER" and "IBM i on POWER". It's the whole package.
As far as I am aware, IBM is making no effort to (re-)port AIX to an Intel architecture and as far as I am aware will never consider i on that platform (I say re-port, because AIX 5.1L was available on Itanium, although nobody was interested in buying it). Earlier versions of AIX were also available for i386 and i486, but only on IBM PS/2 systems (in the same way that SunOS 4 was available on the Sun 386i system in the late '80s), but that was actually a different code tree (AIX 6 and 7 go back to AIX on the 6150 PC/RT platform, whereas AIX PS/2 came from the IX and AIX/370 mainframe port, originally done by Locus). There were big differences, and the only people who made any attempt to treat them as the same OS were the IBM marketing people.
I'm sure that a port could be done (it's almost all C anyway) but as you've pointed out, if POWER were to be dropped as a platform, I'm sure that IBM would produce an enhanced version of Linux with some form of AIX compatibility/migration tools to try to keep their customer base rather than port AIX. But that is not on the cards at the moment.
Strangely, out-and-out performance is not the primary motivation for large AIX customers to keep buying it. AIX itself and the RAS of POWER systems are, along with the associated support skills that the customers have invested in. If you are involved in a move from VMS and Solaris to Linux, I'm sure you are aware of this. The move from VMS is obvious (where else are you going to go), but I wonder whether the move from Solaris is because of lack-lustre statements of intent from Sun/Oracle with regard to Sparc and Solaris. If that is the case, then this latest release from Oracle is too little too late, at least for you, and I sympathize.
and in my view, it's 6 of one and half-a-dozen of the other. Both are still innovating, but neither are doing as much as they used to.
I believe that GPFS is going places faster than ZFS (the actual rate-of-change is staggering at the moment, with de-clustered RAID and its deployment in SONAS devices), but I agree that DTrace was a real innovation. And each vendor has copied part of their virtualisation technologies from the other. IBM is tending to concentrate on technologies that are layered on top of AIX and other OSs, rather than extending just AIX. Whether this means that AIX is stagnating is a moot point, but if the OS is mature and does what is needed, why change?
As to the comment about wages, yes, I do not know you, except by the sometimes over-zealous comments you post here. I was just speculating (in a provocative way, I admit) why you are so vocal about shouting down any UNIX technology other than Oracle/Sun's and Intel's processors. I believe that most people would regard many of your comments as being overly negative.
I've said it before, and I am quite prepared to say it again. There is really no one UNIX vendor at the moment who is 'better' at all things. Each has their strengths and weaknesses. I am glad to see Oracle has not totally abandoned their user base, and hope that they will actually continue to put resource into keeping Solaris relevant, in the same way that IBM is with AIX. I was worried when Oracle were so quiet about Solaris and Sparc after the takeover, and actually began to think that they would quietly relegate the technology to legacy status, but happily that appears to not be the case.
I am not an AIX bigot (at least I don't think I am - comments welcome), it is just the OS that I earn my living supporting, and the one that I know best. When I see something I believe is untrue, I will comment on it, and I will point out areas where I believe AIX has relevant technology where other OSs show deficiencies.
I don't go out of my way to try to put down other UNIXes, and I get annoyed at those that do. The remaining market is fragile enough as it is.
The zdnet article is dated 2003. 8 years on, AIX is still IBM's main Posix OS, and is unlikely to be completely replaced as long as paying customers want it and IBM makes a profit. And I believe that IBM is making quite healthy profits from their microelectronics division, which produces almost all of the Power(tm) processors that the world uses.
I don't doubt that the proprietary UNIX systems market will continue to shrink, and I don't think that any of the remaining players (IBM, Oracle and especially HP) are really interested in putting large amounts of money into further developing genetic UNIXes (although they are all pretty well developed as they are).
Linux still has a way to go (IMHO) before it can match HP/UX, Solaris and AIX in overall manageability. I keep expecting to see some major announcement from a large vendor about their Linux distro being as good as the proprietary UNIXes, but I have yet to see it. I am beginning to think that people like Red Hat and SuSE (Attachmate?) are all still in the small system mentality, because of the current in-vogue push for virtualised smaller OS instances running on big systems. Or maybe, the fact that Linux is an Open Source OS means that there just is not the money in it to make that final push into the critical systems market that UNIX currently occupies.
I'm sure I don't know. All I know is that I prefer UNIX (and Linux) to the Microsoft alternative, but maybe I'm just getting old.
BTW. I've often wondered whether there is a degree of jealousy in your comments. Generally, UNIX jobs are still better paid than Linux ones. Are you just wishing for proprietary UNIX to disappear to bring UNIX people down to the level of Linux wages? Just speculating.
Power 6 didn't do out-of-order execution, but Power 5 did, as does Power 7, so Alison has some justification about claiming it not being new.
IBM just had to learn what Intel did with Pentium 4, that high clock speeds and deep pipes are not the answer to overall throughput. That and power consumption issues resulted in Power 7 being a different processor than Power 6.
The reason why Power 6 did not do out-of-order execution was (as far as I am aware) a result of IBM pushing the clock-speed.
I am glad to see that there is still someone other than IBM investing in non-Intel processors. The world will be much more boring if/when x86 becomes the only show in town.
It will be interesting to see how independent comprehensive benchmarks show these systems vs. Power 7 and Power 7+ and the current crop of HP systems, not just the cherry-picked "World Record" results that Oracle put in the announcement. Not that Oracle are doing anything different from all the other hardware manufacturers in their marketing spiel.
Who down-voted me? I thought this was quite innocuous!
I will admit it would have been better if it was a Sinclair Z88, but it was close enough.
I spent £6 on an Amstrad NC100 recently, just as a curio. It was about as cheap as the horse brasses my wife bought at the same car boot, and I believe a better buy (and it still works, surprisingly!)
Sometimes it's worth squandering a little money just to own a piece of interesting history.
Yeh, yeh. Very droll.
I was trying to exclude the daft things students do at University.
But seriously, Cobol is quite definitely a commercial language, and is not at all suited to scientific work. It's missing lots and lots of things you take for granted in any more suitable language. There is only one language (apart from the out-and-out weird ones for specific purposes) that I can think of that is less suited, and that is RPG!
On the subject of 6502 assembler, I'm sure if you looked hard you might still find a BBC Micro or two buried in the depths of some lab somewhere. BBC BASIC was written in 6502 assembler originally, and people did lots of interesting things in that, so 6502 assembler by proxy.
that relativity predicted that you could not travel at the speed of light, because it would imply infinite mass and thus infinite energy.
But if there were a discontinuous way of jumping over the speed of light without actually accelerating through it, I believe that the equations could still hold, although I suspect that it would require a completely new branch of physics to explain the discontinuous speed jump in the first place, and also some strange concepts like negative mass.
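The "infinite mass and thus infinite energy" claim is just the Lorentz factor diverging: relativistic energy scales with gamma = 1/sqrt(1 - (v/c)^2), which blows up as v approaches c. A quick numerical sketch:

```python
# The Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2) diverges as v -> c,
# which is why accelerating *through* the speed of light would need
# unbounded energy (relativistic energy E = gamma * m * c^2).
import math

def lorentz_gamma(v_over_c):
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {v}c  ->  gamma = {lorentz_gamma(v):.2f}")
```

Nothing in that formula forbids a regime where v > c outright (the expression merely goes imaginary), which is the loophole the "jump over it" speculation pokes at.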
I'm expecting serious physicists to rip this suggestion to shreds (I got no further than Principal Physics in my General Science degree thirty years ago - equivalent to the 2nd year of a normal Physics degree), and I'm expecting to be thumbed down, but it will be interesting to see what is said!
because you are the only person who knows about it!
I suspect that you mean COBOL, and if there is any modern (post-millennium) serious scientific application (I will not accept financial software as falling into this category, even if it is for scientific establishments) written in COBOL, I'll eat my copy of K&R.
What it says is that 55% of Android users will definitely switch to another Android device. That does not automatically mean that 45% will definitely move from Android! There are no figures for "maybe" or "don't know", or even "I'll see what's out there when I'm ready".
It also looks like the 55% is Android customers who will stay with Android but definitely switch vendor. That may not include Android users who actually decide to stay with their vendor. Including that figure may change the overall picture for Android.
When it comes to generic OSs, brand loyalty is not so significant. Most knowledgeable people, assuming that Android is much the same from vendor to vendor, will compare battery life, function, or reviews. With customers locked in to Apple and RIM, the only way they can maintain their user experience is to stick with the brand.
I probably will not stick with Samsung, but I will definitely be getting an Android phone, unless, that is, a WebOS device comes my way at a knock-down price.
But this is all surveys and statistics anyway, and you know what they say about those....
If it comes to court (in the US), and Apple offers to license the patents for a reasonable (or even a generous) amount, and pay the damages, I think that VIA would have to accept that as settlement.
True, they could get an injunction and try to keep it going for as long as possible, but the US courts are unlikely to allow any injunction to persist if what the court deems a reasonable offer to settle has been made, and there are rules about how you value what such a settlement can be.
I'm all for Apple being hoisted on their own petard, but I don't think a single case like this is likely to change their behaviour.
It depends on where VIA shares are listed, and whether the company maintains control over a majority of their own shares.
If they were listed on NASDAQ (I've just checked, and they are actually on the Taiwan stock exchange) and did not have a majority shareholding in themselves, then it would be possible, but if a majority of the shares are not being traded, then there is no way that Apple can force the VIA board to sell themselves in a hostile takeover.
And I don't think that a US court could force the winner to sell themselves as part of a settlement (this would be completely stupid), and if the shares are listed outside of the US, then the only pressure Apple could put on VIA is commercial and other patent lawsuits.
VIA appears to be part of the Formosa Plastics Group, so they may be difficult to challenge anyway.
Apple has been playing a very dangerous game, and I think they are about to find this out.
while I was working for an AT&T and Phillips joint venture that was selling fibre-optic kit to BT.
But consider the radio-plays. Each time a record that is still in copyright gets airplay, it earns a play-fee for the copyright owner, and probably also the artist (this depends on their contract with their record label).
A small proportion of the PRS licence paid by shops, DJs, and other organisations that play music in public places is also distributed to all artists who still have copyright on their works. There will be residual fees if it is included on a compilation, and for use on adverts. There are also ring-tones, re-mixes and samples on modern records, and if I thought hard about it, I could probably think of other uses of recorded music that may generate revenue (ahh, another one - games, although probably not Max Bygraves. And another, YouTube.)
I'm sure that most artists will not get a lot from this, but there will be some revenue, and something is better than nothing!
I believe that sheet music actually produced however many years ago is copyright-free, but if a new edition of an old work, newly typeset with "significant changes", is published, then this has a copyright of its own.
I quote from the copyright section of the CPDL website, which is a site for choral music in the public-domain, for whom adherence to copyright law is essential. I assume they have done their due-diligence.
"Can modern editions of public-domain music be copyrighted?
In short, the answer is yes. However, generally there has to be significant articstic/editorial content to make an edition copyrightable. There are a spectrum of editions. On one end are editions which are not copyrightable: these include old editions with expired copyright as well as republications of public domain editions which use the original engravings. Editions which are based on public domain music and add no other editorial content probably are not copyrightable. Further along on the spectrum are editions which include editorial explanations, piano reductions, translations and other additions. These aspects are copyrightable; however, if you perform an edition without using these additions, it might be difficult to prove that you have violated copyright law. Nevertheless, you certainly could be sued, and the resulting cost would be great, whether you lost or not. Further along are full-blown arrangements based on public domain works. These are fully copyrightable and can not be copied unless permission is granted by the copyright holder. The problem for the choral director is that most editons of older music fall somewhere in between being uncopyrightable and being fully copywritable. Add in the problem that almost all music today has a copyright notice (whether that notice is valid or not) and it becomes easiest to assume all editied music is copywritten." (sic - this was a cut-and-paste from their web page, I must point out the typos)
If what you have is a recently published exact facsimile copy of a score that was originally published over 70 years ago, then you could be correct, but the music publishers are wise to this, and only have to put an explanatory note, incidental 'clarification' or even 'corrections' onto the copy for it to be covered by a new copyright.
I'm sure that there are national differences in copyright law, but this is what I work to.
What are you referring to in your 99% being copyright free?
Certainly not recorded music. Records became really affordable and common in the 1950s, with, I guess, the golden years for the recording industry and artists being the '60s and '70s. Almost all of that music is certainly within copyright.
If you are referring to sheet music, then the rules are different, but still most published sheet music will still be in copyright. And it is not only photocopying sheet music that is not allowed. If you take a piece of sheet music (called an imprint), and transcribe it by hand into Sibelius or Lilypond, then this is against copyright law.
You can only transcribe from an out-of-copyright source for it to be legal.
Even for music that was written hundreds of years ago, you are not allowed to transcribe it from a recently published copy, as it is the imprint, not the music that is copyrighted. This is one of the reasons why music publishers keep putting "Revised in XXXX" on the bottom of their music. Just changing the date counts as a revision, and thus renews the imprint. I often think that the errors you find on sheet music are deliberate, so that the publishers can track down exactly what imprint was transcribed onto the 'net.
This is the one of the reasons that the Harry Fox organisation were able to take down so many of the guitar Tab sites, as they claimed that they were transcribed from copies in copyright.
On a different note, even if someone writes and records their own music, they have automatic copyright ownership, even if they choose not to assert it. Copyright is automatic in most western legal systems.
Mind you, I wonder what happens with the records that were already deemed to have fallen out of copyright, which will now be covered (anything released from 1941 through 1961). Some of this will already have been published as copyright-free. How do you put the genie back in the bottle?
that most of the time, people use the WiFi on a smartphone in bursts. What I think you could do is to have a delayed power down, so that all the time there is a stream of packets, the radio would remain on, but as soon as there is a gap in the flow of inbound packets destined for that device for more than the delay, it would power down the radio. Generally speaking, people will not be running server-type services on a smartphone (OK, I know that there are exceptions, like uPNP and DLNA), but most things will be initiated from the smartphone.
I suspect that for TCP-type services, you could deliberately drop or delay acknowledging the first packet to force a retransmit while the radio powers up. That would not work for UDP, ICMP or lower-level protocols, but UDP services normally have some mechanism for handling lost packets, and I doubt that many people use raw lower-layer services on a smartphone.
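The delayed power-down idea described above can be sketched as a toy state machine. Everything here (the class name, the timeout value, the API) is invented for illustration, and is not taken from any real WiFi driver.

```python
class RadioPowerManager:
    """Toy model of delayed radio power-down: the radio stays on while
    packets keep arriving, and powers down once the gap since the last
    packet exceeds an idle timeout. All names are hypothetical."""

    def __init__(self, idle_timeout=0.5):
        self.idle_timeout = idle_timeout  # seconds of silence before power-down
        self.last_packet = None
        self.powered_on = False

    def on_packet(self, now):
        # Any packet destined for (or sent by) the device keeps the radio awake.
        self.last_packet = now
        self.powered_on = True

    def tick(self, now):
        # Called periodically; powers the radio down after a quiet gap.
        if (self.powered_on and self.last_packet is not None
                and now - self.last_packet > self.idle_timeout):
            self.powered_on = False
        return self.powered_on

pm = RadioPowerManager(idle_timeout=0.5)
pm.on_packet(0.0)
print(pm.tick(0.3))  # True: still within the burst, radio stays on
print(pm.tick(1.0))  # False: gap exceeded the timeout, radio powers down
```

A real implementation would live in the driver or firmware, but the logic is no more complicated than this: reset a timer on traffic, sleep when it expires.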
I do this with Windows and KDE, but not with Gnome 2. I've done this for years, and also auto-hide it.
I won't disagree about choice, but choice is not what is needed to get non-technical users to use Linux, and lots of non-technical users are what is needed to get the application and content providers to take note of Linux as a viable desktop.
Making it so you have to install non-standard applications in order to make it usable is not going to get you the critical mass of users, and will keep Linux in the hobbyist and technical space with no hope of going mainstream.
Canonical appear to have bet the farm on the new interface, hoping that the non-technical user will see the bling and want it, but quite frankly, unless they get a manufacturer to make it a pre-installed alternative to Windows, users will never see it to want it, and Microsoft will never allow one of their large OEMs to also offer Linux without applying their anti-competitive practices.
I really thought that Ubuntu was the distro that might finally cross over into the mainstream.
I've now completely changed my mind, and I will be looking for a new distro.
What's changed my mind? Not the radical change in user experience, not the continual churn of new applications for commonly used things like listening to music or watching video, and not Canonical ignoring their loyal user-base but going for the 'new' (although all of these things are annoyances).
It's actually the way Canonical has split the established user-base into "I don't like it" and "I think it's the bee's knees" camps over Unity. What they've done is effectively alienate a considerable part of the people who (like myself) were strong advocates for Ubuntu and encouraged its use by users of other OSes. Unfortunately, the most valuable advocates are probably the people with the most experience of Linux and Ubuntu, and they are the most likely to be the ones upset.
I don't actually mind there being another UI. I don't mind them switching default apps. What I do mind is the "do it our way or not at all" approach of removing the old way of doing things. I feel it's almost as if they are deliberately making a statement of disinterest in some of their most loyal users.
I have recently been unpleasantly reminded about how unresponsive Canonical can be. I know that they have limited resources, and also rely on knowledgeable community members, but I don't like how fast things change in the normal release process, and how quickly problems are swept under the carpet. I keep to LTS releases, because making significant changes on a regular basis to my daily-use machine is not of interest to me. I had been using Hardy since about 6 months after its release, and I was suddenly informed that Google were stopping builds of Chromium for 8.04, because it had moved out of support.
They were right. As a desktop release, Hardy dropped out of support (on desktop systems) in about May this year.
Why was I still using Hardy? Well, in Lucid (10.04), Canonical imposed KMS (although to be fair, it was part of the kernel), and completely broke suspend and resume support for ATI Mobility graphics adapters even though it worked flawlessly in 8.04, broke Composite rendering support (for Compiz), and also crippled Xv performance for video playback. Despite several defects raised by users of Thinkpads and Dell laptops, the calls languished unresolved, and the last suggestions were to upgrade to 10.10, which is *NOT* an LTS release. I spent tens of hours trying to work out why all of these things were broken, before deciding that I could not afford the time to understand enough about KMS to be able to do anything useful, and went back to Hardy.
I've now (mostly) switched to Lucid, but have had to disable KMS (which is a blunt fix) to allow suspend and resume to work, and also turn off Advanced Desktop Effects (which I used to catch people's attention), and switch mplayer and Xine to use a raw X11 frame buffer for rendering video (I've not worked out how to do the same for GStreamer/Totem). If I can't get Composite rendering working, there is basically no chance that I will be able to use Unity on my Thinkpad, even if I wanted to.
So, I will keep the Hardy partition until I've checked that there are no other gotchas from Lucid, and will then look around at my options. Maybe I will use Xfce on Ubuntu, but it was nice, for a while, to be able to use a Linux distribution that just worked without too much fiddling.
Of course, when it comes to social engineering, UAC and a popup sudo are no different, and are both as easy as each other to subvert.
But most users, and I suspect you as well, have probably never used a Linux system where your ID is not only not root, but is also not in the administrator group. It's just not necessary for most personal systems, and not being able to run sudo, or not knowing a root password, makes it very, very difficult for an *ordinary user* to become root or touch system files.
But it's all about trust, as I said in a previous comment. If your trusted system is compromised, then this can propagate throughout a whole environment, even if Active Directory is involved. And Active Directory only protects a system while the group policy is available. Although I do not know, I strongly suspect that if you can get into a Windows system configured to use group policy using an OS weakness, like all systems, it will be possible to *TURN OFF* the requirement for the policy, making it just another Windows system with all of the inherent and widely publicised problems that Windows has.
I have also read that the group policy often just turns off the UI to various things. I have found out myself that it is sometimes possible to run the CLI utilities on a locked-down Windows system when the group policy prohibits the Windows utility. This makes the security no better than "security by obscurity".
I suspect by your comment of "nothing (and I mean *NOTHING* is more secure than a properly configured AD and correctly-configured clients" (sic) that you have not looked into SELinux or AIX with RBAC, both with Kerberos turned on, which both implement service and object based tokenised remote authentication which is very similar to the Active Directory support of Windows. In fact, Active Directory is really an extended LDAP directory service with Kerberos authentication (if configured) to access to the directory. LDAP and Kerberos were both originally implemented on UNIX.
AIX had a Kerberised command authentication system in the SP2 PSSP cluster control package, called sysctl, over 14 years ago, and UNIX systems that implemented DCE and AFS had similar features, well before Microsoft implemented Active Directory.
I often comment that the Owner-Group-World access model in UNIX-like OSs is one of their weaker features. But where this simple model scores is that it is easy to understand, and a well implemented simple security model can be much more secure than a poorly implemented complex model. You probably have never had the opportunity to try to break out of a well implemented Linux system where you are an ordinary user, but I assure you that it is possible to make a system perfectly usable while being very, very difficult to break into.
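As an illustration of just how easy the Owner-Group-World model is to reason about, here is a minimal sketch using Python's standard `stat` module that renders a numeric mode as the nine rwx characters `ls -l` shows; the function name is my own invention.

```python
import stat

def mode_to_rwx(mode):
    """Render the Owner-Group-World permission bits of a UNIX mode
    as the familiar nine-character string shown by `ls -l`."""
    bits = [
        (stat.S_IRUSR, 'r'), (stat.S_IWUSR, 'w'), (stat.S_IXUSR, 'x'),
        (stat.S_IRGRP, 'r'), (stat.S_IWGRP, 'w'), (stat.S_IXGRP, 'x'),
        (stat.S_IROTH, 'r'), (stat.S_IWOTH, 'w'), (stat.S_IXOTH, 'x'),
    ]
    return ''.join(ch if mode & bit else '-' for bit, ch in bits)

print(mode_to_rwx(0o644))  # rw-r--r--
print(mode_to_rwx(0o750))  # rwxr-x---
```

The entire access decision fits in nine bits per file, which is exactly why a well-administered system using this model is so hard to get wrong, compared with sprawling ACL hierarchies that few administrators fully understand.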
Most ways that UNIX-like systems are compromised involve the wet-ware that administers the system, and I think that is exactly what has happened at linux.org, and could just as easily happen to a Windows system, even with AD configured.
Firstly, my word, what a provocative tag you have!
Now, regarding "an eye-opener for *nix people"
The problem here is that even quite technical users can be short-sighted when it comes to security. I know any number of very technically able people who regard security as a barrier to work, and quite often do very dangerous things to "work around the imposition of anti-productive security measures".
All the time this mindset persists with people who should know better, we will have the potential for this type of problem.
As a widely used example, ssh is a wonderful tool in the right hands, but allow people who can't be bothered to read the manual, and who use passphrase-less keys and/or distribute a single private key across their entire estate of systems, and you have a disaster waiting to happen. And if some of these people have escalated privileges, or use the same key for their own ID as they do for root, then it is just a case of lighting the blue touchpaper and waiting for the inevitable explosion.
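By way of contrast, reading the manual gets you something like the following: one passphrase-protected key per destination, rather than a single shared key scattered across the estate. This is only a sketch of a ~/.ssh/config, and the host names and file names are invented.

```
# One key per destination, each generated WITH a passphrase, e.g.:
#   ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_build -N 'a-real-passphrase'
Host build-server
    IdentityFile ~/.ssh/id_ed25519_build
    IdentitiesOnly yes

Host backup-host
    IdentityFile ~/.ssh/id_ed25519_backup
    IdentitiesOnly yes
```

Combined with ssh-agent, this costs the user one passphrase prompt per session, and means a key lifted from one compromised box opens exactly one door instead of all of them.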
Also, ssh can be used to circumvent many other security systems in ways that range from the constructive to the malicious. This makes it a multi-edged sword that can make magic happen, or can rip carefully thought out security measures to shreds at precisely the same time. How do I know? Because I have used it extensively to do just that (I think constructively, but sysadmins of other systems where I am a mere ordinary user may think differently).
SSH can be abused on many OSs, including pretty much all UNIX and UNIX-like systems (and this includes BSD, for those of you who have been suggesting that as a more secure OS), and there is at least one port of an SSH server for Windows systems as well.
In reality, where you have a mechanism for one system to trust another using whatever means, there is scope for an intrusion on the trusted system to spread to the trusting system. And in the modern environment, where you need to manage hundreds or even thousands of systems from a central location, these trusts are essential. I believe that this is an axiom, and applicable to all OSs.
User training, partitioning of management domains, and insisting on adherence to properly thought-out security policies, especially amongst the sysadmins and power users, are the only ways to limit the damage of such a compromise.
Even if it is a barrier to productivity.
It's not clear that a RISC processor is better at all things than a CISC, even after having been around for 30 years or so.
That's why modern RISC designs like POWER, SPARC, and even the good old ARM processor have had complex instructions added to their ISAs (such as Thumb-2, VFP and NEON in ARM's case) as time goes by.
Increasingly, the difference between an augmented RISC processor and a CISC processor with some of its frequently used instructions engineered to run in a small number of clock cycles is becoming more and more difficult to see. The distinction now appears to revolve around electrical power rather than computing power.
But it's all irrelevant, really. On a personal computer, unless you do hard-core gaming or real-time media transcoding, you just don't need anything much faster than around a 2GHz processor with some graphic assist. We've just got so used to bloated OS and application code that we accept that ever-faster processors are required without questioning why we need them.
That's exactly it. The content producers demand different distribution rights on their content depending on the physical location of the consumer.
If you look at the American TV-on-demand sites you will find that they have negotiated the rights to the content *IN THE US ONLY*. This is normally because other companies have bought the rights for the same content in other countries.
As an example, let's assume that Universal Media Studios make another series of Heroes. They license commercial broadcast in the US to NBC and in the UK to Sky.
If someone in the UK can watch or purchase it from the NBC on-demand service, they might not take out a Sky subscription, causing lost revenue to Sky.
So a condition of the licence that Sky enters into with UMS is that US distributors must restrict online access to people in the US only, and if they don't, you end up with severe lawsuits between all of the companies involved.
The only way that will change is if production and distribution companies take a whole-world view, which is likely to harm choice by making large regional minorities too small to be considered in a whole-world market. There is no perfect solution.
We as consumers must realise that production and distribution companies are commercial enterprises, whose very existence is conditioned on their need to get as much money out of their customers as possible.
I look back with some fondness on the days before the growth-is-essential mantra, when it was enough for a company to ensure its existence, make reasonable but not excessive profits, and provide good employment to its workers and good service to its customers. Maybe my glasses have just taken on a rose-tint.
There aren't half some numpties in the legal systems. If they apply this rule to 'phones, then most smartphone designs will be blocked in Germany.
Can we have two names registered against a mail address?
I am open enough to post many of my comments under my real name (unlike many of you), but I frequently use the anonymous option, normally if I am posting things that may upset my employer, wife, children, the police etc (OK maybe not the wife, she is a technophobe, and does not read the Register, and the police could get a court order if what I have said was against the law).
But I appreciate being able to use an icon with my anonymous posts.
What I may have to do is register a second account with an unrelated name to my real one. If I were allowed to have an alternative "alias" for my account, and be able to select it like I do Anonymous Coward as an alternative, I think that would be really useful.
Now, what has not been used yet, but would be suitably humorous?
I suspect that it is much cheaper to file a lawsuit and get a temporary injunction than it is to get one lifted. And if filing it delays a competing product from being launched, it means that you have a longer time to attempt to dominate a market, and reap as much profit as possible.
I can see a scenario where Apple hires newly graduated lawyers on the cheap, and says to them: "Here are the arguments; take them and stall in court. It doesn't matter if you don't win, just drag it out as long as possible."
Mind you, I suspect that the Japanese might be prepared to back an oriental company over an American one, especially for technology products where Japan excels, so Apple could have their nose bloodied in court over this one.
mainly because finding CDs (and perish the thought, MP3s) of some of my older vinyl is almost impossible.
I leave all of the compression and tone-altering filters out, and only turn on the digital scratch filters if the amount of noise is very bad.
The CDs I produce like this sound very good (to my ears), even using the commodity A-D converters on generic mobos. Even though these cannot do the highest dynamic range, I suspect that my turntable and cartridge combination (good budget equipment - Pro-Ject Debut II with Ortofon OM-5e) is probably more of a limit on the dynamic range than the sound chip in the computer.
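In essence, a scratch filter works by spotting discontinuities that are too abrupt to be music. Here is a deliberately naive sketch of that principle; the function name, signal and threshold are all made up, and real declick filters (such as those in Audacity or GNOME Wave Cleaner) are far more sophisticated.

```python
def find_clicks(samples, threshold=0.5):
    """Naive click/scratch detector: flag sample indices where the
    jump from one sample to the next exceeds a threshold. Treats
    sudden discontinuities, not tonal content, as the defect."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

# A smooth ramp with one vinyl-style 'pop' injected at index 5.
signal = [0.0, 0.1, 0.2, 0.3, 0.4, 1.0, 0.5, 0.6]
print(find_clicks(signal))  # -> [5]
```

A repair pass would then interpolate across the flagged samples rather than filter the whole signal, which is why declicking, unlike compression, leaves the tonal content untouched.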
I applaud your sentiments and appreciate your actions, but unless and until governments in all countries actually employ people who understand technology and their own patent system, all politicians from any administration will be taking advice from interested parties.
These interested parties are often the people most likely to gain from a strong and all-encompassing patent system, and have deep pockets so can 'voluntarily contribute' to the process; they will not give unbiased advice. This is especially true in the US, where, to me as an outsider, it often looks as if the government (of all parties) is actually run by big business.
Some of the statements made by the current US administration and echoed by the Europeans sound good, with words like 'reduce administrative costs' and 'reuse patent searches during applications in multiple jurisdictions', but when you look into them, there is no suggestion that pre-grant verification will be any stronger or conducted with any greater rigour; they merely make the application process easier, leading to still more stupid, unenforceable patents going on the books.
The patent system was designed to protect small inventors. The way it has been corrupted means that it now does exactly the opposite.
This is a comment I made on a previous article a year ago about General Motors and Tesla, so some of the content may be out-of-context, but it shows some problems with replaceable battery packs.
Someone has to pick up the cost of the loss of capacity after a pack has been recharged a hundred or so times. Leasing makes more sense than owning, as nobody will complain about swapping one that is new for one that is near its end-of-life if they lease it.
You would still have some uncertainty about range, and you would probably have to have some rules about when a battery pack would be retired or reconditioned. Would you make it 90% of original charge capacity, 80%, 50%?
I'm all for this technology, but there are serious wrinkles that need sorting out, not the least of which is the cleanness of the electricity. Also, could the power grid cope with thousands of battery packs drawing tens of amps at the same time? For example, if a battery charging station has 50 packs charging at any time, each drawing 30A, we're talking 1,500 amps, or at 230V, 345kW per station. That's a lot of power. A typical UK house draws about 0.4kW averaged out across the year (according to EDF), so the charging station would put the same load on the grid as 800+ houses.
These figures are rough, based on the Tesla's battery pack, which apparently takes 3.5 hours to charge at 70A at 240V (thanks, Wikipedia), mapped onto something that is more likely to be found in the UK urban environment.
How many petrol stations serve as few as 150 customers in a day (assuming packs take 8 hours at 30A to charge)? And you would have to be pretty certain that the packs could not be nicked for their scrap value. And how large would the station have to be?
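For anyone who wants to check my back-of-envelope numbers, the arithmetic is only a few lines; every input is one of my own assumptions above, not measured data.

```python
# Sanity check of the charging-station figures; all inputs are the
# comment's assumptions, not measurements.
packs         = 50    # packs charging at once
amps_per_pack = 30    # charge current per pack, A
volts         = 230   # UK mains voltage, V
house_avg_kw  = 0.4   # EDF's quoted average household load, kW

total_amps = packs * amps_per_pack       # total current drawn
station_kw = total_amps * volts / 1000   # station load in kW
houses     = station_kw / house_avg_kw   # equivalent number of houses

print(total_amps, station_kw, round(houses))  # 1500 345.0 862
```

So one modest swap station really does load the grid like a village, before you even consider several stations per town.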
So, interesting ideas, but currently, fossil fuels still rule, as indicated by the icon.