OK, the equipment may have been upgraded, but the premises which contain the exchanges, and a vast amount of the last-mile copper, all the telegraph poles, and the wayleaves agreements, and also many of the streetside connection boxes were in the original deal.
Not only has BT been able to use all of this without having to pay anyone, they have actually been able to achieve a capital gain on buildings which used to be exchanges, but have actually had the service consolidated into other nearby exchanges (think how much more compact digital exchanges are compared to the Strouger exchanges that have been shut down), and sold on. I was also told some years ago by someone in the telecommunications industry, that BT actually made money from ripping out the copper based long distance network, scrapping the copper, and replacing it with fibre.
If you were a new player, how much would it cost to put the last mile infrastructure in and buy the buildings for the local exchanges (especially in cities). We had an approximation when the cable infrastructure was put in 20 years ago (by the way, it was not Virgin, it was the small companies that merged to become NTL and Telewest that did the installation, and this is not Virgin Media by another name), and it was expensive then, and effectively bankrupted them. Think how much it would be now! (that's why they are experimenting with fibre through the sewers).
is now called ITIL, and is mandated on UK Government projects.
that too many of the readers will have difficulty working out the hymns that you so expertly paraphrase!
I do think that security audits can also be a bit of a trial. Can we include them in the hell reserved for ISO 9000 auditors!
I think that...
ID cards for non-EU nationals are still on the cards (pun intended!)
I'm *NOT* defending LimeWire. If you have read any of my other posts mentioning copyright infringement, you will spot that I say frequently that I buy my media, even downloads, and only in very exceptional circumstances (such as the material not being available on any download service or purchasable in shops or online) would I even think of downloading it. I am not a registered LimeWire or MP3 or torrent search site user, and I resent your implication that I am.
But what I am actually saying is that if you take the extract quoted in the article at face value (not qualified by the caveats in the actual ruling), then it could be used to prevent a service from being available even if it has acceptable uses, merely because it *could* be used for unacceptable purposes. The qualifications in the ruling clarify this, but the article did not.
As I said in my reply, I commented on the article, which did not have the qualifications of the actual ruling. But I was trying to make a point about overly-wide judgements that could be used to block legitimate uses of the Internet.
Many of the people who comment about so-called freetards are shortsighted enough to want a protocol blacklisted or blocked merely because it *can* be used for copyright infringement. The current favorite is BitTorrent. BitTorrent as a protocol is content-neutral, but it can be, and is being, used to distribute copyrighted material. I've heard many people state that the protocol should be banned, even when it can easily be demonstrated that one of its original uses (Linux distribution) is still current.
If you are able to successfully make the case that BitTorrent as a whole should be blocked, then by implication, so should any other mechanism that *could* be used to distribute copyrighted material against the wishes of the copyright holder. Because BitTorrent, Kazaa, eDonkey, eMule and any number of other protocols grew up as unregistered protocols, taking this to its ultimate conclusion requires you to block all but an approved set of protocols on the Internet, so that no new protocol could ever be developed. So, no FTP or SSH on the Internet, and no innovation with new protocols.
By analogy: you do not prevent someone from sending photocopies of books through the post by shutting the postal service down. Content-neutral protocols on the 'net should be treated the same way.
From your tone, I take it that you approve of whatever the courts and primary legislation turn out, regardless of whether it is just or moral, or if it infringes on personal freedoms. I suggest that you look at the wider picture, and hope that we never get to the world you appear to want.
OK, I neglected to read a link from the original article. That's a fair criticism of my comment, but I stand by my dangerous precedent title, especially when used in contributory copyright infringement. I just chose a bad example. BTW, as far as I can see (I've not actually read the complete Grokster ruling, just the LimeWire one), I would say that Andrew Orlowski's comment on FTP client software (note he quotes client rather than server) is his interpretation, as I cannot see a reference to it in any quote or summary of the judgments.
Having actually now read the judgment for this case, which includes summaries of the Grokster ruling, it is clear that the article has cherry-picked very selective statements, one of which I picked up on. But this is a TINY part of the ruling, and one where it actually appears to be in dispute whether the SONY-BETAMAX judgment is applicable to P2P services with regard to alternative legitimate uses. There appear to be a lot of 'mostly' qualifications that are woolly and open to interpretation.
The problem is that the text of the ruling is detailed and long, and it has been summarized to death, leading to important qualifications being missed out.
So, I'm sorry I did not do due diligence, but that is a danger of using news sites and commenting when you don't have the time to read all the referenced material.
The Lawyers will have to look at the wording of this judgment carefully. The way I quickly read it, any technology that facilitates location and downloading of copyrighted material could fall foul of the way it was worded.
If the critical part was the filtering, then any service that allows a partially wildcarded search including words such as mp3, avi, mpg may be covered by this.
I'm thinking search engines in particular, but even things as mundane as an FTP server could be seen to be covered by "providing direct infringers with a product that enables infringement".
You would have to ask whether ARPA and UC Berkeley are also liable for laying the groundwork for the Internet as a whole!
Generally speaking, phones do not really have separate backing storage (a filesystem, to most people); the objects that you drop onto the phone reside in the same memory as running programs. This makes the way that apps use memory different from, say, a PC, which has RAM for programs that are running and a disk for those that aren't.
On a PC, programs are copied into RAM from the disk, and then executed. When they stop, they are unloaded from RAM, freeing up the memory.
On a Phone, when you have a program (or app) loaded into the phone, it resides in the same memory space as it does when it is running. This means that when it runs, it runs in-situ, and does not need to be copied anywhere. The effect of this is that *EVERY* app that you load on your phone occupies memory, regardless of whether it is using CPU or not. (for evidence, see what memory is detailed in the specs of, say, an HTC Desire which lists ROM (which cannot be changed) and RAM, but does not differentiate between dynamic and flash RAM, because it's all the same).
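The contrast can be sketched with a toy accounting model. This is purely illustrative (the app names and sizes are made up), but it captures why every installed app costs memory on a Palm-style phone:

```python
# Toy model contrasting the two memory styles described above.
# App names and sizes (in MB) are invented for illustration.

def pc_style_usage(installed, running):
    """On a PC, only the running programs occupy RAM; the rest live on disk."""
    return sum(size for name, size in installed.items() if name in running)

def phone_style_usage(installed, running):
    """On a Palm-style phone, every installed app sits in the shared memory
    pool whether it is currently executing or not, so 'running' is ignored."""
    return sum(installed.values())

apps = {"mail": 4, "browser": 12, "game": 20}

print(pc_style_usage(apps, {"mail"}))      # 4  -- only mail is loaded into RAM
print(phone_style_usage(apps, {"mail"}))   # 36 -- everything stays resident
```

The phone-style figure never drops when an app stops running, which is exactly why loading more apps eats into the same pool the running programs use.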
Of course, an active app may also have a working data set for the information that it is using, and this could be dropped if an app becomes inactive, but I don't think that this is the way that phones work. At least, my Palm Treo doesn't (although its memory model could be considered archaic): it keeps all the data even if I have not used an app for months, and even through a soft reset.
You may have a file-system-like interface to the available memory, for convenience when using the phone as a storage device on a computer via USB, but in reality, this is just a presentation layer over the way that the device actually manages its memory. The only memory that is really a filesystem is probably the SD card that is plugged into the memory card slot.
BTW, this is just me extrapolating what I know from Palm devices, and it may be that things are different on the iPhone OS and Android, but I do not believe this is the case. I'm prepared to be corrected by someone with internal knowledge of either OS, however. I look forward to hearing.
I can hear quite well...
...but I have been startled by a hybrid running on electric as I looked round while crossing a road. It's uncanny how quiet they can actually be. And this was in a quiet rural town with little background noise apart from the birds!
And on the subject of noise, it is in the interests of the car manufacturer to reduce wind and tyre noise, as these are both energy drains on the vehicle.
Many vehicles are quiet below 20 MPH, which is the speed range in which most accidents of this type happen. Hybrid/electric vehicles at this speed can be nearly silent. Go and find a Prius in town (I'm not suggesting a G-Wiz, as these are mainly found in London, and I am not capital-centric), and listen to it when it is running on electric.
Something sensible from Europe. How unusual!
Comments have missed the point
Microsoft are suggesting that they can apply cloud techniques to HPC problems, which makes it new and different.
It also illustrates that they have absolutely no idea about how large HPC shops work.
Not only is the number of processors important (which cloud can address), but I/O performance and homogeneous data access is also required for most large problems. This requires significant I/O power, and localization of the data to the processors. Cloud at this time cannot provide this, unless the dataset can be compartmentalized and shipped around the cloud. Using traditional decomposition techniques also requires processors dealing with their part of the problem to exchange data very rapidly (talking about microsecond latencies here) with the processors handling the adjacent cells, also a problem if the compute service is geographically distributed.
About the only way you can achieve this using cloud techniques is the way that SETI and the other collaborative projects work by breaking the work down into discrete chunks, which is not suited to large problems like climate, nuclear blast or materials modeling.
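The data dependency that makes such problems hard to chunk can be sketched in a few lines. This toy 1-D diffusion step is illustrative only (real HPC codes would use MPI and far bigger grids), but it shows why neighbouring "nodes" must exchange edge values every single timestep:

```python
# Toy 1-D domain decomposition: each "node" owns a slice of the grid, but
# every timestep needs the neighbouring node's edge cells (the halo).
# SETI-style work chunking has no such per-step dependency.

def step(cells, left_halo, right_halo):
    """One averaging (diffusion-like) step over a node's local cells,
    using halo values received from the neighbouring nodes."""
    padded = [left_halo] + cells + [right_halo]
    return [(padded[i - 1] + padded[i + 1]) / 2
            for i in range(1, len(padded) - 1)]

grid = [0.0, 0.0, 100.0, 100.0, 0.0, 0.0]
node_a, node_b = grid[:3], grid[3:]   # split across two compute nodes

# Each step, the nodes must swap their edge cells before computing --
# this is the microsecond-latency traffic that a geographically
# distributed cloud cannot deliver.
for _ in range(3):
    a_edge, b_edge = node_a[-1], node_b[0]
    node_a = step(node_a, 0.0, b_edge)   # 0.0 = fixed outer boundary
    node_b = step(node_b, a_edge, 0.0)

print(node_a, node_b)   # each node's cells after three halo exchanges
```

Skip the exchange on even one step and the two halves compute with stale boundary data, which is why the inter-node latency, not just the processor count, limits this class of problem.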
But in theory, there is no reason why Windows can't handle this in the future with the correct tuning, but you have to ask, why would anyone bother? Unless you're Microsoft, and can't bear there being a market that you are unable to dominate!
Hindsight is a wonderful thing
but having lived through it (albeit at a young age), I can say that what the Beatles did in the 60s was ground-breaking. It may not appear unique or novel now, but it was then. It is a matter of perspective. I wonder how old the people making the 'overrated' comments are.
I do not like the Beatles/Stones/Beach Boys who's-the-best arguments, because all of them have merit, and there are many more contemporary bands that produced worthy music. But the Beatles were THE wedge in the door that made the record companies look for other talent, in the same way that Johnny Cash, Elvis Presley and Buddy Holly were in their time.
Sometimes the music was pretentious, sometimes twee, sometimes just plain weird (I still can't get my head around Revolution #9 on the White Album), but it was often the first of its type to market. They had the power and influence to put on vinyl what other artists could not, getting audiences to listen to new types of music. And it was popular then, and still is now.
..because the rights include the sheet music, the right to perform and the recordings.
I *think* that it was the sheet music and the right to perform the songs published on Northern Songs that was sold (and if I remember my history, it was NOT any of the Beatles who made the decision to sell, it was Allen Klein and the board of Apple Corp. who were in control. The members of the Beatles had a 50% share in Apple Corp, but crucially the casting vote in any split decision went to the Chair of the Board.)
I believe that Sir Paul has since bought at least some of these rights back, even though the Beatles had always retained the right to perform the songs themselves.
The ownership of the recordings always remained with EMI, and was one of the cornerstones of their profitability. It is interesting to note the rash of newly remastered copies and compilations that appear to be hitting the shelves in the run-up to the copyright on the recordings expiring. Is this one of the things that is bringing EMI down?
And Rik. Why the obfuscation about their names. What's wrong with Paul McCartney, John Lennon, George Harrison, and Richard Starkey?
Lawyers do not make laws - thank god.
That is left to governments (which unfortunately may have non-practising lawyers in them), policed by the police, and ruled on by the judiciary.
The lawyers are just skilled combative debaters, with a penchant for the law.
Here I am!
Long filenames on UNIX appeared in the Berkeley Fast File System in BSD 4.2 around 1983. In AT&T releases up to SVR3, you still had the original limit of 14 characters overall, including dots or any other characters (UNIX does not have, and never had, the concept of a three-character extension).
Around 1987, when SVR4 (and soon after, OSF/1) appeared, pretty much all UNIX vendors either had, or had plans to drop the original Version 7 derived version of UFS for one based on BSD FFS.
So yes. UNIX had it before Win95 AND OS/2.
Damn. Trying to be too clever for my own good. Of course I meant Theora!
Actually, I was not agreeing... at least not directly. As I read the license, H.264 is a cash-cow for the alliance, and is in no way free like Ogg/Theora. End users are unlikely to have to pay according to the terms and conditions of the license, but content providers and codec suppliers will be, without any doubt. In addition, it can be a throttle on the acceptance of free software.
As I tried to point out, this puts it on a collision course with the Open Source purists, meaning that people like Red Hat and SuSE are unlikely to include support for it in their distros, because if they ship (or even make available) more than 100,000 copies, they become liable for considerable license fees, with no way of recouping these from their end users. This *may* be OK for the larger distributions (although I doubt it myself), but it puts an onus on any distribution supplier to track the use of their distribution.
Given the viral nature of Linux distribution (download it, burn it, give copies to your friends, distribute it via torrent etc.) it becomes impossible for any distro supplier to do this. Maybe you could track new systems appearing on the 'net through some spyware, but can you imagine the furore that would result!
What would happen is what happens now for MP3 and DVD: the distros are shipped without H.264 support, with easily available instructions on how to add it from repositories outside the control of the distro owner. This will be a serious barrier to the adoption of Linux by people who just expect their computers to work out of the box. This is why Canonical have bitten the bullet and paid for a license: they want to be able to ship 'it just works' versions of Linux.
So this is a sticking point, and will just reinforce the notion that Linux can never be mainstream.
I defend my FUD comment, because putting a notional Sword of Damocles over the head of anybody who uses Theora is just that, notional (unless you have explicit evidence of patent infringement, of course). As I pointed out, there cannot be any proof that a piece of software does not infringe someone else's patent, and this applies to H.264 as much as to Theora. The only difference is that the MPEG LA consortium behind the H.264 patents has a larger set of resources to fight any action, with money and additional patents to enter into cross-licensing agreements with anybody prepared to take them on.
Too many people concentrate on the fact that Flash is used to deliver video, and forget that it is actually a multimedia scripting language that does so much more than just video. It was never intended to deliver video, that is just an interesting sideline that became possible when systems became fast enough to do it.
I don't like Flash even knowing this, but HTML5 and H.264 will not do the interactive applications (I've come across Flash newsletters and estate agent brochures) and games that Flash delivers. I've even come across a comic book delivered as a Flash download.
The only thing I have seen that comes close is Silverlight, and I think that even the harshest critic of Flash will probably not dump it in favor of Microsoft's proprietary alternative (and yes, I have tried Moonlight, which always seems 1-2 major releases behind), and I have not seen any statement about Silverlight being supported on the iPod/iPad.
1. Is there a guarantee that the licensing of H.264 will be renewed on the existing Ts&Cs with only known changes? Well, yes, but the charges are clear: an increase of 10% at each 5-year renewal is almost guaranteed; people like the BBC and Sky are *currently* liable for license charges for videos over 12 minutes long; and it is quite possible that Canonical will be liable for license charges if and when they ship over 100,000 copies of Ubuntu in a single year (which is why they have entered into an agreement).
The same is true for a rival to Apple who may ship over 100,000 media players using H.264 in a year.
2. There is a HUGE difference between *knowing* that there is a future patent/Licensing claim, as is the case with H.264, and suspecting that there may be but don't yet know, as in the case of Vorbis (I don't think that the Ogg container is likely to be patent encumbered, at least nobody has been talking about it yet).
If you play this FUD card, then you must acknowledge that any piece of shiny new software is a potential patent infringement, because that is the way that the patent system works. It is not possible to know every nuance of every patent still in force, and absolute proof of lack of infringement is not possible even if you pay megabucks in patent searches. It is still possible that someone may claim that H.264 infringes on a pre-existing patent.
This shows the essential weakness of the patent system, that it is impossible to prove a negative (it is always easier to prove that something has happened, as opposed to that it hasn't, and never will).
The FSF do have a war-chest for defending Open Source projects published under the GPL, although it is the case that Vorbis is dual licensed under LGPL and the BSD License. I'm sure that if there was a challenge to Vorbis, this would grow, especially if commercial organizations start using Vorbis more than they currently do.
WEP yes. WPA, probably no.
WEP can be cracked if you gather enough packets (but 90 seconds when you are in range is probably not enough time, even if you engage in aggressive packet injection).
WPA/PSK: you have to gather enough packets during the initial key setup using the fixed key, and this window is very short. Once the keys start changing dynamically, you have very little hope regardless of how many packets you snarf, because by the time you have enough, the key you are trying to crack has changed. And if you are using WPA/TKIP with a RADIUS server, for example, you do not even have the initial window of opportunity.
I realize that what I say here is simplistic, and there are known attacks on both PSK and TKIP, but in general they take tens of minutes, and I don't think that the Google cars or bikes were traveling that slowly.
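The point about rekeying can be shown with a toy model. This is not real cryptanalysis, and all the numbers are invented; it only illustrates that a statistical attack needs many packets under one key, and rotation resets that count:

```python
# Toy model (not real cryptanalysis): assume a statistical key-recovery
# attack needs SAMPLES_NEEDED packets all encrypted under the same key.
# The figures are made up purely for illustration.

SAMPLES_NEEDED = 50_000   # packets needed to recover one key (illustrative)

def key_recoverable(total_packets, rekey_interval):
    """True if any single key is observed for at least SAMPLES_NEEDED packets."""
    per_key = min(total_packets, rekey_interval)
    return per_key >= SAMPLES_NEEDED

# WEP-like: one static key for the whole session -- eventually crackable.
print(key_recoverable(total_packets=1_000_000, rekey_interval=10**12))  # True

# WPA-like: rekey every 10,000 packets -- no key is ever seen often enough.
print(key_recoverable(total_packets=1_000_000, rekey_interval=10_000))  # False
```

However many packets the attacker hoovers up in the second case, the per-key sample count never reaches the threshold, which is why a drive-by capture is useless against a properly rekeying network.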
It is the case that there is an implied trust in anybody to whom you give your information, whatever it is.
I think that what Mark said is quite true, although a bit strongly worded (this is subjective, as many people, especially when they are young, use the f word freely in everyday speech without any specific meaning). I mean, why would you trust someone who you don't know with any sensitive data?
Anybody who believes that a third party does not have the ability to abuse their data is clearly too naive to be allowed to use such systems.
And in my mind, this extends to people like Google and other cloud storage organizations. Do you really trust them with all of your documents and emails about *everything*? What recourse do you have if your sensitive or commercially valuable IP leaks, whether intentionally or by accident? If they could be sued for loss of reputation, or damages through commercial loss, I'll bet that they would all close down, or become subscription services bound both ways by contract very quickly.
You sure you don't mean Saito or possibly Major Kusanagi with Sniper software rather than Batou? He's more like heavy infantry, where size is more important than accuracy.
When I switched from Red Hat (original, not RHEL) to Ubuntu as my main distro, it was exactly this dilemma that bothered me. I was not looking for an Enterprise release specifically, merely one that I would not have to upgrade every 6-9 months, but which would remain current enough that I could still get packages to compile.
Fedora became too volatile, and (I'm afraid), I was not in the market for paid support, which made RHEL unattractive to me.
I selected Ubuntu (then fairly new, I jumped on at Dapper), and have mainly stayed on LTS releases although I did put Jaunty on a netbook.
My experiences are that Enterprise or LTS releases have good and bad points.
If you remain too far behind the curve, it actually becomes quite difficult to add compile-from-source applications, but you do get good availability and stability. As my day-to-day system is a laptop into which I used to plug all sorts of miscellaneous hardware to try to get it working (I was ahead of the releases for WiFi, 3G broadband, TV adapters, HomePlug adapters), I needed to be able to take what was currently being worked on and try it. This became impossible if you fell too far behind the mainstream.
If, however, you follow the curve too closely, then for a period of time after an initial upgrade, you may have stability and functionality issues. Like many users, I had quite a challenging time when PulseAudio became the preferred sound system.
My answer is to remain on the previous LTS release until the new one is 3-6 months old. This allows you to remain fairly current, but avoid many of the teething troubles. I'm looking at Lucid on one of my systems, but will not switch from Hardy yet.
Of course, many enterprise systems will be installed for specific applications, rather than for general purpose use. For these, upgrade the system in line with the application. Once you have it stable, leave it for as long as you can (security updates excepted), and only consider an OS upgrade if the application requires it, or the OS drops out of support.
To all of the people who are complaining about major applications changing with upgrades, what the heck! Just put your favorite on *in addition* to the new one. They are likely to still be in the repository in most cases, and work as before, unless the package owner upgrades it significantly (I still rue the day that xmms became xmms2).
OK, so you need to authenticate, possibly we should give everybody a physical means of identifying themselves (after all, numbers and ID strings can be copied), and mandate a way of electronically reading these securely on someone's own PC.
So you are now supporting ID cards, with card readers, attached to PCs running trusted and supported operating systems with DRM built in, say Vista or Windows 7. This is what the industry advisers who will be engaged by any government will say. A win for Microsoft and the PC makers, don't you think!
What happens to people who can't or won't invest to do this? There will not be sufficient demand for polling stations, so would you install PCs in Post Offices or Libraries (oops, none of these left), or possibly Pubs (rapidly going the same way in small villages)? Will we have an underclass of people who can't vote because they live in the country and have limited transport options?
I really don't think this is what you want.
R not 'created' in 1996, more like copied
R is an open-source re-implementation of the tool S, which was part of AT&T's software toolchest long before 1996 (Wikipedia says 1975), and was originally developed by Bell Labs.
This is acknowledged in the documentation for R.
I was using S in 1988, and it was not new then. I was interested in it because it has a number of similarities with APL (A Programming Language), which is often cited as the first interactive computing environment.
How much of a problem?
Although I am a long-term technical specialist, I'm (almost proudly) mostly web ignorant (I can use it well, but don't expect me to be able to write any HTML or XML without a tool - Hey, I'm a core UNIX specialist, not a web designer!)
Looking at it with the eye of a novice then (hand-grenade time, this is flame bait), what is the problem with IE6?
I know, I've read that it does not conform to standards, its HTML implementation is poor, and web designers generally bitch at it all the time. But from a user's perspective, pages appear when asked, and it works in places where Firefox, Chrome, and Safari still don't. (I run Windows 2000 Pro in VirtualBox, and IE6 under Wine, and also have Windows 2000 Pro in a little-used partition on my laptop for those awkward sites that insist on IE as a browser, especially when they need Silverlight or WMP as a backend for media. I must update to XP to allow IE7; it is licensed for that.)
I don't want to buy Windows 7 for my laptop, which is still working, so I don't need to change it. And I'm sure that many people who are mostly infrequent users feel the same. Ubuntu does 98% of what I need on a 2GHz Pentium 4 mobile.
I would actually like to see web sites that are strictly bound to one browser, or which are so PC unfriendly by splattering large flash animations across their pages, or which require a 1280x1024 minimum screen to display, blacklisted by the community for a few days. As a Linux and Firefox/Chromium user on laptops with 1024x768 screens, this would be a far better protest as far as I am concerned.
What I am also appalled at is the attitude of Giorgio Sardo, who obviously has an agenda of selling Windows and, indirectly, new PCs. Microsoft should be forced by legislation to provide modern browsers on their legacy OSs for as long as a sizeable number (say, greater than 25% of the whole) of such systems remain in common use!
If IE6 stops working, then I suspect that there will be an anguished cry from tens of thousands of users, and a sudden lack of space in the Electrical section of the local municipal dump, not a large number of people installing Firefox.
I used to keep all of my ripped audio in Ogg Vorbis on my laptop. That was until I started using a high-capacity media player. The time (and space) required for transcoding from Vorbis to MP3 for large parts of my collection began to get annoying, and due to a strangeness in Amarok and the transcode plugin, I ended up with both Vorbis and MP3 copies on my laptop. This ended up confusing Amarok, so I have now switched to saving all new rips in MP3. I don't like it, but I value my time and disk space more than a principle.
Software algorithms are not industrial processes. They can be innovated by you in your office or bedroom, by Johnny when he is not in school, or by your wife if she has the skill. It's potentially low-cost, and easily doable by almost anybody.
Having software patents prevents you from doing this, because if you unknowingly infringe on a patent, you are not allowed to use your own innovation. Are you prepared to do the searches to make sure that that clever snippet of code that morphs your cursor when you move over an icon or window does not infringe any one of thousands of patents? And what if you want to show off your extreme cleverness to your friends; are you prepared to indemnify them against possible lawsuits?
It's not the same as the way of physically holding data on a CD, or the process of masking transistors onto silicon, or any one of the hardware related patents you hold up as examples, because as an ordinary user, you will not be in a position to produce an industrial process in the same way as you can write software.
Patents can and should protect and encourage innovation, but the whole system has been corrupted to allow large corporations to make sure that no-one else can innovate. It is possible to own a patent, and then to grant an irrevocable right of use without fee to anybody. This is what everybody is hoping that Google will do with the patents they have just acquired.
More and more merchant receipts only show 4 of the 16 digits. It's stupid to show them all.
What I want to know is...
If an ATM defaults to reading the mag stripe, where is the PIN stored? Is there a one-way hash algorithm in the ATM that reads a key from the card, which together with the PIN can be used to generate a non-reversible cryptographic signature whose authority can be checked in the ATM?
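One plausible shape for such a scheme can be sketched as follows. This is purely hypothetical: real card networks use DES-based methods (such as the IBM 3624 offset or Visa PVV schemes) with keys held in hardware security modules, not a hash library in the ATM. The point of the sketch is only the one-way-function idea, that the PIN itself is never stored, only a derived value:

```python
# Hypothetical sketch of offline PIN verification. The key, PAN, PIN and
# truncation length are all invented; hashlib/hmac stand in for the
# DES-based one-way functions real schemes use inside an HSM.
import hashlib
import hmac

ISSUER_KEY = b"secret-key-held-in-secure-hardware"   # made-up key

def verification_value(pan: str, pin: str) -> str:
    """One-way value derived from the account number and PIN."""
    digest = hmac.new(ISSUER_KEY, (pan + pin).encode(), hashlib.sha256)
    return digest.hexdigest()[:8]

def pin_ok(pan: str, entered_pin: str, stored_value: str) -> bool:
    # Only the derived value is compared; the PIN cannot be recovered from it.
    return hmac.compare_digest(verification_value(pan, entered_pin), stored_value)

stored = verification_value("4000123412341234", "1234")   # written to the card/stripe
print(pin_ok("4000123412341234", "1234", stored))   # True
print(pin_ok("4000123412341234", "9999", stored))   # False
```

Under a scheme of this shape, someone who reads the stripe gets only the derived value, and the one-way function stops them working backwards to the PIN.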
I would prefer to have different numbers for the same card for ATM and Merchant Services transactions. This would be much safer than using the same number everywhere.
Reason for not handling card
is not so they can't read it, it is so they cannot put it through a card skimmer. There are not that many people with eidetic memories (for goodness sake, I can't remember a new phone number for more than a few seconds).
I'm fairly certain that a high enough res. camera or two would be able to capture the name, dates, long card number, and the security code on the back even if the till operative did not handle the card. This is enough to use the card for Internet transactions.
The scam used to be to skim the card, send the details to a country that does not have UK-style chip-and-PIN, clone the card, and use it to pay for goods in that country. And if you are also able to grab the PIN by shoulder surfing, you can use the cloned card to get cash out of a non-chip-and-PIN ATM abroad as well.
Now that all you need is the visible information from the card for card-holder-not-present transactions, the whole system is open to abuse. This is the reason why we have the Verified by Visa and the SecureWhatever-it-is for Mastercard for Internet transactions. But this is not needed for card payments over the phone, so don't do it.
The insistence of banks that the PIN must be kept private should be communicated to retailers who put their merchant devices in fixed installations in plain sight (Tesco, I'm singling you out here, but I'm sure that most other supermarkets are also guilty of this). I'm certain that I could, with reasonable accuracy, observe the PINs of the two customers ahead of me in the queue on most occasions. This makes the whole system a joke.
You don't get an idea of the size of this monster until you get to the pictures of the touchscreen being used. I'm certain that because of its size alone, this system will never appear on my laptop replacement shortlist. I suppose I should have guessed, it having a 17" screen, but if it is so wide, why have they not made space for the missing keys!
They use this information themselves to check that you are still authorized to use ADSL, so it is no hardship for them to log it.
co.uk not clear.
Not sure I agree. Whilst Computer Programs are not patentable, it is still being argued about whether a software technique can be regarded as an invention, and thus patented. UK and EU law appears to be at odds here.
Also, while you may have a standard, there is also nothing that says a standard is not encumbered by patents. I do not believe that H.264 is either a free or an open standard in the generally accepted meaning; it is just that the license fees have been waived until 2015. This should ring alarm bells for anybody with half a brain cell.
I'm fairly certain that most flash videos are H.264 encoded with a flash UI wrapped around them. This calls a decoder in the flash runtime. This is one of the reasons why the performance is so dire on non-windows platforms as Adobe show no real interest in anything other than the mainstream.
If we intend to keep the low power/cost end of the computing platforms alive (such as phones, pads and netbooks), we absolutely need the decoder part of a codec in the browser, not just language interpreters that allow you to run a decoder.
I've been playing around recently trying to use a different backend for flash video, specifically using mplayer with the correct modules for flash video. This works great (and much faster), until you hit a site that tries to query the version of flash in the browser plugin (like iPlayer), whereupon it falls down in a heap.
Can we tell where this is going yet?
This targeting of Ogg/Theora is the most blatant example of standards land-grab by patent that we have seen so far.
It would appear that Microsoft/Apple et al. are not deploying their patent IP to generate income at this time, but merely to stifle an alternative technology that may deprive them of an effective monopoly.
I say effective, because the cross-licensing that big IP holders engage in has the ability to deflect anti-monopoly legislation, because a consortium of co-operating companies is not deemed to be a monopoly under the current rules.
Of course, once they have this effective monopoly, they can then leverage it for revenue generation. We can only hope that Google is prepared to defend the codec that Theora is based on.
As pointed out, if Microsoft and Apple are successful, then it is a grim portent of what is to follow.
USB wireless keyboard
Plug in a normal keyboard, drop into the BIOS during startup, and turn on Legacy USB support. Then reboot to see whether Grub understands the keyboard.
Grub is a minimal OS where size is a real issue. It relies on the BIOS settings being right!
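If turning on Legacy USB support doesn't fix it, GRUB 2 can also be told to read the keyboard directly with its own AT-keyboard driver rather than relying on the BIOS emulation. A minimal sketch of the relevant setting in /etc/default/grub; whether your particular board needs this is an assumption you'd have to test:

```shell
# /etc/default/grub -- make GRUB 2 drive the keyboard itself,
# bypassing the BIOS USB legacy emulation
GRUB_TERMINAL_INPUT="at_keyboard"
```

Run `sudo update-grub` afterwards and reboot to see whether the menu responds.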
I know that this is a trivial change, but the default background for Lucid reminds me of the early days of colour television when the tube would become magnetized leading to unpleasant blotches of colour.
It's different, I admit, but not pretty by any stretch of the imagination.
Troll alert! David Lawrence
I hope that this was deliberate flamebait!
The only reason I've had to do something like this in the last 5 years on Dapper or Hardy is when I have tried to get some hardware working when the vendor has not done anything to make it work under Linux themselves.
Remember, when you install a new piece of hardware on Windows, you have this nice shiny round thing with the hardware (it's called a driver CD), that the vendor has put a lot of work into to make it work in Windows. If they bothered to do the same for Linux, you would never have to touch the kernel. Try getting an HP printer working in Windows without the install diskette. It's nearly impossible if it is a printer Windows does not already know about.
For joe average, who wants to write documents, or browse the web, or even plug a printer in (unless it's a Lexmark - spit), everything they need is likely to already be in the distribution, or at least in the repository.
And claiming that you have to reboot twice is rich if you are coming from a Windows environment. Just installing a system from Windows media plus the driver loads will require you to reboot your machine many, many more times, even for Windows 7, than a recent Linux (I built a Windows 7 system over Christmas myself. Easier than XP by far - nearly as easy as the Lucid install I did on my workhorse laptop this morning!)
Go on, give it a try. Build a Windows and a Ubuntu system from scratch, and report back to the thread if you dare.
re. strictly for geeks...FAIL FAIL FAIL with a side order of FAIL
If there is one thing at which UNIX-like OS's are *MUCH* better than MS, it is the consistent directory naming system.
Remember that Linux, like UNIX, is a multi-user OS, so all of *YOUR* files should be kept under *YOUR* home directory, not scattered across the *SHARED* part of the filespace. This is what makes it possible to have UNIX like systems share their userspace across a networked filesystem, compared with the absolute CRAP of roving profiles in networked Windows systems.
If you look under your home directory on Lucid (or most of the earlier Ubuntus), you will find a directory called Desktop, one called Documents, one called Music, one called Pictures, one called Videos and one called Downloads. They are all yours, and will never be interfered with by another user logging on using a different name. If you want more, just create them in your home directory with whatever names you like (I have a local bin and a local lib and a local tmp just for me, but then I am a dyed-in-the-wool UNIX user).
This is one of the fundamental strengths, for as a non-privileged user, you only have write access to the files under your home directory, and a restricted number of temporary directories. So if (and when - even on Linux) you get exposed to some malicious code, only your files are at risk, not the system as a whole!
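The point about write access is easy to demonstrate. A minimal sketch (the directory names mirror the personal bin/lib/tmp mentioned above; the file names are made up):

```shell
# As a non-privileged user, anything under $HOME is yours to create:
mkdir -p "$HOME/bin" "$HOME/lib" "$HOME/tmp"
touch "$HOME/tmp/scratch.txt"
ls -ld "$HOME/tmp"

# ...but (as a normal user) writing into shared system space is refused:
touch /usr/local/shared.txt 2>/dev/null && echo "writable" || echo "denied"
```

Malicious code running under your account has exactly the same limits, which is why only your files are at risk.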
and wash the cars
No. They teach the children, run the libraries, clean the streets, collect refuse (unless this is outsourced), inspect the environment, enforce parking regulations, grit the roads, examine planning applications, man the help lines and front office services, manage the care home provision (and in some cases run them), run the leisure centres and swimming pools, cross children across the road, maintain the street signage, run the electoral roll and elections, collect and manage council tax, and on top of all of that, they manage themselves and their presentation to the people.
And I'm sure I have missed a whole lot out.
I might have mixed up local and county council responsibilities, but the councils actually do a huge amount to maintain our society. Whether they do it well or not is another matter...
If they were able to make a value decision themselves about benefit vs. cost, then I would agree with Mark65.
But they are being told in no uncertain terms by a meddlesome government what they must do, with strict timescales and financial penalties. Their hands are tied, they generally do their best (which may not actually be very good, because councils are not in the Internet business), and probably spend more money than they need on failed work and private sector 'consultants'.
Councils can be run like a business, although one with a tied customer base. It is believed by government and their (paid) advisers that this is the way that councils should be run, as private sector management *MUST* be better than public sector. The problem with this (as you have suggested) is that councils are not really commercial businesses, and will only lose customers if they actually move geographically.
Please look at the scale
What are you comparing a council to?
If you do like-for-like comparisons between a county council with maybe 30,000 workers serving around 400,000 households with 850,000 residents (these are the approximate figures for Norfolk) against a corporation with 30,000 employees and close on a million customers, I think that the figures may be surprising.
How much do you think a small bank, or possibly IBM UK, spends on their web sites? I'm sure that the comparison would be very interesting. I would not be surprised if the councils spend less.
And look at the services they are being forced to supply (by government regulation) on the web, even if only to tell people their rights and entitlements. Housing, social services, schools, care provision, refuse collection, environmental health, roads, planning, enforcement of regulations, business rates, council tax, court services, local business development. And I'm sure I've missed many out. All the information has to be correct within guidelines.
It's a big, big problem that is quite beyond the experience of most people to comprehend (and probably most councils). This leads to the problem being treated as an elephant task, one bit at a time, which as we all know leads to inefficiencies.
Southwest One is an example where private-sector companies come in and manage to spend more money doing less than the councils ever did.
My god (if I had one), you're a better man than me!
Never sleeping or eating or going to the loo. You really never have an opportunity to log out!
I think you need to move your long-running tasks into a batch process that can run divorced from a GUI, and learn to set up your environment quickly.
I have met members of the "it takes too long to set everything up when I log in" brigade, and all I can say to them is to stop whining, and learn to automate their post-login setup process. I tend to use a personally enforced logout/login to keep the number of windows I have open manageable.
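Moving the long-running work out of the GUI session is a one-liner. A minimal sketch using nohup (the job here is a stand-in `sleep`/`echo` for whatever actually takes hours; `screen` or a cron batch job would do the same):

```shell
# Start the job immune to the hangup signal sent at logout,
# with its output captured in a log file:
nohup sh -c 'sleep 1; echo "job finished"' > job.log 2>&1 &
JOB_PID=$!

# ...log out, log back in later, then check on it:
wait "$JOB_PID"
cat job.log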
I think we have identified a REAL geek here! Obviously does not have a Significant Other who complains about the noise produced and electricity consumed (while using the tumble drier even when the sun is shining!)
I agree it does not NEED shutting down (see my comment on my firewall), but I pay huge amounts to the electricity company already.
What I am hoping for now is a power supply that draws less than a watt when in standby, a motherboard that will respond correctly to WOL requests and an ACPI subsystem which allows Linux to suspend and resume correctly. I can start my RS/6000 44P remotely, so why have I not been able to get any of my Linux boxes to do the same?
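The WOL side is simple enough to script yourself if the motherboard cooperates. A sketch of building and broadcasting the standard magic packet (the MAC address shown is a made-up placeholder):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; port 9 (discard) is the usual target."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Example (hypothetical MAC address of the sleeping box):
# send_wol("00:11:22:33:44:55")
```

Whether the box actually wakes still depends on the NIC and BIOS being set up for WOL, which is the part that keeps going wrong.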
I'll look at this again with Lucid, as I've recently replaced the motherboard in my workhorse system.
Maybe I should have checked, but...
...I was taking the article at face value, and it says that the patches were rolled into the release candidate. The release candidate is supposed to be what is released unless something seriously bad is found.
The reason why it is important is because there was bad press and lingering stories of woe for a couple of weeks after Karmic was released, and Canonical cannot really afford the same for Lucid.
This is newsworthy because we are two days away from GA, and this is not a bug in a beta, it is a bug found in the Release Candidate. Generally speaking, it is unusual to get late fixes in the RC, because of the alpha and beta programs.
One question, however. I know why you might want to keep a server on for months at a time (my firewall system gets restarted about 4 times a year when I have a loss of power), but why stay logged in for all that time? Every time you log out and in (at least on Dapper, Hardy and Jaunty, and various RedHat and Mandriva distros), the X server is re-started, recovering the lost memory.
This makes me feel uneasy
Surely, playing around at this level could well cause cancers and teratomas (some of the pictures of these just make me feel sick!).
After all, it is a breakdown of the controls on cellular division that causes both of these conditions.
OpenOffice is not written as a commercial product. The fact that it is so good is a tribute to the people who have contributed their time and other resources, and the companies who believe that they can make other sales as a result of being apparently altruistic (Old Sun comes to mind). They are entitled to give their effort away if they choose.
But just giving away software is not a business model, which makes it an unfair comparison. In this respect, I do actually agree with Microsoft. They have invested effort producing the software, they are entitled to get reward for their effort if people want to use it. It would be their right to give it away, but it is not a duty for them to allow anybody to use it. It is more an argument of value and worth.
Now I'm not saying that people should stop using OpenOffice, but just that they be aware that free at the point of use does not mean free to produce. I also disagree with the prices that they charge, but I agree about their right to charge something.
You could almost turn the tables, and claim that Microsoft's commercial product is being undermined by the supply of a free alternative. This is very similar to the argument that Mozilla used when arguing against Microsoft giving away Internet Explorer (OK, OO is not an integral part of any OS, but that was not what was initially argued).
Not that simple
As I remember it, the HP/Intel tie-up looked like a good fit, even though in retrospect I would say that Intel took HP for a ride.
At the time HP decided to jump to a joint-developed chip with Intel, it was engaged in an arms-race with IBM. Tim said that it was the Itanium that made IBM invest in Power, but in actual fact this investment pre-dated the Itanium by at least 5 years. IBM bumped development of Power, PowerPC, RS64, and Power2-7 at various times, but it has been an almost continuous process if we overlook the stumble that happened with the 64 bit PowerPC 620 processor.
The original RIOS-based IBM POWER system, the RISC System/6000, was launched in 1990, and had been under development for at least 5 years before that. The driver was to be an industry leader in the Open Systems market place, as IBM had at last recognized that there was money to be made.
When first announced, the RS/6000 model 530 killed everything on the market stone dead, it was so much faster. HP had PA-RISC running in their MPE/iX line at the time, but it was not a single microprocessor, being built from discrete logic. The RS/6000 caused a huge stir, both because it was so much faster, and also because IBM put significant marketing weight behind their new systems. Sun were immediately knocked off the top of the workstation market, and never managed to really get back up there, and DEC invested heavily to try to produce a really hot chip in the Alpha, which, as a result of the need for speed, was significantly flawed and never really delivered on its promise.
HP rushed systems to counter the RS/6000 based on the single chip implementation of PA-RISC, and running HP/UX. These were the HP 9000 models 720, 730 and 750, and the race was then on between IBM and HP to see who could have the fastest system. This reached its peak in the late 1990s, when some models of RS/6000 had marketing lives of less than 6 months.
This was tremendously expensive, and HP, who did not have a big chip-fabrication division, valiantly struggled to keep up, but was ultimately doomed to fail.
The way I remember the Itanium being pitched was that Intel were going to take on the development of the PA-RISC single microprocessor replacement, keeping most of the instruction set, but putting in features that would allow the processor to also run x86 binaries, and enhancing the x86 architecture for 64 bit. Intel would get access to HP's IP for the PA-RISC (which included high clock rate silicon and cache IP), and would use their considerable chip making skill to drive the product forward. HP would get a class-leading processor to keep their workstations and servers going. At least that is what was said by Intel.
What actually appeared to happen was that they designed Itanium to be their own processor, with less emphasis on making it a PA-RISC replacement, and more on trying to make it an upgrade path for 32bit x86 servers. They delivered it late, and the product did not live up to their claims as either a PA-RISC replacement, or a 64 bit x86 migration path. Intel attempted to use some of the IP to produce high speed x86 processors, but botched it with the Pentium 4, which was ultimately a dead-end.
Because of the delay, the world in general, and HP in particular, started looking elsewhere. HP appeared to lose interest in the UNIX market place, allowing both their own products and the subsumed products from DEC/Compaq (and to a lesser extent, Tandem) to fall into the legacy category. They produced Itanium based servers, but they were never up there with IBM, except in the very-large system market. Only customer pressure has kept many of the OS's alive.
In the meantime, IBM has been left with the only non-Intel/AMD UNIX offering that was actively being developed, and as a result, has kept market share. Even though there has been no real competitive pressure, IBM has used the convergence of the AS/400 and RS/6000 lines, and to a lesser but significant extent the z series, to move the architecture forward. They have borrowed from other IBM systems (and their competitors) to introduce type 1 hypervisors, hosted application partitions, and pretty much unrivalled virtualization capabilities. The supported filesystems have scaled, and the support for other technologies such as SAN and SVN has gone hand-in-hand with other IBM products.
...and made sure it was really hot
I'm off to find a teacake, and some perspective.