1813 posts • joined 15 Jun 2007
TV Licensing and enforcement
The criteria for enforcing licence collection must be changing with the digital switchover.
Until now, if you had equipment capable of receiving broadcast television, the TV licence enforcement people assumed that you needed a licence, and you had to demonstrate to them (on a regular basis) that you didn't use it for broadcast TV in order to escape their harassment.
The licence used to be required for "owning equipment capable of receiving a TV broadcast". After switchover, TVs with only analogue tuners are no longer capable of doing this, so they should be exempt and classed as monitors.
Will TVs that do not have digital tuners (and where no other digital tuner is in use) actually be exempt from requiring a licence? I know that it is almost impossible to buy a TV without a digital tuner now, but I would guess that if you just use such a device for DVDs and videos, TV Licensing should stop bothering households that have not purchased digital receiving equipment. (Ever wondered why you have to prove who you are when buying a TV? It's because that information is fed to TV Licensing by the shop! I even had to do this when buying a DVD player not so long ago, even though it could not receive TV, and also for a TV signal amplifier, even though that is technically not television receiving equipment.)
Of course, the intent of the TV licence now is that it should be required for receiving broadcast video in real time from any source (as indicated in the article), so the wording of the licence must have changed, although I have not read it.
We'll probably get it just in time to offset the rise in VAT!
is not fit for any purpose other than locking internet users in to Microsoft platforms.
And before you quote Moonlight back at me, may I suggest that you see how far this lags behind Silverlight, and how many SL sites actually work with the current version of Moonlight.
I admit that on Windows, SL works well, but that is of little interest to me.
wrote the VP8 video codec. When Google bought On2, they combined it with the Matroska container format and the Vorbis audio codec, called it WebM, and then licensed the whole under several licences including the BSD licence (although Matroska and Vorbis were already published under open licences).
So it is not true that On2 invented WebM, although it is true that they wrote the video codec in WebM.
It's not necessary to post a reason in the forum, or to send an email. If you have logged in, and are looking at a comments page, then there will be a "My Posts" link on the right-hand side of the top of the page. You can see there whether your posts have been accepted or rejected. For rejected posts, it would be possible, time permitting, to have the post tagged with a reason for rejection.
It also allows you to review all of your previous posts, warts, typos and all! Wish I could edit out some of the howlers I have made.
Even bigger sigh, but...
I was OK with this answer right up to "be big about it, accept it and move on", which annoyed me because I don't want to spend time writing comments that are rejected if I can avoid it.
I was not asking for a reasoned argument, merely something more like "Too long" or "Libellous", or even "I just don't like your tone". I'm sure that you could come up with a list of about a dozen, and then select using icons or changing the "reject" button to a drop-down selection box. Two clicks rather than one, and you must have thought about a comment in order to decide to reject it.
I fully expect to see this in the reject pile in a few minutes.
I understand the reason why comments are moderated. I understand that the moderators are human, and I also understand that they do not share my mindset, outlook, and sometimes just have shitty days, as we all do.
But sometimes, even on re-reading a rejected comment, it is not clear why it has been rejected.
If there were a one word or short phrase that could be added during the rejection, possibly selected from a pre-defined list, it would make it clearer exactly what in the post has irked the moderator.
Acoustic power transmission...
...in submarines that are supposed to be quiet to prevent detection! Seems unlikely.
EeePC and Linux
The problem was the distribution used (Xandros - hardly well known), and the fact that they managed to screw up the UnionFS implementation somehow. Every time you wrote something to your home directory, it somehow managed to use space in the read-only UnionFS base image. When you deleted the file, the space remained used. The result was that you ran out of space, which you could not fix without re-installing.
I used Ubuntu 8.04 on my 701 for ages, and with a little tweaking for the slightly odd Atheros wireless implementation, it worked well.
I've got 9.04 currently, and am using it to type this without problems.
Of course, the 4GB internal memory is a squeeze for a full distro, you have to be careful to clean up past kernels and the apt cache, and the processor is a bit slow for Flash video from some sites, but I can work around all of these issues. It's still a useful addition to my available systems.
The whole stack is actually named...
This indicates the GNU basic tool set on top of the Linux kernel, which makes it a complete OS, not just a kernel. Read what RMS (oh, sorry, Richard Stallman) says on the subject at http://www.gnu.org/gnu/linux-and-gnu.html.
Education, education, education.
If you think that the '90s and '00s generations are any better educated, think again.
The rot started setting in in the mid-'70s, when education had to become non-competitive and we started having a one-size-fits-all, so-called comprehensive system that treated everybody as if they were slightly below average.
Removing competition from the brightest led to bored kids, and teaching over the head of the slowest led to disruptive kids.
It's funny that in today's comprehensive system, they have re-introduced streaming.
I was not suggesting 3G for the smart dust, just the listening point that would receive whatever frequency the dust was using, and re-transmit using 3G. The listening point would not have to be dust-sized, just hidden, and possibly outside of the building.
I admit small antenna == high frequencies, and high frequencies generally mean high power, but I'm not talking about driving a signal tens of metres, merely to the next smart dust speck, and then to the next (get it - collaborative networking).
Please actually read what I wrote, because although I am not a chip designer, or a transmission specialist, I believe that I have a reasonable grounding in electronics (it being part of my degree) and computing (ditto), not that you would know that.
Why so futuristic?
OK, design a very small lump of semiconductor that includes:
1. Power-gathering components, leeching from leaked magnetic fields around power cables
2. Data transmission components of low power, both consumption and transmission
3. Some form of information gathering components
4. A bit of processing power and a small amount of memory
Arrange these so that they can form a collaborative, self-healing network that will pass information from one to another.
Scatter them in a building.
Arrange to have a listening point outside (or even inside) the building using a WAN network technology such as 3G to pass the information on.
Three of the four requirements are already met by RFID tags. The only one missing is the data gathering. The collaborative, self-healing network technology is already around, at least in principle (think OLPC, and I'm sure there was a kids' device that used to pass messages in an informal ad-hoc network to other devices in range).
The thing that needs to improve is the scale. RFID tags are currently too big, their power consumption is still too high, and gathering that power still needs too large an antenna.
But these are exactly the problems that more sophisticated technology can fix! If, as predicted, we get high-efficiency wireless LED lighting using transmitted power, a technology being worked on at the moment (first link found from Google, there are others: http://blogs.pcworld.com/staffblog/archives/004605.html), there could be ample power around for the smart dust. If the dust particles are mere centimetres apart, the power needed to transmit and receive information would be minimal. And applying the mask shrinkage from the current memory and processor programmes would make the devices smaller and lower power still.
Disguise the devices variously as skin flakes, human hairs (think of the antenna), bugs, biscuit crumbs, scraps of paper, sandwich wrappers, vending machine cups (hmmm, thermal power!), all of which are larger than dust and mostly unnoticeable (turn your keyboard over and shake it to see what is there), and Bob's your uncle (or in my case, Bill's my uncle).
The only problem then remaining would be the data gathering. I'll leave that as a problem for others.
If I am the first to combine these, and it is enough for a patent application, I claim this as prior art!
More specific please
"It's easy to set up now, but WELL flakey in the field, sadly, both Ubuntu up to 9.0 and OpenSuse 11 suffer."
You don't say what it suffers from. And in the next sentence you ask if there is a Linux distro that doesn't.
If by flakey (sic) you mean unreliable or prone to crashes, then I would look at your hardware, because I have used Ubuntu LTS releases since 6.06 and have responsibility for many SuSE systems, and find them all incredibly stable (longest to hand, SLES 10.2 - OK, not OpenSuse, at 344 days' uptime).
I'm sure we could answer if we knew what your detailed concerns were.
@Beaker's Love Child
Sounds good in principle, but if you are stuck in town, unable to buy a drink or get a taxi because the ATM and card payment network is down and not going to be fixed until Monday, I think you may feel differently.
What you are actually wanting is to pass the baton down to the next set of group B workers, and drop out.
Sounds like they've just lacquered the antenna so you don't touch the metal. If this is the case, I wonder what they will do once it starts wearing off?
Oh, sorry, it's not a problem, as most users will have upgraded to the iPhone 4GS or iPhone 5 before this happens!
Different point of view
It's not about the web, Facebook et al., it's about a common API to write code to, and to deliver applications through.
Providing a framework that is common across all platforms, regardless of the OS, browser, machine type is the Holy Grail for application developers. It costs a lot of money to develop an app to run on multiple architectures.
Unfortunately, it would have been better if this had been integrated into the windowing environment, rather than as a layer that sits on top like the current browser based delivery mechanism. What has happened is a last desperate measure to try to wrest control of the user experience away from Microsoft, Apple, KDE and Gnome developers, by adding an abstraction layer above the windowing environment.
Google realize this, which is why ChromeOS effectively eliminates the windowing environment, and is moving this abstraction layer a couple of rungs down the software ladder.
But, unfortunately again, there is still no consensus. We have different people, (Microsoft with .NET, Google with everything they are doing, Facebook etc) all running in different directions, and not talking to each other. The result of this is that we will be left with the browser, with languages like Java, Flash and Silverlight as the lowest common denominators (ah - scratch Silverlight as Microsoft are making it difficult for the Moonlight developers to keep up!) And this will lead us to the same point, just with more layers in the software stack, still with incompatibilities between major offerings.
Of course, the hardware manufacturers and network providers are laughing. As all these extra layers soak up extra CPU cycles, memory and network bandwidth, they see repeat markets for new devices to do effectively the same old things. Kerrr-ching!
The web *could* be a valid application deployment method. Google's Application Engine, which allows local caching of applications for when you are off-line, is a very usable technology that reduces the need to go on-line. Similar things could be achieved using Lotus Domino (waaaay before Google even existed) and even AFS or DCE/DFS, so application and data caching is not really new ground.
But using a web deployment method does not dictate having an always-on internet feed. It would be perfectly possible for businesses, and even home devices (think NAS or network media devices), to serve applications within a closed network without going out to the Internet. In theory, you could also have an app server installed on the same system where the application will run, using a loopback or similar internalized network. The advantage would be a common deployment method; the disadvantage would be increased inefficiency.
I admit that this fills me with dread. I want to get off the continual grind of new-is-better, with its back-door to my wallet, and I really don't want to get to the point where the ISPs can hold my data and computer usage hostage to whatever they want to charge me. I like the idea of a standalone PC with network usage features where I control the access, the available resources, and how the data is used. The current model of a windowing framework that allows native applications suits me just fine, but I'm no longer a typical computer user, if I ever was.
This whole thing is reaching a "Stop the world, I want to get off!" threshold. I'll go and get my coat ready.
Surely it would be cheaper
I know SCO will die eventually, but I would have thought that by now SCO would be so worthless (SCOXQ capitalization of 1.08M USD at 5C a share) that Novell, or even IBM, could buy the remaining stock for less than their legal costs in a retrial/trial (depending on which lawsuit you are looking at).
Of course, this would depend on the expected outcome of any trial as judged by the appointed Chapter 11 administrator, but I would have thought that 50C or even less in the dollar would be attractive to them. Buying 51% at 50C in the dollar would cost about 275K USD, which must be lower than the expected legal costs. They would have to make sure that any debts are ditched, though.
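As a back-of-envelope check of that arithmetic (figures as quoted above; purely an illustrative sketch):

```python
# Hypothetical takeover sums, using the figures from the post:
# a 1.08M USD market cap at 5 cents a share.
market_cap_usd = 1_080_000
share_price_usd = 0.05
shares_outstanding = market_cap_usd / share_price_usd   # 21.6 million shares

stake = 0.51      # a controlling 51% holding
discount = 0.50   # paying 50 cents in the dollar of current value

cost_usd = market_cap_usd * stake * discount
print(f"{shares_outstanding:,.0f} shares; 51% at 50C in the dollar ~ {cost_usd:,.0f} USD")
```

Which comes out at about 275K USD, as stated.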
Once they have done that, it's a simple matter of closing them for good.
Or is there something in Chapter 11 bankruptcy protection that I don't understand?
Alternatively, we could do it! I've a tenner, anybody else interested?
So we know this AC is called Dave. Sarah, you're guilty of leaking information! Quick, phone the Data Protection Registrar.
I think we often forget that the moderators can see the real usernames (and mail addresses) of all of the contributors. So much power for blackmail...
Damn, think I've just shot myself in the foot. Quick, what past comments have I made as an AC?
Don't understand your comment.
The ponies belonging to my wife, written as 'My wife's ponies'. I'm fairly certain that my apostrophe usage is correct. Or are you thinking that I should have capitalized wife? I don't think it is a proper noun in this context. My grammar is not good enough for me to be certain that I have capitalized it correctly here, but then whose is nowadays (damn, that's probably wrong too).
The comment had nothing to do with the toilet habits of my spouse, who is definitely human, and I cannot see how you could think it did.
Anyway, the ponies live, quite properly, in a field. The comment switched track from the ring of fire, to what ponies (and possibly sheep) might consider palatable. Sorry if you could not keep up. Try some coffee to wake you up.
Learn what goes into curry!
Ah, I think the "ring of fire" is probably caused by the chilli and pepper, nothing to do with the cinnamon, cloves, coriander, cumin and turmeric, which are all rather less, um, active. And the flatulence may well be a result of the large amount of protein in the meal, as well as the lager already commented on.
If my wife's ponies are anything to go by, something with a little more flavour than grass may well go down well with the sheep. Garlic (which keeps the flies away) goes down a treat, and they like dried stinging nettle a lot.
Not sure about dairy cattle, though. I've had milk from a cow that ate raw onions. Not good on cornflakes!
GIMP makes perfect sense.
It is an acronym, and stands for GNU Image Manipulation Program. I agree it's not very intuitive, but I'm not sure that I would immediately identify Lightbox as an image manipulation program. I associate lightboxes with tracing and copying, and also viewing X-rays, although I do know that you can use them with positive image (slide) photographic film.
The reason why they go for strange names is mainly because most of the good and obvious names have already been trademarked. If you are a FOSS developer, especially if you are doing it in your own time, the last thing you need is a CEASE AND DESIST notice from a lawyer. It is also popular to either put an acronym in, especially if it is self-referential (like GNU itself), or a G or K to illustrate whether it is built primarily for a GNU or KDE environment. X used to be popular to indicate an X-Windows program. Sorry for that. I guess it's a geek thing.
The ones I cannot really get my head around are things like AmaroK, or Brasero, Audacity (OK this one is probably some form of pun), or even Evolution which look rather random, but I suspect that you will see instances of this in free/shareware on any platform.
It's never going to be able to provide an opaque image
The fact that each 'plane' will be translucent means that you are really only seeing layers, not a full 3D image of something. A true 3D image would have a back and sides that would obscure what was 'behind' from whatever angle you look at it. This will also only give an image with correct perspective from the front.
And to be really usable, you would need more than 3 layers, and each layer added would reduce the brightness of the image, because of the need for more gaps between the drops of the front layers.
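A toy model of that dimming, assuming (purely for illustration) that each translucent layer passes a fixed fraction of the light from the layers behind it:

```python
# Toy model: if every translucent front layer transmits only a fraction
# `transmittance` of the light from behind it, the rearmost layer dims
# geometrically as layers are added in front. The 0.8 figure is an
# arbitrary illustrative assumption, not a measured value.
def rear_layer_brightness(transmittance, layers_in_front):
    """Apparent brightness of the rear layer, relative to an unobscured layer."""
    return transmittance ** layers_in_front

for n in range(4):
    print(f"{n} layers in front: {rear_layer_brightness(0.8, n):.3f} of full brightness")
```

So even a handful of layers makes the back of the image noticeably dimmer.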
Still, interesting technology. I won't hold my breath for a TV based on this though.
Off topic but...
In these days of frame buffers and digital TV panels, is there any reason why 1080p is likely to deliver a much better picture than 1080i? I know that you get a full frame at a time rather than the alternating scan lines per frame that interlacing provides, but we are not using CRT screens any more.
With CRTs, it was necessary to display each set of scan lines as it was received, which led to significant flicker due to interaction with the mains frequency, and also an apparent vertical twitch of the picture on a TV with a fast phosphor.
With today's digital TVs, you will never draw the scan lines as they arrive; you will just write them into the frame buffer and then switch FBs. If you receive an interlaced picture, you wait until the alternate lines are drawn in, and then display the picture, avoiding the flicker and twitch. The only problem I can see is that the alternate scan lines may come from adjacent frames, which would account for the jaggedy look of diagonal edges if you freeze an interlaced picture from a DVD. But software deinterlacing techniques appear to be quite good at fixing this.
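That "wait for both fields, then display" step is essentially weave deinterlacing. A minimal sketch (the field contents here are just hypothetical labels, not real pixel data):

```python
# Weave deinterlacing: interleave the odd and even scan lines of two
# fields into one full frame before it is displayed.
def weave(top_field, bottom_field):
    """Interleave two fields (lists of scan lines) into a full frame."""
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.append(top_line)      # frame lines 0, 2, 4, ...
        frame.append(bottom_line)   # frame lines 1, 3, 5, ...
    return frame

# Two 3-line fields make one 6-line frame.
frame = weave(["t0", "t2", "t4"], ["b1", "b3", "b5"])
print(frame)  # ['t0', 'b1', 't2', 'b3', 't4', 'b5']
```

The jagged-edge problem appears exactly when the two fields being woven were captured at different instants.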
Once you apply this, the only noticeable effect will be a halving of the frame rate. As the motion picture industry has decided on 1080p24 (24 frames a second) as a digital mastering format, having a transmission frame rate of 25 rather than 50 or 60 will not gain you a better picture. You could claim that 1080p60 as a transmission frame rate would allow less noticeable frame adjustment (twelve extra frames every 48, rather than the 1 extra every 25), but this would be a trivial distinction, and anyway, you would just use 1080p24 (in the HD-ready 1080p spec) to match.
I'm sure that I have heard it said that once you get beyond about 22fps, the human eye is not capable of detecting the difference. And I don't want people saying that they can detect flicker on their CRT TVs, as this is almost certainly them seeing the strobing interaction of the TV with mains-powered lights, or possibly them detecting the blanking interval of the flyback and colour-burst phases of a PAL TV signal, neither of which is present on digital TVs.
If anybody can see the flaws in what I am saying, could they please correct me.
Boundary firewalls nearly always NAT.
The main reason is so that inside the firewall you can run private IP address ranges; otherwise all of your internal systems have to have public IP addresses, ultimately allocated by ICANN.
Only so-called 'personal' firewalls, which are really just connection filters on your PC, do not.
There is also a type of so-called 'transparent' firewall, which firewalls without NAT.
Generally speaking, pure IP routers never NAT. That is not their function.
Thanks for the clarification
I'm intrigued. Is the 32 times actually a documented figure, or from experience? I just took the MTBF at face value.
I still see far too many electronic devices with leaking capacitors. Maybe the designers *are* numpties, or just too cheap to build things properly.
In an amazing co-incidence, an hour after my previous comment was posted, my father called me saying "My Dell computer has just stopped working." One 3300uF 6.3V later (and much swearing after failing to clear the solder from the through hole), it's running just fine again.
Unlike solid-state devices, most general-purpose capacitors have an MTBF measured in '000s of hours (I've just looked at Maplin, and a so-called long-life electrolytic capacitor has an MTBF of 2000 hours).
Now this may seem like a long time, but with modern devices having always-on power supplies (identified by the lack of physical switches) that contain electrolytic capacitors, we are actually talking only 83 days of being either on or on standby before you would expect a failure from this type of device. It's amazing they last so long.
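The 83-day figure falls straight out of the quoted MTBF; a trivial check, using the numbers above:

```python
# Convert the quoted 2000-hour MTBF into days of always-on operation.
mtbf_hours = 2000     # Maplin's figure for a "long-life" electrolytic
hours_per_day = 24    # an always-on power supply never rests

days = mtbf_hours / hours_per_day
print(f"{days:.0f} days")  # about 83 days
```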
When this type of capacitor was first invented, it was expected that any device would only be on for a few hours a day. Now that we expect everything to come on at the touch of a remote control, everything is different.
I have talked to a specialist Sky box repairer (when my 1st-gen Thomson Sky HD box failed due to capacitors in the power supply) and he said that these boxes were not expected to last more than about two years before failing. They sell hundreds of capacitor kits to repair this very fault, which explains why the after-market Sky insurance services are so prevalent.
The simple fact is that anything using cheap electrolytic capacitors must be expected to fail after a few years unless the manufacturer has made special efforts (such as using new technology solid-state capacitors). Unsurprisingly, this costs more money, and is unpopular for all but the most expensive devices.
As a sideline, all of the devices I have repaired by replacing capacitors in the last year or so have, without fail, had the failed parts branded as CapXon, who appear to make a significant number of the capacitors in electronic devices coming from China! Good thing everything is so cheap that it can be replaced.
RHEL vs. Fedora
Yeah. RHEL not Fedora. And of course YOU pay for it, don't you!
Your VPN solution requires network connectivity. Try VPN'ing into a government or financial institution! I thought I would be able to work from home at least some of the time when I first went contracting, but sadly, that wasn't the case in the real world.
BTW. Been working as a UNIX deep techie for 25+ years. Linux is the new-boy, and enables me to have a UNIX-like environment with me, especially on my netbook.
are so busy being techies for their living rather than a hobby that they don't have time for the frequent disruptions of major Fedora releases for their own systems, so use Ubuntu LTS releases instead.
And why not netbooks, if they work well enough? Damned useful devices. If I could afford one, I might get an Android phone to replace my Palm Treo and use that instead, but until I do, my EeePC is more portable than my Thinkpad, and runs a full Linux distro just fine.
used the heat from their water-cooled IBM 360/65 and 370/168 to heat Claremont tower when I was at Durham, oh so many years ago.
Interestingly enough, on the current big IBM Power6 575 water-cooled IH nodes, the exit temperature of the water from the frames is less than 30C, with a temperature rise of around 10C input-to-return, which makes the heat difficult to use, even if there is a lot of it.
..use InfiniBand for peripherals..
IBM do this in their Power6 systems, and have done for years. What are still referred to as RIO-2 (Remote I/O) adapters on these systems (AIX, i, and Linux), used to attach disk and adapter expansion drawers, are the same adapters used for InfiniBand, which is also in use where I work. The two adapters (IB and RIO) for p6 520s have the same FRU and part number, and can be swapped over and work just fine.
They are normally GX+ bus attached (the internal high-speed bus used as a processor memory and peripheral bus), eliminating the PCIe bottlenecks between the server and the I/O drawer, although the adapters in the drawer are normally a PCI variant.
I am no longer a mainframe user
but there have been in the past compelling reasons why mainframes made good sense as UNIX/Linux systems.
If you look at a sysplex, or whatever they are called now, you can effectively have uptime as good as your software resilience, provided that the hardware has been designed properly. The different parts of a sysplex can be in different locations (I'm not sure about the maximum distance between them) on different power infrastructure.
I'm not sure about the IFLs, but normal z9/z10/z11 processors have multi-bit memory protection, register parity, multiple data paths, failed-instruction re-execution and dead-processor detection, which gives almost complete protection from single component failures within a system.
And while virtualization in the x86 world has been progressing in leaps and bounds, it is nowhere near the maturity of the mainframe world, except when deployed as a mainframe (like Unisys do). As far as I am aware, there is no way to aggregate multiple x86 servers to make a single large general-purpose system image, even using clustering technologies.
When you build multi-processor single systems even using x86 technologies, the prices tend to climb quite rapidly.
All of this technology costs money, and whether it is worth this is a value judgment that the customer has to make, but it is certainly not the no-brainer that you appear to believe.
Best Playmobile re-enactment yet!
What, you can't put a blank comment in, even though the title says it all!
@jlocke. Such is the perceived wisdom.
But I believe that ARM has always had a protected mode. It was certainly enough to have BSD 4.3 ported onto it in the A440/R140 days. It was called RISC iX.
And I can't see that it would be impossible to add interrupt-driven multi-tasking to RISC OS. A programmed interval timer would effectively call the same context-switch code in the OS as is used in the co-operative system. You might have to move some of the context-save function into the interrupt handler, but that should be trivial.
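The point about sharing the context-switch code between the co-operative and pre-emptive paths can be sketched as a toy (the task names, and driving the "interrupt" by an explicit call, are entirely hypothetical; this is just the control flow, not RISC OS internals):

```python
# Toy round-robin scheduler: a voluntary yield and a timer interrupt
# both funnel into the same context-switch routine.
class Scheduler:
    def __init__(self, tasks):
        self.tasks = tasks   # round-robin run queue
        self.current = 0

    def context_switch(self):
        """Shared switch code: pick the next runnable task."""
        self.current = (self.current + 1) % len(self.tasks)
        return self.tasks[self.current]

    def yield_cpu(self):
        """Co-operative path: a task calls this voluntarily."""
        return self.context_switch()

    def timer_interrupt(self):
        """Pre-emptive path: a timer tick forces the same switch."""
        return self.context_switch()

s = Scheduler(["editor", "spooler", "desktop"])
print(s.yield_cpu())        # spooler
print(s.timer_interrupt())  # desktop
```

The only real difference between the two paths is who initiates the switch, which is why retrofitting pre-emption need not mean rewriting the scheduler itself.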
The main drawback was that although RISC OS was intended from the start to be multi-tasking, I don't believe that it was ever meant to be a multi-user OS. This means that some of the protections you may expect in a modern OS could be missing. As far as I can remember, RISC OS programs are relocatable, and run in a shared address space, but this should not be a barrier to making it fully protected. If the current implementations have the requisite hardware, it should be perfectly possible to set up a Virtual Address space that will allow existing applications to run, just protected from each other.
The people who designed ARM initially were clever bunnies, and would not have missed so obvious a trick.
It may be interesting to look at ROOL to see how difficult it would be to retrofit.
Quite simply. Not the article, the comments.
I cannot, simply cannot, understand the vitriol in this set of comments.
The basic problem of restricted and non-free add-ons not being installed out of the box has nothing to do with deficiencies in APT or RPM or Yum, but with software patent enforcement, and proprietary software formats.
The fact that there is a CLI way and a GUI way to rectify this is not a problem but a beneficial feature. In both cases, it is necessary to either type or cut and paste, because the same reason that the software cannot be installed by default also prevents the repositories containing this software from being in the standard list.
If there is a Linux which automagically installs MP3, WMV, Flash or Realplayer codecs, or the DVDCSS libraries, then it is exposing the users to potential lawsuits in certain regions of the world. At least Ubuntu lets users make an informed choice.
Canonical are trying to be squeaky-clean, because when you get big, you become a target. When they try to address these problems on behalf of their users, for instance by licensing H.264 so that they can include it, they get criticized. It's damned if they do, or damned if they don't.
None of this is ideal. But getting people sniping at each other about the best Linux distro is completely unproductive. It's bad enough seeing this from Windows and Mac users.
I admit that I have not installed Fedora. The last Red Hat release I installed was 9.1 (pre-RHEL). But I decided that Fedora was too fast-moving for me. I don't have the time for two major upgrades a year, so I opted for Ubuntu LTS releases, starting with Dapper.
I now recommend Ubuntu to anybody who wants to play with Linux, because it is just so slick. It may not suit everybody, but I want there to be a major distro which gains the critical mass for general acceptance as an alternative OS. I believe that Ubuntu is the closest we have got so far. It's not perfect, but it is going in the right direction, which I don't believe RedHat is (certainly not in the consumer space). Once we see critical mass, maybe the wireless and display chipset problems will go away, as manufacturers package instructions and drivers for Linux as they do for Windows.
OK, you may like SuSE or Fedora or Mint or Debian, but are any of these gaining traction with users? Not as much as Ubuntu, I contend.
And I agree that it is easier to describe how to add the extras via the CLI, but that need only deter users if they are trying to understand what it is doing, something that normally never causes Windows users problems (who really knows what a Windows installer program does!). It could have been done with screenshots and the GUI, but not as succinctly. It may be laziness, but I suspect I and many others here would have done the same, with a comment such as "don't worry what it is doing, just do it".
I would prefer to not have to do it, but that is not likely until all of the required add-ons are no longer required, or they are freed from proprietary control. What a happy day that would be!
other types of AFV (Armoured Fighting Vehicles). I don't believe we have any MBTs in Afghanistan.
When the term 'Tank' is used in the context of defence reviews, it means "Main Battle Tank" like the Challenger 2. This is what they are referring to when they talk about the Tank division.
Other tank-like (to the uninformed) vehicles are things like the Scimitar, which is an Armoured Reconnaissance Vehicle, the Saracen, which is an Armoured Personnel Carrier, or the Warrior, which is an Infantry Fighting Vehicle. Someone has written a useful page describing them all on Wikipedia.
You may regard this as pedantic, but to the soldier in the field and the command structure, it makes a huge difference.
My wife, who has a significant sight disorder, actually says that she dislikes HD TV because it is 'grainy'. I recently worked out why.
She has variable short-sightedness across her vision due to a very strong astigmatism, which means that even with glasses, only part of her vision is clear at any time when looking at anything more than about 10 feet away. In real life, large parts of what she sees are a blur, she has no usable peripheral vision, and even with today's optically dense glass her spectacles are very heavy.
When watching HD TV, the screen is within her visible region, so she can see all of the distant detail on screen that she cannot see in real life. This should be a benefit to her, but because she is not used to seeing it, it upsets her.
I guess what this shows is that you can't please everybody. And means I'm stuck watching everything in SD on my HD capable TV and Sky box if I don't want her moaning!
RE: RE: 3D Without the Glasses
The problem with parallax barrier technology is that it only works if you are in the right place in relation to the screen. This makes it suitable for games consoles and computer displays, but not for the living room or the pub.
Why not? Because they are not American
and he probably does not have Chinese patents that cover this. I do not believe there is really such a thing as a world-wide patent; a patent is only applicable in those countries that have signed up to WIPO, and that is not the whole world.
He may be able to get infringing devices blocked from being imported to the US, and maybe Europe, but that is all.
Technique, not apparent effect
The text indicates that what is being patented is the technique for having three components interacting in a particular way. These are the OS, a device manager and a display manager, so it is not as simple as saying "my laptop did this 15 years ago". This could very easily have been the display hardware with no involvement from the OS, and I am not even certain that earlier versions of Windows had each of these functions as separate components.
It is clear, however, that what Apple are trying to patent is the udev functionality from Linux applied to display devices. I agree that the patent should never have been granted.
MBR is just a bootstrap
plenty of prior art there, so I doubt it is patented, at least not by Microsoft. PDP-11s used second-stage bootstraps in the 1970s, before the PC was invented, and I'm sure they weren't the first.
What is more important is the disk label which dictates the partitioning. I believe that Microsoft invented this one, although there are other partitioning formats out there. It is probably about time it was re-worked: four primary partitions, one of which can be a kludge that turns it into a container for extended partitions, is lame.
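To show just how simple (and limited) that disk label is: 446 bytes of bootstrap code, four 16-byte partition entries at offset 446, and the 0x55AA boot signature at offset 510, and that's it. Here is a minimal sketch of parsing it; the `parse_mbr` helper and the synthetic disk image are my own illustration, not any standard tool.

```python
import struct

def parse_mbr(mbr: bytes):
    """Parse the classic DOS disk label: four 16-byte primary
    partition entries at offset 446, boot signature 0x55AA at 510."""
    assert len(mbr) == 512 and mbr[510:512] == b"\x55\xaa", "not a valid MBR"
    parts = []
    for i in range(4):
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        status = entry[0]           # 0x80 = bootable
        ptype = entry[4]            # 0x05/0x0F is the extended-partition kludge
        lba_start = struct.unpack_from("<I", entry, 8)[0]
        sectors = struct.unpack_from("<I", entry, 12)[0]
        if ptype != 0:              # type 0 means the slot is unused
            parts.append((status, ptype, lba_start, sectors))
    return parts

# Build a synthetic MBR with one bootable Linux partition (type 0x83)
mbr = bytearray(512)
mbr[446:462] = bytes([0x80, 0, 0, 0, 0x83, 0, 0, 0]) + struct.pack("<II", 2048, 204800)
mbr[510:512] = b"\x55\xaa"
print(parse_mbr(bytes(mbr)))  # → [(128, 131, 2048, 204800)]
```

Note that a 32-bit sector count is also why plain MBR tops out at 2 TB with 512-byte sectors, another reason the format is showing its age.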
I presume that 2/100 CAD is 2 Canadian cents. Never seen it written like this, but then I am a Brit.
If we interpret every patent issue as black and white, we run the risk of losing sight of what Linux has to do to become accepted.
Now I don't like software patents, and I do understand the licensing constraints on H.264, but it is currently not clear that the Google WebM codec will become accepted. Canonical have just covered all of the bases, which should allow Ubuntu to play in all parts of the media world. They have done this at their own expense (if any money a commercial company spends can be regarded as their own), and probably won't see the money recouped from the people who are most likely to benefit.
What the Linux community have to accept is that an ordinary user (by this I mean someone who wants to buy a system, open the box, plug it in and use it) just will not use a Linux distro if every time they go to a web page it is a lottery as to whether their system will allow them to see and hear the media there. This is such a fundamental requirement that I sometimes wonder what many of the commenters are thinking.
If they are expecting the Open Source movement to really be able to overcome the might of Microsoft, Apple, et al. with sweeping changes by just being there with a small share of the market, they are deluding themselves. Let's get a successful distro out there, and then use that as a lever to change the world. The bigger we can make it, the more likely we are to have an effect.
Missing a step
In your analogy, you've missed out a step, and that is turning on your tape recorder to record it so that it could be re-played.
But in principle, I agree with what you say. If you walk around with a directional microphone recording parts of everything you can hear, is this currently illegal?
My firewall records the first couple of hundred bytes of every stateful connection that runs through it. Am I likely to be sued by my kids because I can see some of their IM sessions? If someone illegally uses my wireless network, and I capture their credit card details, are we both guilty of illegal actions?
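For what it's worth, the behaviour I describe amounts to a per-connection snap length: keep the first N bytes of each stateful connection and discard the rest. A toy model of that bookkeeping (the `record` helper, the 200-byte cap and the connection 4-tuple are my own illustration, not my firewall's actual code):

```python
SNAP = 200  # bytes retained per connection, as my firewall does

def record(log: dict, conn: tuple, payload: bytes) -> None:
    """Keep only the first SNAP bytes seen on each stateful connection."""
    kept = log.get(conn, b"")
    if len(kept) < SNAP:
        log[conn] = kept + payload[: SNAP - len(kept)]
    # once the cap is reached, further traffic on this connection is dropped

log = {}
conn = ("192.0.2.1", 54321, "203.0.113.5", 80)  # hypothetical src/dst 4-tuple
record(log, conn, b"GET / HTTP/1.1\r\n" + b"x" * 300)
record(log, conn, b"more data")  # ignored: cap already reached
print(len(log[conn]))  # → 200
```

The same idea is why packet-capture tools let you truncate what they store (tcpdump's `-s` snaplen option, for instance), although those truncate per packet rather than per connection.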
The problem is that the law cannot keep up with the speed of technological change. The result is that the courts are asked to rule on outdated laws, ruled on effectively by technology outsiders, and are asked to make reasonable precedent judgements.
It may not be illegal to leave your access point unencrypted, and it may not be illegal to receive unencrypted traffic (I hope this is the case, because an unintended consequence of someone using your network is that it will read the headers of all packets in order to know whether to discard them), even if recording that traffic is illegal. Either way, it must be seen as pretty foolhardy not to take reasonable steps to secure your access point.
There is merit in almost all of the arguments made on this forum, but it is quite clear that the whole situation has so many ambiguities that a reasonable consensus cannot be achieved.
I cannot claim anything like the amount of expertise you appear to have, but having read your comment, I am not sure I totally agree with your conclusions.
Yes, it would appear that representatives of these cultures are attempting to use the western-originated international tools, but I don't think that they fully understand them... yet.
One of the primary cultural differences is that Islam is without borders. Muslims have allegiance to their god above all other things. This leads them to think that if they consider something an absolute wrong, it must be wrong anywhere in the world.
If they totally understood the western world, they would also understand the futility of the action that they are taking. Whether this would stop them, I don't know, but it shows that they have a way to go.
What scares me, and I am not trying to in any way be biased against any person, religion or philosophy, is the slow infection of western-style democracy by a creeping change. It would appear that this could extend all the way to international bodies like the UN. Until recently, I felt that the UN was primarily a forum for discussion, but with the actions in the Balkans, Afghanistan and Iraq, it has become more heavy-handed, with individual countries trying to force the UN to take specific actions. This is a mistake, and if the UN becomes heavily influenced by Islamic states, could backfire on the so-called western countries.
What also worries me is that the world of Islam is fragmented, and this can often cause frictions even in Islamic states (think of the Shi'a and Sunni tensions in Iraq). Having this happen on a world scale could be disastrous.
I do not want to live under Sharia law, and I don't want it imposed on me by any external body or agency. Could I be paranoid? It's possible, but I don't think so. Could this be an irrational fear? I think that this article answers that question. Am I becoming radicalised? I hope not, but I am beginning to get worried about my own state of mind!
Stop the World! I want to get off.
What worries me
is the use of desktop-and-server class processors in something that is built more like a laptop.
If the heat pipes that carry the heat away from the GPU and processor become less efficient with time (as I believe they do), I'm sure you will see these cook themselves, as there is no direct cooling of the metal above the processors.
I've seen this happen with laptops using AMD mobile processors. They just break due to getting too hot once they get to a certain age!
If played at 45 RPM, that will probably be about 3 minutes of music (remember, singles used to be 7", and the inner 3 inches or so were the label, that some turntables won't track). At 33 RPM, you will get more (and not have to change the belt position on a Rega P1), but the quality of 33 RPM singles was always questionable.
To put it in perspective, Bohemian Rhapsody only just fitted onto a 7" single at about 6 minutes long, and some copies skipped from new because the grooves were too close together. My original 7-incher sounds awful compared to the album.
You will need something to fill the hole in the middle, though. A standard autochanger hub is the wrong size.
Bring back the 12" LP! So much more space for the cover art.
The AHA-1542 was an ISA card, and most of the jumper settings were for the base address and IRQ. There really wasn't a better way, because ISA was a limited architecture never intended for server-class systems (too few IRQ lines and no interrupt sharing).
This was the main reason PCI, MCA, EISA and Plug'n'Pray were invented. Definitely not Adaptec's fault. People just remember it more with SCSI cards than anything else, but the same issues were there with sound cards, network cards and a multitude of other adapters that weren't around when the PC was originally thrown together (I deliberately avoid saying 'designed' because I don't think it ever was!)
@AC re. numbers and AT&T
And this is a surprise? It's quite clear that Vista did not cut the mustard for corporates, and 8 months is not enough time for an organization to test, plan and implement Windows 7 (believe me, it's not).
And why should AT&T even consider it when the end-of-life for XP support is published as 2014?
I would be more worried by people still with NT4 and 2000 in their organizations.
FFS, How much do we trust Google
Cloud applications, cloud storage and now cloud printing.
Just hope "Do No Evil" never changes. I would hate for a commercially sensitive or security-classified document to be snooped while passing through the Google print servers just because someone knew no better about how their computer talks to their printer. Such information could be worth very large amounts of money.
And just think if a glitch ends up sending it to someone else's printer entirely!