You're talking chain or band line printers here. I very much doubt that a dot-matrix, even a heavy-duty one like a Printronix, would be able to do more than three-part chemical transfer paper.
When I was working with mainframe band printers, we were using multi-part fanfold stationery with interleaved carbon paper (not chemical transfer paper). There was a machine called a splitter, which would split the copies out and wind the carbon paper up for disposal, while leaving the two split copies neatly folded (at least, if the operator threaded it correctly). For three and more part stationery, it had to be put through further times to split each copy off. Interestingly enough, each carbon sheet had a completely legible copy of what was on the page. We also had authorized cheques with a second carbon copy, but this was for audit purposes.
I was once told that the hood on these fast printers was more than just acoustic protection, because if the band or chain broke, it was moving so fast that it would damage the hood as it flew off. Not something I would like to be hit by.
Where's the old fart icon?
Two types of ribbon setting for Quietwriter
There was the full quality setting that advanced a complete letter at a time, and there was the draft quality setting, which actually only moved a fraction of a character position. This meant that you might get gaps in later letters where there was an overlap, but your ribbons lasted many times longer. This would make it much more difficult (though not impossible) to read from the ribbon.
I think that they were different ribbons, but it may have been a lever setting in the printer. I don't think it was a software setting.
These were actually thermal transfer printers rather than impact printers. This is how they managed to be so quiet. Normal whirring from moving the print head and paper, but printing was silent.
Mine only advanced the ribbon for each letter printed (the ribbon was mounted on the print head), not on a per-line basis, so it was not quite as wasteful as Nuke suggested, although mine was a Quietwriter III or IV and could have been different from his.
Pedantic, I know
but legally, there is a difference between two copies printed at the same time using multi-part stationery, and two copies printed one-after-another. There is no guarantee that the two serially printed sheets are identical, because they could just be one print after another, with the second one slightly different. How would you know unless you minutely compared them?
And yes, I know that the lower copies in a multi-part *could* have been pre-printed, but that is why they come bound together with tear-off sprockets, so that you can tell whether the lower copy has been tampered with.
Plasmas will become increasingly irrelevant as LCD and OLED technologies mature, especially with the LED backlights that are becoming available.
I predict that plasma TVs will be banned because they are too energy inefficient once all filament lightbulbs have been eliminated. Especially if the manufacturers can lobby governments. Soon we will have TVs being replaced every year to meet government carbon emissions targets.
Joke. (I hope)
You're not using HAM or SW radio, and I guess that you don't listen to the air control bands either. Have you checked MW or LW radio reception, which I always found prone to interference? (You may still find these on steam-powered radios, but a lot of radios don't even receive them nowadays.)
There's LOTS of the EM spectrum in the radio bands, and radio, TV, Bluetooth and WiFi only use a very small fraction. Google (sorry, the URL is too long) for "UK_frequency_allocations_chart.pdf" (warning: PDF), and you will find a very interesting wall-chart of the spectrum use in the UK. Try to find the bands you use, and compare them to the whole.
I would have thought that this would have been caught by the Moderatorix.
I'm going to use this as another call to get a reason for rejection added if a comment is rejected (dig, dig.)
Don't get me started.
When trying to fix a problem with a particularly badly sh*gged filesystem late at night, I had the corporate obscenity filters block my mails to and from a vendor support centre because I included a phrase like "I have fsck'd the filesystems, and the problem persists" (btw. I was in phone contact with them as well, but it's difficult to dictate several K of diagnostic data over the phone!)
I had to wait until the following day for the mails to be released when a real person could check the content. Good thing we worked out why the mails were not getting through.
I had a moan at the people running the mail filter who said that because it was a commonly used euphemism, especially in spam emails, it had been added to their blocklist. I then checked over a gig of archived spam from my mailbox gathered over several years (don't ask me why I had kept it, I don't know, but I hadn't run out of disk space at that time) and found precisely 2 uses in many thousand emails. Not so common use, then.
I must admit that most of this thread has followed the normal Windows/UNIX path, leading to name calling.
It is interesting to be reminded of another OS that in its own way has shaped what we have today.
VAX/VMS is an interesting OS, and in many ways my second favorite OS after UNIX. What you have said of VMS is quite true, but some of the assertions you have made about UNIX are wrong.
As someone who learned UNIX back in the late '70s and then took a spell of sysadmin'ing RSX/11M and eventually some VAX/VMS, I agree that the batch and spooling systems on VMS were much better, because DEC had Tops 10 and Tops 20 as a good model to work from. RSX/11M's batch system was not as good as the UNIX at/batch commands, but that is because RSX/11 was not really a general purpose OS. In the UK, RSTS/E was the main commercial PDP/11 OS, and very little of that made it into VMS. If you remember that far back, you will find that VMS version 1 was really just a 32 bit port of RSX/11M, complete with non-hierarchical file system and limited Files/11 support.
As for backup and restore, I'm not so sure that BRU and Backup/Restore were hugely better than Fbackup, Frestore, Finc and Frec on generic AT&T UNIX systems, but those UNIX tools fell by the wayside.
It is quite clear that Files/11 (which was a layered product on RSX/11) and the VAX filesystem (I know it had a name, but I can't remember it at the moment) suited commercial use for VAX, including file and record locking, but that does not mean that there was nothing similar in UNIX. UNIX version 7 included a thing called the "Multiplexed file system", which allowed you to add all sorts of functionality to the standard file system. But the standard byte-addressed file interface allowed you to implement pretty much any functionality anyway, including arbitrarily sized record structures, and there were add-ons like C-isam (OK, 3rd party software), available as a library on most UNIX variants, which was for a time a near industry standard for UNIX.
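Just to illustrate the point, here is a minimal sketch of layering fixed-size records over the plain byte-addressed interface, with nothing but seek and read (modern Python rather than the C of the day, and the record layout is invented):

    import os, struct

    REC_FMT = "<32si"                    # hypothetical record: 32-byte name + int
    REC_SIZE = struct.calcsize(REC_FMT)

    def read_record(fd, n):
        # The kernel only ever sees bytes; the record structure is entirely ours.
        os.lseek(fd, n * REC_SIZE, os.SEEK_SET)
        return struct.unpack(REC_FMT, os.read(fd, REC_SIZE))

    def write_record(fd, n, name, value):
        os.lseek(fd, n * REC_SIZE, os.SEEK_SET)
        os.write(fd, struct.pack(REC_FMT, name.encode(), value))

Indexing, which is what C-isam added, is just more of the same: another file mapping keys to record numbers.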
AT&T's UNIX from System 3 also had mandatory file and region locking for files in a filesystem. This was not carried into the BSD variants as far as I remember, until the SVR4 merged system that provided cross-fertilization between major UNIX variants.
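That region locking survives today through the fcntl()/lockf() interface. A minimal sketch of the idea (advisory mode shown; as I recall, the SysV mandatory mode was switched on per-file by setting the setgid bit with group execute cleared):

    import fcntl, os

    fd = os.open("ledger.dat", os.O_RDWR | os.O_CREAT, 0o644)

    # Take an exclusive lock on bytes 0-99; blocks until any conflicting lock is released.
    fcntl.lockf(fd, fcntl.LOCK_EX, 100, 0, os.SEEK_SET)
    try:
        os.write(fd, b"updated under lock")
    finally:
        fcntl.lockf(fd, fcntl.LOCK_UN, 100, 0, os.SEEK_SET)
        os.close(fd)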
It is interesting that people also overlook the RFS filesystem that came as part of SVR3.2 and later. This was a highly stateful distributed filesystem that implemented 100% of UNIX filesystem semantics, including the mandatory locking protocols. I'm fairly certain that if you came across UNIX from a BSD/SUN route, that you almost certainly never came across this advanced filesystem which again, fell by the wayside.
It is not directly comparable with VAX Cluster, which was a groundbreaking way of making your environment more than the sum of the machines in it, but this was an add-on to VMS, and if I remember correctly, quite expensive for commercial use.
VMS was good. Its DCL CLI was very helpful to novice users, utilities like EVE and TPU were very good for University work, and there was a wide variety of vendor-provided applications. It had the demand paging system that other vendors aspired to. But DCL had its own limits. If I remember correctly, in order to get the argument processing working for your application, you had to produce a prototype file so DCL could parse the arguments for you, whereas letting the application manage its own arguments, as UNIX does, is rather more flexible.
But I would contend that although it was very suitable for many types of work, ultimately it was not as flexible or as widely deployed as UNIX. You could say that the WorldBox MicroVAX II was a microprocessor-based system (I was at the UK site of the world launch event in Harrogate), that there were personal VAXstations, and that there were some very large VAX systems, but UNIX appeared on everything from desktop PCs (like the AT&T 3B1, and even PC/ATs if you count Xenix/286 as a UNIX) through to the largest mainframes of the time from the likes of IBM and Amdahl. And I haven't seen an HPC cluster running VMS, as I have with UNICOS (Cray's UNIX) and AIX (IBM's UNIX variant).
Don't get me wrong. I'm not criticizing VMS. As I said, it is one of my favorite OSs. But it is like comparing apples with pears. They are similar, but there are significant differences which mean some people like apples, and some pears. I would say that there is probably still a place for VMS, but it has become niche, in the same way that genetic UNIX is going. But UNIX has a direct successor that will keep the line going in Linux.
Maybe you ought to be pressing HP to license VMS with an open license. I think that that is the only way to stop it dying a slow and lingering death.
If you're talking about 'O' levels, then you are talking 20+ years ago, as schools have been teaching GCSEs since then.
One has to ask whether your son actually knows how to write grammatically correct and intelligible English, because unless he knows this, trying to teach him to use Word is pointless.
This would be the domain of the English classes, not ICT.
All those many years ago, I remember in English having to read, comprehend, and write relevant comments on a series of articles, which taught me how to use the language. Even though I was not very good at it, it laid the foundation for all of the wordy subjects (History, Geography) as well as a basis for reports on Science experiments.
I think that teaching basic computer use for everybody is a good thing, but there should be a differentiation between this, and teaching Computing as an engineering or technology discipline. This way you would be able to separate the mundane 'using a word processor, web browser and multimedia apps' from the interesting 'what is a CPU, how do programs run, how do you write them, and what is involved in networking'. If you did this, then I believe that the kids with a genuine interest could separate the boring from the interesting stuff, hopefully keeping them engaged.
My youngest kids hate(d) the way that ICT is/was taught, but do actually have a genuine interest in how their computers connect together, and what the basic components are. Indeed, when I built a system from scratch last Christmas, I had a willing audience for almost all of the work I did. Virtually nothing involved with putting the system together and installing the OS was familiar to them, even though they both have studied or are studying ICT at GCSE level. And they are fascinated when I can write a quick program to do something specific, when they cannot see how a spreadsheet, almost the only data tool taught to them, could be applied to a problem.
I must agree that you should have properly trained teachers, at least for GCSE level and up, because having ICT as a second subject will never give the teacher enough background to do more than follow the pre-prepared courses from the syllabus.
I admit to being a little partisan about this, because 25 years ago I taught up to degree level at a UK Polytechnic for a while, and I could see the way that business computing was going at the time.
could well be better. According to another story, a solar plasma aurora storm is due to hit Earth, starting "early in the day on August 4th". I know this is Wednesday, but I believe that the storm will last a while.
If I were in the ISS, and not protected by the bulk of the Earth's magnetic field, I would want to find the most shielded part of the station and chill out for a few days, rather than going on a space walk. I guess that the boffins will have taken this into account.
Anybody got any idea whether the ISS is in a low enough orbit to be mostly protected, or is the space walk scheduled for when it is in the Earth's shadow? (I know some of the remains of the plasma will reach the far side, but it will be less.)
Size of market?
People may say that this perception is because Apple have sold so many iPads, but I wonder exactly how many ThinkPads have been made over the years? I would guess that it also runs into millions.
Data devices into the US
You can just see it, can't you?
Paul Kane rolls up to US Border Control in a hurry to take the key to the "Secure IT data Centre" in the US. USBC take one look at the smart card, and conclude that it might contain terrorist data or pornography.
USBC: Excuse me Mr Kane, could you give me access to the information on this memory card?
PK: I'm sorry, the contents are encrypted, and are actually a security key for DNS on the Internet
USBC: A key for the Internet? You're kidding me. Show it.
PK: I'm sorry again, but I cannot do that, because if I release it to you, it may compromise the security of DNSSEC
USBC: Are you refusing to co-operate, and hand over the keys to unlock the data? I'm afraid we're going to have to take it and give it to our experts in the FBI to confirm there is nothing illicit on this card. We'll get it back to you when we are finished. Oh, by the way, we might damage the data while we are doing it.
A good job the Internet will continue without them!
so long as you have a license for your home. Daft, isn't it? If you don't have a license then you're committing an offense. Ditto the black and white license for a colour television. And it's a criminal offense, not a civil one, so you get a criminal record if caught.
Apparently, the BBC's figures indicate that several hundred thousand people were found to have no or an incorrect license in 2005-2006, and if you have a black and white license, you can still expect a visit from an enforcement officer, not that they can do much.
My thoughts exactly.
BBC World Service
is funded by the Foreign Office, not the license fee.
Depends on the water
If you buy Evian or one of the overpriced so-called mineral waters, you may be right about the price, but might I suggest that you look at the Tesco bottled water at about 15p a litre, or tap water which costs a tenth of a penny a litre in the UK (http://www.water.org.uk/home/water-for-health/healthcare-toolkit/did-you-know).
Bottled water will be filtered, sterilized, bottled and transported, so I don't think that there should be that much surprise in the difference in price, although the cost of the water transport system in the UK is non-trivial.
Remember that besides petrol, other products come out of the refining process, all of which have some value to the oil companies, reducing the cost of petrol at the pump. But the cheapest petrol is still about 1000 times the price of tap water if you include the duty, and merely 500 times the price if you exclude the duty and VAT. Not so cheap actually.
Black and White
So Stuart Leslie Goddard can only see greyscale? (Can't see any evidence of this on the net.) Does this mean that someone is going to continue making black and white televisions just for him? Or does it mean that there is some medical dispensation that allows him to buy colour televisions and only pay for a black and white license?
I could envisage a situation where there was a dispensation, like for the blind who get a half price license. It would be better to have a discount on medical grounds than a license for a type of television that will not exist in a few years' time.
But I have a question. Do you only have black and white televisions, or do you have a colour set for which you benefit from a reduced cost license? If it is the former, then I am interested in where you are going to get your next set from. If it is the latter, it sounds like you personally are benefiting at the license payer's expense (WHY SHOULD *YOU* benefit from a discounted license just because of the unfortunate affliction your wife has?).
So I think my point is still valid, and your wife's situation is the exception.
Has anybody seen a B&W TV with a SCART connector or a built-in Freeview adapter?
Almost all external Freeview boxes will only work over SCART; only one or two out of all of the ones I have seen actually encode the Freeview signal back over the aerial socket.
And if you watch Sky (whose boxes do encode the picture over the aerial) using only a black and white television, you need your head examining (Sky Freesat users exempted possibly)
As a result, will the black and white television license become an anachronism when the digital switch-over is complete? I can see no real need for it any more.
I was paraphrasing the license
I know that you can justify not having a license, and I think I said that. I am just interested in finding out whether analogue only televisions will still be counted as television receiving equipment.
Web site login
To everyone who has suggested logging in to iPlayer to prove that a license has been paid, how do you prevent "account sharing", whereby someone who pays their license fee gives their number, postcode, password or whatever to the rest of their friends and family?
It doesn't even work if you restrict the number of logins to the site, because I have 5 family members in my household who are all entitled to use iPlayer by the license that I pay.
Sky restrict the number of PCs and Xboxes that can be registered against my SkyPlayer account to just 3, and I find this restrictive (plus, it doesn't work for Linux systems anyway!)
I had just such a conversation with the TV licensing people a few years back. In theory you can charge it from the mains, but if you watch TV with it while it is plugged in, it needs to be covered by a separate license. On battery is fine anywhere as long as you have a license for your home.
I was having the conversation about USB DVB tuners for laptops.
The only get-out is if your workplace has paid for its own TV license.
Strangely enough, the person at the other end of the email conversation was quite belligerent about wanting to know my address or the license number.
It would be interesting....
.... to see how many of the "What's on TV is crap" lobby in the UK actually take a stand, and throw out all their set-top, Sky and Cable boxes, and actually go broadcast free.
Unless they do this, then all of their protestations about the license fee being unjustified are just hypocrisy.
I do know two people who have done this on principle, so it can be done.
TV Licensing and enforcement
The criteria for enforcing license collection must be changing with the digital switch over.
Up until now, it used to be that if you had equipment that was capable of receiving broadcast television, the TV license enforcement people assumed that you needed a license, and you had to demonstrate to them (on a regular basis) that you didn't use it for broadcast TV in order to escape from their harassment.
The license used to be required for "owning equipment capable of receiving a TV broadcast". After switchover, TVs with only analogue tuners are no longer capable of doing this, so they should be exempt and classed as monitors.
Will TVs that do not have digital tuners (and where no other digital tuner is in use) actually be exempt from requiring a license? I know that it is almost impossible to buy a TV without a digital tuner now, but I would guess that if you just use such a device for DVDs and videos, TV Licensing should stop bothering households that have not purchased digital receiving equipment. (Ever wondered why you have to prove who you are when buying a TV? It's because that information is fed to TV Licensing by the shop! I even had to do this when buying a DVD player not so long ago, even though that was not able to receive TV, and also for a TV signal amplifier, even though that is technically not television receiving equipment.)
Of course, the intent of the TV license now is that it should be required for receiving broadcast video in real time from any source (as indicated in the article), so the wording of the license must have changed, although I have not read it.
We'll probably get it just in time to offset the rise in VAT!
is not fit for any purpose other than locking internet users in to Microsoft platforms.
And before you quote Moonlight back at me, may I suggest that you see how far this lags behind Silverlight, and how many SL sites actually work with the current version of Moonlight.
I admit that on Windows, SL works well, but that is of little interest to me.
wrote the VP8 video codec. When Google bought On2, they combined it with the Matroska container format and the Vorbis audio codec, called the result WebM, and then licensed the whole under several licenses including the BSD license (although Matroska and Vorbis were already published under open licenses).
So it is not true that On2 invented WebM, although it is true that they wrote the video codec in WebM.
It's not necessary to post a reason in the forum, or to send an email. If you have logged in, and are looking at a comments page, then there will be a "My Posts" link on the right-hand side of the top of the page. You can see there whether your posts have been accepted or rejected. For rejected posts, it would be possible, time permitting, to have the post tagged with a reason for rejection.
It also allows you to review all of your previous posts, warts, typos and all! Wish I could edit out some of the howlers I have made.
Even bigger sigh, but...
I was OK with this answer right up to "be big about it, accept it and move on", which annoyed me because I don't want to spend time writing comments that are rejected if I can avoid it.
I was not asking for a reasoned argument, merely something more like "Too long" or "Libellous", or even "I just don't like your tone". I'm sure that you could come up with a list of about a dozen, and then select using icons or changing the "reject" button to a drop-down selection box. Two clicks rather than one, and you must have thought about a comment in order to decide to reject it.
I fully expect to see this in the reject pile in a few minutes.
I understand the reason why comments are moderated. I understand that the moderators are human, and I also understand that they do not share my mindset, outlook, and sometimes just have shitty days, as we all do.
But sometimes, even on re-reading a rejected comment, it is not clear why it has been rejected.
If a one-word or short-phrase reason could be added during the rejection, possibly selected from a pre-defined list, it would make it clearer exactly what in the post has irked the moderator.
Acoustic power transmission...
...in submarines that are supposed to be quiet to prevent detection! Seems unlikely.
EeePC and Linux
The problem was the distribution used (Xandros - hardly well known), and the fact that they managed to screw up the UnionFS implementation somehow. Every time you wrote something to your home directory, it somehow managed to use space in the read-only UnionFS base image. When you deleted it, the space remained used. The result was that you ran out of space, which you could not fix without re-installing.
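For comparison, this is roughly how a union mount is supposed to behave (a toy Python model, names my own): writes and deletes only ever touch the writable overlay, deletes being recorded as 'whiteout' markers, so the read-only base should never grow:

    base = {"/etc/motd": b"welcome"}     # read-only factory image
    overlay = {}                         # writable layer
    WHITEOUT = object()                  # marker meaning 'deleted in the overlay'

    def read(path):
        v = overlay.get(path, base.get(path))
        return None if v is WHITEOUT else v

    def write(path, data):
        overlay[path] = data             # copy-up: the base is never touched

    def delete(path):
        overlay[path] = WHITEOUT         # hides the base copy without freeing it

Note that even done correctly, deleting a file from the base frees nothing, which is why these setups need spare space in the overlay; Xandros seems to have got even the write side wrong.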
I used Ubuntu 8.04 on my 701 for ages, and with a little tweaking for the slightly odd Atheros wireless implementation, it worked well.
I've got 9.04 currently, and am using it to type this without problems.
Of course, the 4GB internal memory is a squeeze for a full distro, you have to be careful to clean up past kernels and the apt cache, and the processor is a bit slow for Flash video from some sites, but I can work around all of these issues. It's still a useful addition to my available systems.
The whole stack is actually named...
GNU/Linux. This indicates the GNU basic tool set on top of the Linux kernel, which makes it a complete OS, not just a kernel. Read what RMS (oh, sorry, Richard Stallman) says on the subject at http://www.gnu.org/gnu/linux-and-gnu.html.
Education, education, education.
If you think that the 90's and 00's generations are any better educated, think again.
The rot started setting in in the mid '70s when education had to become non-competitive, and we started having a one-size-fits-all, so-called comprehensive system that treated everybody as if they were slightly below average.
Removing competition from the brightest led to bored kids, and teaching over the head of the slowest led to disruptive kids.
It's funny that in today's comprehensive system, they have re-introduced streaming.
I was not suggesting 3G for the smart dust, just the listening point that would receive whatever frequency the dust was using, and re-transmit using 3G. The listening point would not have to be dust-sized, just hidden, and possibly outside of the building.
I admit that small antenna == high frequencies, and high frequencies generally mean high power, but I'm not talking about driving a signal tens of metres, merely to the next smart dust speck, and then to the next (get it, collaborative networking?).
Please actually read what I wrote, because although I am not a chip designer, or a transmission specialist, I believe that I have a reasonable grounding in electronics (it being part of my degree) and computing (ditto), not that you would know that.
Why so futuristic?
OK, design a very small lump of semiconductor that includes:
1. Power gathering components, leaching from leaked magnetic fields around power cables
2. Data transmission components with low power consumption and low transmit power
3. Some form of information gathering components
4. A bit of processing power and a small amount of memory
Arrange these so that they can form a collaborative self healing network that will pass information from one to another.
Scatter them in a building.
Arrange to have a listening point outside (or even inside) the building using a WAN network technology such as 3G to pass the information on.
Three of the four requirements are already met by RFID tags. The only one missing is the data gathering. The collaborative, self-healing network technology is already around, at least in principle (think OLPC, and I'm sure there was a kids' device that used to pass messages in an informal ad-hoc network to other devices in range).
The thing that needs to be improved is the scale. RFID tags are currently too big, their power consumption is still too high, and gathering power still needs too large an antenna.
But these are exactly the problems that more sophisticated technology can fix! If, as predicted, there could be high efficiency wireless LED lighting using transmitted power, which is a technology being worked on at the moment (first link found from Google, there are others: http://blogs.pcworld.com/staffblog/archives/004605.html), there could be ample power around for the smart dust. If the dust particles are mere centimetres apart, the amount of power needed to transmit and receive information would be minimal. And applying the level of mask shrinkage from the current memory and processor programs would make the devices smaller and lower power still.
Disguise the devices variously as skin flakes, human hairs (think of the antenna), bugs, biscuit crumbs, scraps of paper, sandwich wrappers, vending machine cups (Hmmm, thermal power!), all of which are larger than dust and mainly unnoticeable (turn your keyboard over and shake it to see what is there), and Bob's your Uncle (or in my case, Bill's my Uncle).
The only problem then remaining would be the data gathering. I'll leave that as a problem for others.
If I am the first to combine these, and it is enough for a patent application, I claim this as Prior Art!
More specific please
"It's easy to set up now, but WELL flakey in the field, sadly, both Ubuntu up to 9.0 and OpenSuse 11 suffer."
You don't say what it suffers from. And in the next sentence you ask if there is a Linux distro that doesn't.
If by flakey (sic) you mean unreliable or prone to crashes, then I would look at your hardware, because I have used Ubuntu LTS releases since 6.06 and have responsibility for many SuSE systems, and find them all incredibly stable (longest to hand, SLES 10.2 - OK, not OpenSuse, at 344 days uptime).
I'm sure we could answer if we knew what your detailed concerns were.
@Beaker's Love Child
Sounds good in principle, but if you are stuck in town, unable to buy a drink or get a taxi because the ATM and card payment network is down and not going to be fixed until Monday, I think you may think differently.
What you are actually wanting is to pass the baton down to the next set of group B workers, and drop out.
Sounds like they've just lacquered the antenna so you don't touch the metal. If this is the case, I wonder what they will do once it starts wearing off?
Oh, sorry, it's not a problem, as most users will have upgraded to the iPhone 4GS or iPhone 5 before this happens!
Different point of view
It's not about the web, Facebook et al.; it's about a common API to write code to, and a way to deliver applications.
Providing a framework that is common across all platforms, regardless of OS, browser or machine type, is the Holy Grail for application developers. It costs a lot of money to develop an app to run on multiple architectures.
Unfortunately, it would have been better if this had been integrated into the windowing environment, rather than as a layer that sits on top like the current browser based delivery mechanism. What has happened is a last desperate measure to try to wrest control of the user experience away from Microsoft, Apple, KDE and Gnome developers, by adding an abstraction layer above the windowing environment.
Google realize this, which is why ChromeOS effectively eliminates the windowing environment, and is moving this abstraction layer a couple of rungs down the software ladder.
But, unfortunately again, there is still no consensus. We have different people, (Microsoft with .NET, Google with everything they are doing, Facebook etc) all running in different directions, and not talking to each other. The result of this is that we will be left with the browser, with languages like Java, Flash and Silverlight as the lowest common denominators (ah - scratch Silverlight as Microsoft are making it difficult for the Moonlight developers to keep up!) And this will lead us to the same point, just with more layers in the software stack, still with incompatibilities between major offerings.
Of course, the hardware manufacturers and network providers are laughing. As all these extra layers soak up extra CPU cycles, memory and network bandwidth, they see repeating markets for new devices to do effectively the same-old-things. Kerrr-ching!
The web *could* be a valid application deployment method. Google's Application Engine, which allows local caching of applications for when you are off-line, is a very usable technology that reduces the need to be on-line. Similar things could be achieved using Lotus Domino (waaaay before Google even existed) and even AFS or DCE/DFS, so application and data caching is not really new ground.
But using a web deployment method does not dictate having an always-on internet feed. It would be perfectly possible for businesses, and even home devices (think NAS or network media devices), to serve applications within a closed network, without going out to the Internet. In theory, you could also have an app server installed on the same system where the application will run, using a loopback or similar internalized network. The advantage would be a common deployment method; the disadvantage would be increased inefficiency.
I admit that this fills me with dread. I want to get off the continual grind of new-is-better, with its back-door to my wallet, and I really don't want to get to the point where the ISPs can hold my data and computer usage hostage to whatever they want to charge me. I like the idea of a standalone PC with network usage features where I control the access, the available resources, and how the data is used. The current model of a windowing framework that allows native applications suits me just fine, but I'm no longer a typical computer user, if I ever was.
This whole thing is reaching a "Stop the world, I want to get off!" threshold. I'll go and get my coat ready.
Surely it would be cheaper
I know SCO will die eventually, but I would have thought that by now, SCO would be so worthless (SCOXQ capitalization of 1.08M USD at 5C a share) that Novell, or even IBM could buy the remaining stock for less than their legal costs in a retrial/trial (depending on which lawsuit you are looking at).
Of course, this would depend on the expected outcome of any trial as judged by the appointed Chapter 11 administrator, but I would have thought that 50C or even less in the dollar would be attractive to them. Buying 51% at 50C in the dollar would cost about 275K USD, which must be lower than the expected costs. You would have to make sure that any debts are ditched, though.
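For anyone checking my sums, a throwaway Python calculation using the figures above:

    market_cap = 1_080_000    # SCOXQ capitalization in USD, as quoted above
    stake      = 0.51         # enough of the stock for control
    discount   = 0.50         # 50 cents in the dollar

    print(market_cap * stake * discount)  # 275400.0, i.e. about 275K USD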
Once they have done that, it's a simple matter of closing them for good.
Or is there something in Chapter 11 bankruptcy protection that I don't understand?
Alternatively, we could do it! I've a tenner, anybody else interested?
So we know this AC is called Dave. Sarah, you're guilty of leaking information! Quick, phone the Data Protection Registrar.
I think we often forget that the moderators can see the real usernames (and mail addresses) of all of the contributors. So much power for blackmail.......
Damn, think I've just shot myself in the foot. Quick, what past comments have I made as an AC?
Don't understand your comment.
The ponies belonging to my wife, written as 'My wife's ponies'. I'm fairly certain that my apostrophe usage is correct. Or are you thinking that I should have capitalized wife? I don't think it is a proper noun in this context. My grammar is not good enough for me to be certain that I have capitalized it correctly here, but then whose is nowadays (damn, that's probably wrong too).
The comment had nothing to do with the toilet habits of my spouse, who is definitely human, and I cannot see how you could think it did.
Anyway, the ponies live, quite properly, in a field. The comment switched track from the ring of fire, to what ponies (and possibly sheep) might consider palatable. Sorry if you could not keep up. Try some coffee to wake you up.
Learn what goes into curry!
Ah, I think the "ring of fire" is probably caused by the chilli and pepper, nothing to do with cinnamon, cloves, coriander, cumin and turmeric, which are all rather less, um, active. And the flatulence may well be a result of the large amount of protein in the meal, as well as the lager already commented on.
If my wife's ponies are anything to go by, something with a little more flavour than grass may well go down well with the sheep. Garlic (which keeps the flies away) goes down a treat, and they like dried stinging nettle a lot.
Not sure about dairy cattle, though. I've had milk from a cow that ate raw onions. Not good on cornflakes!
GIMP makes perfect sense.
It is an acronym, and stands for GNU Image Manipulation Program. I agree it's not very intuitive, but I'm not sure that I would immediately identify Lightbox as an image manipulation program. I associate lightboxes with tracing and copying, and also viewing X-rays, although I do know that you can use them with positive image (slide) photographic film.
The reason why they go for strange names is mainly because most of the good and obvious names have already been trademarked. If you are a FOSS developer, especially if you are doing it in your own time, the last thing you need is a CEASE AND DESIST notice from a lawyer. It is also popular to either put an acronym in, especially if it is self-referential (like GNU itself), or a G or K to illustrate whether it is built primarily for a GNU or KDE environment. X used to be popular to indicate an X-Windows program. Sorry for that. I guess it's a geek thing.
The ones I cannot really get my head around are things like AmaroK, or Brasero, Audacity (OK this one is probably some form of pun), or even Evolution which look rather random, but I suspect that you will see instances of this in free/shareware on any platform.
It's never going to be able to provide an opaque image
The fact that each 'plane' will be translucent will mean that you are really only seeing layers, not a full 3D image of something. A true 3D image would have a back and sides that would obscure what was 'behind', from whatever angle you look at it. This will also only give an image with correct perspective from the front.
And to be really usable, you would need more than 3 layers, and each layer added would reduce the brightness of the image, because of needing more gaps between the drops of the front layers.
Still, interesting technology. I won't hold my breath for a TV based on this though.
Off topic but...
In these days of frame-buffers and digital TV panels, is there any reason why 1080p is likely to deliver a much better picture than 1080i? I know that you get a full frame at a time rather than the alternated scan lines per frame that interlace provides, but we are not using CRT screens any more.
With CRTs, it was necessary to display each set of scan lines as they were received, which led to a significant flicker due to interaction with mains frequency, and also an apparent vertical twitch of the picture on a TV with a fast phosphor.
With today's digital TVs, you will never draw the scan lines as they arrive; you will just write them into the frame buffer and then switch FBs. If you receive an interlaced picture, you wait until the alternate lines are drawn in, and then display the picture, avoiding the flicker and twitch. The only problem I can see is that the alternate scan lines may be coming from adjacent frames, which would account for the jaggedy look of diagonal edges if you freeze an interlaced picture from a DVD. But software deinterlace techniques appear to be quite good at fixing this.
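The simplest software deinterlacers really are not much more than this toy sketch (Python, rows as lists of grey values, names my own): 'weave' the fields back together, then blend vertically to soften the combing from inter-field motion:

    def weave(top_field, bottom_field):
        # Interleave the two fields' rows back into one full frame.
        frame = []
        for t, b in zip(top_field, bottom_field):
            frame.extend([t, b])
        return frame

    def blend(frame):
        # Average each row with the one above to hide inter-field combing.
        out = [frame[0]]
        for prev, cur in zip(frame, frame[1:]):
            out.append([(a + b) // 2 for a, b in zip(prev, cur)])
        return out

Real sets use motion-adaptive variants of the same idea, blending only where the fields disagree.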
Once you apply this, then the only noticeable effect will be a halving of the frame rate. As the motion picture industry has decided on 1080p24 (24 frames a second) as a digital mastering format, having a transmission frame rate of 25 rather than 50 or 60 will not gain you a better picture (you could claim that 1080p60 as a transmission frame rate would allow less noticeable frame adjustment (twelve extra frames every 48, rather than the 1 extra every 25), but this would be a trivial distinction, and anyway, you would just use 1080p24 (in the HD ready 1080p spec) to match).
I'm sure that I have heard it said that once you get beyond about 22fps, the human eye is not capable of detecting the difference. And I don't want people saying that they can detect flicker on their CRT TVs, as this is almost certainly them seeing the strobing interaction of the TV with the mains lights, or possibly them detecting the blanking interval of the flyback and colour-burst phases of a PAL TV signal, neither of which is present on digital TVs.
If anybody can see the flaws in what I am saying, could they please correct me.
Boundary firewalls nearly always NAT.
The main reason is so that, inside the firewall, you can run the private IP address ranges (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16); otherwise all of your internal systems would need public IP addresses, allocated ultimately from ICANN. There's a toy sketch of the translation below.
Only so-called 'personal' firewalls, which are really connection filters on your PC, do not.
There is also a type of so-called 'transparent' firewall, which filters without NAT.
Generally speaking, pure IP routers never NAT. That is not their function.
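To make the distinction concrete, here is that toy sketch of the bookkeeping a NATing firewall does for outbound connections (Python, hypothetical addresses, ignoring timeouts, checksums and everything else):

    PUBLIC_IP = "203.0.113.1"    # hypothetical address on the outside interface

    nat_table = {}               # (inside_ip, inside_port) -> public port
    next_port = 40000

    def translate_outbound(inside_ip, inside_port):
        # Rewrite the source of an outgoing packet and remember the mapping.
        global next_port
        key = (inside_ip, inside_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(public_port):
        # Reverse the mapping for reply packets; unknown ports just get dropped.
        for (ip, port), pub in nat_table.items():
            if pub == public_port:
                return ip, port
        return None

A plain router forwards packets with the addresses untouched, and a transparent firewall filters them but likewise leaves the addresses alone.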
Thanks for the clarification
I'm intrigued. Is the 32 times actually a documented figure, or from experience? I just took the MTBF at face value.
I still see far too many electronic devices with leaking capacitors. Maybe the designers *are* numpties, or just too cheap to build things properly.
In an amazing coincidence, an hour after my previous comment was posted, my father called me saying "My Dell computer has just stopped working." One 3300uF 6.3V capacitor later (and much swearing after failing to clear the solder from the through-hole), it's running just fine again.