Re: Not lost...
Well, after the type of language used in this article, I'm not surprised it was never sent!
When the IBM AIX Systems Support Centre in the UK was set up in 1989/1990, the standard system on the desks of the support specialists was a PS/2 Model 80 running AIX 1.2. (I don't recall if they were ever upgraded to 1.2.1, and 1.3 was never marketed in the UK).
386DX at 25MHz with 4MB of memory as standard, upgraded to 8MB of memory and an MCA 8514 1024x768 XGA graphics adapter and Token Ring card. IIRC, the cost of each unit excluding the monitor ran to over £4500.
Mine was called Foghorn (the specialists were asked to name them, using cartoon character names).
These systems were pretty robust, and most were still working when they were replaced with IBM Xstation 130s (named after Native American tribes), and later RS/6000 43Ps (named after job professions - I named mine Magician, but I was in charge of them by then so could bend the rules).
I nursed a small fleet of these PS/2s, re-installed with OS/2 Warp (and memory cannibalized from the others to give them 16MB), for the Call-AIX handlers while they were in Havant. I guess they were scrapped after that. One user who had a particular need for processing power had an IBM Blue Lightning 486 processor (made by AMD) bought and fitted.
Though to be truthful, the key action in the Model M keyboards had appeared in the earlier Model F keyboards, and the very first Model M 'Enhanced' PC keyboard appeared with an IBM 5-pin DIN connector on the 5170 PC/AT.
We had some 6MHz original model PC/ATs where I worked in 1984, and even then I liked the feel of the keyboard. Unfortunately, the Computer Unit decided to let the departmental secretaries compare keyboards before the volume orders went in, and they said they liked the short-travel 'soft-touch' Cherry keyboards over all the others (including Model Ms).
As this was an educational establishment, the keyboards got absolutely hammered, and these soft-touch keyboards ended up with a lifetime measured in months, whereas the small number of Model Ms never went wrong unless someone spilled something sticky into them.
I wish I had known at the time that they were robust enough to be able to withstand total immersion in clean water, as long as they were dried properly.
Well, I didn't feel like a coding god, because the first thing that happened in my first job was that I was sat down with an audio training course (on cassette) to learn RPG-II, for which they did not have the books that went along with it.
Up to that point, I'd been schooled in PL/1 and APL, I'd taught myself C, BASIC and some FORTRAN (this was 1981!), and I was reasonably familiar with UNIX already.
BTW, RPG is/was a business language. It stands for Report Program Generator, and it was about as usable as an intermediate-level macro-assembler with some automatic I/O formatting code (a bit like COBOL) added. I believe it's still available in some form.
It's a repeating problem, because IBM continually buys other companies, inheriting the workforce from those companies.
They then have to shed an equivalent number of people, because as a result of employee transfer of rights, they have to keep the transferred people for a fixed amount of time, whether they want them or not.
Some of the people they take on they will actually want to keep, so to keep the numbers basically fixed, they have to shed an equivalent number of people from somewhere else in the business. And, of course, there may be a cost saving favoring getting rid of more experienced, and expensive, people.
Bit-rot is generally a concern for large disk estates, and fundamentally happens all the time. Generally you don't notice it, because the checksum process in the device controller corrects it before sending the data on to the OS. Each block or sector stored on a disk has a significant amount of error-correction added to it, because magnetic media is far from perfect.
Unfortunately, the checksum process is not fool-proof, and multi-bit corruptions that pass the checksum calculations are possible. The more bit-flips and disk read operations that happen, the more likely an undetected read failure is to make it past the controller and up to the OS.
As the number of read operations goes up (because both the speed and the size of storage estates are increasing), so does the chance of undetected corrupt reads, until eventually it becomes a statistical certainty. We are easily past that point with the largest storage systems around (think how big S3 must be).
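To put a very rough number on that scaling, here's a back-of-envelope sketch in Python (the per-bit silent error rate and the read rates are figures I've plucked out of the air for illustration, not vendor specs):

import math

# Back-of-envelope: chance of at least one *undetected* read error.
# The per-bit silent corruption rate below is an assumed, illustrative figure.
SILENT_ERROR_RATE_PER_BIT = 1e-21     # corruption that slips past the on-disk ECC
BITS_PER_READ = 4096 * 8              # one 4 KiB block per read operation

def p_silent_error(reads_per_second, years):
    """Probability of at least one undetected error over the given period."""
    total_bits = reads_per_second * 3600 * 24 * 365 * years * BITS_PER_READ
    # P(at least one) = 1 - (1 - p)^n, which is ~ 1 - exp(-n*p) for tiny p
    return -math.expm1(-total_bits * SILENT_ERROR_RATE_PER_BIT)

print(f"one busy disk, 200 reads/s for 5 years : {p_silent_error(200, 5):.2e}")
print(f"huge estate, 1e9 reads/s for 5 years   : {p_silent_error(1_000_000_000, 5):.0%}")

The exact numbers don't matter; the point is that the same per-read chance that is utterly negligible for one disk becomes a near certainty once you multiply it by an S3-sized number of reads.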
Because magnetic devices (particularly) can have magnetic domains (bits) that become marginal and flip state, both while the device is in use and when it is idle, due to environmental issues, many of the more sophisticated disk controllers reduce this risk by periodically reading and writing back all data on the disk, so that any bits that have flipped are written back correctly with new checksum information. This keeps the number of flipped bits down and gives higher confidence that the data read is correct.
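In spirit, such a scrub pass is nothing more complicated than the toy sketch below (illustrative only: real controllers do this in firmware, per sector, with proper ECC rather than a CRC32):

import zlib

# Toy model of a disk: each "sector" holds data plus the checksum written with it.
def write_sector(disk, lba, data):
    disk[lba] = (data, zlib.crc32(data))

def scrub(disk):
    """Read every sector, verify it, and rewrite it with a fresh checksum.

    Rewriting refreshes marginal bits before they accumulate into an
    uncorrectable (or undetectable) error; sectors that no longer match
    their checksum are reported rather than silently passed on.
    """
    suspect = []
    for lba, (data, stored_crc) in disk.items():
        if zlib.crc32(data) != stored_crc:
            suspect.append(lba)            # real firmware would attempt ECC correction
        else:
            write_sector(disk, lba, data)  # refresh the bits and the checksum
    return suspect

disk = {}
write_sector(disk, 0, b"payroll records")
write_sector(disk, 1, b"more records")
data, crc = disk[1]
disk[1] = (b"mose records", crc)           # simulate a single flipped bit ('r' -> 's')
print(scrub(disk))                         # -> [1]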
Bit rot in flash devices is countered by similar processes, but it's more common that once flash cells are damaged, the whole block has to be replaced from the spare list, and this can make flash storage devices appear to fail completely and suddenly once sufficient failures have accumulated.
I do not profess to have your level of experience, but I did receive some training on SS7 when I worked for a telco technology company in the '80s.
I believe that in data transmission on physical lines, most SS7 hardening is 'armadillo', i.e. boundary protection with not so much once you get into an operator's internal network. SS7 controls call routing through a network, so if you have access to the internal network and can inject false routing information using SS7, it would be possible to re-route calls through routing nodes that you control, and thus potentially eavesdrop on the conversation. It would not surprise me if the TLAs in the US use this mechanism in US telephone operators' networks.
Of course, back when it was created, the concept of miscreants getting access to the internal network of an operator was considered unlikely, so there was no reason to think about security for SS7.
I'm sure Microsoft will use this to try to drive down the price they pay for the Intel processors they buy for Azure. After all, it would be a shame if Intel lost one of the larger Cloud platforms to another processor.
Whether it will make Intel processors any cheaper for the rest of the world, well, we'll have to see.
I think the Ryzen announcements may do more to Intel's pricing than Windows on ARM, however.
... that before it was called IBM Spectrum Scale storage, IBM Elastic storage, or General Parallel (I think) File System, GPFS was actually called the Multi-Media File System?
The evidence is quite clear because, as per normal, even though the name of the product has changed, the names of the commands within it haven't.
A huge number of the commands you run to configure and control GPFS start with mm-: things like mmlsfs, mmlsconfig etc.
The original product was developed to provide a multi-server, striped, scalable and reliable filesystem for the IBM SP/2 Scalable Parallel systems (sometimes called supercomputers, often known as LAN-in-a-can clusters), when IBM tried to sell them as media storage and delivery systems for what was then an almost non-existent on-demand video market. This was in the mid-1990s, before the likes of Netflix even thought of an over-the-net video delivery service, and when Amazon was just shifting books.
Normally that site I was talking about has a shred policy, but they gave an exemption because we were able to prove to the satisfaction of the security team that once the disks in the RAID sets were scrubbed, juggled, per-disk scrubbed, and the RAID configuration and disk layout mapping completely destroyed, there was effectively no way of re-constructing the Reed-Solomon encoding (no data on any of these RAID disks was actually stored plain, it's all hashed).
And actually, the grading of the data was no higher than Restricted even by aggregation, and the vast majority was much lower or unclassified (intermediate computational results that would mean nothing to anybody outside the field, and not much to those in it), so sign off was granted.
Also, the cost of shredding 4000 or so disks was considered exorbitant, and would probably have taken more time than the rest of the decommissioning.
I used to run HPC clusters where doing this on the compute nodes would not have been quite as catastrophic as on a normal system. They would probably have rebooted OK.
The reason for this is that / was always copied into a RAMfs on boot from a read-only copy, /usr was a read-only mount and most of what would normally be other filesystems were just directories in / and /usr. It's true that /var would have been trashed, and any of the data filesystems if they were mounted would also have gone, but the system would have rebooted!
On a related note, when the clusters were decommissioned, I was the primary person responsible at all stages of the systematic, documented and verified destruction of the HPC clusters. It ranged from the filesystems, through to the deconstruction of the RAID devices and scrubbing of all of the disks (about 4000 of them), the destruction of the network configuration and routing information, deleting all of the read-only copies of the diskless root and usr filesystems, even as far as the scrub of the HMCs disks (it's interesting, they run Linux, and it was possible to run scrub against the OS disk of the last HMC [it was jailbroken], while the HMC was still running!)
The complete deconstruction, from working HPC systems to them being driven away from the loading bay took 6 (very long) working days, and finished with a day's contingency remaining in the timetable.
So I am one of a relatively small number of people who can claim that they've deliberately, and with complete authorization, destroyed two of the top 200 HPC systems of their time!
I had real mixed feelings. It was empowering to be able to do such a thing, and upsetting, because keeping them running was almost my complete working life for four years or so.
I want to know why this information is being sent even if the device is not triggered.
I don't understand why the alert phrase is not identified locally to switch on the recording. I mean, recognizing one of three words to activate the device is not particularly difficult, and providing it worked as advertised, would prevent Amazon recording things other than what's intended.
In fact, I would prefer that the majority of the voice recognition was done locally, so there would be a chance that they could do something useful even when not connected to cloud services. Make them use my NAS or music server to find media, use a local calendar, and only go out to the 'net when a request could not be satisfied locally.
But I suspect that one of the primary reasons these things exist is to get people used to an always connected house.
Chances are the clock in a mechanical timer is an electric one. When the power goes out, the clock stops. When it comes back on, unless you are exceedingly lucky and have had a multiple of 12 hour (or 24 hour if you have a 24 hour clock) outage, the clock will be wrong and you will need to set it.
But it's usually a matter of turning it until it's correct again.
I think that this assumes that the majority of the value add for Apple products is due to the design work (IP) that is done during product development.
It totally ignores the value add associated with taking raw materials and manufacturing them into the finished devices.
It also ignores the value add of the marketing and distribution network, although you could say that it did include the premium that people pay just to buy an Apple device.
The IP argument is really a diversionary one, because it assigns a value to a largely intangible asset. This allows them to claim that the majority of the cost is an arbitrary value that they can essentially say comes from the lowest tax jurisdiction they can find.
IIRC, Starbucks did something similar by using one of their hierarchy of companies in a low-tax jurisdiction to buy coffee on the open market, and then sell it to their operations in other countries at a stupid markup, along with licensing charges for branding. This allowed them to move profits to the low-tax jurisdiction and claim that in most countries their profit levels were so low that they did not need to pay much corporation tax. This became even more offensive when you consider that the coffee never went near the country that supposedly added to its value.
What did the Cayman Islands actually add to an iPhone besides being the arbitrary 'owner' of some IP?
I like all the references, but you're wrong about Thunderbird 1 (and Thunderbird 3).
They both land tail first back at Tracy Island, and what's even cleverer, they managed to suck in the smoke!
But that's easy when you run the film backwards, a trick AP Films did more often than I would have wished. I guess that it's easier to pull a model up than to let gravity have its way when trying to lower it.
I could probably dig out the names of the episodes when both were seen, but then I am a bit of a Gerry Anderson geek!
I was really surprised, when I saw the original Falcon take-off, hover and landing tests, by how much it looked like an AP Films sequence!
The Thunderbirds effect/sequence I was most terrified, and then later impressed, by was in the episode "Terror in New York City", where Thunderbird 2 had to make an emergency landing after being attacked by the USN Sentinel (bloody Yanks!). That was some serious special effects and model making, even by today's standards. I remember being horror-struck when I saw it as a very impressionable young child in the 1960s.
I wonder whether the model makers had any qualms about dirtying up one of their frequently used models in order to film the sequence. If any of them read this, I would love to know.
I have a copy of Atari Arcade hits for Windows, but it's a bit flaky under Virtual Box (I never really bought into Windows, long term Linux and before that UNIX user).
But it's not the code. The Linux version of Mame is pretty good, and runs the original ROMs. It's the hardware that's the problem. You really relied on the momentum of the huge trackball for the missile sweeps. It's not possible to do the same with a mouse, and the desktop trackballs are too small!
It was one of the two games I was good at (the other being Battlezone). I used to be able to make a single game last 15 minutes or more, and clock up scores around the 350,000 mark. I could normally get into the top 10 on any machine I came across, and jockeyed for the top spot on the machine I played most frequently (if anybody is interested, the initials I used were PCG).
One day back in the early 1980s, I went to my local arcade. There, on the machine I was most familiar with, was a new guy playing.
He was soooooooo much better than anybody else I had seen, and better than me by a mile! He could hit the really crazy smart bombs that appear in the later screens, and the low altitude bombers and satellites as well. He lost cities, but more slowly than he earned them (and the machine was set to only give cities at 15,000-point intervals IIRC).
I watched him play a single game for about 40 minutes or more. By that time, the colours had cycled through all the outrageous combinations, some so bright that the screen was dazzling, with red, purple and black on a white sky being one I particularly remember. The missile patterns reached what must have been their most difficult, but he could cope. He clocked the score counter (I can't remember what it wrapped at, but it was in the tens of millions).
Eventually, and with cities stacked across the bottom of the screen still, he got fed up, and just walked away from the machine. I never saw him in the arcade again!
It really was a pinball wizard moment.
I stopped playing arcade machines shortly after that, because I knew I could never be as good as that guy. I will occasionally play one if I find one in good working order (very, very rare nowadays and you just don't find the heavy trackballs to play on a PC under Mame), but my playing days are over. Anyway, arcades are now mainly penny falls and fruit machines, and what video games there are are all driving, cycling and shooting games.
A lost era!
Well, I suppose so, but they are some pretty impressive cables, in both number and type, and it's notable that there are no separate network switches for the Aries (they're integrated into the compute nodes themselves). Some of the cables are trunked into solid connectors for ease of maintenance. Not as well engineered or as 'pretty' as the IBM system IMHO, but...
For most large HPC systems, the interconnect is far more interesting than the compute capability. My point was that the Sonexion storage, although it has lots of lights, is architecturally probably the least interesting part of a Cray.
Also, after the photo was taken, there was custom artwork attached to the compute rack doors. I believe there is a time-lapse set of pictures on the Met Office web site that shows the artwork being attached to one of the clusters.
...that this stock picture, taken at the Met Office sometime in 2015, is focusing on the Sonexion storage subsystem of one of the smaller of the Cray systems there, and that the racks of the compute nodes are behind the photographer.
So what you have is a picture of a bunch of Dell servers and Xyratex (Seagate) disk shelves linked together by only moderately interesting InfiniBand, and running Lustre.
The more interesting compute part, including the Aries interconnect, is not visible.
The IBM 9125 F2C that can still be seen in the background of the picture was a much more interesting system IMHO, but I'm biased, because I used to support those systems!
Most of the time, cases only make it to court in the UK if there is a very good chance that the accused will be found guilty, so a significant number of the cases that make it to court will end up with the accused pleading guilty anyway.
If you can reduce the cost of this process for both the accused and the court system, it looks like a win-win situation to me. Just as long as those who think they've been unjustly accused still have access to the court system if they want.
??? - Of course there are still costs.
The offense still has to be written up, and actually entered as an offense. A case still has to be made before it could be prosecuted. The evaluation of whether a case is likely to succeed if taken to court still has to be made.
I agree that the costs should be fairly minimal, but they are still costs.
But to my mind, this new system is really intended to offer people who know they have committed a crime a way to admit to it and go through the justice system without having to go to court, reducing the cost of the whole procedure and saving precious court time.
We already have such a system for traffic offenses. You get caught speeding bang to rights, you can offer to pay the fine, accept the points, and never see the inside of a court room.
In motoring offenses, if you think you're not guilty, you can still opt to go to court, plead your case and let the magistrate decide whether you're guilty or not. The way I read this, you will be able to do exactly the same for a number of other minor offenses.
The difference here is that they can be minor criminal offenses, but still, probably ones that would only result in a fine, not a custodial sentence. What's wrong with that?
If you know you're guilty, plead guilty through the web site, and avoid a physical court case. If you think you're innocent, or you've got a chance of getting off, take your chances in court.
It's not as if the computer will be deciding guilt, taking the place of a magistrate, judge or jury.
The only way this could be seen as disruptive to the justice system is if you are encouraged to plead guilty when you're not, merely to reduce the financial burden. That really would be unjust!
Exception, maybe, but I worked with a team out of Poughkeepsie (not one of the core hubs, but the location is vital for maybe the only profitable hardware segment left in IBM), and all I can say is that from the UK, the only time that you could tell that people were remote was if you heard doorbells or pets in the background on the conference calls.
I and the customer got excellent support, and often it allowed me to talk to the people I needed at stupid-o-clock in the morning their time, and get meaningful help from them, because they had full office setups at their houses. Their responsibility spanned the entire globe, so office hours for them were pretty much non-existent.
These were committed professionals who were prepared to fire up their systems in the early hours of the morning, give advice, and then go back to bed for an hour or two before getting up to do their normal job. I'm not sure they would have been prepared to get in the car and drive to the office to check out a problem. And they were of a level (senior development engineer or higher) who could not charge overtime or standby!
It also allowed for the Power HPC team in Poughkeepsie to have a team leader working out of Austin, the home of Power development, so that cross-location collaboration could actually work. (BTW, the Power IH systems were put together by an associated team of Mainframe development in POK using a lot of mainframe technologies like water-cooling and high-density power distribution, rather than Austin).
I can see this new way of working alienating a huge number of very experienced engineers, to the detriment of IBM as a whole.
Well, I suppose I'd better come clean, because the version I actually used was Mint Debian edition, where you don't use the Ubuntu repositories.
I liked the idea of no dist-upgrades and a rolling upgrade policy, but did not like the fact that all the default installed tools had different names (which was important when you start, for example, gnome-terminal from the command line), nor the (irrelevant in this case) fact that the packages in the Debian repositories are frequently rather old.
It's my laziness, I admit.
I do sometimes wonder, now there are more usable official editions of Ubuntu (like the MATE and Gnome edition), why the derivative Mint distros are still as popular as they are.
It's not that clear.
I declined to use Unity on Ubuntu, but not by switching to MATE; instead I used the Gnome Fallback session, which actually looks and feels a lot like Gnome 2. I found that I preferred to continue to use Ubuntu rather than one of its derivatives, mainly because of the additional Gnome tools that you would otherwise need to find alternatives for.
The reason I didn't like Unity on a desktop/laptop is that the early releases made it awkward to have more than one window visible on the screen. Applications would open full screen, and often, trying to open a second instance of an application dropped you back into the first instance, rather than opening a new one. It followed the Mac idea that window controls were best on a bar at the top of the screen, rather than attached to the window. In early releases this was take it or don't use Unity - there were no configuration tools to change the behavior.
These issues can now be configured, so I can at least use Unity on my laptop, but I still prefer not to.
But that's not my whole story. When I needed a second mobile phone because of poor network coverage in two locations where I spent significant amounts of time, I decided to get a second-hand Nexus 4 and put Ubuntu Touch on it.
Unity on this platform makes a lot of sense, and once you've got to grips with right, left and top swipes, it's a very suitable platform for devices where you only have one application visible at a time. I would actually think about using it as my primary phone, if I was not so attached to some of the Android apps. I would be very interested to try it out on a tablet, if only there was a reference hardware device available at a reasonable price second-hand that had a current build.
I remember seeing an astounding piece of ASCII art in the early 1980s. It was a picture of a mountain climber hanging off a cliff, printed on several lengths of 132 column line printer paper. The whole picture was hung on a wall, occupying something like a 6x4 foot space on the wall (I may have the dimensions over-blown due to poor memory, but it definitely filled a large part of the wall).
I believe that it was printed from a card-deck, with just enough JCL to directly print from the deck to the line-printer.
Apparently, printing it on the University's central line printer was banned, and several people got into real trouble, and had their copy of the card deck confiscated when trying to print it.
What you are noticing on the T420 is one of the effects of higher resolution display panels, exacerbated by 16x9 screen aspect ratios.
Back in the days of the Thinkpads up to the T60, the screen resolution for most systems was 1024x768 (obviously there were laptops with higher resolutions, but this was a common panel resolution).
Being a 4x3 aspect ratio, the 768 pixels were spread over ~8.5 inches, giving you a DPI of about 90. The T430 I am currently using is 1600x900, and the vertical height of the screen is ~6.7 inches, giving a DPI of around 134.
So you have 132 more vertical pixels in about 1.8 inches less space. If you do not tell the software that it has a higher resolution screen, it will by default choose the same font size (measured in pixels) as it did before, making the characters only about half the size they would be on an older Thinkpad, and the web site you are looking at has no way of knowing that the font size it's using comes out smaller.
So, it's not only (or, I would possibly say, at all) your eyes that are making the Register more difficult to read.
Back in the days of X11, the DPI setting of the screen was set (in X.org, you have to create an xorg.conf and override the DPI setting) so when you selected a character point size (like Courier 10), you actually got the characters approximately the same size on screens of different resolutions. Point size should be measured in 1/72" (in modern typography) units, and should allow resolution independent character sizing!
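Putting rough numbers on that in Python (the panel heights are the approximate figures quoted above):

# Approximate figures for the panels discussed above.
old_dpi = 768 / 8.5          # ~90 DPI: 1024x768 on a 4:3 panel
new_dpi = 900 / 6.7          # ~134 DPI: 1600x900 on a 16:9 panel (T430)

# A font at a fixed *pixel* size shrinks physically as the DPI goes up:
font_px = 13
print(f"13px text is {font_px / old_dpi:.3f} inches tall at 90 DPI, "
      f"{font_px / new_dpi:.3f} inches at 134 DPI")

# Point sizes only behave if the software knows the real DPI:
# pixels = points / 72 * DPI, so 'Courier 10' needs more pixels on the denser panel.
point_size = 10
print(f"10pt text needs {point_size / 72 * old_dpi:.1f}px at 90 DPI, "
      f"{point_size / 72 * new_dpi:.1f}px at 134 DPI")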
I have often commented on the apparent pointlessness of full HD or higher screens on laptops or 'phones, and this just makes my point IMHO.
These were probably 5.25-inch half-height 1GB 'Spitfire' disks.
You are close, but the wrong lubricant was used during manufacture, and it vaporized when the disk was spinning, and condensed on the disk surface when the disk cooled down. When the disk was powered down, the head was parked in contact with the landing zone on the platter, and promptly stuck enough so that the motor could not get the disk to start spinning. A quick tap would free the head, and allow the disk to spin.
The condition was termed 'Stiction' (portmanteau of Stick and Friction), and IBM had a recall on all of the disks, although they would only be replaced when they failed to spin up. The replacement had to come from a pool of disks specifically for warranty replacement of this problem, so when a CE came across such a disk, he generally 'fixed' the disk, and then ordered one of the replacements and arranged to come and fit it. In some customers, the disks were never replaced, because scheduled maintenance was difficult to arrange.
The spin-off of Lexmark from IBM happened before inkjet printers hit the mainstream.
IBM did have some inkjet printers before Lexmark got split off, such as the 4079 Postscript inkjet printer, but the cost of running one of these was astronomical. But colour printers were pretty rare at the time.
The earliest colour inkjet printer I used was in about 1985, branded Integrex or something similar, although it was apparently a badged (and possibly re-ROM'd) Canon PJ-1080A. It appeared to print one line of the image at a time (really, as if it only had one nozzle for each colour), and as a result was abysmally slow.
I believe that it is only the inkjet market that Lexmark left. They're still making laser printers.
Epson do not want you fixing the printers, so do not tell ordinary people how, but the procedure to remove the head is quite simple, and the heads are pretty robust. It's easily within the ability of anyone with a few basic tools, a fairly steady hand and a bit of patience to clean the print heads.
There are also procedures on the 'net to reset the cleaning cycle count that says when the ink sponge is full.
After a number of refills, the re-manufactured HP cartridges will stop working because the print head is quite fragile. I would expect properly maintained Epson printers to still be running as long as you can buy ink to refill the cartridges.
I did once fail to get an R1800 working, because one of the blacks (that printer has two different black cartridges, plus a gloss 'finisher') was completely blocked, such that the cleaning solution could not get in to dissolve the ink. I left it soaking for several days, and it made no difference. The guy in the shop who asked me to look at it said that he didn't know whether it had ever worked, because he had taken it back from a customer under warranty, but never returned it to Epson. They then left it in the workshop for over a year before looking at it!
Lots of videos of it on YouTube. This one looks appropriate for you.
Basically, don't take the printer apart. You can do it with just the top cover opened, and it only takes about three minutes to take the head out. The head is a ceramic plate underneath the print cartridges.
Take the cartridges out, and you should see a number of clips/plates holding a couple of ribbon cables in place on the RHS of the print head carrier. Release the clips.
Then find the clips at the back that hold the contact cradle that the cartridges sit in, and release them.
You should then see three screws holding the head in the carrier. Remove them, and the whole head assembly should lift out of the carrier.
You can remove the cables, but remember what went where.
Reassembly is the reverse procedure.
It is possible to clean the head in the printer using a syringe, fluid and some plastic tubing, but it's a bit messy.
Please note. You do this at your own risk. I don't offer any warranty on this.
I actually try to keep older Epson printers running, merely because the third-party inks are so cheap, as the cartridges are pretty much ink buckets, with little or no electronics in them.
I currently have a Stylus Photo 1290 A3 printer as the volume printer, because I can get five sets of compatible cartridges for the price of one set of Epson originals, and as this is mainly used as the volume printer rather than for quality this is not an issue (though the quality is not bad either, even with the re-manufactured cartridges I use). It's attached through a NAS, so is on all the time for whoever wants to print in the house.
The real problem now is the Windows drivers. There are none published for Windows 7 or later, so you have to jump through hoops to install the XP ones, which do work OK on Win7; I've not tried on later versions.
It's still usable from Linux without problems, however.
I actually thought that Lexmark had left the consumer inkjet printer market, so this appears a stupid thing to try to defend.
Indeed, I tried to find some print cartridges for a rather neat 15x10cm Lexmark portable photo printer recently (did I really say "rather neat" and Lexmark in the same sentence?), and there was *NOBODY* stocking original cartridges, and only a few people able to supply re-manufactured cartridges.
Maybe they're trying to expunge any reference to the fact that they were in the inkjet market at all by preventing re-manufactured cartridges, causing people to throw them out. They really need to, because their products were pretty crap. Even the aforementioned photo printer had to be repaired, because the nylon drive gears on the lower paper advance had shattered due to age (amazing what little neoprene O-rings can be used for).
The nice thing about Epson print heads is that they're very well made, and can be manually dismantled and flushed with a syringe if you really need to. You have to be a little bit careful not to get the small amount of electronics attached to the head wet, or if you do, make sure that they're rinsed with clean (preferably distilled) water and then thoroughly dried before reassembling (I've heard reports of the magic smoke escaping from print heads that have been assembled without checking they're perfectly dry!)
The solution I've used to flush heads has been distilled water with about 10% isopropyl alcohol. Some online resources suggest you should use pure isopropyl alcohol, but I've never had a problem.
I've seen people use liquids as mundane as cheap spray window cleaner, but I'm a bit worried about the surfactants that are added to these solutions.
I'll bite. I think the statement about the car not surviving is reasonable.
The velocity of the fuel is based on the aperture of the nozzle and the amount of fuel that would need to pass through it. Moving fuel to a ship is performed using multiple lines all of which will be wider than a car fuel nozzle. If you try to move the same amount of fuel through a narrower pipe, the velocity will have to be much higher (hence the Mach 2 figure).
Comparing this to water-jet cutters, they typically use a similar velocity, and although they normally have an abrasive in them to cut steel, fuel tanks in Range Rovers are probably made from a poly-something plastic.
As a result, I would expect a Mach 2 liquid stream to be able to cut through the car's fuel pipe, tank, and almost certainly through other parts of the car.
And this is ignoring the simple fact that this amount of fuel moving at Mach 2 would have a huge amount of kinetic energy, which would have to go somewhere if it were to go from Mach 2 to rest in a short distance. I estimate that ~100 liters of fuel (Range Rover TD6) would weigh only a little less than 100 kg. Mach 2 is about 686 meters per second, so it would have a kinetic energy (E = ½mv²) of around 23.5 MJ. This is a lot of energy to dissipate.
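A quick sanity check of those figures in Python (using the rough mass and speed above):

# Rough figures from above: ~100 kg of fuel moving at Mach 2.
fuel_mass_kg = 100
speed_of_sound = 343                                 # m/s at sea level
velocity = 2 * speed_of_sound                        # ~686 m/s

kinetic_energy = 0.5 * fuel_mass_kg * velocity ** 2  # KE = 1/2 * m * v^2
print(f"kinetic energy: {kinetic_energy / 1e6:.1f} MJ")   # ~23.5 MJ

# All of that has to be dissipated over a very short distance if the
# stream is brought to rest inside the car.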
As a result, I would not expect the car to survive.
I call BS on this as well.
X11 has the concept of a window hierarchy. Starting at the root window, which IIRC always has window ID 0, it is possible to traverse the complete hierarchy, obtaining each window's ID, the name of the application and its window name, its colour depth, position and hints.
Find the xprop and xwininfo binaries, run them, and point each at a window. Everything that is printed has been obtained through the X11 properties of the window. kwin is acting as an X11 window manager, so automatically has access to all this information.
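As a rough illustration of how little is needed to walk that hierarchy yourself, here's a sketch using the third-party python-xlib package (pip install python-xlib) instead of the xprop/xwininfo binaries; it only pulls a few of the many available properties:

from Xlib import display

def walk(window, depth=0):
    """Recursively print each window's ID, geometry, colour depth and name."""
    try:
        geom = window.get_geometry()
        name = window.get_wm_name()
        print(f"{'  ' * depth}0x{window.id:08x} "
              f"{geom.width}x{geom.height}+{geom.x}+{geom.y} "
              f"depth={geom.depth} name={name!r}")
    except Exception:
        pass                              # windows can vanish while we walk the tree
    for child in window.query_tree().children:
        walk(child, depth + 1)

d = display.Display()                     # talks to the same X server kwin manages
walk(d.screen().root)                     # start at the root of the hierarchy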
I don't know how this will be altered in Wayland or Mir, but X11 (either in its MIT, XFree86 or X.org guise) has been the standard windowing framework on Linux and UNIX (the exceptions being very old Sun and Apollo systems [if you remember them - although not strictly UNIX], and Mac OS X) for a very long time.
I don't know whether it's still true, but a PDF effectively used to be encapsulated PostScript, which allows very flexible device independent formatting, including embedded fonts, bitmaps and vector drawing capabilities.
When it was first deployed, it used to be set up so that documents could be immutable, i.e. not changeable by the recipient, so that you could be sure that what you saw was what the creator wanted you to see.
Of course, that did not suit everybody, so now PDFs are as editable as any other document format, and can even be used to produce forms that can be filled in and returned as a PDF.
If I remember my A-Level and university maths, what a vector represents is dependent on the number of dimensions you're working in.
If you are working in one dimension, a vector and a scalar are the same thing. In two dimensions (the standard environment when you are learning vectors IIRC), a vector is normally described as a one by two array in a cartesian co-ordinate system, or a scalar and an angle in a polar co-ordinate system.
In three dimensions, a vector will be a one by three array in cartesian, or a scalar and two angles in a polar co-ordinate system.
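In code, the one, two and three dimensional descriptions are just different coordinates for the same thing (a minimal sketch):

import math

def cartesian_to_polar(x, y):
    """A 2-D vector as a scalar magnitude plus one angle."""
    return math.hypot(x, y), math.atan2(y, x)

def polar_to_cartesian(r, theta):
    """Back to the one-by-two array (cartesian) form."""
    return r * math.cos(theta), r * math.sin(theta)

r, theta = cartesian_to_polar(3.0, 4.0)
print(f"magnitude {r}, angle {math.degrees(theta):.1f} degrees")  # 5.0, 53.1
print(polar_to_cartesian(r, theta))                               # ~ (3.0, 4.0)

# In one dimension the 'vector' collapses to a single signed number (a scalar);
# in three dimensions you need three components, or a magnitude plus two angles.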
I'm sure that some theoretical physicist or mathematician will point out that they work in more than three dimensions!
So the upshot of this is that if you are working in one dimension, taking the path of the asteroid as a dimensional frame of reference, the velocity, even if treated as a vector, can be considered the same as its speed, and this is what most lay people will count as a velocity.
Of course, celestial mechanics is never that simple, and is normally in at least 4 dimensions.
Your plan does not take into account the restricted spectrum that I mentioned during the transition. I still think it is unlikely that extra spectrum will be allocated during the switchover. Maybe Norway will, but I'm pretty certain that Ofcom in the UK won't.
You also assume that people are happy to replace functioning equipment after a number of years. I will and do operate kit until it breaks (and if I can, I fix it when it does break), so I expect a DAB radio to last me 10+ years (my oldest DAB radio is about 12 years now, and still functioning). Even at this age, I would be upset about being forced to replace it.
I know a significant number of people who objected to buying new TVs or set-top boxes in the UK when analog TV was switched off.
I totally agree that DAB was rolled out in the UK too early, but it's always difficult to change things once they're generally (if you can say this about DAB) adopted.
Switching to DAB+ will be disruptive and expensive for those people who have already forked out for kit, and will be disruptive because they will have to reduce the available channels for DAB while they transition to DAB+ (they're not going to allocate any more spectrum during the roll-out).
Even if they offer a subsidy on new kit, I'm a skinflint, and don't want to re-buy, even at a discount, replacements for the 5 DAB radios I already have.
Mind you, I don't listen to it much at the moment, because the coverage for my current commute (the time I use DAB most) is very patchy.
But I think DAB is dying in the UK. Some of the channels I used to listen to have left DAB as a platform, because (I understand) the cost of operating a DAB station is of the order of a million pounds per year, whereas transmitting over the Internet is much lower, and if you can get DAB somewhere, you're probably also able to get a reasonable mobile data service. This shifts the cost of a broadcast service from the provider to the listener. I object to this (did I say I was a skinflint?).
I think that by the time they are prepared to suggest a switch to DAB+, there will be no appetite for any over-the-air digital broadcast radio service any more.
But I do believe that there is still a place for analogue radio. It's still the best coverage, the best in terms of battery consumption for mobile devices, and the most widely adopted. I also think that it has a place in civil defence, because in the case of some national emergency, the digital infrastructure will be one of the first things to be affected. Operating an FM (or even AM) service is within the reach of a reasonably competent tinkerer in electronics using readily scavenged components, whereas digital broadcasting requires much more sophisticated knowledge and infrastructure.
This original law was drafted before the referendum, when the government thought that they would remain in Europe, so they should have expected to have this type of battle on their hands.
As such, it's got sod all to do with the result of the referendum, and much more to do with the fact that Home Secretaries (and I include Ms. May in this category) think that they have valid reason to ride roughshod over the privacy of Her Majesty's subjects. This has been true whatever political colour they have been. Remember Wacky Jackie?