* Posts by Peter Gathercole

2610 posts • joined 15 Jun 2007

I was authorized to trash my employer's network, sysadmin tells court

Peter Gathercole
Silver badge

Re: @Peter ... rm -fr @IMG

Normally that site I was talking about has a shred policy, but they granted an exemption because we were able to prove, to the satisfaction of the security team, that once the disks in the RAID sets had been scrubbed, juggled, scrubbed again per disk, and the RAID configuration and disk-layout mapping completely destroyed, there was effectively no way of reconstructing the Reed-Solomon encoding (no data on any of these RAID disks was actually stored plain; it was all hashed).

And actually, the grading of the data was no higher than Restricted even by aggregation, and the vast majority was much lower or unclassified (intermediate computational results that would mean nothing to anybody outside the field, and not much to those in it), so sign-off was granted.

Also, the cost of shredding 4000 or so disks was considered exorbitant, and would probably have taken more time than the rest of the decommissioning.

0
0
Peter Gathercole
Silver badge

rm -fr @IMG

I used to run HPC clusters where doing this on the compute nodes would not have been quite as catastrophic as on a normal system. They would probably have rebooted OK.

The reason for this is that / was always copied into a RAMfs on boot from a read-only copy, /usr was a read-only mount and most of what would normally be other filesystems were just directories in / and /usr. It's true that /var would have been trashed, and any of the data filesystems if they were mounted would also have gone, but the system would have rebooted!
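To make that concrete, something like the following boot-time sequence gives you a RAM-backed / and a read-only /usr. This is only a rough sketch run from an initramfs-style environment; the server name and export paths are invented for illustration, not the real cluster configuration.

```sh
# Rough sketch only; "bootsrv" and the export paths are hypothetical.
mkdir -p /newroot /roimage
mount -t tmpfs -o size=1g tmpfs /newroot               # RAM-backed root-to-be
mount -t nfs -o ro bootsrv:/export/root-image /roimage # read-only master copy of /
cp -a /roimage/. /newroot/                             # populate the RAMfs root from it
mount -t nfs -o ro bootsrv:/export/usr /newroot/usr    # /usr stays a read-only mount
exec switch_root /newroot /sbin/init                   # pivot into the RAM-backed root
```

With that sort of layout, rm -fr / on a node only shreds the RAM copy and the writable bits like /var; a reboot pulls a fresh / back down.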

On a related note, when the clusters were decommissioned, I was the primary person responsible at all stages of the systematic, documented and verified destruction of the HPC clusters. It ranged from the filesystems, through the deconstruction of the RAID devices and the scrubbing of all of the disks (about 4000 of them), the destruction of the network configuration and routing information, and the deletion of all of the read-only copies of the diskless root and usr filesystems, right down to the scrub of the HMCs' disks (interestingly, they run Linux, and it was possible to run scrub against the OS disk of the last HMC [it was jailbroken] while the HMC was still running!).

The complete deconstruction, from working HPC systems to them being driven away from the loading bay took 6 (very long) working days, and finished with a day's contingency remaining in the timetable.

So I am one of a relatively small number of people who can claim that they've deliberately, and with complete authorization, destroyed two of the top 200 HPC systems of their time!

I had real mixed feelings. It was empowering to be able to do such a thing, and upsetting, because keeping them running was almost my complete working life for four years or so.

3
0

Amazon goes to court to stop US murder cops turning Echoes into Big Brother house spies

Peter Gathercole
Silver badge

Re: This makes no sense

I want to know why this information is being sent even if the device is not triggered.

I don't understand why the alert phrase is not identified locally to switch on the recording. I mean, recognizing one of three words to activate the device is not particularly difficult, and, provided it worked as advertised, it would prevent Amazon from recording anything other than what's intended.

In fact, I would prefer that the majority of the voice recognition was done locally, so there would be a chance that they could do something useful even when not connected to cloud services. Make them use my NAS or music server to find media, use a local calendar, and only go out to the 'net when a request could not be satisfied locally.

But I suspect that one of the primary reasons these things exist is to get people used to an always connected house.

0
0

Quantum takes on GPFS and Lustre in commercial HPC market

Peter Gathercole
Silver badge

I wonder how many people remember...

... that before it was called IBM Spectrum Scale storage, IBM Elastic storage, or General Parallel (I think) File System, GPFS was actually called the Multi-Media File System?

The evidence is quite clear, because, as per normal, even though the name of the product has changed, the names of the commands within it haven't.

A huge number of the commands you run to configure and control GPFS start with mm-: things like mmlsfs, mmlsconfig, etc.
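For anyone who hasn't driven it, a few of them look like this; these are standard GPFS administration commands, although the output obviously depends on your cluster.

```sh
# Run on a GPFS / Spectrum Scale cluster node
mmlscluster        # cluster name, ID and member nodes
mmlsconfig         # current cluster-wide configuration
mmlsfs all         # attributes of every GPFS filesystem defined in the cluster
```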

The original product was developed to provide a multi-server, striped, scalable and reliable filesystem for the IBM SP/2 Scalable Parallel systems (sometimes called Supercomputers, and often known as LAN-in-a-can clusters), when IBM tried to sell them as media storage and delivery systems for what was then an almost non-existent video-on-demand market. This was in the mid-1990s, before the likes of Netflix had even thought of an over-the-net video delivery service, and when Amazon was just shifting books.

1
0

AWS's S3 outage was so bad Amazon couldn't get into its own dashboard to warn the world

Peter Gathercole
Silver badge

@Lotaresco

Chances are the clock in a mechanical timer is an electric one. When the power goes out, the clock stops. When it comes back on, unless you are exceedingly lucky and the outage lasted an exact multiple of 12 hours (or 24 hours if you have a 24-hour clock), the clock will be wrong and you will need to set it.

But it's usually a matter of turning it until it's correct again.

1
0

Apple to Europe: It's our job to design Ireland's tax system, not yours

Peter Gathercole
Silver badge

Re: mostly Cupertino

I think that this assumes that the majority of the value add for Apple products is due to the design work (IP) that is done during product development.

It totally ignores the value added in taking raw materials and manufacturing them into the finished devices.

It also ignores the value add of the marketing and distribution network, although you could say that it did include the premium that people pay just to buy an Apple device.

The IP argument is really a diversionary one, because it assigns a value to a largely intangible asset. This allows them to claim that the majority of the cost is an arbitrary value that they can essentially say comes from the lowest tax jurisdiction they can find.

IIRC, Starbucks did something similar by using one of their hierarchy of companies in a low-tax jurisdiction to buy coffee on the open market, and then sell it to their operations in other countries at a stupid markup, along with licensing charges for branding. This allowed them to move profits to the low-tax jurisdiction and claim that in most countries their profit levels were so low that they did not need to pay much corporation tax. This became even more offensive when you consider that the coffee never went near the country that supposedly added to its value.

What did the Cayman Islands actually add to an iPhone, besides being the arbitrary 'owner' of some IP?

22
0

SpaceX blasts back into the rocket trucking business

Peter Gathercole
Silver badge

Re: It's like the 1950s all over again @Mike Richards

I like all the references, but you're wrong about Thunderbird 1 (and Thunderbird 3).

They both land tail first back at Tracy Island, and what's even cleverer, they managed to suck in the smoke!

But that's easy when you run the film backwards, a trick AP Films did more often than I would have wished. I guess that it's easier to pull a model up than to let gravity have its way when trying to lower it.

I could probably dig out the names of the episodes when both were seen, but then I am a bit of a Gerry Anderson geek!

I was really surprised, when I saw the original Falcon take-off, hover and landing tests, by how much they looked like an AP Films sequence!

The Thunderbirds effect/sequence I was most terrified and then later impressed by was in the episode "Terror in New York City", where Thunderbird 2 had to make an emergency landing after being attacked by the USN Sentinel (bloody Yanks!). That was some serious special-effects and model work, even by today's standards. I remember being horror-struck when I saw it as a very impressionable young child in the 1960s.

I wonder whether the model makers had any qualms about dirtying up one of their frequently used models in order to film the sequence. If any of them read here, I would love to know.

3
0

NORKS fires missile that India reckons it could shoot down in flight

Peter Gathercole
Silver badge

Re: I used to be a pretty good Missile Command player

I have a copy of Atari Arcade Hits for Windows, but it's a bit flaky under VirtualBox (I never really bought into Windows; I'm a long-term Linux and, before that, UNIX user).

But it's not the code. The Linux version of MAME is pretty good, and runs the original ROMs. It's the hardware that's the problem. You really relied on the momentum of the huge trackball for the missile sweeps. It's not possible to do the same with a mouse, and the desktop trackballs are too small!

1
0
Peter Gathercole
Silver badge
Mushroom

I used to be a pretty good Missile Command player

It was one of the two games I was good at (the other being Battlezone). I used to be able to make a single game last 15 minutes or more, and clock up scores around the 350,000 mark. I could normally get into the top 10 on any machine I came across, and jockeyed for the top spot on the machine I played most frequently (if anybody is interested, the initials I used were PCG).

One day back in the early 1980s, I went to my local arcade. There, on the machine I was most familiar with, was a new guy playing.

He was soooooooo much better than anybody else I had seen, and better than me by a mile! He could hit the really crazy smart bombs that appear in the later screens, and the low-altitude bombers and satellites as well. He lost cities, but more slowly than he earned them (and the machine was set to only award cities at 15,000-point intervals, IIRC).

I watched him play a single game for about 40 minutes or more. By that time, the colours had cycled through all the outrageous combinations, some so bright that the screen was dazzling, with red, purple and black on a white sky being one I particularly remember. The missile patterns had reached what must have been their most difficult, but he could cope. He clocked the score counter (I can't remember what it wrapped at, but it was in the tens of millions).

Eventually, and with cities stacked across the bottom of the screen still, he got fed up, and just walked away from the machine. I never saw him in the arcade again!

It really was a pinball wizard moment.

I stopped playing arcade machines shortly after that, because I knew I could never be as good as that guy. I will occasionally play one if I find one in good working order (very, very rare nowadays, and you just don't find the heavy trackballs to play on a PC under MAME), but my playing days are over. Anyway, arcades are now mainly penny falls and fruit machines, and what video games there are are all driving, cycling and shooting games.

A lost era!

9
0

OK, 2016 wasn't the best, but look for a buyer? That's Cray

Peter Gathercole
Silver badge

Re: I love the fact...

Well, I suppose so, but they are some pretty impressive cables, in both number and type, and there are no separate network switches for the Aries interconnect (they're integrated into the compute nodes themselves). Some of the cables are trunked into solid connectors for ease of maintenance. Not as well engineered or as 'pretty' as the IBM system IMHO, but...

For most large HPC systems, the interconnect is far more interesting than the compute capability. My point was that the Sonexion storage, although it has lots of lights, is architecturally probably the least interesting part of a Cray.

Also, after the photo was taken, there was custom artwork attached to the compute rack doors. I believe there is a time-lapse set of pictures on the Met Office web site that shows the artwork being attached to one of the clusters.

0
0
Peter Gathercole
Silver badge

I love the fact...

...that this stock picture, taken at the Met Office sometime in 2015, is focusing on the Sonexion storage subsystem of one of the smaller Cray systems there, and that the racks of the compute nodes are behind the photographer.

So what you have is a picture of a bunch of Dell servers and Xyratex (Seagate) disk shelves linked together by only moderately interesting InfiniBand, and running Lustre.

The more interesting compute part, including the Aries interconnect, is not visible.

The IBM 9125 F2C that can still be seen in the background of the picture was a much more interesting system IMHO, but I'm biased, because I used to support those systems!

1
0

Want to come to the US? Be prepared to hand over your passwords if you're on Trump's hit list

Peter Gathercole
Silver badge

Re: All my social media logins...

Careful. You may be arrested for wielding an offensive weapon!

1
0

Conviction by computer is go, confirms UK Ministry of Justice

Peter Gathercole
Silver badge

Re: Prosecution Costs?! @AC

Most of the time, cases only make it to court in the UK if there is a very good chance that the accused will be found guilty, so a significant number of the cases that make it to court will end up with the accused pleading guilty anyway.

If you can reduce the cost of this process for both the accused and the court system, it looks like a win-win situation to me. Just as long as those who think they've been unjustly accused still have access to the court system if they want.

0
2
Peter Gathercole
Silver badge

Re: Prosecution Costs?! @User McUser

??? - Of course there are still costs.

The offense still has to be written up, and actually entered as an offense. A case still has to be made before it can be prosecuted. The evaluation of whether a case is likely to succeed if taken to court still has to be made.

I agree that the costs should be fairly minimal, but they are still costs.

But to my mind, this new system is really intended to give people who know they have committed a crime a way to admit to it and go through the justice system without having to go to court, reducing the cost of the whole procedure and saving precious court time.

We already have such a system for traffic offenses. If you get caught speeding bang to rights, you can offer to pay the fine, accept the points, and never see the inside of a courtroom.

In motoring offenses, if you think you're not guilty, you can still opt to go to court, plead your case and let the magistrate decide whether you're guilty or not. The way I read this, you will be able to do exactly the same for a number of other minor offenses.

The difference here is that they can be minor criminal offenses, but still, probably ones that would only result in a fine, not a custodial sentence. What's wrong with that?

If you know you're guilty, plead guilty through the web site, and avoid a physical court case. If you think you're innocent, or you've got a chance of getting off, take your chances in court.

It's not as if the computer will be deciding guilt, taking the place of a magistrate, judge or jury.

The only way this could be seen as disruptive to the justice system is if you are encouraged to plead guilty when you're not, merely to reduce the financial burden. That really would be unjust!

4
5

Big blues: IBM's remote-worker crackdown is company-wide, including its engineers

Peter Gathercole
Silver badge

Re: Titanic deckchair painting strikes again!

Exception, maybe, but I worked with a team out of Poughkeepsie (not one of the core hubs, but the location is vital for maybe the only profitable hardware segment left in IBM), and all I can say is that from the UK, the only time that you could tell that people were remote was if you heard doorbells or pets in the background on the conference calls.

I and the customer got excellent support, and often it allowed me to talk to the people I needed at stupid-o-clock in the morning their time, and get meaningful help from them, because they had full office setups at their houses. Their responsibility spanned the entire globe, so office hours for them were pretty much non-existent.

These were committed professionals who were prepared to fire up their systems in the early hours of the morning, give advice, and then go back to bed for an hour or two before getting up to do their normal job. I'm not sure they would have been prepared to get in the car and drive to the office to check out a problem. And they were of a level (senior development engineer or higher) who could not charge overtime or standby!

It also allowed the Power HPC team in Poughkeepsie to have a team leader working out of Austin, the home of Power development, so that cross-location collaboration could actually work. (BTW, the Power IH systems were put together by an associated team from Mainframe development in POK, rather than Austin, using a lot of mainframe technologies like water-cooling and high-density power distribution.)

I can see this new way of working alienating a huge number of very experienced engineers, to the detriment of IBM as a whole.

14
0

Ubuntu Linux daddy Mark Shuttleworth: Carrots for Unity 8?

Peter Gathercole
Silver badge

Re: The one interface to rule them all @Updraft102

Well, I suppose I'd better come clean, because the version I actually used was Mint Debian edition, where you don't use the Ubuntu repositories.

I liked the idea of no dist-upgrades and a rolling upgrade policy, but did not like the fact that all the default installed tools had different names (which was important when you start, for example, gnome-terminal from the command line), nor the (irrelevant in this case) fact that the packages in the Debian repositories are frequently rather old.

It's my laziness, I admit.

I do sometimes wonder, now there are more usable official editions of Ubuntu (like the MATE and GNOME editions), why the derivative Mint distros are still as popular as they are.

0
0
Peter Gathercole
Silver badge

Re: The one interface to rule them all

It's not that clear.

I declined to use Unity on Ubuntu, not by switching to MATE, but by using GNOME Fallback, which actually looks and feels a lot like GNOME 2. I found that I preferred to continue using Ubuntu rather than one of its derivatives, mainly because of the additional GNOME tools that you would otherwise need to find alternatives for.

The reason I didn't like Unity on a desktop/laptop is that the early releases made it awkward to have more than one window visible on the screen. Applications would open full screen, and often, trying to open a second instance of an application dropped you back into the first instance, rather than opening a new one. It followed the Mac idea that window controls were best on a bar at the top of the screen, rather than attached to the window. In early releases it was take it or don't use Unity - there were no configuration tools to change the behavior.

These issues can now be configured, so I can at least use Unity on my laptop, but I still prefer not to.

But that's not my whole story. When I needed a second mobile phone because of poor network coverage in two locations where I spent significant amounts of time, I decided to get a second-hand Nexus 4 and put Ubuntu Touch on it.

Unity on this platform makes a lot of sense, and once you've got to grips with right, left and top swipes, it's a very suitable platform for devices where you only have one application visible at a time. I would actually think about using it as my primary phone, if I was not so attached to some of the Android apps. I would be very interested to try it out on a tablet, if only there was a reference hardware device available at a reasonable price second-hand that had a current build.

1
0

Hacker: I made 160,000 printers spew out ASCII art around the world

Peter Gathercole
Silver badge

Re: Bah!

I remember seeing an astounding piece of ASCII art in the early 1980s. It was a picture of a mountain climber hanging off a cliff, printed on several lengths of 132 column line printer paper. The whole picture was hung on a wall, occupying something like a 6x4 foot space on the wall (I may have the dimensions over-blown due to poor memory, but it definitely filled a large part of the wall).

I believe that it was printed from a card-deck, with just enough JCL to directly print from the deck to the line-printer.

Apparently, printing it on the University's central line printer was banned, and several people got into real trouble, and had their copy of the card deck confiscated when trying to print it.

1
0

'Grey technology' should be the new black

Peter Gathercole
Silver badge

Re: "Grey" tech should be Top tech.

What you are noticing on the T420 is one of the effects of higher resolution display panels, exacerbated by 16x9 screen aspect ratios.

Back in the days of the Thinkpads up to the T60, the screen resolution for most systems was 1024x768 (obviously there were laptops with higher resolutions, but this was a common panel resolution).

Being a 4x3 aspect ratio, the 768 pixels were spread over ~8.5 inches, giving you a DPI of about 90. The T430 I am currently using is 1600x900, and the vertical height of the screen is ~6.7 inches, giving a DPI of around 134.

So you have 132 more vertical pixels in 1.8 inches less space. If you do not tell the software that it has a higher-resolution screen, it will by default choose the same font size (measured in pixels) as it did before, making the characters only about half the size they were on an older Thinkpad, and the web site you are looking at has no way of knowing that the font size it's using comes out smaller.

So it's not only (or, I would possibly say, at all) your eyes that are making the Register more difficult to read.

Back in the heyday of X11, the DPI setting of the screen was honoured (with X.org you have to create an xorg.conf and override the DPI setting), so when you selected a character point size (like Courier 10), you actually got characters of approximately the same size on screens of different resolutions. Point size should be measured in 1/72" units (in modern typography), and should allow resolution-independent character sizing!
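A couple of quick ways to see and (temporarily) fix this, for anyone curious; the 134 figure is just the T430 number worked out above, and the permanent fix is a DisplaySize line in an xorg.conf Monitor section rather than these one-offs.

```sh
# What X currently believes about screen size and DPI
xdpyinfo | grep -E 'dimensions|resolution'

# Override the reported DPI for the running session only (T430 figure from above)
xrandr --dpi 134
```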

I have often commented on the apparent pointlessness of full HD or higher screens on laptops or 'phones, and this just makes my point IMHO.

3
0

Naughty sysadmins use dark magic to fix PCs for clueless users

Peter Gathercole
Silver badge

Re: No quite wizadry but.../ Percussive Maintenance @TheNeonSpirit

These were probably the 5.25-inch half-height 1GB 'Spitfire' disks.

You are close, but the wrong lubricant was used during manufacture; it vaporized while the disk was spinning, and condensed on the disk surface when the disk cooled down. When the disk was powered down, the head was parked in contact with the landing zone on the platter, and promptly stuck firmly enough that the motor could not get the disk spinning again. A quick tap would free the head, and allow the disk to spin.

The condition was termed 'stiction' (a portmanteau of 'stick' and 'friction'), and IBM had a recall on all of the disks, although they would only be replaced when they failed to spin up. The replacement had to come from a pool of disks specifically reserved for warranty replacement of this problem, so when a CE came across such a disk, he generally 'fixed' the disk, then ordered one of the replacements and arranged to come and fit it. At some customers, the disks were never replaced, because scheduled maintenance was difficult to arrange.

5
0

Doomsday Clock moves to 150 seconds before midnight. Thanks, Trump

Peter Gathercole
Silver badge

Re: Move along, no science here @Lord Elpuss

Wasn't the hit to Russia called the "Orange Revolution", followed by overtures from the EU to get Ukraine to look to the EU rather than Russia?

1
0

How Lexmark's patent fight to crush an ink reseller will affect us all

Peter Gathercole
Silver badge

Re: Um...so Lexmark's long term plan...

The spin-off of Lexmark from IBM happened before inkjet printers hit the mainstream.

IBM did have some inkjet printers before Lexmark got split off, such as the 4079 Postscript inkjet printer, but the cost of running one of these was astronomical. But colour printers were pretty rare at the time.

The earliest colour inkjet printer I used was in about 1985, branded Integrex or something similar, although it was apparently a badged (and possibly re-ROM'd) Canon PJ-1080A. It appeared to print one line of the image at a time (really, as if it only had one nozzle for each colour), and as a result was abysmally slow.

I believe that it is only the inkjet market that Lexmark left. They're still making laser printers.

0
0
Peter Gathercole
Silver badge

@Charles 9

Epson do not want you fixing the printers, so they do not tell ordinary people how, but the procedure to remove the head is quite simple, and the heads are pretty robust. It's easily within the ability of anyone with a few basic tools, a fairly steady hand and a bit of patience to clean the print heads.

There are also procedures on the 'net to reset the cleaning cycle count that says when the ink sponge is full.

After a number of refills, the re-manufactured HP cartridges will stop working because the print head is quite fragile. I would expect properly maintained Epson printers to still be running as long as you can buy ink to refill the cartridges.

I did once fail to get an R1800 working, because one of the blacks (that printer has two different black cartridges, plus a gloss 'finisher') was completely blocked, such that the cleaning solution could not get in to dissolve the ink. I left it soaking for several days, and it made no difference. The guy in the shop that asked me to look at it said that he didn't know whether it had ever worked, because he had taken it back from a customer under warranty, but never returned it to Epson. They then left it in the workshop for over a year before looking at it!

3
0
Peter Gathercole
Silver badge

@Ogi

Lots of videos of it on YouTube. This one looks appropriate for you.

Basically, don't take the printer apart. You can do it with just the top cover opened, and it only takes about three minutes to take the head out. The head is a ceramic plate underneath the print cartridges.

Take the cartridges out, and you should see a number of clips/plates holding a couple of ribbon cables in place on the RHS of the print head carrier. Release the clips.

Then find the clips at the back that hold the contact cradle that the cartridges sit in, and release them.

You should then see three screws holding the head in the carrier. Remove them, and the whole head assembly should lift out of the carrier.

You can remove the cables, but remember what went where.

Reassembly is the reverse procedure.

It is possible to clean the head in the printer using a syringe, fluid and some plastic tubing, but it's a bit messy.

Please note. You do this at your own risk. I don't offer any warranty on this.

4
0
Peter Gathercole
Silver badge

Re: major cause of landfill

I actually try to keep older Epson printers running, merely because the third-party inks are so cheap, as the cartridges are pretty much ink buckets, with little or no electronics in them.

I currently have a Stylus Photo 1290 A3 printer as the volume printer, because I can get five sets of compatible cartridges for the price of one set of Epson originals, and as it is mainly used for volume rather than quality this is not an issue (though the quality is not bad either, even with the re-manufactured cartridges I use). It's attached through a NAS, so is on all the time for whoever wants to print in the house.

The real problem now is the Windows drivers. There are none published for Windows 7 or later, so you have to jump through hoops to install the XP ones, which do work OK on Win7, though I've not tried later versions.

It's still usable from Linux without problems, however.

2
0
Peter Gathercole
Silver badge

Re: Um...so Lexmark's long term plan...

I actually thought that Lexmark had left the consumer inkjet printer market, so this appears a stupid thing to try to defend.

Indeed, I tried to find some print cartridges for a rather neat 15x10cm Lexmark portable photo printer recently (did I really say "rather neat" and Lexmark in the same sentence?), and there was *NOBODY* stocking original cartridges, and only a few people able to supply re-manufactured cartridges.

Maybe they're trying to expunge any reference to the fact that they were ever in the inkjet market by preventing re-manufactured cartridges, causing people to throw the printers out. They really need to, because their products were pretty crap. Even the aforementioned photo printer had to be repaired, because the nylon drive gears on the lower paper advance had shattered due to age (amazing what little neoprene O-rings can be used for).

3
0
Peter Gathercole
Silver badge

Re: "Epson ecotanks. That's the approach all manufactures should be taking."

The nice thing about Epson print heads is that they're very well made, and can be manually dismantled and flushed with a syringe if you really need to. You have to be a little bit careful not to get the small amount of electronics attached to the head wet, or if you do, make sure that they're rinsed with clean (preferably distilled) water and then thoroughly dried before reassembling (I've heard reports of the magic smoke escaping from print heads that have been assembled without checking they're perfectly dry!)

The solution I've used to flush heads with has been distilled water with about 10% isopropyl alcohol. Some online resources suggest you should use pure isopropyl alcohol, but I've never had a problem.

I've seen people use liquids as mundane as cheap spray window cleaner, but I'm a bit worried about the surfactants that are added to these solutions.

3
0

Britain collects new naval tanker a mere 18 months late

Peter Gathercole
Silver badge

@MakingBacon

I'll bite. I think the statement about the car not surviving is reasonable.

The velocity of the fuel is based on the aperture of the nozzle and the amount of fuel that needs to pass through it. Moving fuel to a ship is performed using multiple lines, all of which will be wider than a car fuel nozzle. If you try to move the same amount of fuel through a narrower pipe, the velocity will have to be much higher (hence the Mach 2 figure).

Comparing this to water-jet cutters, they typically use similar velocities, and although they normally have an abrasive in them to cut steel, fuel tanks in Range Rovers are probably made from a poly-something plastic.

As a result, I would expect a Mach 2 liquid stream to be able to cut through the car's fuel pipe, tank, and almost certainly through other parts of the car.

And this is avoiding the simple fact that this amount of fuel moving at Mach 2 would have a huge amount of kinetic energy, which would have to go somewhere if it were to go from Mach 2 to rest in a short distance. I estimate that ~100 liters of fuel (Range Rover TD6) would weigh only a little less than 100kg. Mach 2 is about 686 meters per second, so it would have a kinetic energy (E = ½mv²) of roughly 23.5MJ. This is a lot of energy to dissipate.
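For anyone who wants to check the arithmetic with those round figures:

```sh
# Kinetic energy E = 1/2 * m * v^2, with m = 100kg and v = 686 m/s (roughly Mach 2)
echo 'scale=1; 0.5 * 100 * 686^2' | bc    # 23529800.0 J, i.e. about 23.5MJ
```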

As a result, I would not expect the car to survive.

8
0

Microsoft Germany says Windows 7 already unfit for business users

Peter Gathercole
Silver badge

Re: WONTFIX in KDE

I call BS on this as well.

X11 has the concept of a window hierarchy. Starting at the root window, it is possible to traverse the complete hierarchy, obtaining each window's ID, the name of the application, its window name, its colour depth, position and hints.

Find the xprop and xwininfo binaries, run them, and point each at a window. Everything that is printed has been obtained through the X11 properties of the window. kwin is acting as an X11 window manager, so automatically has access to all this information.
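For example (standard X11 utilities; the window ID shown is just a placeholder for whatever your own tree reports):

```sh
# Walk the whole window hierarchy, starting from the root window
xwininfo -root -tree

# Then inspect any window ID that appears in that output
xprop -id 0x1a00002 WM_NAME WM_CLASS   # application and window names
xwininfo -id 0x1a00002                 # geometry, depth and other attributes
```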

I don't know how this will be altered in Wayland or Mir, but X11 (in its MIT, XFree86 or X.org guise) has been the standard windowing framework on Linux and UNIX (the exceptions being very old Sun and Apollo systems [if you remember them - although not strictly UNIX], and Mac OSX) for a very long time.

0
0

Solaris 12 disappears from Oracle's roadmap

Peter Gathercole
Silver badge

Re: SPARC/Solaris more expensive? Not anymore! @AC

I think you are Kebabbert, and I claim my five pounds!

2
1

Brilliant phishing attack probes sent mail, sends fake attachments

Peter Gathercole
Silver badge

Re: Sigh. Not again.

I don't know whether it's still true, but a PDF effectively used to be encapsulated PostScript, which allows very flexible device independent formatting, including embedded fonts, bitmaps and vector drawing capabilities.

When it was first deployed, it used to be set up so that documents could be immutable, i.e. not changeable by the recipient, so that you could be sure that what you saw was what the creator wanted you to see.

Of course, that did not suit everybody, so now PDFs are as editable as any other document format, and can even be used to produce forms that can be filled in and returned as a PDF.

4
1

Asteroid nearly gave Earth a new feature, two days after its discovery

Peter Gathercole
Silver badge

Re: So close? Re:Vector

If I remember my A-Level and university maths, what a vector represents is dependent on the number of dimensions you're working in.

If you are working in one dimension, a vector and a scalar are the same thing. In two dimensions (the standard environment when you are learning vectors, IIRC), a vector is normally described as a one-by-two array in a Cartesian co-ordinate system, or a scalar and an angle in a polar co-ordinate system.

In three dimensions, a vector will be a one-by-three array in Cartesian co-ordinates, or a scalar and two angles in a polar co-ordinate system.

I'm sure that some theoretical physicist or mathematician will point out that they work in more than three dimensions!

So the upshot of this is that if you are working in one dimension, taking the path of the asteroid as the frame of reference, the velocity, even treated as a vector, can be considered the same as its speed, and this is what most lay people will count as a velocity.

Of course, celestial mechanics is never that simple, and is normally in at least 4 dimensions.

0
1

FM now stands for 'fleeting mortality' in Norway

Peter Gathercole
Silver badge

Re: DAB+ @DrXym @John Brown re: mobile data

and I then followed this up with a statement indicating it was shifting the cost from the broadcaster to the listener, so I have already agreed with your point.

0
0
Peter Gathercole
Silver badge

Re: DAB+ @DrXym - Replacement kit

Your plan does not take into account the restricted spectrum that I mentioned during the transition. I still think it is unlikely that extra spectrum will be allocated during the switchover. Maybe Norway will, but I'm pretty certain that Ofcom in the UK won't.

You also assume that people are happy to replace functioning equipment after a number of years. I will and do operate kit until it breaks (and if I can, I fix it when it does break), so I expect a DAB radio to last me 10+ years (my oldest DAB radio is about 12 years now, and still functioning). Even at this age, I would be upset about being forced to replace it.

I know a significant number of people who objected to buying new TVs or set-top boxes in the UK when analog TV was switched off.

1
0
Peter Gathercole
Silver badge

Re: DAB+ @DrXym

I totally agree that DAB was rolled out in the UK too early, but it's always difficult to change things once they're generally (if you can say this about DAB) adopted.

Switching to DAB+ will be expensive for those people who have already forked out for kit, and disruptive because they will have to reduce the available DAB channels while they transition to DAB+ (they're not going to allocate any more spectrum during the roll-out).

Even if they offer a subsidy on new kit, I'm a skinflint, and don't want to re-buy, even at a discount, replacements for the 5 DAB radios I already have.

Mind you, I don't listen to it much at the moment, because the coverage on my current commute (the time I use DAB most) is very patchy.

But I think DAB is dying in the UK. Some of the channels I used to listen to have left DAB as a platform, because (I understand) the cost of operating a DAB station is of the order of a million pounds per year, whereas the cost of transmitting over the Internet is much lower, and if you can get DAB somewhere, you're probably also able to get a reasonable mobile data service. This shifts the cost of a broadcast service from the provider to the listener. I object to this (did I say I was a skinflint?).

I think that by the time they are prepared to suggest a switch to DAB+, there will be no appetite for any over-the-air digital broadcast radio service any more.

But I do believe that there is still a place for analogue radio. It's still the best coverage, the best in terms of battery consumption for mobile devices, and the most widely adopted. I also think that it has a place in civil defence, because in the case of some national emergency, the digital infrastructure will be one of the first things to be affected. Operating an FM (or even AM) service is within the reach of a reasonably competent tinkerer in electronics using readily scavenged components, whereas digital broadcasting requires much more sophisticated knowledge and infrastructure.

37
1

Landmark EU ruling: Legality of UK's Investigatory Powers Act challenged

Peter Gathercole
Silver badge

Re: But I thought we "took back control" @Richard

This original law was drafted before the referendum, when the government thought that they would remain in Europe, so they should have expected to have this type of battle on their hands.

As such, it's got sod all to do with the result of the referendum, and much more to do with the fact that Home Secretaries (and I include Ms. May in this category) think that they have valid reason to ride roughshod over the privacy of Her Majesty's subjects. This is true whatever political colour they have been. Remember Wacky Jackie?

18
0

It's round and wobbles, but madam, it's a mouse pad, not a floppy disk

Peter Gathercole
Silver badge

Re: Poor instructions @Dave 126.

I've only looked at UK pressings under UK light, so they are for 50Hz.

0
0
Peter Gathercole
Silver badge

Re: Poor instructions

If you look at 45s from the era of auto-changers, you would see that many of them had a circular 'bump' track between the innermost groove and the label. This was there so that when they were stacked, they would 'lock' together, preventing the upper ones from slipping while being rotated on top of the stack of lower disks.

What was more interesting is that the number of the 'bumps' was such that when viewed under a bright mains filament light while spinning on the turntable, they should appear static (strobe effect) if the turntable was running at the right speed, but you had to look very hard.

I have a copy of Tommy by The Who, which was a two-LP set with sides 1 and 4 on one disk, and 2 and 3 on the other. This was so that you could play sides 1 and 2 on an auto-changer, and then turn both disks over together as a sandwich to play sides 3 and 4.

Mind you, the weight of the records falling down the spindle, especially the heavier vinyl used in the '60s and '70s, was such that I was always surprised that the turntable survived. I suspect that is why the BSR decks (at least) had spring suspension to absorb the impact, not for any audio isolation. My grandmother also used to use the auto-changer on her PYE stereogram (about the same size as a small sideboard) for shellac 78s, which were really heavy.

11
0

Christmas cheer for KCL staffers with gift of extra holiday after IT disaster

Peter Gathercole
Silver badge

Re: Repeatable?

If the data was real-world observations, it's still science, but not repeatable unless you have a time machine.

5
1

What’s next after hyperconvergence?

Peter Gathercole
Silver badge
Joke

Back in the last century

We had infrastructure with small numbers of large systems that controlled their own resources, be it memory, CPU, storage, or networking, with software components optimizing the use of those resources. It ran on hardware that had enhanced RAS capabilities, and was quite expensive. Call this Stage 1.

Since then, we've been through:

Stage 2. Multiple smaller systems, each controlling their own resources, but they were cheaper.

Stage 3. Rolling all storage for these multiple systems into centralized storage solutions to make storage more flexible.

Stage 4. De-duplicating the storage systems, so that the multiple OS files (and really only these files) would not have multiple copies wastefully stored.

Stage 5. Virtualising all these multiple systems onto larger servers 'to save money' and to reduce wasted CPU and memory through resource sharing, putting it all on expensive systems with enhanced RAS.

Stage 6. Replacing the SAN with software defined storage systems.

Stage 7. Moving your communication infrastructure into the virtualised environment.

Stage 8. Virtualising the software defined storage systems into the enhanced RAS systems.

So where are we?

We will now have infrastructure with small numbers of large systems that control their own resources, be it memory, CPU, storage, and networking, with software components optimizing the use of those resources. It runs on hardware that has enhanced RAS capabilities, and is quite expensive.

All we appear to have done is replaced the OS with a hypervisor, moving everything one rung up the ladder, and we now have the traditional OS fulfilling the same function as the application runtime environments.

The next step will be to replace the traditional OS with a minimal runtime (hmmm, is that what containerization is all about?), and we will have reinvented the Mainframe!

I've added the joke icon to try to deflect all of those of you who will try to point out the difference in detail between mainframes and hyperconverged systems.

4
0

systemd free Linux distro Devuan releases second beta

Peter Gathercole
Silver badge

And remember...

He had form before systemd, in that he was responsible for the cluster-fuck that was PulseAudio, which I understand was only really fixed after he moved on.

That was also an over-arching package that tried to control everything audio-wise. He tried to shift the blame to the distro maintainers (particularly for Ubuntu), but I struggled with it for many years (the main problems being resampling rates and buffer underruns after suspend on IBM Thinkpads, leading to gaps in the audio) before it suddenly just worked after an update.

Back in the tail end of the noughties, I think that one issue pushed more curious people away from Linux than anything else!

0
0
Peter Gathercole
Silver badge

@Vincent

I don't doubt your longevity with UNIX. It's definitely longer than mine (6th Edition, 1978), but I seriously doubt that you were using 7th Edition in 1975 (although I do believe the PDP-11/45 in this time scale).

Most of the V7 documentation is dated 1978, and the Levenez timeline dates 7th Edition to 1979, so unless you were working at Bell Labs, I suspect that you were using 5th or 6th Edition in 1975.

Sorry to nitpick.

I must admit it is the use of XML and the sheer scope of systemd that I don't like, as to me it makes the startup of Linux pretty opaque.

4
0

UK.gov was warned of smart meter debacle by Cabinet Office in 2012

Peter Gathercole
Silver badge

Re: Points from a briefing

Not just nuclear. All bio and fossil fuels are carbon stores (a carbon battery?) that can be re-charged over years and megania (is this a word? A thousand millennia? It should be!) respectively from an outside power source (the sun).

Unfortunately, all you're really doing is moving energy around (you never 'generate' energy - merely convert it from one form to another, including matter - E=mc²), and will continue doing this until the heat-death of the universe!

3
0
Peter Gathercole
Silver badge

Re: Just Say NO

Or alternatively, your meter is reaching its end-of-life (they are only certified to be accurate for a fixed period of time) and needs to be replaced anyway.

If this is the case, and it were me, I'd want to make sure a direct equivalent of the existing meter is installed, not a smart-meter.

4
0

Internet Archive preps Canadian safe haven to swerve Donald Trump

Peter Gathercole
Silver badge

Re: Asimov ?

Unfortunately, with all the snooping going on, hiding in plain sight in Canada (Terminus) is not an option.

I favor the option of Dickson's "Final Encyclopedia", or possibly Hactar.

1
0

Debian putting everything on the /usr

Peter Gathercole
Silver badge

Re: Not new @Daggerchild

The process was under the control of the HMC (Hardware Management Console). It would create the file, execute it, and delete it, and if anything failed, the entire process failed.

I'm no stranger to doing exactly as you suggest (even using hex editors to hand-hack binaries) to move files in awkward locations to better ones, but in this case, there was no point where I could break into the process to alter the location it was trying to use.

I even had a jail-broken HMC, and worked through how the process worked. It was using a script on the read-only filesystem (so immutable, even by changing the file on the server serving it - there was some strangeness in the NFS implementation where changes on the server were not picked up on the client, something to do with it being a read-only mount with NFS caching enabled), so while I could reboot the server to pick up the changes, that negated being able to hot-swap the PCIe cards.

We did the work. I just wanted to have the process fixed, because I have what sometimes appears to be a perverse desire to see defects fixed, rather than worked around (especially as I had already worked through the issue, and could point to exactly where the defect was).

Must be something to do with me having worked in Level 2 AIX support for a number of years. I really don't like having to tell people who are supposed to be providing support to me how to do their job.

I'm a really awkward customer!

2
0
Peter Gathercole
Silver badge

Re: only thing I ask @Olius

The problem with PCs in general is that if you use the old DOS MBR partition scheme, you can only have 4 primary partitions, and everything else has to live as logical partitions inside an extended partition, which itself occupies one of the primary slots. This generally meant that Linux was installed in a single partition, as in a dual-boot system you could not guarantee that there was more than one partition available for filesystems.

On my laptop, I used to have a rarely used Windows 7 (32 bit) primary partition, two Ubuntu OS primary partitions (one my current use release, and the other either a previous or the next version of Ubuntu depending on where I was in evaluating the LTS releases), and an extended partition containing a /home filesystem and the swap space (plus any partition backups I wanted to keep).
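Purely for illustration, that layout looked something like this (device names and sizes invented; fdisk -l shows which of the four primary slots are in use):

```sh
#   /dev/sda1   primary    NTFS   Windows 7 (rarely used)
#   /dev/sda2   primary    ext4   Ubuntu, current LTS
#   /dev/sda3   primary    ext4   Ubuntu, previous/next release
#   /dev/sda4   extended
#   /dev/sda5   logical    ext4   /home
#   /dev/sda6   logical    swap   (plus space for partition backups)
sudo fdisk -l /dev/sda
```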

When I got my latest 2nd user Thinkpad, I found that Windows used two primary partitions, adding a boot partition. I dropped one of the Ubuntu OS partitions, although I did reserve the space in the extended partition for it for future use.

I really need to think about migrating to Xenial Xerus, but I'm not 100% sure I can install Ubuntu in a secondary partition. Maybe I should just bite the bullet and do a dist-upgrade, but I am not comfortable clobbering my current daily use OS with no fallback.

Presumably my next laptop will have a GPT, but that's no reason to replace my perfectly functional system.

Stupid PCs.

0
0
Peter Gathercole
Silver badge

Re: I don't like change

I think it was a matter of convention and knowledge. The install docs (V7 here, nroff source) for Bell Labs UNIX did not give very many hints about how to do it, and if all you did was to follow the docs with a single disk system, you would end up with a layout that probably left you with nowhere other than /usr to store user files (Sorry, I did have links to PDF formatted documents from the Lucent UNIX archive, but that appears to have disappeared - still, "groff -ms -T ascii filename" will make a reasonable attempt to format these for the screen).

On the first UNIX system I logged into, in 1978 at Durham University, there was a separate /user filesystem which mapped to a complete RK05 disk pack (about 2.5MB per pack). / and /usr (and the swap partition) were on disk partitions on a separate RK05 disk pack. At this time, in V6 and V7, disk partitions were compiled into the disk driver (in the source), and IIRC the default RK05 split was something like 25%, 60% and 15% for root, usr and swap.

Whilst I was there, the system admins (mostly postgrad students) added a Plessey fixed disk that appeared as four RK05 disk packs, which allowed them to give Ingres its own filesystem. This happened at the same time that V7 was installed on the system, over the Summer vacation in 1979.

When I installed my first UNIX system (1982, again V6 and later V7 UNIX), I kept a similar convention, although I had two 32MB CDC SMD disks to play with, configured as odd-sized RP03 disks, and I split each of the disks up as either four quarter disks, two half disks or one complete disk - don't use overlapping disk partitions! (again set in the device driver source of V7 UNIX). It was a very involved process getting UNIX onto these non-standard-geometry disks, but that's a tale for another day.

During this time, I also had access to an Ultrix system which used /u01, /u02 etc. (a BSD convention).

When I worked at AT&T (1986-1989), they also used the /u01, /u02... convention for user filesystems.

Following that, I've always had a /home filesystem for user files.

2
0
Peter Gathercole
Silver badge

Re: I don't like change

I've been working with UNIX for 38 years (Bell Labs V6 onwards), and while I don't disagree with you, /usr has never been used for user files in my experience in all that time. I think I read in one of the histories of UNIX that it might have been used like this on the earliest PDP-7 releases (before my time), before they moved to the PDP-11.

What was common was to actually have a /user filesystem in addition to /usr, although a convention adopted from BSD I think often had /u01, /u02 etc for user files.

IIRC, Sun introduced the concept of /home.

12
0
Peter Gathercole
Silver badge

Not new

Sun introduced a filesystem layout back in the '80s with SunOS 2 (I think), where /usr was a largely immutable filesystem.

What this allowed was for a system serving diskless clients to share its own /usr filesystem with the clients.

If anybody cares to remember, the diskless client model meant that Sun 2, 3 and 4 workstations could just be CPU, memory, display and network, with no local persistent storage. Back when SCSI disks were very expensive, this allowed you to centralise the cost in a large server, and keep the cost of the workstations down.

The model was that all filesystems were mounted over NFS, with / and /var (a new filesystem in this model) mounted (IIRC - my memory could be faulty and confused by the differences between the Sun and IBM models) from /export/root/clientname and /export/var/clientname on the server as read-write filesystems, and /usr (and later /usr/share) mounted read-only, served either from the server's own /usr and /usr/share if the clients ran the same architecture and OS level, or from some other location mirroring /usr if the clients ran a different version (this allowed SPARC architecture systems to be served from Motorola ones, or vice versa).

Directories such as /etc, /var/adm, /usr/spool and /usr/tmp, which would otherwise have been on read-only or read-mostly filesystems, became symlinks into /var (which was unique to each client, as it was mounted from a different directory on the server).
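The client-side view of that, as a rough sketch (the server name and paths are illustrative, not an actual SunOS configuration, and the / mount is really done by the boot code rather than by hand):

```sh
mount -o rw nfssrv:/export/root/client1 /          # per-client, read-write
mount -o rw nfssrv:/export/var/client1  /var       # per-client, read-write
mount -o ro nfssrv:/usr                 /usr       # shared, read-only
mount -o ro nfssrv:/usr/share           /usr/share # shared, read-only
```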

Other vendors, including IBM and Digital, adopted very similar layouts for clusters of diskless clients. At IBM it appeared in 1991 with AIX 3.2 (and was refined in 3.2.5). The filesystem layout meant that no machine should really write into /usr except during an upgrade, with any variable files kept in /var. Unfortunately, many people (including IBM software developers) forgot this, and over the years software came to expect to be able to write into directories below /usr.

Interestingly, the IBM 9125-F2C (aka the Power7 775) supercomputer, running AIX, reintroduced the concept of diskless clients in 2011. The filesystem layout was modified slightly, with the concept of a stateful read-only NFS filesystem (STNFS), which allowed changes to the read-only filesystem to be either cached in memory for the duration of the OS run (a bit like a filesystem union), or files/directories to be point-to-point mounted over entities on the read-only filesystem into a read-write filesystem.

/ became an STNFS read-only mount, /usr was a read-only filesystem, and /var was a read-write mount off an NFS server. /tmp was left on the / filesystem, meaning files were lost on a reboot, and also that writing lots of files into /tmp reduced the amount of RAM the node had!

Work-related filesystems were mounted over GPFS for performance (NFS was just too slow), although any paging did actually work over NFS (obviously, paging was a major no-no for these performance-optimised machines, but we could not get AIX to run without a paging space).

Unfortunately, as I found out, the hot-swap process for adapters, run over RMC from the HMC (Hardware Management Console), had a habit of trying to construct scripts in /usr/adm/ras (on the read-only part of the file tree) to execute to enable the swap, and as a result we were unable to hot-swap adapters, which caused problems on more than one occasion. I did raise a PMR with support/development, but had trouble arguing the problem through, as the systems were so niche that the support droids could not understand the problem.

5
0
