* Posts by Peter Gathercole

2692 posts • joined 15 Jun 2007

systemd-free Devuan Linux hits version 1.0.0

Peter Gathercole
Silver badge

Re: Honest inquiry @myself

Hey!

I've just looked at TUHS, and if you're interested in UNIX source code, a lot of interesting stuff has appeared there recently.

Not just source for Edition 8, but Editions 9 and 10 as well.

The biggest revelation I had was when I found the source for something called pdp11v, which is also called PDP-11 3+2.

Have a look, and work out what it is yourself! Remember, even large PDP-11s were really rather small (maximum 4MB memory, small 16KB memory segments, maximum of 128KB text and data size for a single process without some fancy overlaying), so getting this running was a real feat.

0
0
Peter Gathercole
Silver badge

Re: Honest inquiry

Back on my own machine. V7x86 partition fired up.

/etc/init is a binary that is run from inside main.c, and it is crafted as process 1 (the source refers to process 0 as the scheduler, which is just a loop that sleeps on a timer interrupt and presumably inspects the process table to schedule the other processes).

The source for the Edition 7 init is very simple. It handles single and multi-user modes, and runs /etc/rc, and also handles respawning the getty processes (controlled by the entries in /etc/ttys) as they are used by users logging on and off. It's written as an infinite loop with a wait in it. The wait will return every time a process terminates. It then puts a record in utmp, and if the process was a getty or whatever getty exec'd, it respawns the getty.
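The overall shape is something like this (a heavily simplified sketch in modern C, written from the description above rather than taken from the actual Bell Labs source; the tty names and the getty path are just placeholders):

    /* Simplified sketch of a V7-style init respawn loop (illustration only). */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NTTY 2

    struct ttyent { const char *line; pid_t pid; };
    static struct ttyent ttys[NTTY] = { { "tty0", -1 }, { "tty1", -1 } };

    static pid_t spawn_getty(const char *line)
    {
        pid_t pid = fork();
        if (pid == 0) {
            execl("/etc/getty", "getty", line, (char *)0);
            _exit(1);                      /* exec failed */
        }
        return pid;
    }

    int main(void)
    {
        for (int i = 0; i < NTTY; i++)     /* a real init reads /etc/ttys here */
            ttys[i].pid = spawn_getty(ttys[i].line);

        for (;;) {                         /* the infinite loop with a wait in it */
            int status;
            pid_t dead = wait(&status);    /* returns whenever a child terminates */
            if (dead < 0)
                continue;
            for (int i = 0; i < NTTY; i++)
                if (ttys[i].pid == dead) {
                    /* a real init writes the utmp record here */
                    ttys[i].pid = spawn_getty(ttys[i].line);
                    break;
                }
        }
    }

The real thing also handles single-user mode, /etc/rc and the utmp records, but the respawn logic really is about that small.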

Other than that, it does very little. The processes that run at boot are actually started by the /etc/rc script, which is a simple top-to-bottom script that mounts the other filesystems and starts cron and the update process that periodically flushes the buffer cache to disk.

So much simpler than the SysVinit that implements inittab. I don't have access to any Bell Labs or AT&T source later than Edition 7, although I guess I could look at BSD, but that may not give any insight into when the full-blown SysVinit appeared.

I believe that the Edition 8 source may now be at TUHS (at www.tuhs.org). I must check it out, although this is only related to SysV through the common ancestor of Edition 7.

BTW, a correction to my previous post: Lions is spelt Lions, not Lyons.

5
0
Peter Gathercole
Silver badge

Re: Honest inquiry

Um. Monitoring processes is exactly what SysVinit does, but it requires you to actually have processes directly created by init that stick around.

Look at the entries in /etc/inittab. See field 3 in each line, the one that says wait, once or respawn. Respawn allows you to have a service that will be re-started if the process dies.

What you are referring to as SysVinit is actually the /etc/rc script that is called from init on runlevel change, that runs the scripts from /etc/rc.d (although different SysV UNIX ports actually implement it slightly differently). While this is part of the init suite, it is not init itself.

The concept of init in UNIX goes back to before SysV. I have a copy of the Lyons Edition 6 commentary, and that references an init process, although I think that the /etc/inittab file did not exist at that time. I will fire up my Nordier Intel port of Edition 7 VM at some point to refresh my memory about how Edition 7 started the initial processes.

The rc.d directory hierarchy of scripts appeared at some point between SVR2 and SVR3 IIRC. The first UNIX I remember seeing it in was R&D UNIX 5.2.6 (which was an internal AT&T release).

16
0

Farewell Unity, you challenged desktop Linux. Oh well, here's Ubuntu 17.04

Peter Gathercole
Silver badge

Re: Won't install properly @Peter R. 1

I hope your comment was not aimed at me!

If it was, I think you've missed the gist of what I was saying. If you install or buy some bleeding-edge or niche hardware for Windows, something that is not in the normal Windows driver repository, the vendor provides this thing, normally a shiny silver disk or a link to a web site, that adds the support for that device to Windows.

Without it, you would have as much trouble running that hardware on Windows as many people experience on Linux. As an exercise, try installing Windows on one of these problem systems just from Microsoft media, and see how much stuff doesn't work without the mobo and other driver disks from people other than Microsoft. It's an education.

The problem is that hardware vendors do not provide their own drivers for Linux, and this is the biggest problem for niche hardware. You cannot expect anybody else in the Linux community to reverse-engineer hardware drivers for this type of device. If it's important, do it yourself, and contribute it back into the community!

Do not expect someone like RedHat or Canonical to provide drivers for Linux when Microsoft do not do it for Windows (remember, even drivers in the Windows repository are often provided by the vendor, not Microsoft themselves). It really is the vendor's responsibility to ensure that their hardware is supported, not the OS community's.

It is a wonder that as much works as it does with just the base Linux install media. A testament to all the hard work that has been done, often by volunteers or philanthropic companies.

What I find more cynical is those vendors who provide Mac OS drivers which would differ comparatively little from the Linux ones, but don't actually bother with that last step of packaging and testing for Linux.

8
0
Peter Gathercole
Silver badge

Re: My thoughts on this ... @Julian

Before the turn of the century, I liked the version of twm that added a virtual desktop. The version I used was called vtwm.

I actually found the source for it a bit back, and compiled it up. It still does the main part of the job I need a window manager to do quite well (and in an absolutely tiny footprint), but the lack of integration with things like the network manager for wireless keys, no applets and a number of other niggles prevented me from going back to it full time.

I suppose I could have spent more time investigating getting it working better, but I just lost interest. We get too used to the extra luxuries of modern desktops, unfortunately.

0
0
Peter Gathercole
Silver badge

Re: Good riddance, but..

GNOME flashback (or failback, whatever they want to call it) works for me. GNOME 2 look and feel delivered on top of GNOME 3. It's not identical (plugins have to be re-written, for example), but it's close enough.

I chose that on Ubuntu rather than switching to Mint.

2
1
Peter Gathercole
Silver badge

Re: Won't install properly

Unless the nVidia drivers in the repository are back-level compared to other distributions, blame nVidia themselves for the poor quality.

As I understand it, both nVidia and AMD (ATI) provide a binary blob that is wrapped to allow it to be plugged into X.org, Mir or Wayland for each distro. As long as that blob is wrapped correctly, any instability will be caused by the blob. Also, are you sure it crashes the system, and not just the GUI? X11 or Mir drivers should be running in user mode, so should be incapable of taking the whole system out. Have you tried Ctrl-Alt-F1 to get to a console so that you can kill the X server?

If the repository is out-of-date, then pick up the new blob from the nVidia or AMD website, and compile it into the wrapper yourself.

Personally, I find the open-source drivers sufficient for my needs, and much less prone to have the code to drive my older graphic cards removed with no notice (which has happened more than once). But then, I'm not a hard-core gamer.

I suspect that the code that Realtek provide for their WiFi dongles (presumably you mean USB devices) hasn't been updated by Realtek recently, and may not compile because the Kernel version and library stack has moved on from when their code was written. Try engaging Realtek to ask them to provide a copy that will compile on what is, after all, a mainstream Linux distro.

But the basic point is, get the chipset vendors to support their hardware better on Linux rather than griping at the distro maintainers. Or buy hardware that is more Linux friendly.

11
1
Peter Gathercole
Silver badge

Re: My thoughts on this ... @badger31

I never liked Unity on the desktop, but having used it on a 'phone for some time, it works surprisingly well.

My view is that it works well for people and devices that only really do one thing at a time, thus it works on 'phones quite well (who tries to multitask several applications on a phone screen?). Scopes are really interesting, and switching between different concurrently opened programs by swiping from the left does work. I would have loved to use a WebOS device to see whether the Cards feature from that and the task switcher in Ubuntu Touch worked in the same way.

On a desktop or laptop, people who fill the whole screen with what they are doing probably like Unity (and probably the Mac interface and Metro as well). But the original behavior, where applications opened full screen by default and the launcher brought an already open window to the front rather than opening a new instance, alienated me and a whole lot of other users.

6
0

Will the MOAB (Mother Of all AdBlockers) finally kill advertising?

Peter Gathercole
Silver badge

Re: I havent got the bandwidth yet @Kiwi

There are some. I have a PCI card based on a Broadcom chipset, inherited when I got given a Shuttle compact PC, where I could not find any support, either pre-compiled or in source, that would work to get the card to function in Linux (specifically Ubuntu 12.04 - it was a few years ago).

But then again, the card was so obscure that it took an absolute age to find some drivers that worked in Windows XP, as well.

I also had some problems with the Atheros wireless chip in the original EEE PC 701 with Ubuntu, because it took some time for the particular chipset to be supported in the repository.

1
0
Peter Gathercole
Silver badge

Re: I havent got the bandwidth yet @Charles

<pedant>If you're wanting any graphics drivers in the kernel beyond the console mode drivers, you're going to be disappointed</pedant>.

What is in the Linux kernel is a series of stub syscalls that allow the user-mode graphic drivers to access the hardware, and many of these stubs are actually wrapped into KMS. The drivers (which I admit may lag the availability of new hardware) are not in the kernel.

This is the X.org way of doing things. I do not know for certain that Wayland does things the same way, but I think that it does.

I've pointed out many times that the reason why the type of examples you've quoted are difficult to find is because the hardware manufacturers can't be arsed, or deliberately refuse to provide Linux support for their hardware (although the GPL does raise some barriers if they want to keep their code secret).

It's unfair to blame the Linux community for the lack of support for these hardware devices. The open-source graphics modules are getting better, but they effectively rely on some clever bods, sometimes working on their own time, to reverse-engineer the support code for new hardware, and this does not happen instantly.

It's often only niche or bleeding-edge hardware which is difficult (even the mainstream Atheros chipsets are quite well supported now). I've not really had problems with WiFi on laptops from the mainstream suppliers for some time now.

Aim your scorn at the hardware manufacturers.

2
0

Canonical sharpens post-Unity axe for 80-plus Ubuntu spinners

Peter Gathercole
Silver badge

Re: Reboot

The example I used was for commercial UNIXes, where the on-disk image of the kernel is actually overwritten with a kernel update. This is mainly because the initial boot loader is designed to load something like /unix.

For quite some time, Linux has had the ability to have multiple kernels installed on a system. In this respect, you are correct in saying that not rebooting will not cause symbol table mis-matches of the type I described, although I would not like to say there would be no issues (especially if there were any kernel API changes, not unheard of in the Linux kernel).

But I'm pretty certain that the early Linux systems, using Lilo rather than Grub, still relied on there being a link of some kind to a fixed named file in the top level root directory.

My first experience of Linux was with Red Hat 4.1 (original numbering system, not RHEL) around 20 years ago, and I'm sure that is how it worked in those earlier releases. I'm pretty certain that in-place online kernel updates were almost unheard of back then, and nobody would even think of not rebooting after updating the system from a CD. In fact, if I remember correctly, updating a system back then normally required you to boot the system from the CD containing the updates, so rebooting was mandatory.

My Unix experience at source level goes back to 1978 (goodness, 40 year anniversary of first logging on to a Unix system next year!), so I'm pretty certain of the behaviour of traditional UNIX systems.

Prior to the /proc pseudo-filesystem, the normal way for a process like ps, for example, to read the process table was for the process to be set-uid to root, and then open /dev/kmem and seek to the process table using the symbol table obtained from the /unix file. This behaviour was copied from traditional Unix systems in early Linux command sets, and you would be surprised about how many processes actually needed access to kernel data structures.
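For anyone who hasn't seen it, the technique looked roughly like this (a sketch only; the symbol name, the kernel image path and the exact struct nlist layout varied from system to system, so treat all the details as illustrative):

    /* Sketch of the pre-/proc method: find a kernel symbol in the on-disk
     * image with nlist(), then read the structure out of /dev/kmem.
     * Names and layout are illustrative, not from any particular system. */
    #include <fcntl.h>
    #include <nlist.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct nlist nl[2] = { { .n_name = "_proc" }, { .n_name = NULL } };

        if (nlist("/unix", nl) < 0 || nl[0].n_value == 0) {
            fprintf(stderr, "process table symbol not found\n");
            return 1;
        }

        int kmem = open("/dev/kmem", O_RDONLY);      /* this is why ps was set-uid root */
        if (kmem < 0)
            return 1;

        char buf[4096];
        lseek(kmem, (off_t)nl[0].n_value, SEEK_SET); /* seek to the process table */
        ssize_t got = read(kmem, buf, sizeof buf);   /* raw kernel structures */
        printf("read %zd bytes of the process table\n", got);

        /* a real ps would now interpret buf as an array of struct proc */
        close(kmem);
        return 0;
    }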

1
0
Peter Gathercole
Silver badge

Re: Reboot

Reboots are suggested every time you update the kernel. If you don't reboot after updating the kernel, some things, particularly anything that looks at the symbol table for the running system by looking at the image on disk, could cause problems.

This should be less of an issue than on traditional UNIX systems, where an update used to overwrite the default kernel image on disk that contained the addresses of most kernel data structures, so the symbol table in /unix (or whatever it may have been called) no longer matched the actual addresses in /dev/kmem.

Since /proc, /sys et al. are now used to access most kernel data structures in Linux without having to look in /dev/kmem, there should be fewer problems, as the kernel symbol table should not be used as much.

If kernel updates really bug you, then black-list one or more of the kernel packages, and allow all of the packages that do not affect the kernel to be updated. At your convenience, remove the black-list entry, allow the kernel to update, and then reboot the system.

LTS does not mean fewer updates. It just means that you are guaranteed support for a longer period of time. Just because it is an LTS release does not mean that there are fewer bugs that need patching, or that the rate of patch delivery is any slower.

2
0

'Tech troll' sues EFF to silence 'Stupid Patent of the Month' blog. Now the EFF sues back

Peter Gathercole
Silver badge

Re: Personal opinion

While I agree with your qualification, licensing the patent for someone else to use, provided it results in a real product, would be a perfectly acceptable demonstration of the practicality of implementing a patent. The time scale of 6 months may be a bit short for a full product to be produced, but should be enough for a demonstration.

As we all know, the problem with what is happening is that there is no attempt to turn patents into a product, but the patent is used to extort money from other people, especially for patents that are so obvious they should not have been granted in the first place.

Although later ARM designs may look like designs on paper licensed to other people, the background of ARM is based on solid product development. Acorn produced both ARM-1 and ARM-2 processors; although they out-sourced the fabrication, they were branded as Acorn products.

9
0

Mark Shuttleworth says some free software folk are 'deeply anti-social' and 'love to hate'

Peter Gathercole
Silver badge

Re: True to some extent but in this case?

The Edge phone looked like it was going to be an interesting thing, but you could get much of the experience for much less than £200.

I picked up a second user Nexus 4 (one of the reference platforms for the Ubuntu phone distro) for £50, and spent about an hour putting Ubuntu Touch on it.

It's my backup phone, and I actually quite like it. I don't like Unity on a laptop, but it really works on a single-task-at-a-time touch screen device. My one gripe is that there are no real apps for it, although I did nothing myself to add anything to the ecosystem, so I guess that I can't really complain. If it had gained enough momentum, I reckon it could have been a contender, but the chances of that were always slim.

I guess that I'll have to look for another quirky backup phone at some point (my previous backup was a Palm Treo, which I kept running long past its useful life because I liked it so much). Anybody got any suggestions?

0
0
Peter Gathercole
Silver badge

Re: Weird

It's not really X-Window vs. Mir. X-Windows, although it will live on for a long time as a compatibility layer, is on the way out.

The war was really Wayland vs. Mir, with a rearguard action trying to defend X-Windows. Several campaigns have still to be fought, but it's less complicated with Mir out of the way.

Although it has a long and illustrious-but-tarnished history, X-Windows is not suitable for all graphics devices. Even with the extensions for direct rendering, it can be slow compared to less abstracted systems, and there have always been security concerns with it, which is a bit strange considering that its major strength was that clients could exist on different systems from the server, as long as there was a network path between them.

It is about time that X was retired, but it will be difficult to get something to the level of ubiquity that X-Windows achieved in the Open Systems era (remember, it was embraced by some of the non-UNIX workstation vendors like Digital), and all mainstream Linux and BSD distributions (but not Android) come with it built in. With Mir disappearing, Wayland will hopefully achieve this, but it is not certain.

0
0

Put down your coffee and admire the sheer amount of data Windows 10 Creators Update will slurp from your PC

Peter Gathercole
Silver badge

Re: I thought @Adair

Whilst it is quite true that there is representative software that runs on Open Source operating systems, it is not one-for-one compatible.

Don't get me wrong, I'm an Open Source advocate, and have been for a long time, but Open Source application software is often only as good as the time and effort its writers put into it, and this is often not enough to make it completely functionally equivalent to commercial software. This leads to interoperability problems.

Now, for ordinary individuals or SMBs, that is probably OK, but just wait until you engage with another organization that is still wedded to commercial software. You can suddenly find that, for some application types, a document does not render quite right, or the macros that are used either error out, don't work at all, or produce the wrong result, and it becomes a serious issue, possibly risking the viability of the business. This is why most organizations toe the line, and use the dominant offerings.

Big businesses like the control that is available via things like Active Directory, and often Open Source alternatives do not have anything like group policies that make marshaling large estates of desktop PCs easier, and that's ignoring cloud-based modern applications.

And then you have the bespoke applications that are specific to certain technologies. If they are only available on Windows, you have no choice (and please don't talk about emulation - it's unlikely to be supported by the vendor and it's fraught with problems, and VMs are a sop that still encourages locked-in application/OS links).

What we actually need, and I've said this over and over again, is for application writers to realize that an Open Source OS does not necessarily mean Open Source applications. Commercial software can be delivered on Linux without having to open up the application source (as long as you abide by the LGPL). But we need either a standardized or dominant Linux environment, so that the Linux support requirements are affordable to software companies. That's just not happening, and the landscape is getting poorer (see the Canonical news about reducing ambitions over the last few days).

The Linux community is, unfortunately, letting the very opportunity offered by unpalatable licensing conditions on other application platforms slip through their fingers. The best we can hope for at this time is something like the Chromebook model to provide an alternative, but in a toss-up between the New Microsoft and Google, I'll take the third option, almost without regard to what it is.

2
0

Ubuntu UNITY is GNOME-MORE: 'One Linux' dream of phone, slab, desktop UI axed

Peter Gathercole
Silver badge

Re: When prototypes go too far

I could do that with my old Sony Xperia SP with a MHL adapter.

MHL adapter plugged into the phone USB port, powered USB hub plugged into the USB socket on the MHL adapter, HDMI cable to a TV plugged into the HDMI socket on the MHL adapter, keyboard and mouse plugged into the USB hub. You could leave it all on a desk, and just plug a single cable into the phone (I believe that Sony actually made a cradle you could simply drop it into). And the phone charged at the same time!

Single app nature of Android was a bit of a problem, but with ConnectBot, I could use it to access remote systems as a terminal, and move files between other systems and the phone, and use local apps to process files on the phone.

0
0

Manchester pulls £750 public crucifixion offer

Peter Gathercole
Silver badge

Re: I see an opportunity

That's in poor taste (maybe even blasphemous!)

The SPB pretty much was Lester, and he is (unfortunately) no longer with us!

Still a great loss.

7
0

Mac Pro update: Apple promises another pricey thing it will no doubt abandon after a year

Peter Gathercole
Silver badge

Re: Not lost...

Well, after the type of language used in this article, I'm not surprised it was never sent!

1
0

It's 30 years ago: IBM's final battle with reality

Peter Gathercole
Silver badge

When the IBM AIX Systems Support Centre in the UK was set up in 1989/1990, the standard system on the desks of the support specialists was a PS/2 Model 80 running AIX 1.2. (I don't recall if they were ever upgraded to 1.2.1, and 1.3 was never marketed in the UK).

386DX at 25MHz with 4MB of memory as standard, upgraded to 8MB of memory and an MCA 8514 1024x768 XGA graphics adapter and Token Ring card. IIRC, the cost of each unit excluding the monitor ran to over £4500.

Mine was called Foghorn (the specialists were asked to name them, using cartoon character names).

These systems were pretty robust, and most were still working when they were replaced with IBM Xstation 130s (named after Native American tribes), and later RS/6000 43Ps (named after job professions - I named mine Magician, but I was in charge of them by then so could bend the rules).

I nursed a small fleet of these PS/2s, re-installed with OS/2 Warp (and memory cannibalized from the others to give them 16MB), for the Call-AIX handlers while they were in Havant. I guess they were scrapped after that. One user who had a particular need for processing power had an IBM Blue Lightning 486 processor (made by AMD) bought and fitted.

3
0
Peter Gathercole
Silver badge

Re: succesful standard

Though to be truthful, the key action in the Model M keyboards had appeared in the earlier Model F keyboards, and the very first Model M Enhanced PC keyboard appeared with an IBM PC 5-pin DIN connector on the 5170 PC/AT.

We had some 6MHz original model PC/ATs where I worked in 1984, and even then I liked the feel of the keyboard. Unfortunately, the Computer Unit decided to let the departmental secretaries compare keyboards before the volume orders went in, and they said they liked the short-travel 'soft-touch' Cherry keyboards over all the others (including Model Ms).

As this was an educational establishment, the keyboards got absolutely hammered, and these soft-touch keyboards ended up with a lifetime measured in months, whereas the small number of Model Ms never went wrong unless someone spilled something sticky into them.

I wish I had known at the time that they were robust enough to be able to withstand total immersion in clean water, as long as they were dried properly.

8
0

Wi-Fi sex toy with built-in camera fails penetration test

Peter Gathercole
Silver badge

USB connected endoscope is much cheaper...

...for checking drains. Not sure about the other things a Siime can do.

2
0

BOFH: The Boss, the floppy and the work 'experience'

Peter Gathercole
Silver badge

Re: Being on a placement myself...

Well, I didn't feel like a coding god, because the first thing that happened in my first job was that I was sat down with an audio training course (on cassette) to learn RPG-II, for which they did not have the books that went along with it.

Up to that point, I'd been schooled in PL/1, APL and I'd taught myself C and BASIC and some FORTRAN (this was 1981!), and was reasonably familiar with UNIX already.

BTW. RPG is/was a business language. It stands for Report Program Generator, and was about as usable as an intermediate level macro-assembler with some automatic I/O formatting (a bit like COBOL) code added. I believe it's still available in some form.

5
0

Stop us if you've heard this one before: IBM sheds more workers – this time, tech sales

Peter Gathercole
Silver badge

Re: Incredible Boneheaded Move @stephanh

I'm sorry. What systems with Xeons inside would they be then?

IBM offloaded all of the Intel business to Lenovo.

There is no cost advantage of selling Lenovo systems unless they can sell significant amounts of services as well.

2
0
Peter Gathercole
Silver badge

Re: This is still fairly trivial...

It's a repeating problem, because IBM continually buys other companies, inheriting the workforce from those companies.

They then have to shed an equivalent number of people, because as a result of employee transfer of rights, they have to keep the transferred people for a fixed amount of time, whether they want them or not.

Some of the people they take on they will actually want to keep, so to keep the numbers basically fixed, they have to shed an equivalent number of people from somewhere else in the business. And, of course, there may be a cost-saving favoring getting rid of more experienced, and expensive, people.

2
0

As of today, iThings are even harder for police to probe

Peter Gathercole
Silver badge

Re: Is bit-rot a real phenomena?

Bit-rot is generally a concern for large disk estates, and fundamentally happens all the time. Generally you don't notice it, because the checksum process in the device controller corrects it before sending the data on to the OS. Each block or sector stored on a disk has a significant amount of error-correction added to it, because magnetic media is far from perfect.

Unfortunately, the checksum process is not fool-proof, and multi-bit corruptions that pass the checksum calculations are possible. The more bit-flips and disk read operations that happen, the more likely an undetected read failure is to make it past the controller and up to the OS.

As the number of read operations goes up, both because the speed and size of storage estates is increasing, so does the chance of undetected corrupt reads, until eventually it becomes a statistical certainty. We are easily past that point with the largest storage systems around (think how big S3 must be).
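To put a rough number on that, here's a back-of-envelope sketch (the per-read rate is a made-up figure chosen purely for illustration, not a vendor spec):

    /* Probability of at least one undetected corrupt read as read volume grows.
     * The per-read rate is an assumed, illustrative figure. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p = 1e-15;    /* assumed chance a single read slips past the checksums */

        /* P(at least one bad read in n reads) = 1 - (1 - p)^n */
        for (double n = 1e12; n <= 1e18; n *= 100.0)
            printf("%.0e reads: P(at least one undetected) = %.6f\n",
                   n, -expm1(n * log1p(-p)));
        return 0;
    }

Whatever the real per-read figure is, the curve is the same shape: the probability heads towards 1 as the number of reads grows.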

Because magnetic devices (particularly) can have magnetic domains (bits) that become marginal and actually flip state, both while the device is in use and when it is idle, due to environmental issues, it is normal for many of the more sophisticated disk controllers to reduce this chance by periodically reading and writing back all data on the disk, so that any bits that have been flipped will be written back correctly with new checksum information. This provides higher confidence that the data read is correct by keeping the number of flipped bits down.

Bit rot in Flash devices is countered by similar processes, but it's more common that once flash cells are damaged, the whole block will have to be replaced from the spare list, and this can make flash storage devices appear to fail completely and suddenly once enough failures have happened.

2
0

Why are creepy SS7 cellphone spying flaws still unfixed after years, ask Congresscritters

Peter Gathercole
Silver badge

Re: Why do we still have the traditional cell infrastructure anyway?

I do not profess to have your level of experience, but I did receive some training on SS7 when I worked for a telco technology company in the '80s.

I believe that in data transmission on physical lines, most SS7 hardening is 'armadillo', i.e. boundary protection with not so much once you get into an operator's internal network. SS7 controls call routing through a network, so if you have access to the internal network and can inject false routing information using SS7, it would be possible to re-route calls through routing nodes that you control, and thus potentially eavesdrop on the conversation. It would not surprise me if the TLAs in the US use this mechanism in US telephone operators' networks.

Of course, back when it was created, the concept of miscreants getting access to the internal network of an operator was considered unlikely, so there was no reason to think about security for SS7.

3
0

The future of storage is ATOMIC: IBM boffins stash 1 bit on 1 atom

Peter Gathercole
Silver badge

Re: Not quite ready to replace the flash in your phone

Yes, they do like their research with intricate types of microscope, after spelling out IBM in atoms in 1990.

Big bullies, picking on something so much smaller than themselves!

5
0
Peter Gathercole
Silver badge

Re: What we need...

Shame Tim Worstall doesn't post here any more. I'm sure, as someone who deals with the metals market, he would have an interesting insight on this.

7
0

Windows Server ported to Qualcomm's ARM server chip. Repeat, Windows Server ported to ARM server chip

Peter Gathercole
Silver badge

Re: LINUX BEAT THEM BY YEARS

I'm sure Microsoft will use this to try to drive down the price at which they buy Intel processors for Azure. After all, it would be a shame if they lost one of the larger Cloud platforms to another processor.

Whether it will make Intel processors any cheaper for the rest of the world, well, we'll have to see.

I think the Ryzen announcements may do more to Intel's pricing than Windows on ARM, however.

2
0

Watt the f... Dim smart meters caught simply making up readings

Peter Gathercole
Silver badge

Re: Well there is a simple answer to all of this @itzman

Well, electricity from nuclear fusion may be too cheap to meter in the future, but we're still stuck with fission, unfortunately.

3
1

Quantum takes on GPFS and Lustre in commercial HPC market

Peter Gathercole
Silver badge

@Wild Westener

Good point (which I did know), although I was commenting on the fact that the Quantum filesystem is being touted as a media filesystem, something that GPFS had been intended for 20 years ago.

1
0
Peter Gathercole
Silver badge

I wonder how many people remember...

... that before it was called IBM Spectrum Scale storage, IBM Elastic storage, or General Parallel (I think) File System, GPFS was actually called the Multi-Media File System?

The evidence is quite clear, because as per normal, even though the name of the product changes, the names of the commands within the product haven't.

A huge number of the commands you run to configure and control GPFS start with mm-, things like mmlsfs, mmlsconfig etc.

The original product was developed to provide a multi-server, striped, scalable and reliable filesystem for the IBM SP/2 Scalable Parallel systems (sometimes called Supercomputers), often known as 'LAN-in-a-can' clusters, when IBM tried to sell them as media storage and delivery systems for what was then an almost non-existent on-demand video market. This was in the mid-1990s, before the likes of Netflix even thought of an over-the-net video delivery service, and when Amazon was just shifting books.

1
0

I was authorized to trash my employer's network, sysadmin tells court

Peter Gathercole
Silver badge

Re: @Peter ... rm -fr @IMG

Normally that site I was talking about has a shred policy, but they gave an exemption because we were able to prove to the satisfaction of the security team that once the disks in the RAID sets were scrubbed, juggled, per-disk scrubbed and the RAID configuration and disk layout mapping completely destroyed, there was effectively no way of re-constructing the Reed-Solomon encoding (no data on any of these RAID disks was actually stored plain, it's all hashed).

And actually, the grading of the data was no higher than Restricted even by aggregation, and the vast majority was much lower or unclassified (intermediate computational results that would mean nothing to anybody outside the field, and not much to those in it), so sign off was granted.

Also, the cost of shredding 4000 or so disks was considered exorbitant, and would probably have taken more time than the rest of the decommissioning.

0
0
Peter Gathercole
Silver badge

rm -fr @IMG

I used to run HPC clusters where doing this on the compute nodes would not have been quite as catastrophic as on a normal system. They would probably have rebooted OK.

The reason for this is that / was always copied into a RAMfs on boot from a read-only copy, /usr was a read-only mount and most of what would normally be other filesystems were just directories in / and /usr. It's true that /var would have been trashed, and any of the data filesystems if they were mounted would also have gone, but the system would have rebooted!

On a related note, when the clusters were decommissioned, I was the primary person responsible at all stages of the systematic, documented and verified destruction of the HPC clusters. It ranged from the filesystems, through to the deconstruction of the RAID devices and scrubbing of all of the disks (about 4000 of them), the destruction of the network configuration and routing information, deleting all of the read-only copies of the diskless root and usr filesystems, even as far as the scrub of the HMCs disks (it's interesting, they run Linux, and it was possible to run scrub against the OS disk of the last HMC [it was jailbroken], while the HMC was still running!)

The complete deconstruction, from working HPC systems to them being driven away from the loading bay took 6 (very long) working days, and finished with a day's contingency remaining in the timetable.

So I am one of a relatively small number of people who can claim that they've deliberately, and with complete authorization, destroyed two of the top 200 HPC systems of their time!

I had real mixed feelings. It was empowering to be able to do such a thing, and upsetting, because keeping them running was almost my complete working life for four years or so.

3
0

Amazon goes to court to stop US murder cops turning Echoes into Big Brother house spies

Peter Gathercole
Silver badge

Re: This makes no sense

I want to know why this information is being sent even if the device is not triggered.

I don't understand why the alert phrase is not identified locally to switch on the recording. I mean, recognizing one of three words to activate the device is not particularly difficult, and providing it worked as advertised, would prevent Amazon recording things other than what's intended.

In fact, I would prefer that the majority of the voice recognition was done locally, so there would be a chance that they could do something useful even when not connected to cloud services. Make them use my NAS or music server to find media, use a local calendar, and only go out to the 'net when a request could not be satisfied locally.

But I suspect that one of the primary reasons these things exist is to get people used to an always connected house.

0
0

AWS's S3 outage was so bad Amazon couldn't get into its own dashboard to warn the world

Peter Gathercole
Silver badge

@Lotaresco

Chances are the clock in a mechanical timer is an electric one. When the power goes out, the clock stops. When it comes back on, unless you are exceedingly lucky and have had a multiple of 12 hour (or 24 hour if you have a 24 hour clock) outage, the clock will be wrong and you will need to set it.

But it's usually a matter of turning it until it's correct again.

1
0

Apple to Europe: It's our job to design Ireland's tax system, not yours

Peter Gathercole
Silver badge

Re: mostly Cupertino

I think that this assumes that the majority of the value add for Apple products is due to the design work (IP) that is done during product development.

It totally ignores the value add associated with taking raw materials and manufacturing them into the finished devices.

It also ignores the value add of the marketing and distribution network, although you could say that it did include the premium that people pay just to buy an Apple device.

The IP argument is really a diversionary one, because it assigns a value to a largely intangible asset. This allows them to claim that the majority of the cost is an arbitrary value that they can essentially say comes from the lowest tax jurisdiction they can find.

IIRC, Starbucks did something similar by using one of their hierarchy of companies in a low tax jurisdiction to buy coffee on the open market, and then sell it to their operations in other countries at a stupid markup, along with licensing charges for branding. This allowed them to move profits to the low tax jurisdiction and claim that in most countries, their profit levels were so low that they did not need to pay much corporation tax. This became even more offensive when you think that the coffee never went near the country that supposedly added to its value.

What did the Cayman Islands actually add to an iPhone besides being the arbitrary 'owner' of some IP?

22
0

SpaceX blasts back into the rocket trucking business

Peter Gathercole
Silver badge

Re: It's like the 1950s all over again @Mike Richards

I like all the references, but you're wrong about Thunderbird 1 (and Thunderbird 3).

They both land tail first back at Tracy Island, and what's even cleverer, they managed to suck in the smoke!

But that's easy when you run the film backwards, a trick AP Films did more often than I would have wished. I guess that it's easier to pull a model up than to let gravity have its way when trying to lower it.

I could probably dig out the names of the episodes when both were seen, but then I am a bit of a Gerry Anderson geek!

I was really surprised, when I saw the original Falcon take off, hover and landing tests, by how much it looked like an AP Films sequence!

The Thunderbirds effect/sequence I was most terrified and then later impressed by was in the episode "Terror in New York City", where Thunderbird 2 had to make an emergency landing after being attacked by the USN Sentinel (bloody Yanks!). That was some serious special effects and model making, even by today's standards. I remember being horror-struck when I saw it as a very impressionable young child in the 1960s.

I wonder whether the model makers had any qualms about dirtying up one of their frequently used models in order to film the sequence. If any of them read here, I would love to know.

3
0

NORKS fires missile that India reckons it could shoot down in flight

Peter Gathercole
Silver badge

Re: I used to be a pretty good Missile Command player

I have a copy of Atari Arcade hits for Windows, but it's a bit flaky under Virtual Box (I never really bought into Windows, long term Linux and before that UNIX user).

But it's not the code. The Linux version of Mame is pretty good, and runs the original ROMs. It's the hardware that's the problem. You really relied on the momentum of the huge trackball for the missile sweeps. It's not possible to do the same with a mouse, and the desktop trackballs are too small!

1
0
Peter Gathercole
Silver badge
Mushroom

I used to be a pretty good Missile Command player

It was one of the two games I was good at (the other being Battlezone). I used to be able to make a single game last 15 minutes or more, and clock up scores around the 350,000 mark. I could normally get into the top 10 on any machine I came across, and jockeyed for the top on the machine I played most frequently (if anybody is interested, the initials I used were PCG).

One day back in the early 1980s, I went to my local arcade. There, on the machine I was most familiar with, was a new guy playing.

He was soooooooo much better than anybody else I had seen, and better than me by a mile! He could hit the really crazy smart bombs that appear in the later screens, and low altitude bombers and satellites as well. He lost cities, but more slowly than he earned them (and the machine was set to only give cities at 15,000-point intervals IIRC).

I watched him play a single game for about 40 minutes or more. By that time, the colours had cycled through all the outrageous combinations, some so bright that the screen was dazzling, with red, purple and black on a white sky being one I particularly remember. The missile patterns reached what must have been their most difficult, but he could cope. He clocked the score counter (I can't remember what it wrapped at, but it was in the tens of millions).

Eventually, and with cities stacked across the bottom of the screen still, he got fed up, and just walked away from the machine. I never saw him in the arcade again!

It really was a pinball wizard moment.

I stopped playing arcade machines shortly after that, because I knew I could never be as good as that guy. I will occasionally play one if I find one in good working order (very, very rare nowadays, and you just don't find the heavy trackballs to play with under Mame on a PC), but my playing days are over. Anyway, arcades are now mainly penny falls and fruit machines, and what video games there are are all driving, cycling and shooting games.

A lost era!

9
0

OK, 2016 wasn't the best, but look for a buyer? That's Cray

Peter Gathercole
Silver badge

Re: I love the fact...

Well, I suppose so, but the cables are pretty impressive, in both number and type, as is the fact that there are no separate network switches for the Aries interconnect (they're integrated into the compute nodes themselves). Some of the cables are trunked into solid connectors for ease of maintenance. Not as well engineered or as 'pretty' as the IBM system IMHO, but...

For most large HPC systems, the interconnect is far more interesting than the compute capability. My point was that the Sonexion storage, although it has lots of lights, is architecturally probably the least interesting part of a Cray.

Also, after the photo was taken, there was custom artwork attached to the compute rack doors. I believe there is a time-lapse set of pictures on the Met Office web site that shows the artwork being attached to one of the clusters.

0
0
Peter Gathercole
Silver badge

I love the fact...

...that this stock picture, taken at the Met Office sometime in 2015, is focusing on the Sonexion storage subsystem of one of the smaller of the Cray systems there, and that the racks of the compute nodes are behind the photographer.

So what you have is a picture of a bunch of Dell servers and Xyratex (Seagate) disk shelves linked together by only moderately interesting InfiniBand, and running Lustre.

The more interesting compute part, including the Aries interconnect, is not visible.

The IBM 9125 F2C that can still be seen in the background of the picture was a much more interesting system IMHO, but I'm biased, because I used to support those systems!

1
0

Want to come to the US? Be prepared to hand over your passwords if you're on Trump's hit list

Peter Gathercole
Silver badge

Re: All my social media logins...

Careful. You may be arrested for wielding an offensive weapon!

1
0

Conviction by computer is go, confirms UK Ministry of Justice

Peter Gathercole
Silver badge

Re: Prosecution Costs?! @AC

Most of the time, cases only make it to court in the UK if there is a very good chance that the accused will be found guilty, so a significant number of the cases that make it to court will end up with the accused pleading guilty anyway.

If you can reduce the cost of this process for both the accused and the court system, it looks like a win-win situation to me. Just as long as those who think they've been unjustly accused still have access to the court system if they want.

0
2
Peter Gathercole
Silver badge

Re: Prosecution Costs?! @User McUser

??? - Of course there are still costs.

The offense still has to be written up, and actually entered as an offense. A case still has to be made before it could be prosecuted. The evaluation of whether a case is likely to succeed if taken to court still has to be made.

I agree that the costs should be fairly minimal, but they are still costs.

But to my mind, this new system is really intended to offer people who know they have committed a crime a way to admit to it, and to go through the justice system without having to go to court, reducing the cost of the whole procedure and saving precious court time.

We already have such a system for traffic offenses. If you get caught speeding bang to rights, you can offer to pay the fine, accept the points, and never see the inside of a court room.

In motoring offenses, if you think you're not guilty, you can still opt to go to court, plead your case and let the magistrate decide whether you're guilty or not. The way I read this, you will be able do exactly the same for a number of other minor offenses.

The difference here is that they can be minor criminal offenses, but still, probably ones that would only result in a fine, not a custodial sentence. What's wrong with that?

If you know you're guilty, plead guilty through the web site, and avoid a physical court case. Think that you're innocent, or you've got a chance to avoid it, take your chances in court.

It's not as if the computer will be deciding guilt, taking the place of a magistrate, judge or jury.

The only way this could be seen as disruptive to the justice system is if you are encouraged to plead guilty when you're not, merely to reduce the financial burden. That really would be unjust!

4
5

Big blues: IBM's remote-worker crackdown is company-wide, including its engineers

Peter Gathercole
Silver badge

Re: Titanic deckchair painting strikes again!

Exception, maybe, but I worked with a team out of Poughkeepsie (not one of the core hubs, but the location is vital for maybe the only profitable hardware segment left in IBM), and all I can say is that from the UK, the only time that you could tell that people were remote was if you heard doorbells or pets in the background on the conference calls.

I and the customer got excellent support, and often it allowed me to talk to the people I needed at stupid-o-clock in the morning their time, and get meaningful help from them, because they had full office setups at their houses. Their responsibility spanned the entire globe, so office hours for them were pretty much non-existent.

These were committed professionals who were prepared to fire up their systems in the early hours of the morning, give advice, and then go back to bed for an hour or two before getting up to do their normal job. I'm not sure they would have been prepared to get in the car and drive to the office to check out a problem. And they were of a level (senior development engineer or higher) who could not charge overtime or standby!

It also allowed for the Power HPC team in Poughkeepsie to have a team leader working out of Austin, the home of Power development, so that cross-location collaboration could actually work. (BTW, the Power IH systems were put together by an associated team of Mainframe development in POK using a lot of mainframe technologies like water-cooling and high-density power distribution, rather than Austin).

I can see this new way of working alienating a huge number of very experienced engineers, to the detriment of IBM as a whole.

14
0

Ubuntu Linux daddy Mark Shuttleworth: Carrots for Unity 8?

Peter Gathercole
Silver badge

Re: The one interface to rule them all @Updraft102

Well, I suppose I'd better come clean, because the version I actually used was Mint Debian edition, where you don't use the Ubuntu repositories.

I liked the idea of no dist-upgrades and a rolling upgrade policy, but did not like the fact that all the default installed tools had different names (which was important when you start, for example, gnome-terminal from the command line), nor the (irrelevant in this case) fact that the packages in the Debian repositories are frequently rather old.

It's my laziness, I admit.

I do sometimes wonder, now there are more usable official editions of Ubuntu (like the MATE and Gnome edition), why the derivative Mint distros are still as popular as they are.

0
0
Peter Gathercole
Silver badge

Re: The one interface to rule them all

It's not that clear.

I declined to use Unity on Ubuntu, not by switching to Mate, but by using the Gnome Fallback, which actually looks and feels a lot like Gnome 2. I found that I preferred to continue to use Ubuntu rather than one of its derivatives, mainly because of the additional Gnome tools that you would need to find alternatives for.

The reason I didn't like Unity on a desktop/laptop is that the early releases made it awkward to have more than one window visible on the screen. Applications would open full screen, and often, trying to open a second instance of an application dropped you back into the first instance, rather than opening a new one. It followed the Mac idea that window controls were best on a bar at the top of the screen, rather than attached to the window. In early releases, this was take it, or don't use Unity - there were no configuration tools to change the behavior.

These issues can now be configured, so I can at least use Unity on my laptop, but I still prefer not to.

But that's not my whole story. When I needed a second mobile phone because of poor network coverage in two locations where I spent significant amounts of time, I decided to get a second-hand Nexus 4 and put Ubuntu Touch on it.

Unity on this platform makes a lot of sense, and once you've got to grips with right, left and top swipes, it's a very suitable platform for devices where you only have one application visible at a time. I would actually think about using it as my primary phone, if I was not so attached to some of the Android apps. I would be very interested to try it out on a tablet, if only there was a reference hardware device available at a reasonable price second-hand that had a current build.

1
0

Hacker: I made 160,000 printers spew out ASCII art around the world

Peter Gathercole
Silver badge

Re: Bah!

I remember seeing an astounding piece of ASCII art in the early 1980s. It was a picture of a mountain climber hanging off a cliff, printed on several lengths of 132 column line printer paper. The whole picture was hung on a wall, occupying something like a 6x4 foot space on the wall (I may have the dimensions over-blown due to poor memory, but it definitely filled a large part of the wall).

I believe that it was printed from a card-deck, with just enough JCL to directly print from the deck to the line-printer.

Apparently, printing it on the University's central line printer was banned, and several people got into real trouble, and had their copy of the card deck confiscated when trying to print it.

1
0
