* Posts by Peter Gathercole

2654 posts • joined 15 Jun 2007

UK gov says new Home Sec will have powers to ban end-to-end encryption

Peter Gathercole
Silver badge

Re: Wow @Danny 5

They don't have to control encryption as such.

Before I go on, this is just a thought experiment, OK. I'm not actually suggesting the following.

It would be perfectly possible for ISPs to block everything by default and whitelist allowed services, and then use DPI to see whether the allowed services were being subverted to tunnel encrypted traffic. That would mean as soon as you put traffic that was not allowed down your link, it would be quenched.

They would also have to make sure that non-IP data circuits (dark fibre etc.) services out of the country were also banned. That would just leave bi-directional satellite services and point-to-point microwave/wireless across national boundaries (like the Northern Ireland border with Eire) to worry about.

Mind you, the Internet in the UK would then bear no resemblance to what it looks like at the moment, and it would look more restrictive than China.
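The default-deny model in the thought experiment above can be sketched in a few lines. This is purely illustrative; the service names and port numbers are made up, not anything from a real ISP policy.

```python
# Toy sketch of "block everything by default and whitelist allowed
# services". Anything not explicitly listed is quenched.
# The entries below are hypothetical examples.
ALLOWED = {("web", 443), ("mail", 25), ("dns", 53)}

def permitted(service: str, port: int) -> bool:
    """Default deny: traffic passes only if explicitly whitelisted."""
    return (service, port) in ALLOWED

assert permitted("web", 443)       # whitelisted, allowed through
assert not permitted("vpn", 1194)  # unlisted, blocked by default
```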

Unfortunately, there is something in the Home Office that seems to make seemingly ordinary cabinet ministers and MPs adopt completely stupid ideas once they become Home Secretary. And we now have an ex-Home Secretary as PM, and a new one with the same ideas.

We're doomed, I say!

8
0

DevOps: The spotty faced yoof waiting to blossom

Peter Gathercole
Silver badge

Automation

The automation part of the Wikipedia article is there to suggest that in order to be able to do rapid development and deployment (which is really an agile concept, not necessarily a DevOps one), it is necessary to be able to do rapid and consistent regression and functional testing and deployment with minimal effort.

Unfortunately, automated regression and acceptance testing is good at finding the problems you've seen before. It's not so good at finding new problems. That requires time and rigor in the testing processes.

So, by reducing the testing effort to enable rapid deployment of new code, you're actually exposing yourself to unexpected problems closer to the live environment. To my mind, this is the single biggest issue with agile development, and by extension DevOps. IMHO, large organizations that have a critical reliance on their IT systems will remain with their traditional testing regimes, which will make DevOps difficult to integrate into their working practices. It's a Risk thing.

It's interesting that in several organizations I've worked at over the 35+ years I've been working in IT, Operations team members have been present during all phases of projects, and on the distribution and approval lists of the change processes, so communication isn't a new thing. It just seems to have fallen out of favor a bit in recent decades as IT has become more siloed.

6
0

RM: School spending on tech is soft, soggy and downright subdued

Peter Gathercole
Silver badge

480Z...

... was seriously overpriced even when compared to its contemporary, the BBC Micro (which was also expensive).

There were niches where the 480Z was a more appropriate machine than a BEEB, but IMHO, if you didn't need CP/M compatibility, the BEEB was a more versatile and accessible machine for schools.

The 380Z was from a different time, several years before most schools had the budget to buy computers (and it could be built and upgraded piecemeal) and before cheaper machines were available. They were well built, however, and survived for years, especially as they were often locked away from general use, or used as file and print servers for 480Zs.

0
0

Linux letting go: 32-bit builds on the way out

Peter Gathercole
Silver badge

Bringing this in in 18.10 means that there is one more LTS release (18.04) for Ubuntu on 32-bit Intel hardware, and as the article points out, this means that there will be security updates well into the 2020s (an Ubuntu LTS release has five years before the repositories stop being updated, and years more before they are retired).

Even though I use older kit for all of my systems, I seriously doubt that even I will still have non-x86-64 Intel kit doing serious work by then. As it is now, my daily system, a Thinkpad, is a Core 2 Duo, as is my desktop mule that I use for things too large for my laptop (and I have some Core 2 Quads sitting in a drawer waiting to be deployed).

I have a quick-and-dirty Atom based netbook running 32 bit Ubuntu, and my mostly retired Linux firewall is still 32 bit, but both of these are close to the end of their life. My wife's laptop is not 64 bit capable, but it probably won't last until 32 bit support is dropped.

3
0

Microsoft's Windows 10 nagware goes FULL SCREEN in final push

Peter Gathercole
Silver badge

Re: A final throw of the Minty dice before

My wife wanted to stay with WinXP. I told her she couldn't, and built a Win7 machine for her, which she hates (she's such a techno-luddite, she wouldn't learn the XP->7 UI change). If she needed to use a PC, she reluctantly asked to borrow my Thinkpad (Ubuntu LTS), and asked me to start "Google" for her (Google is the Internet, as far as she is concerned).

When I replaced my Thinkpad (with another one, of course), she asked whether she could have my old one. As a final piece of maintenance work on that system, I put in an SSD and one of the XP skins on Gnome.

She is now happily using this Linux laptop daily for genealogy research. Whilst she knows it's not Windows XP, it works and looks pretty much as she expects. She even uses LibreOffice on occasion, and does not appear to miss MS software at all.

I check it on occasion (I now borrow it if I just need to look up something quickly), and install any updates, but she admits that even she could manage this if I didn't.

1
0
Peter Gathercole
Silver badge

Re: A final throw of the Minty dice before @Adam 52

What we need to know is whether you have some esoteric or maybe cutting-edge graphics card, or are perhaps trying to use the proprietary binary graphics driver from AMD or Nvidia on an older card.

For new high end cards from both Nvidia and AMD, the proprietary Linux drivers often lag the availability of the cards by some months, and the open drivers may not support the newer hardware until some bright spark works out how the API has changed.

There are also some obscure cards that there may not be drivers for in the Linux repositories, but this is rare.

What is more annoying is that the proprietary drivers are dropping support for older cards. I was caught out when I upgraded an LTS release on a system with an onboard Nvidia FX7800 that had the proprietary Nvidia drivers loaded. After upgrading, I was suddenly down to unaccelerated 800x600 256-colour graphics (i.e. basic VESA) rather than the 32-bit colour 1280x1024 that I was expecting. This sounds similar to your situation. I've had similar problems with older AMD/ATI cards as well.

The new release of the proprietary Nvidia binary had silently dropped support for the older chipset, leading to the lowest-common-denominator driver being used. Unfortunately, the main way of removing the binary driver, which is required to get the open-source drivers configured correctly, is normally done using dpkg from the command line. It is also possible from Synaptic (which is no longer installed by default), but is rather more difficult from the Ubuntu Software Centre (which seems to have decided that removing software is something users should be dissuaded from doing).

Unless you actually desperately need them, I would nowadays always suggest that you use the open drivers, and if you do use the proprietary drivers, switch back to the open drivers before doing a dist-upgrade.

Of course, this is not Linux's fault (if Linux can actually have fault attributed to it). It actually shows up a fundamental support issue with the companies that produce PC hardware without a full commitment to Linux. This should even extend to the obsolete chipsets IMHO, because Linux is very often deployed on old kit. Companies should either fork their proprietary drivers and leave the old ones in the repositories so you can keep using the old drivers without having to hold them back (and don't get me started on this, it has huge problems), or open-source the drivers, or even just the full API for the cards they deem obsolete to allow the community to support the cards without having to reverse-engineer the chipsets.

4
0

Those Xbox Fitness vids you 'bought'? Look up the meaning of the word 'rent'

Peter Gathercole
Silver badge

Re: Refund? @Doctor Syntax

I was not saying that there is no consumer legislation or that there are no small claims courts outside California, but the article itself says that the case would be unlikely to succeed outside California.

I don't understand your analogy of Jane Fonda. Whether you had watched it or not is irrelevant. I appreciate that in the case of a tape you retain physical control of it, whereas with the MS product you never own a physical copy, but the point I was trying to make is that technologies become obsolete. The difference, I will admit, is that MS are able to declare the technology obsolete, and suppliers are not legally bound to provide alternatives.

Maybe the servers providing the service run on Windows Server 2003 with one of the withdrawn Windows application deployment frameworks, and porting it to a more recent version is not feasible or cost-effective, rendering it obsolete.

0
0
Peter Gathercole
Silver badge

Re: Refund? @Doctor Syntax

Crossed purposes. That case relies on the particularly consumer friendly court system in California. It's pretty much not applicable anywhere else in the world.

And I'm not sure that the small claims courts elsewhere would be prepared to rule on this issue, as the perceived loss versus the use the customer has already had from the product is debatable (how many people are prepared to try to claim back the cost of their VHS fitness tapes because you can no longer buy a tape player?). They may well require it to be handled by a higher court.

1
3
Peter Gathercole
Silver badge

Re: Refund? @Deltics

It will be covered in the small print of the EULA, or at least if it's not, it will be soon (according to the same EULA, it will be your responsibility to check online for changes in the conditions).

Of course, that cannot trump local consumer legislation, however MS or any other company choose to fence their responsibility, but how many people are prepared to take on a company like MS in the courts?

4
0

Lightning strikes: Britain's first F-35B supersonic fighter lands

Peter Gathercole
Silver badge

Re: "supersonic fighter" @Steve Davies 3

Supersonic in a dive is known as transonic.

3
4

Vodafone hints at relocation from UK

Peter Gathercole
Silver badge

Re: Bye Bye Then

The Vodafone Group PLC holding company probably does not employ many people, so income tax and NI are not a huge issue. Corporation tax is another matter, but I'm sure a company like the Vodafone Group employs advanced financial engineering to minimize its corporation tax.

4
0
Peter Gathercole
Silver badge

Re: London based?

I actually think you'll find that the Vodafone Group PLC, the holding company which owns Vodafone UK along with all the other national Vodafone operations, is in Paddington.

You are right that Vodafone UK, the UK operating company, has its registered office in Newbury.

It's sometimes awkward to work things out when companies set themselves up for international operations. There have to be separate tax entities set up in each tax jurisdiction (at least at the moment, until further EU integration to form a superstate creates a single tax region for the whole of the EU).

5
0

What Brexit means for you as a motorist

Peter Gathercole
Silver badge

Re: Speculation

Rare would be welcome.

On my way to work yesterday, I was on the A34 southbound overtaking the commercial vehicles, in a stream of three BMWs immediately in front of me, and at least two behind!

BTW, I'm in something that could, just, be described as a British car, although it is a little elderly.

4
0

You can be my wingman any time! RaspBerry Pi AI waxes Air Force top gun's tail in dogfights

Peter Gathercole
Silver badge

Re: T500

No, I'm thinking of the X-9 Ghost in Macross Plus, or the F/A-37 Talon in Stealth.

0
0

A month to save digital currency Ethereum?

Peter Gathercole
Silver badge

Currency scarcity

Arthur - "What's that smoke"

Ford - "It's just the Golgafrinchams burning the trees"

0
0

Lester Haines: RIP

Peter Gathercole
Silver badge
Pint

I've nothing to say,

Too upset (reaches for Lohan mug for consolation, while wishing I'd got the glass for a more appropriate beverage).

3
0

RIP ROP: Intel's cunning plot to kill stack-hopping exploits at CPU level

Peter Gathercole
Silver badge

Re: It'd be nice to have a system...

What do you run the VM's on? Intel Mainframes of course.

0
0

ZFS comes to Debian, thanks to licensing workaround

Peter Gathercole
Silver badge

Re: Correct me if I'm wrong, please: @INFINITY -1

Back when UNIX was effectively a Bell Labs internal project, with educational institutions given source code access for the cost of the media, a lot of this UNIX software you talk about was actually written by people working in the educational institutions. As such, they almost certainly did not own all of, or even very much of, the work they did (most institutions take part or all of the rights to inventions by their employed staff, with research sponsors taking some of what's left).

Back in those days, it was much simpler to write something for your local use, and make it available for free or media costs to other educational institutions, than to try to monetize it. As such it was often provided with little or no license other than something like "free to use for educational institutions". This effectively meant that by sharing it, you had already lost control.

RMS himself understood this. In order to be free of these restrictions so that he could assert his right to make software freely available, he resigned from MIT shortly after starting the GNU project.

3
0
Peter Gathercole
Silver badge

GPL and LGPL

For applications that are compiled and run on Linux, the fact that most of GCC and many libraries are actually published under the Lesser GPL (LGPL) means that it is possible to ship code that is compiled and uses these libraries under any license you wish.

The problem here is that ZFS requires code that runs in kernel-space, and thus uses more of Linux than a user-space program. There have been discussions about whether kernel modules use enough of the kernel's interface (specifically the kernel symbol table) to mean that GPL licensing restrictions apply.

In my view, the status of kernel modules needs clarification. There should be some exemption, like the LGPL derivative-work exemption for statically linking LGPL libraries into binaries, so that correctly written kernel modules can be added without violating the GPL. In this respect, I think the stance RMS is taking, for all his good deeds and words, is akin to cutting off his nose to spite his face.

I understand that he has a glorious vision, but pragmatically, it will never be possible to have the whole world's software infrastructure running under GPL.

3
0

This is what a root debug backdoor in a Linux kernel looks like

Peter Gathercole
Silver badge

Re: Writing to /proc as user?

You know, it used to be that the correct use of permissions, separate UIDs and GIDs for applications, and the standard UNIX permission model were deemed sufficient to protect from most application programs, without a sandbox.

The whole idea of sandboxes came about on other OSs, which needed the OS to be protected from applications.

I appreciate that modern sandbox implementations allow resources to be fenced as well as access, and also that chroot used to be commonly used to give more protection, but still the assumption that a sandbox is required on a Linux system suggests that there is something wrong in the implementation or configuration of the systems (and that is what the article suggests).
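The traditional owner/group/other model referred to above can be shown in a few lines of Python. This is a minimal sketch using the standard library, nothing more.

```python
import os
import stat
import tempfile

# Demonstrate the standard UNIX permission model: three sets of bits
# (owner, group, other) on a file, with no sandbox involved.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)  # owner read/write, group read, others nothing

mode = os.stat(path).st_mode
assert mode & stat.S_IRUSR and mode & stat.S_IWUSR        # owner: rw
assert mode & stat.S_IRGRP and not mode & stat.S_IWGRP    # group: r only
assert not mode & (stat.S_IROTH | stat.S_IWOTH)           # others: none
os.remove(path)
```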

3
1

Ubuntu kernel patches land

Peter Gathercole
Silver badge

Re: "don't we all just love ALSA?"

The Linux sound system was not made any easier when Pulse Audio came along (thanks Lennart, you should have been shot for this long before you were allowed to meddle with init).

The Linux audio environment is overly complicated by the need to be backward compatible with all of the previous sound systems. Because there are so many applications still in use that are no longer developed, it's necessary to maintain ALSA and OSS compatibility (thankfully, the Enlightenment Sound Daemon is mostly dead and can be ignored), and layering on new meta-systems like PA and Jack doesn't make life easier.

In some respects, I would have been happy sticking with ALSA and avoiding all the pain PA brought.

0
0

Microsoft: Why we tore handy Store block out of Windows 10 Pro PCs

Peter Gathercole
Silver badge

Re: >I wonder if you could get a discount on licence fees for features that are removed...

That argument stands up well until the UI that everybody grew up with is changed almost beyond recognition.

It then becomes just as easy to retrain to a sane UI on another platform as it does to retrain them to the new Windows one.

MS appear to have taken one step back from the brink, but the changes are still pretty radical.

6
0
Peter Gathercole
Silver badge

@Destroy All Monsters

What does that make a real physical calculator?

Some locations I work at don't allow phones or laptops to be brought in. They allow calculators, so I have one just for those locations.

(P.S. I also have a slide rule, but that is just a curio, not because I actually use it in anger!)

1
0

IBM's quantum 'puter news proves Big Blue still doesn't get 'cloud'

Peter Gathercole
Silver badge

Re: Why get upset about this?

The IBM press release is quite clear. "..access through a first-of-a-kind quantum computing platform delivered via the IBM Cloud".

So, the quantum computer itself is not part of a cloud, the means of getting access to it and creating jobs is. The access will be via a SoftLayer application running within the "IBM Cloud". Note the qualification. It's not "The Cloud", it's the "IBM Cloud". So it is whatever IBM defines is the "IBM Cloud", and given the wide scope of most people's definition of what cloud computing is all about (PaaS, IaaS, SaaS etc.), they've neither lied, nor have they misunderstood cloud computing.

The release also talks about a 5-qubit system, and then about being able to use individual qubits. This potentially means that up to 5 jobs may be running at the same time, and if the cloud access platform allows jobs to be queued up in some form of batch system (I don't know, but I would set it up that way myself), then many, many people could be using the access platform at the same time. I very much doubt you get a command prompt directly on the quantum computer itself (I'd love to see the source code of the OS if you could!)

Whether it's misleading is a very subjective matter, and will be based entirely on whatever definition of cloud computing the reader wishes to believe.

Me, I subscribe to Cloud as being "someone else's computer", as stated by an AC at the beginning of the comment thread, so this fits that definition.

2
0

ZX Printer's American cousin still in use, 34 years after purchase

Peter Gathercole
Silver badge

Re: Not a 'thermal' printer...

The UK ZX printer caused a lot of radio interference when it was printing. There is no way it could have been marketed in the US because of the FCC rules on interference.

5
0
Peter Gathercole
Silver badge

I think there must have been more differences.

The ZX Printer, which printed on aluminium-coated paper, plugged directly into the ZX expansion port via the board-edge connector. When used with a ZX-81, it had a pass-through port for the RAM pack.

If this printer works in the same way, I am interested in what he was driving it with, because the ZX-81 and the original ZX Spectrum were the only machines capable of driving it. Anything else would require a box of tricks to emulate the ZX-Expansion bus.

I suspect that this is an RS-232 printer, which would have been plugged into a ZX Interface-1, and thus could be used with any other computer with a serial port and the correct cable (IIRC, the serial port on the Interface-1 used a non-standard [not that there were standards back then] pin layout, but it was documented, so within the ability of anyone with a soldering iron to make a cable).

If he is actually using a Spectrum (sorry, Timex-Sinclair 2068), then I suspect that would be much more newsworthy.

1
0

Hold on a sec. When did HDDs get SSD-style workload rate limits?

Peter Gathercole
Silver badge

Re: All of this also ignores... @John

Ah, but more frequent single failures in a RAID set are an annoyance, not something putting your data at risk (as long as you replace the failed disks).

Multiple concurrent failures risk your data!

I will opt every time for a scenario where I have to replace single drives more frequently, as opposed to one with less frequent work, but increased risk of data loss.

2
0
Peter Gathercole
Silver badge

All of this also ignores...

... the reads and writes performed under the covers by most RAID controllers to prevent bitrot. It could very well be that there is further 'amplification' (and, it should be noted, this will also happen for as long as the RAIDset is powered, even if it is not being actively written to).

This probably makes it even more important to not buy all of the disks in a RAIDset at the same time or from the same batch of disks.

8
0
Peter Gathercole
Silver badge

@Adam 1

The S.M.A.R.T. data maintained by the drive does contain counters of all sorts, which I believe include the total amount of data written, so that could be used to try to enforce this type of limit.

0
0

RIP Prince: You were the soundtrack of my youth

Peter Gathercole
Silver badge

Re: Seems to be a mass die-off of celebrities at the moment

Oops. Typo.

0
0
Peter Gathercole
Silver badge
Holmes

Re: Seems to be a mass die-off of celebrities at the moment

But along with the conclusion about a 'bad year', they also offered an explanation: the number of celebrities who were recognized increased over the decades from the 1950s onward because of the influence of television (and before that, radio, film, newspapers and theater would all have had their effect on boosting the number).

Those celebrities, who would have been in their 20s and 30s when telly was new, are now in their 70s and 80s. And in the decades after the 60s, celebrities tended to do things that damaged their long-term health ("I hope I die, before I get old"), so are probably candidates for an earlier death.

So it's really not a huge surprise. I predict that the number will increase year-on-year for the next 25 years or so, and then plateau, and then people will lose interest as the Internet-age celebrities reach an age when they start dying. Either that, or we'll get Logan's Run-type euthanasia, or people will transfer their consciousnesses into robots.

3
0

BOFH: Thermo-electric funeral

Peter Gathercole
Silver badge

Re: as if owning IT antiquity was one of those positive character traits

I've got some 8" floppies formatted to V7 UNIX UFS standard from 1978 and 1979. We had to use them as overflow storage on a PDP11/34 when I was at university, because there was too little space on the RK07 drive packs!

1
0

Pro who killed Apple's Power Mac found... masquerading as a coffee table

Peter Gathercole
Silver badge

Re: Older machines were far more versatile--

That's ef'ing genius! I love it.

1
0

Edward Snowden sues Norway to prevent extradition

Peter Gathercole
Silver badge

Passport issues?

Hasn't he had his US passport invalidated, or did Russia issue him one of theirs?

If he doesn't have a valid passport, he can't really travel anywhere (except maybe the US).

Edit: Oops, looks like I should have read all of the comments before saying this! I'm an idiot.

5
0

I am sending pouting selfies to a robot. Its AI is well buff

Peter Gathercole
Silver badge

Re: You clean up nice!! @Esme

I think you're objectifying him!

Complimenting him on his looks, that's clearly sexual harassment!

Quick, call the reverse-feminist brigade.

(what do you mean he's not complaining)

8
0

High performance object storage: Not just about reskinning Amazon's S3

Peter Gathercole
Silver badge
FAIL

Re: Nothing new @Reg

Your comment contains an oxymoron. A "globally unique identifier" cannot clash, by definition.

Adding the geo-datacentre makes it hierarchical, and actually means that it becomes difficult to address an object if it moves to another datacentre.

In case you had not realized, there are many ways to get files from filesystems that do not require mounting (if you know a file handle, some past implementations of NFS allowed you to access a file without mounting the filesystem, but that was a bug!). You're just applying current thinking to make an artificial distinction to try and preserve the definition of an object file store.

Despite your completely valid points, I still maintain that an object filestore is just a filesystem by another name.

My use of the POSIX example was just to illustrate the use of inodes, and that things can be familiar and different at the same time. I was not saying that all filesystems need to be POSIX compliant, and the use of things like SSHfs, which is in essence stateless but runs on top of existing filesystems indicates that the APIs you suggest can (and probably are in most instances) just a layer on top of existing filesystems.

0
0
Peter Gathercole
Silver badge

Re: Nothing new @pPPPP re MP3 players.

Yes, but all you're doing is storing an index, in the same way that the permuted index for old-style UNIX man pages from 40 years ago allows you to identify pages that mentioned particular key words.

And if you break it down, in a UNIX-like system, the objects are actually tracked by inode which links blocks to objects (files), and the file tree structure is just a way of indexing the inodes.

It could be perfectly possible, if a little unwieldy, to have an index of inodes other than a hierarchy of directory indexes, but you would have to do something about permissions, as although the inode itself includes permissions that can be checked, UNIX also requires a permissions check on the path to a file, not just the file itself.

In fact, I understand that a number of POSIX compliant filesystem implementations do allow this type of access. GPFS (sorry, IBM Spectrum Storage, or whatever it's called this week) for example, has a policy engine that allows files to be accessed outside of the traditional file tree.
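The inode point above can be sketched directly: two names in the directory tree, one file and one hard link to it, resolve to the same inode, i.e. the same underlying object. A minimal standard-library demonstration (the file names are arbitrary):

```python
import os
import tempfile

# Two directory entries for one object: the path is just an index onto
# the inode, which is what actually links blocks to the file.
d = tempfile.mkdtemp()
original = os.path.join(d, "notes.txt")
alias = os.path.join(d, "alias.txt")
with open(original, "w") as f:
    f.write("same object, two names\n")
os.link(original, alias)  # hard link: second entry, same inode

same = os.stat(original).st_ino == os.stat(alias).st_ino
assert same  # both paths index the same inode
```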

1
0
Peter Gathercole
Silver badge

Re: Nothing new

I know that responding to my own comment is a bit... well, poor form but -

How global is global? If it's really global, what is the arbitration system to make sure that there are no collisions with other systems and organizations? And are objects immutable, so that you have to version them as part of their globally unique identifier? I cannot really believe that there are people who believe that a non-hierarchical unique identifier is really possible at any scale.

Is there any structure at all imposed upon the identifier and format of the metadata? If there is a structure, then it's just another type of file system with a different indexing system. Tree based filesystems are not the only type that have been used, they've just become almost standard because they mostly fit the requirements of most users.

I know that, in theory, if you can segregate the object from the path used to access the actual storage of the object, you become storage agnostic, such that objects can be moved to different stores and still be found, but under the covers there will still be something that resembles a filesystem.

This whole concept still sounds a bit like buzz words, even though CAFS have been around for more than 30 years.
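One common way object stores achieve that path/storage segregation is content addressing: the identifier is derived from the object's content, so it is location-independent. A toy sketch, assuming a SHA-256 digest as the key (illustrative only, not any particular product's scheme):

```python
import hashlib

# Toy content-addressed object store: data + metadata + an identifier
# derived from the content itself, so the key is store-independent.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, metadata: dict) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._objects[key] = (data, metadata)
        return key

    def get(self, key: str):
        return self._objects[key]

store = ObjectStore()
key = store.put(b"hello", {"owner": "peter"})
data, meta = store.get(key)
assert data == b"hello" and meta["owner"] == "peter"
```

The same key would retrieve the same object from any store holding a copy, which is the storage-agnostic property described above.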

2
0
Peter Gathercole
Silver badge
Joke

Nothing new

From Wikipedia (I know, but it's a useful first description).

"Each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier"

Hey, I've got an object storage system, and didn't know it! The "globally unique identifier" starts with "/home/peter/Media..." or some such, and each object has some metadata that can be seen using examination tools like "ls -l", "istat" and "file"

Wow. Whoda thuk it!

5
0

Admin fishes dirty office chat from mistyped-email bin and then ...?

Peter Gathercole
Silver badge

..bouncing incorrectly addressed mail

This is a security issue, as it allows spammers to identify real email addresses in an organization. If it doesn't bounce, it's a real address.

I know of many large organizations that just black-hole them for exactly this reason.

14
1

Bibliotheca Alexandrina buys a Huawei superdupercomputer

Peter Gathercole
Silver badge

Quite honestly...

... commodity hardware, attached via Infiniband with some software-defined storage solution is not particularly difficult to build nowadays. It's like Lego, and putting Linux/Lustre/Rocks/Slurm & LSF/Open-MPI on top is very formulaic.

All you need is the money. Of course, whether it actually does anything useful depends on the detailed design, and the skill of the people using it.

0
0

This year's H-1B visa lottery jammed full in just six days

Peter Gathercole
Silver badge

Re: In my experience, there's always a shortage of the highly-skilled workers ...

Has your company actually considered training and graduate level apprenticeships as a route to obtaining the right skills?

I get so fed up with there being people who have the qualifications but lack the experience in a field to apply for the available jobs, while the companies complain they can't get skilled staff. For Bog's sake, take someone with some of the skills, and train them in the rest!

This was brought home to me when my daughter was doing a degree in Graphic Design and was given a talk by a previous graduate of the course who, despite achieving a solid 2:1, said they could not get a job in the field because they could not show relevant experience. This was the same week that the government published a list of skill shortages being added to the visa quota which included, to my surprise, graphic designers.

We need to join up businesses with colleges so that not only are the right skills being taught, but newly qualified or retrained people can get a foot in the door in their field. Having to recruit from abroad is just not the right answer.

10
0

Apple's fruitless rootless security broken by code that fits in a tweet

Peter Gathercole
Silver badge

Re: The tree that flew.

If admins need to change permissions on files to make the services work, then they've probably already done something wrong because they've not understood how it hangs together.

The UNIX permission model has its quirks, but it is relatively simple (actually one of UNIX's weaknesses). If admins can't understand it, they haven't a hope in hell of understanding RBAC and ACLs!

And I'm not talking about what people generally use groups for nowadays, but another level entirely (if you've got a Linux system, read the gpasswd manual page for example).

0
0
Peter Gathercole
Silver badge

Re: The tree that flew.

To be fair, there is a requirement to be able to separate out different administration functions to non-root accounts on multi-user UNIX-like systems.

The thing is, this is a problem that was solved to some extent years ago via the normal permissions model and using groups and group administrators, and just fell into disuse.

The reason why most UNIX systems have groups like system, adm, daemon, uucp, lp etc, was so that you could use the group permissions on the programs to control the different aspects of a UNIX system, and then add the group to a person's group membership (or on really ancient UNIXes, use newgrp to change your current group) to allow you to run the necessary commands. You then restrict root access so only your most trusted users could use it, and have them use it very sparingly.

You didn't even need to be root to control the group membership. There is (was) the capability to set a password on a group, and the first member of the group would be a group administrator who could control other members of the group! You add and remove groups from someone's group set to control what they can do. Even now, some of these things still persist. For example, on AIX, I believe that it is still the case that being a member of the system group allows you to do things like mount and unmount filesystems.

It's lazy UNIX administrators who got used to using root for administering everything that caused this facility to fall into disuse.

I'm not sure whether modern UNIX and UNIX-like systems still have the code to allow this to work, but the vestigial remains are still there, without most people understanding why.

It was not as flexible or as granular as the RBAC and ACL based systems used in OSX (and to some extent in the other remaining modern UNIX systems - although the ACL systems need to work better with RBAC), and the underlying mechanisms still relied on there being a 'superuser' UID, and suid, euid and sgid, but it was the case that you could administer a system day-to-day without needing to run commands as root.

5
0

Hands on with the BBC's Micro:Bit computer. You know, for kids

Peter Gathercole
Silver badge

Re: "Same memory as the BBC Micro Model A" @Simon

16KB OS, 16KB Basic, with other ROM-based languages or OS extensions paged into the same address space as Basic.

The ability to switch into and out-of a paged ROM to handle OS extension calls without disrupting the running programs was an extremely clever piece of design that overcame what should have been a serious limitation of the Beeb. Put some RAM in the same address space, and you could do some really clever things.

0
0
Peter Gathercole
Silver badge

Re: Old photo caption

The other systems on the school list were the Research Machines 380Z/480Z, which were, IMHO, less useful in the classroom than the Beebs, although one could argue that they may have had more potential for business-type computing, as they could run variants of CP/M and its associated software, the microcomputer OS of choice for business prior to the IBM PC.

They were also much more expensive!

I think that the Newbury Newbrain was also on the list, but nobody bought them!

1
0
Peter Gathercole
Silver badge

Re: Who wrote this rubbish?

Fast page 0 access on the 6502 was a major feature, well used in the Beeb for OS vectoring and frequently-used counters (like buffer counters), which made extending the OS possible for even moderately competent machine-code programmers.

In many ways, the 6502 was a model for RISC processors: simple, with most instructions executing in a deterministic, small number of clock cycles (OK, maybe not a single cycle, but better than an 8080 or Z80), a very regular instruction set (as long as you ignore the missing opcodes that did not work), and enough useful addressing modes.

Mind you, it was simple because of the limited transistor budget available, rather than a desire to create a RISCy processor.

2
1
Peter Gathercole
Silver badge

Re: The Model A of when?

What is this, a willy-waving competition?

I graduated from University in 1981, having already worked with UNIX for three years (very progressive university, Durham)!

And, although it was launched in 1981, most people who placed an order for a Beeb when they opened the process (like me, model B, issue 3 board, serial number 7000 and something, still working) waited more than 6 months to actually receive theirs.

I'm just waiting for the real gray-beards who make me look young to wade in with their PET, Apple ][ and Altair stories.

2
0

Apple iPhone GPU designers Imagination axes 20 per cent of staff

Peter Gathercole
Silver badge

@nijam

It depends how far you go back.

Silicon Graphics' complete workstation line used to be MIPS based, and the DECstations from Digital Equipment Corp. were also powered by MIPS processors, overlapping with the VAXstations and the hot AlphaStations.

Of course, this was when there were significant differences between proper technical workstations and high-end PCs. MIPS powered two of the five (POWER - IBM, PA-RISC - HP, SPARC - Sun, MIPS - DEC and SGI) major technical workstation platforms, and they also appeared in a number of high-performance UNIX minicomputer designs (Pyramid and, I think, Sequoia spring to mind, but I believe there were others).

4
0


Biting the hand that feeds IT © 1998–2017