Too upset (reaches for Lohan mug for consolation, while wishing I'd got the glass for a more appropriate beverage).
What do you run the VMs on? Intel Mainframes, of course.
Back when UNIX was effectively a Bell Labs internal project, with educational institutions given source code access for the cost of the media, a lot of the UNIX software you talk about was actually written by people working in those educational institutions. As such, they almost certainly did not own all of, or even very much of, the work they did (most institutions take part or all of the rights to inventions by their employed staff, with research sponsors taking some of what's left).
Back in those days, it was much simpler to write something for your local use and make it available for free or media costs to other educational institutions than to try to monetize it. As such, it was often provided with little or no license beyond something like "free to use for educational institutions". This effectively meant that by sharing it, you had already lost control.
RMS himself understood this. In order to be free of these restrictions so that he could assert his right to make software freely available, he resigned from MIT shortly after starting the GNU project.
For applications that are compiled and run on Linux, the fact that most of GCC and many libraries are actually published under the Lesser GPL (LGPL) means that it is possible to ship code that is compiled and uses these libraries under any license you wish.
The problem here is that ZFS requires code that runs in kernel space, and thus uses more of Linux than a user-space program does. There have been discussions about whether kernel modules use enough of the kernel's interface (specifically the kernel symbol table) to mean that GPL licensing restrictions apply.
In my view, the status of kernel modules needs clarification. There should be some exemption, like the LGPL derivative-work exemption for statically linking LGPL libraries into binaries, so that correctly written kernel modules can be added without violating the GPL. In this respect, I think that the stance RMS is taking, for all his good deeds and words, is akin to cutting off his nose to spite his face.
I understand that he has a glorious vision, but pragmatically, it will never be possible to have the whole world's software infrastructure running under GPL.
You know, it used to be that the correct use of permissions, separate UIDs and GIDs for applications, and the standard UNIX permission model were deemed sufficient to protect a system from most application programs, without a sandbox.
The whole idea of sandboxes came about on other OSs, which needed the OS to be protected from applications.
I appreciate that modern sandbox implementations allow resources to be fenced off as well as access controlled, and also that chroot used to be commonly used to give more protection, but still, the assumption that a sandbox is required on a Linux system suggests that there is something wrong in the implementation or configuration of the systems (and that is what the article suggests).
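To show what I mean by the traditional approach, here's a minimal sketch (assuming a hypothetical service account called svc_app) of a daemon shedding root and running under its own dedicated UID and GID before it does any real work:

```python
import os
import pwd

def drop_privileges(user="svc_app"):
    """Switch from root to a dedicated, unprivileged UID/GID before doing any real work.
    The account name is purely illustrative; it has to exist on the system already."""
    pw = pwd.getpwnam(user)
    os.setgroups([])          # shed any supplementary groups inherited from root
    os.setgid(pw.pw_gid)      # change the group first, while we still have the privilege
    os.setuid(pw.pw_uid)      # then give up root for good

if os.getuid() == 0:
    drop_privileges()
```

Once the application is confined to its own UID and GID, ordinary file permissions do most of what a sandbox is sold as doing.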
The Linux sound system was not made any easier when Pulse Audio came along (thanks Lennart, you should have been shot for this long before you were allowed to meddle with init).
The Linux audio environment is overly complicated by the need to be backward compatible with all of the previous sound systems. Because there are so many applications still in use that are no longer developed, it's necessary to maintain ALSA and OSS compatibility (thankfully, the Enlightenment Sound Daemon is mostly dead and can be ignored), and layering on new meta-systems like PA and Jack doesn't make life easier.
In some respects, I would have been happy sticking with ALSA and avoiding all the pain PA brought.
That argument stands up well until the UI that everybody grew up with is changed almost beyond recognition.
It then becomes just as easy to retrain to a sane UI on another platform as it does to retrain them to the new Windows one.
MS appear to have taken one step back from the brink, but the changes are still pretty radical.
What does that make a real physical calculator?
Some locations I work at don't allow phones or laptops to be brought in. They allow calculators, so I have one just for those locations.
(P.S. I also have a slide rule, but that is just a curio, not because I actually use it in anger!)
The IBM press release is quite clear. "..access through a first-of-a-kind quantum computing platform delivered via the IBM Cloud".
So, the quantum computer itself is not part of a cloud; the means of getting access to it and creating jobs is. The access will be via a SoftLayer application running within the "IBM Cloud". Note the qualification. It's not "The Cloud", it's the "IBM Cloud". So it is whatever IBM defines as the "IBM Cloud", and given the wide scope of most people's definition of what cloud computing is all about (PaaS, IaaS, SaaS etc.), they've neither lied, nor have they misunderstood cloud computing.
The release also talks about a 5-qubit system, and then about being able to use individual qubits. This potentially means that up to 5 jobs may be running at the same time, and if the cloud access platform allows jobs to be queued up in some form of batch system (I don't know, but I would set it up that way myself), then many, many people could be using the access platform at the same time. I very much doubt you get a command prompt directly on the quantum computer itself (I'd love to see the source code of the OS if you could!)
Whether it's misleading is a very subjective matter, and will be based entirely on whatever definition of cloud computing the reader wishes to believe.
Me, I subscribe to Cloud as being "someone else's computer", as stated by an AC at the beginning of the comment thread, so this fits that definition.
The UK ZX printer caused a lot of radio interference when it was printing. There is no way it could have been marketed in the US because of the FCC rules on interference.
I think there must have been more differences.
The ZX Printer, using metal-coated paper, plugged directly into the ZX expansion port via the board-edge connector. When used with a ZX-81, it had a pass-through port for the RAM pack.
If this printer works in the same way, I am interested in what he was driving it with, because the ZX-81 and the original ZX Spectrum were the only machines capable of driving it. Anything else would require a box of tricks to emulate the ZX-Expansion bus.
I suspect that this is an RS-232 printer, which would have been plugged into a ZX Interface-1, and thus could be used with any other computer with a serial port and the correct cable (IIRC, the serial port on the Interface-1 used a non-standard [not that there were standards back then] pin layout, but it was documented, so within the ability of anyone with a soldering iron to make a cable).
If he is actually using a Spectrum (sorry, Timex-Sinclair 2068), then I suspect that would be much more newsworthy.
Ah, but more frequent single failures in a RAID set are an annoyance, not something that puts your data at risk (as long as you replace the failed disks).
Multiple concurrent failures risk your data!
I will opt every time for a scenario where I have to replace single drives more frequently, as opposed to one with less frequent work, but increased risk of data loss.
... the read-writes that go on under the covers, performed by most RAID controllers to prevent bitrot. It could very well be that there is further 'amplification' (and, it should be noted, this will also happen as long as the RAIDset is powered, even if it is not being actively written to).
This probably makes it even more important to not buy all of the disks in a RAIDset at the same time or from the same batch of disks.
The S.M.A.R.T. data maintained by the drive actually does contain counters of all sorts, which I believe includes the total amount of writes, so that could be used to try to enforce this type of limit.
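As a rough illustration only (the attribute name is vendor specific and may not exist at all on a given drive), something like this would pull a total-writes style counter out of smartctl's output:

```python
import subprocess

# Sketch: "Total_LBAs_Written" is a vendor-specific S.M.A.R.T. attribute; it may be
# named differently, or be missing entirely, on any particular drive. Needs root.
out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "Total_LBAs_Written" in line:
        print(line)
```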
But along with the conclusion about a 'bad year' they also offered an explanation: the number of celebrities that were recognized increased over the decades from the 1950s onward because of the influence of television (and before that, radio, film, newspapers and theater would all have had their effect on boosting the number).
Those celebrities, who would have been in their 20s and 30s when telly was new, are now in their 70s and 80s. And in the decades after the 60s, celebrities tended to do things that damaged their long-term health ("I hope I die, before I get old"), so are probably candidates for an earlier death.
So it's really not a huge surprise. I predict that the number will increase year on year for the next 25 years or so, and then plateau, and then people will lose interest as the Internet-age celebrities reach an age when they start dying. Either that, or we'll get Logan's Run-type euthanasia, or people will transfer their consciousnesses into robots.
I've got some 8" floppies formatted to V7 UNIX UFS standard from 1978 and 1979. We had to use them as overflow storage on a PDP11/34 when I was at university, because there was too little space on the RK07 drive packs!
That's ef'ing genius! I love it.
Hasn't he had his US passport invalidated, or did Russia issue him one of theirs?
If he doesn't have a valid passport, he can't really travel anywhere (except maybe the US).
Edit: Oops, looks like I should have read all of the comments before saying this! I'm an idiot.
I think you're objectifying him!
Complimenting him on his looks, that's clearly sexual harassment!
Quick, call the reverse-feminist brigade.
(what do you mean he's not complaining)
Your comment contains an oxymoron. A "globally unique identifier" cannot clash, by definition.
Adding the geo-datacentre makes it hierarchical, and actually means that it becomes difficult to address an object if it moves to another datacentre.
In case you had not realized, there are many ways to get files from filesystems that do not require mounting (if you know a file handle, some past implementations of NFS allowed you to access a file without mounting the filesystem, but that was a bug!). You're just applying current thinking to make an artificial distinction to try and preserve the definition of an object file store.
Despite your completely valid points, I still maintain that an object filestore is just a filesystem by another name.
My use of the POSIX example was just to illustrate the use of inodes, and that things can be familiar and different at the same time. I was not saying that all filesystems need to be POSIX compliant, and the use of things like SSHfs, which is in essence stateless but runs on top of existing filesystems, indicates that the APIs you suggest can be (and probably are in most instances) just a layer on top of existing filesystems.
Yes, but all you're doing is storing an index, in the same way that the permuted index for old-style UNIX man pages from 40 years ago allows you to identify pages that mentioned particular key words.
And if you break it down, in a UNIX-like system, the objects are actually tracked by inode which links blocks to objects (files), and the file tree structure is just a way of indexing the inodes.
It could be perfectly possible, if a little unwieldy, to have an index of inodes other than a hierarchy of directory indexes, but you would have to do something about permissions, as although the inode itself includes permissions that can be checked, UNIX also requires a permissions check on the path to a file, not just the file itself.
In fact, I understand that a number of POSIX compliant filesystem implementations do allow this type of access. GPFS (sorry, IBM Spectrum Storage, or whatever it's called this week) for example, has a policy engine that allows files to be accessed outside of the traditional file tree.
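To make the point concrete, here's a throwaway sketch (any readable directory will do) showing that a directory really is just an index of names onto inode numbers:

```python
import os

# A directory is, in effect, an index mapping names onto inodes; the inode,
# not the name, is what actually ties the data blocks to the object.
with os.scandir("/etc") as entries:
    for entry in entries:
        print(f"{entry.inode():>10}  {entry.name}")
```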
I know that responding to my own comment is a bit... well, poor form but -
How global is global? If it's really global, what is the arbitration system to make sure that there are no collisions with other systems and organizations? And are objects immutable, so that you have to version them as part of their globally unique identifier? I cannot really believe that there are people who believe that a non-hierarchical unique identifier is really possible at any scale.
Is there any structure at all imposed upon the identifier and format of the metadata? If there is a structure, then it's just another type of file system with a different indexing system. Tree based filesystems are not the only type that have been used, they've just become almost standard because they mostly fit the requirements of most users.
I know that, in theory, if you can segregate the object from the path to access the actual storage of the object, you become storage agnostic, such that objects can be moved to different stores and still be found, but under the covers, there will still be something that resembles a filesystem.
This whole concept still sounds a bit like buzzwords, even though content-addressable file stores (CAFS) have been around for more than 30 years.
From Wikipedia (I know, but it's a useful first description).
"Each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier"
Hey, I've got an object storage system, and didn't know it! The "globally unique identifier" starts with "/home/peter/Media..." or some such, and each object has some metadata that can be seen using examination tools like "ls -l", "istat" and "file"
Wow. Whoda thuk it!
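And to labour the joke, the 'metadata' is a single stat() call away (the path below is obviously hypothetical):

```python
import os
import stat
import time

path = "/home/peter/Media/holiday.jpg"   # hypothetical 'globally unique identifier'
st = os.stat(path)
print(stat.filemode(st.st_mode), st.st_uid, st.st_gid, st.st_size,
      time.ctime(st.st_mtime), path)     # more or less what "ls -l" shows you
```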
This is a security issue, as it allows spammers to identify real email addresses in an organization. If it doesn't bounce, it's a real address.
I know of many large organizations that just black-hole them for exactly this reason.
... commodity hardware, attached via Infiniband with some software-defined storage solution is not particularly difficult to build nowadays. It's like Lego, and putting Linux/Lustre/Rocks/Slurm & LSF/Open-MPI on top is very formulaic.
All you need is the money. Of course, whether it actually does anything useful depends on the detailed design, and the skill of the people using it.
Has your company actually considered training and graduate level apprenticeships as a route to obtaining the right skills?
I get so fed up with people who have qualifications but lack the experience in a field that would let them apply for the available jobs, while the companies complain they can't get skilled staff. For Bog's sake, take someone with some of the skills, and train them in the rest!
This was brought home to me when my daughter was doing a degree in Graphic Design and was given a talk by a previous graduate of the course who, despite achieving a solid 2:1, could not get a job in the field because they could not show relevant experience. This was in the same week that the government published a list of skill shortages being added to the visa quota, which included, to my surprise, graphic designers.
We need to join up businesses with colleges so that not only are the right skills being taught, but newly qualified or retrained people can get a foot in the door in their field. Having to recruit from abroad is just not the right answer.
If admins need to change permissions on files to make the services work, then they've probably already done something wrong because they've not understood how it hangs together.
The UNIX permission model has its quirks, but it is relatively simple (actually one of UNIX's weaknesses). If admins can't understand it, they haven't a hope in hell of understanding RBAC and ACLs!
And I'm not talking about what people generally use groups for nowadays, but another level entirely (if you've got a Linux system, read the gpasswd manual page for example).
To be fair, there is a requirement to be able to separate out different administration functions to non-root accounts on multi-user UNIX-like systems.
The thing is, this is a problem that was solved to some extent years ago via the normal permissions model and using groups and group administrators, and just fell into disuse.
The reason why most UNIX systems have groups like system, adm, daemon, uucp, lp etc, was so that you could use the group permissions on the programs to control the different aspects of a UNIX system, and then add the group to a person's group membership (or on really ancient UNIXes, use newgrp to change your current group) to allow you to run the necessary commands. You then restrict root access so only your most trusted users could use it, and have them use it very sparingly.
You didn't even need to be root to control the group membership. There is (was) the capability to set a password on a group, and the first member of the group would be a group administrator who could control the other members of the group! You add and remove groups from someone's group set to control what they can do. Even now, some of these things still persist. For example, on AIX, I believe that it is still the case that being a member of the system group allows you to do things like mount and unmount filesystems.
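For anyone curious how little is involved, here's a small sketch using Python's standard pwd and grp modules to show a user's primary and supplementary groups (the account name is made up for the example):

```python
import grp
import pwd

def supplementary_groups(username):
    """Names of the groups whose member lists include this user."""
    return [g.gr_name for g in grp.getgrall() if username in g.gr_mem]

user = "alice"   # hypothetical account; adding it to groups like 'lp' or 'adm' grants those abilities
primary = grp.getgrgid(pwd.getpwnam(user).pw_gid).gr_name
print(f"{user}: primary group {primary}, also in {supplementary_groups(user)}")
```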
It's lazy UNIX administrators who got used to using root for administering everything that caused this facility to fall into disuse.
I'm not sure whether modern UNIX and UNIX-like systems still have the code to allow this to work, but the vestigial remains are still there, without most people understanding why.
It was not as flexible or as granular as the RBAC and ACL based systems used in OSX (and to some extent in the other remaining modern UNIX systems - although the ACL systems need to work better with RBAC), and the underlying mechanisms still relied on there being a 'superuser' UID, and suid, euid and sgid, but it was the case that you could administer a system day-to-day without needing to run commands as root.
16KB OS, 16KB Basic, with other ROM based language or OS extensions paged into the same address space as Basic.
The ability to switch into and out of a paged ROM to handle OS extension calls without disrupting the running programs was an extremely clever piece of design that overcame what should have been a serious limitation of the Beeb. Put some RAM in the same address space, and you could do some really clever things.
The other systems on the school list were the Research Machines 380Z/480Z systems, which were, IMHO, less useful in the classroom than the Beebs, although one could argue that they may have had more potential for business-type computing, as they could run variants of CP/M and associated software, which was the microcomputer OS of choice for business prior to the IBM PC.
They were also much more expensive!
I think that the Newbury Newbrain was also on the list, but nobody bought them!
Fast page 0 access on the 6502 was a major feature, well used in the Beeb for OS vectoring and frequently used counters (like buffer counters), which made extending the OS possible for even moderately competent machine-code programmers.
In many ways, the 6502 was a model for RISC processors. Simple, with many instructions executing in a small, deterministic number of clock cycles (OK, maybe not single-cycle, but better than an 8080 or Z80), a very regular instruction set (as long as you ignore the missing instructions that did not work) and enough useful addressing modes.
Mind you, it was simple because of the limited transistor budget available, rather than a desire to create a RISCy processor.
What is this, a willy-waving competition?
I graduated from University in 1981, having already worked with UNIX for three years (very progressive university, Durham)!
And, although it was launched in 1981, most people who placed an order for a Beeb when they opened the process (like me, model B, issue 3 board, serial number 7000 and something, still working) waited more than 6 months to actually receive theirs.
I'm just waiting for the real gray-beards who make me look young to wade in with their PET, Apple ][ and Altair stories.
It depends how far you go back.
Silicon Graphics' complete workstation line used to be MIPS-based, and DECstations from Digital Equipment Corp. were also powered by MIPS processors, overlapping with the VAXstations and the hot AlphaStations.
Of course, this was when there were significant differences between proper technical workstations and high-end PCs. MIPS powered two of the five major technical workstation platforms (POWER - IBM, PA-RISC - HP, SPARC - Sun, MIPS - DEC and SGI), and they also appeared in a number of high-performance UNIX minicomputer designs (Pyramid and, I think, Sequoia spring to mind, but I believe there were others).
I was thinking the same thing.
Where I worked at that time, there were various floppies with GIFs formatted for 256-colour VGA floating around (I believe. Of course I never had copies). I think they were downloaded by others from dodgy dial-up bulletin boards and USENET.
We had a 64Kb/s leased line to a proto-ISP in 1990, but it was all FTP, Archie and Gopher.
Barker inherited Lt. Queeg. The character was originally performed by Scottish comedian Chic Murray, and the character was so popular that when Chic said he wouldn't do it any more, Ronnie kept the character alive with a near perfect imitation.
'Doing voices' was common on BBC radio shows, but The Navy Lark took it to the limit, with audience-favourite characters voiced by other stalwarts like Jon Pertwee (who can forget Commander Wetherby once you had heard him), Michael Bates (the Padre, amongst others), and Heather Chasen, and even relatively ordinary actors (and the writer!) often voicing more than one regular character.
In order to make it work with tapes, you keep a database of all of the backed-up objects, so you don't need to look at the tapes all the time. In fact, what then determines the storage technology is how fast you need to restore your clients, not how you back them up.
Using a database makes it possible to have an incremental-forever method, although identifying duplicate objects is a bit difficult unless your database contains not only modification time and date info, but some unique hash of the backed-up object. But it does allow expiry-controlled archive as well as backup operations in the same solution.
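As nothing more than a sketch of what such a database entry might hold (real products have their own formats), a modification time plus a content hash is enough to spot both unchanged files and byte-identical duplicates:

```python
import hashlib
import os

def backup_signature(path):
    """Modification time plus a content hash: enough to decide 'unchanged since the
    last backup' and to spot byte-identical duplicates elsewhere in the store."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return (os.stat(path).st_mtime, h.hexdigest())
```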
Established traditional high volume backup solutions still work well with flash, disk, tape, and even worm devices, although they are generally not cheap.
And despite being wrong-endian, Intel x86 is the dominant ISA.
Even IBM Power has been changed at POWER8 to allow it to work with the same endianness as x86. A retrograde step, IMHO.
(x86 should have been strangled at birth, not encouraged by IBM!)
I suspect you're deliberately grossly exaggerating. 20GB of symlinks is a whole lot of symlinks, bearing in mind that each one actually occupies a relatively small amount of disk space (if the path pointed to by the symlink is relatively short, the destination is actually stored in the inode!)
They clutter the directory structure, true, but the main advantage is that they don't use much disk space.
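Easy enough to see for yourself: on typical Linux filesystems the reported 'size' of a symlink is just the length of its target string, and short targets live entirely inside the inode. A throwaway sketch:

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = "/var/log/syslog"                  # arbitrary example target, 15 characters
link = os.path.join(d, "mylink")
os.symlink(target, link)
print(os.lstat(link).st_size, len(target))  # prints "15 15" - the link stores only the path
```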
Too early for OS/2, which was intended to be the follow-on product from Windows 3.X.
Of course, windowing systems for UNIX systems (Looking Glass looked really slick), the Apple Lisa, as well as the Xerox Star, PERQ and others existed before Windows 3.0.
And don't forget DR GEM!
Nope. Windows 3.11 ran perfectly well on an 80386, which did not have any co-processor. It even worked on the cut-down 80386SX version.
Not that I was really that interested, being a committed UNIX person even then.
It must be getting to the point where the bankruptcy administrators realize that continuing will end up costing them more, without any prospect of generating any value.
As I understand it, it is only the possibility of winning some money from IBM that is keeping the remains of SCO only half-dead. If all that is left is the IBM counterclaims, then there are serious costs and potential losses, but no potential gains, in keeping the company in its zombie state. They should just accept their losses, and finally wind SCO up.
Hopefully very shortly.
For unstructured Basics, where an IF statement could only condition a single following statement, ELSE was not available, procedures did not yet exist, functions were so primitive they were basically useless, and the only loop construct was FOR...NEXT, using GOTOs was the only way you could write code.
It took versions like GW Basic and BBC Basic (plus various versions on Mini-computers) to bring it into a relatively modern era.
People forget how simple Dartmouth Basic and Basic-80/MBasic were!
IF there is a problem, it's not with the rm command.
About the only thing I can think may be at fault is the code that allows the UEFI variable to be accessed as part of sysfs. This will be some code in sysfs itself or an associated plug-in module to sysfs.
As has been pointed out earlier, it may be possible to mitigate this slightly by changing the sysfs abstraction to UEFI so that anything that looks like a directory does not have the "w" bit set, preventing any of the UEFI variables that appear as files from being deleted (although we are talking about root here...), but that would not prevent someone with the correct privileges from overwriting the contents of one of the 'files'. It may also make it impossible to create new 'files' (variables), if this was required.
I suspect that UEFI actually does have a filesystem-like storage structure for its own use (it's an OS of sorts, anyway), so it would make sense to the developer to make it appear under sysfs as a directory tree.
The concept is actually very simple. "Everything as a file" means that you can use any tool you like that works with files on other things. It's incredibly powerful.
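A two-liner makes the point: kernel state that has nothing to do with any disk is read with exactly the same calls as an ordinary file (the path is Linux-specific, obviously):

```python
# Plain file I/O against kernel state - no special API needed.
with open("/proc/sys/kernel/hostname") as f:
    print(f.read().strip())
```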
An analogy to what was happening here could be files on a remote share in Windows that appear on the Windows desktop. It looks like a folder containing files, but it is not stored on any hard disk local to the system, and does not actually appear in the user's "Desktop" folder. It's abstracted to a different storage medium by the OS, so wiping the local disk by formatting it will not touch these files (in this case, on the share). Like pseudo-filesystems in /sys, files on a share can be explicitly overwritten or deleted, but formatting the disk won't touch them.
For a share, the files are actually stored on another computer. For the /sys directory on Linux, it is 'stored' (or translated) to another medium than the disk, which can include the NVRAM in UEFI. Some entities in /sys are read-only (mainly for providing information, but also for input only devices like keyboards and mice), but anything that can change will probably be writable with the appropriate permissions. Being able to write to UEFI allows Linux utilities to make useful changes to the way the system boots the OS, amongst other things.
Like everything UNIXy, you should treat super-user (root) with more care, and you should never really do more than is actually essential with raised privileges.
I actually enjoyed the job back then.
It was a time when people in the UK could actually influence products, rather than what happens now, just complaining to support reps. in whatever-is-the-cheapest-location-this-year, and getting completely ignored because complex problems upset their call statistics.
I find it rewarding identifying and overcoming complicated problems! Does that make me odd? (No, on second thoughts, don't answer that).
Once, in the early '90s, when I was creating an APAR for a particularly obnoxious setup problem for an IBM printer on AIX, I was accused by the US support team of wanting to set an 'obscure' paper size as the default, rather than the 'standard'. The size I wanted was, of course, A4, and the US had set a hard default of US Letter.
After some fruitless to-ing and fro-ing, I suggested that they either climb down from their ivory tower, or re-christen AIX as the "American Interactive Executive".
This got me flamed for my unprofessional remarks in the problem management system, which came back down the management chain. I appealed back to my management chain in the UK explaining the scope of the problem, who thought my comments were, on the whole, rather restrained.
I actually got an apology, together with a thorough re-working of the factory defaults in the 4019 laser printer, a fix to the printer setup prepended to the print job by the driver, and a re-work of the nroff and troff device defaults, effectively fixing the problem in three different places!
Sometimes support processes work, sometimes they don't.
This means that you've (yes, you) f***ed up the page size settings in the page layout of whatever-app-you're-using.
Unfortunately, there is plenty of scope for this, especially if you rely on documents crafted piecemeal from many sources by cut-and-paste, because many office products will also keep individual page settings if you plagiarize other people's documents in large chunks.
Mind you, I think that the first step in teaching office package use should be setting up default page size, dictionary and keyboard settings (huh - you got no training! Shocking).
I have to admit that whenever I use certain dominant desktop OSs, it really bugs me that it no longer seems to be the case that you can set these things up on a personal basis in your profile, and that many applications appear to want to remember what you used last time, rather than work from the defaults.
Yes, I know the last time I used A***e Reader to print some handouts I printed two-up, double-sided, tumbled and flipped on the long side. That doesn't mean I want the paper copy of my tax form printed the same way! What a waste of paper!