Re: @Graham Marsden Splinter of the Mind's Eye
This was not just an Expanded Universe story. It's THE FIRST Expanded Universe story.
It was published before The Empire Strikes Back was released, so it is now thoroughly non-canon.
Don't for a second think that ACLs are a feature introduced by Windows.
The earliest I remember ACLs being discussed was in Multics, whose design goes back to the 1960s, before Microsoft was even a company. Multics had a very complete security model for its time, which included control over processes and services as well as the filesystem.
The thing about UNIX-like file permissions is that they have been good enough for most purposes for decades. They're a long way from being perfect, and I've said as much many times on these forums, but they can be made to do most of what is required with the right amount of knowledge. This has meant that until recently there was no pressing need to implement ACLs.
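As a minimal sketch of how far the classic owner/group/other bits get you (throwaway temp file, both octal and symbolic chmod forms):

```shell
# Classic UNIX mode bits in action - a throwaway temp file stands in
# for a real one.
f=$(mktemp)
chmod 640 "$f"           # owner: read/write, group: read, others: nothing
ls -l "$f" | cut -c1-10  # prints -rw-r-----
chmod u+x,o+r "$f"       # symbolic form: add execute for owner, read for others
ls -l "$f" | cut -c1-10  # prints -rwxr--r--
rm -f "$f"
```

Between those nine bits, group membership, setuid/setgid and the sticky bit, most day-to-day access problems can be expressed without ACLs at all.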
Where they were implemented, they were frequently unused because system administrators of the time did not think it necessary. Simpler times, maybe.
ACL implementations have existed in UNIX systems for many, many years. They first appeared in AIX with AIX 3.1 in 1990, and I'm pretty sure that the Veritas filesystem that could be used as the base filesystem on a number of proprietary operating systems also included ACLs.
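For anyone who has never used them, here's a minimal sketch of the POSIX ACL tooling as it looks on Linux today (assumes the acl userspace package and an ACL-capable filesystem; uid 12345 is an arbitrary example):

```shell
# Grant an extra user read access beyond the owner/group/other bits.
f=$(mktemp)
setfacl -m u:12345:r-- "$f"   # add an ACL entry for an arbitrary numeric uid
getfacl --omit-header "$f"    # shows the extra 'user:12345:r--' entry and the mask
ls -l "$f" | cut -c11         # prints '+' : the classic bits now carry an extended ACL
rm -f "$f"
```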
The Andrew File System had both Kerberos support and ACLs from the early 1990s as well.
If you think that filesystem ACLs are not enough, look at the UNIX and Linux implementations of RBAC (and SELinux). Because most RBAC implementations use PAM, this means that it is possible to have RBAC controlled by Kerberos, and even put LDAP in the mix, and this allows something not that dissimilar to what I read Windows can do. And this has been possible for many years, before Microsoft jumped on the Kerberos bandwagon.
It is perfectly possible to use Kerberos to control access to a Linux system. All distros I know ship a PAM (Pluggable Authentication Module) which allows you to use Kerberos as a primary access control mechanism. OpenSSH has Kerberos support built in, and there is support for Kerberos tickets in sudo to control user commands.
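As a rough sketch of what that PAM wiring looks like (exact file names and module choices vary by distro; older systems use pam_krb5, current ones typically pam_sss via sssd):

```
# /etc/pam.d/common-auth (Debian-style; RHEL uses /etc/pam.d/system-auth)
# Try Kerberos first, fall back to local passwords.
auth  sufficient  pam_krb5.so  minimum_uid=1000
auth  required    pam_unix.so  try_first_pass
```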
Many years ago (~20 IIRC - before even NTFS 5 and Windows 2000), there was a file system called DCE/DFS for POSIX-like systems that also integrated Kerberos tickets into filesystem ACLs. The Andrew File System (which DCE/DFS was adapted from) still exists and still uses Kerberos tickets to control access. Generally speaking, it's a technology that was regarded as unnecessary, or maybe it was just ahead of its time. I think that GPFS can also use Kerberos, but that may just be for system-to-system authentication. Thinking about it, NFSv4 and later uses GSSAPI, and you can plug Kerberos into that as well.
So don't think that Microsoft invented these things in Windows. They're playing catchup, but no doubt they will try to embrace, extend and extinguish again as they have tried with LDAP/Active Directory and DNS.
Having just read a Technet description of Kerberos constrained delegation, it would appear that Microsoft have implemented a service using a fundamental feature of Kerberos - which appeared on a number of platforms including UNIX before it was added to Windows, and have been presumptuous enough to have given it a name.
Linux implementations of Kerberos will have the same fundamental technologies, but nobody has given it a specific name except Microsoft, who are trying to cash in on other people's work. I'm pretty certain that all Linux distros will have Kerberos 5 support in their repositories. RHEL 6.5 certainly has.
There are also several deduplication facilities available for Linux, including a number of filesystems like btrfs and ZFS. You just have to use a search engine to find them. ZFS also supports tiered storage (before Windows 2012, btw), as does IBM Elastic Storage, although Elastic Storage (aka GPFS) is commercial software.
I admit that it's not out-of-the-box, but it's hardly difficult to come by.
It all depends.
If the web-site designers have loaded it with copious numbers of large images, it may actually not be possible to use dial up, at least not if you intend to maintain your sanity.
I've not seen the Verify web site, so I can't say for certain how heavy that page is.
From my perspective, there are basically two different types of farm: large ones, run by technically capable farmers, and small, mainly family-run farms that may be years behind in the deployment of technology. I have both types in my extended family, and have had to help my father-in-law comply with some of the demands made of them in the past, as even something like deciding the map reference and acreage of a field can be a challenge. Before he gave up farming, my father-in-law would have had no idea how to verify his ID using a service like this. He would have relied on a professional such as an accountant to do it for him, as he did with his tax, VAT, and to some extent his subsidy paperwork. If that avenue had not been open, he would have left farming earlier, like so many others.
It may actually be the case that a relatively technically challenged group is a good one to test the system out on, but you'd better make sure that there is an emergency safety net, because for some farms the EU subsidy is all that separates break-even from loss, and I'm not sure that the banks are compassionate enough to refrain from taking action if loan or other payments are not made because the subsidy payment is delayed.
We need a technology that can be abandoned and still be readable in future times.
Any technological solution is bound to fail because maintaining it requires repeated investment in either maintaining what will become an obsolete storage format in the future, or repeatedly re-writing it as new media are invented.
It's all very well suggesting that technology from people such as "Carnegie Mellon University and IBM Research" might be worth using, but this assumes a certain amount of continuity to maintain the physical storage that requires organisations to survive. You cannot rely on government or industry to still be around in the future, and the 'Cloud' (whatever is meant by that) needs to be maintained as well.
You end up with stupid chicken-and-egg situations if the description of the programs and machines necessary to read the media is only stored on the media itself.
I respect Vint Cerf. He's very influential. But he's not, in the grand scheme of things, an engineer (his degrees are in Mathematics, and he's managed various teams and companies mainly on data communication). Nowadays, he's good at the grand scheme thinking, not the detail.
He was being interviewed on Radio 4 this morning, and I got the feeling that he was either dumbing down what he was saying for a non-technical audience, or that he did not fully understand various fundamentals of machine architecture and what would be necessary to maintain in order to run a program from a current generation of machines. I would hope that it was the former, but I was not convinced. When talking about the systems, he described taking a snapshot of the software "with a description of the machine it runs on", glossing over the fact that the description would have to be incredibly detailed to capture all the nuances of machine architecture to allow a working machine to be reconstructed from it.
I would suspect strongly that it would already be nigh on impossible to reconstruct systems from people like DG, Prime or Tandem (amongst others) unless working physical instances exist.
Trying to capture all of the operating characteristics of a complex modern processor like Power 8 or a Haswell and the associated support chipsets to allow it to be reimplemented in the future on architectures unimaginable at the moment would be a herculean task!
Much better would be to ban the use of all proprietary closed file formats, and keep the definition of the open file formats in enough detail to reconstruct the data stored in those formats.
But this does not alter the fact that there needs to be readable media maintained in perpetuity.
I think the 5.25" floppy disk was a flawed medium anyway.
You relied on the disk remaining flexible enough to spin in a case which was not flat, stiff enough not to crease while it was being spun up and moved over the heads, and on the glue remaining stable enough to keep the rust particles attached to the disk while it was scoured by the head and the 'soft' material on the inside of the case. And you also had the problem of the way the clamp on the drive grabbed the plastic of the disk itself, and over time damaged the edge of the hole.
From my experience, all 8" and 5.25" floppies are now of questionable use, regardless of manufacturer. Certainly, I have Verbatim disks that still read, and BASF, Nashua and 3M disks that don't.
The 3.5" and 3" disks were slightly better because at least the case was rigid, and was less likely to rub the surface of the disk while it rotated.
Optical disks are much less likely to suffer, because there is significantly less physical contact (ideally none) anywhere on the disk except where the clamp grabs the disk. I have 30 year old audio CDs that still play, and some CD-R disks that I recorded before the millennium that still can be read.
My personal feeling is that if the media is rated for that long, there will be some effort made to make sure that, at least in the medium term, the media can be read.
Couple this with the fact that it piggy-backs on a consumer-level technology and should be readable on any BD-XL drive, and there is a higher chance that devices that can read it will continue to be made into the near future (decades). I know that it is not really a good comparison, but CDs are still readable in current-generation BluRay readers, which shows that a medium with sufficient market penetration can still be readable nearly four decades later.
I know that this is not anything like the 1000 years specified for this media, but it is suitable for medium term (decades) archive of financial data in a way that current generations of disk/tape technology is not.
I'm not going to suggest that we should stop using durable physical media for the intellectual riches of our society, however, because when the technology fall happens, anything that is not readable by eye will be useless anyway!
And do what, once they are out of the seat?
Have you actually tried to do anything using just the minimal set of buttons on the telly itself?
You're normally limited to buttons for power, channel up and down and volume up and down. If you're lucky you may have an input selector and sometimes a menu button. And if you're really lucky, there may be a physical power switch somewhere you can find it.
Whilst checking a Sharp telly I was given (without a remote), I tried to get it to re-scan the DTV channels after I had done a reset. Turns out you can't do it at all without the remote. Fortunately, I came across a code for one of my universal remotes that provided the "DTV menu" button needed. I also think that my main living room telly can't select HDMI as an input from the buttons on the telly.
Somewhere in the house I have a Maplin catalogue from about 1981. I know that this is a few years after they started (I first ordered from them in about 1976, but from an advert in ETI). This was the time that Radio Spares and Farnell would not sell to you unless you had a business account.
I'm pretty certain that even back then they sold gadgets and gimmicks like RC cars, clocks et al. There just weren't as many things available (remember that back then, digital watches were a pretty neat idea), and basic things like digital multimeters, calculators, breadboards with TTL and LEDs, and electronic ignition modules satisfied the techno-lust of the geeks of the day, while other men (sorry, the time was just more sexist) wanted motorbikes, powerful cars and season tickets to their football team.
I never saw a US Radio Shack store, having only seen the UK Tandy stores. I was never impressed by them because they were too small to stock enough of the available Radio Shack range to be useful.
But on one of the previous articles, someone posted a link to a blog that contained a link to the old catalogues site for Radio Shack which shows a proud history of actually selling useful things.
I can see that some older people might look back fondly on the past, but today is a different time. I miss bricks and mortar shops of all types, there's nothing like browsing and being actually able to see what you are buying, but I can see that they can't stock the ranges or match the prices of internet sellers. Unless you live in a large city, it's much easier nowadays to order online and just have the stuff delivered.
People in the UK compare Maplin to Radio Shack, but I wonder just how many of those people remember that Maplin first became big by selling mail order rather than having many (any?) physical stores.
Bearing in mind that my EETV box (which I did not ask for, it just arrived one day between Christmas and the New Year) does not appear to offer any EE specific content, it will probably still work fine when the EE brand disappears, unless they ask for it back (which they can do, according to the Ts&Cs). But I can't see BT wanting it back after the takeover. It'll cost them more than it's worth.
I'm a Sky TV customer (for longer than I've been an EE customer) as well, using EE for my mobile, broadband and phone (the Sky broadband offer was pants in my area, and EE offered a significant multi-play discount). The EETV box is a nice-to-have but not actually used that much. It's quite nice (4, count them, 4 Freeview HD tuners) and can record three programmes off-air while watching a fourth. It's not got too many add-on services yet (no Amazon Prime, Netflix, ITV Player or 4OD, only a service called Wuaki), and I've not even used the free tokens that came with the box yet.
I think that they sent it to me so that they could claim me as a 4-play customer, in the same way that they offered me a discount to switch from Orange to EE to claim me as a 4G customer, even though there is no 4G provision in my home area!
I just hope that they continue the discounts until the end of my contracted period.
I keep looking, and I still can't make it work unless it is not purely 2D, and/or the 'units' are more like squashed hexagons than rectangles. Maybe I need to see a fully rendered model that I can rotate.
I had not spotted that the bonds were different colours (probably a problem with my monitor and the ambient light), so I suppose that the dark grey/black bonds are double bonds, and the lighter grey bonds are single. At least that makes the valence correct.
I hope that the structure is more regular than the picture in the article!
At the left-most end, we've got rectangular blocks of 6 silicon atoms in a 2x3 pattern, with adjacent blocks overlapping so that the middle of one of the x3 rectangles forms the corner of the adjacent rectangles.
At the right-most end, we appear to have pairs of silicon atoms forming the corners of a 2x2 square structure.
In the middle, it's all a bit of a mess, with some 'bonds' looking longer than others. I've not counted the bonds properly, but the fact that silicon has a valence of 4 (the same as carbon) makes it look wrong. Maybe my chemistry is too rusty!
I suppose that it could be a problem with the projection, but I've looked hard, and I think the atoms are in the wrong place for it to be some form of aspect correction.
UEFI is a BIOS replacement. It will always be in the ROM/Flash memory as a first stage bootstrap.
If you have part of UEFI hard-coded so that it only allows booting a cryptographically signed OS from the media (and this is what Windows RT mandated; it would not boot if UEFI was configured to be more relaxed), then you've got a chicken-and-egg situation where you can't break in to run another OS.
Microsoft insisted that Windows RT systems were locked down like this because they did not want someone buying a Surface and showing how well Android would run on the rather nice hardware.
As discussed before on these forums, the consensus is that one of the distro owners should provide a UEFI-compliant, cryptographically signed GRUB that could be booted to break the straitjacket that was being planned by the Trusted Computing Group, or whatever it was last called.
A locked UEFI on a Raspberry Pi would be a complete disaster.
Plain and simply, no it doesn't.
Same issue. The CD/DVD is just a local extract of the repository. And I'm not too sure how many distros have Apache on the install CD/DVD. Desktop releases of Ubuntu don't.
To put this in context, LibreOffice is on most distro media, and that is not part of Linux. Similarly Firefox.
You've still not understood what Open Source software is about.
The cited defect in the Linux kernel is actually a privilege escalation issue.
Now I know that I don't know the full details of the way that this was used, but I would suspect that it is not a remote vulnerability. Looking at it, it appears that in order to exploit it, you need to be able to have a local user session on the system, which implies that the first point of security has already been breached. Looking at the stats, this is probably because of lax user or password administration or issues with input validation of data in web pages.
Indeed, the quoted stats appear to show that the highest vector for attack is file inclusion, with the second highest being an attack against the administrator, like password stealing or sniffing.
So if web site owners tightened up their code and administration practices, even if the bug still existed, it would not be nearly as important.
Anyway, the public aspects of the Zone-H web site appear to show that it is not frequently maintained (only two news items in 2014), although there may be more information for logged-in users, so it's probably not that credible a source of information.
I think that you are deliberately blurring the distinction between an operating system and an application in a repository, particularly in the Open Source world.
Just because something appears in the repository for a particular OS does not mean that it forms part of the operating system! If it did, then you could imply, by reductio ad absurdum, that everything in the Apple App Store or Google Play is part of iOS or Android respectively.
What Red Hat, Canonical, SuSE, Debian et al. do when creating a repository is take a package which has an open or permissive licence, and compile it to run on their distribution. They take ownership of the port and packaging, but pass any security, functionality or performance problems upstream to the package owner. And in some parts of the repositories, there are community-maintained packages where the distro maintainer does even less!
So in the case of Apache, problems that have nothing to do with the build process will be passed to the Apache Software Foundation, not owned by the Distro organisation.
You were correct in pointing out that my analogy with IIS was actually not a good one though, because with IIS, the owning organisation is the same as the owner of the OS.
I don't think that was your intention, however!
"Just look at website defacement stats" - this old chestnut again.
You're looking at the wrong thing. Websites may run on computers running Linux, but the code that delivers the web site is not Linux in the same way that IIS is not Windows, and a website defacement is not the same as an OS exploit. There may be some overlap, but it's very far from an exact match.
I thought we had educated all the AC trolls that cannot distinguish between the OS and the applications running on the OS.
None of the Sinclair machines were built as modular systems. In the case of the ZX80 and 81, the card-edge expander was effectively just the naked CPU buses with one or two added lines, and the cases were not designed with an eye to adding additional equipment apart from the RAM pack. Even the Interface 1 for the Spectrum was only just fit for purpose.
What happened is that capable and inquiring people found ways of using this 'expansion' bus to do things that it was not intended for. Indeed, if you could see a ZX81 with the Quicksilver expansion board on it, you would marvel at the fact that it worked at all!
I spent time and money on my ZX81 mainly because I could (and I was waiting for my BBC micro to be delivered - about 6 months IIRC). Whilst it was fun, the benefit was very minor (the number of games that made use of the QS sound card was tiny) except for the satisfaction of doing it - exactly the definition of a hobby.
I used to attend local computer user group meetings, and I took it as a challenge to make my '81 appear to do as much as the much more capable systems like Acorn Atoms. This was before the days of colour computers, when what you saw at these meetings was Commodore Pets, Apple ][s (normally with a black and white TV or monitor because the color (sic) system was not PAL) and Atoms, with the occasional UK101 or Nascom system.
At one such meeting, we had a demo of a prototype BBC micro (with a serial number below 10), and that sold me on spending the equivalent of a month's pay (I could afford this because I lived with my parents for a year after leaving University) to order one as soon as they opened the order line. I still have it; it's got an issue 3 board, and I think the serial number is somewhere around 7000.
I still have my '81 as well, but unlike the BEEB, it no longer works.
I think that Memotech were the first people to do a shaped RAM pack that conformed to the shape of the rear of the case. A nice piece of kit that was made with a metal case.
It also had a pass-through bus, so that you could plug things in behind the RAM pack. Eventually they produced bank-switched 32 and 64K memory packs, and other 'slices' that could be stacked one next to another for other things like high-resolution graphics, and RS232 and Centronics printer ports.
By the time you had bought a number of these, you might as well have bought a more capable machine!
I added an external keyboard (adapted from a Tandy keyboard by repainting the conductive tracks on the flexible membrane and keyboard legends cut out from a magazine picture stuck on to the top of the keys with clear tape), together with a power switch on the keyboard. The ZX81 then sat untouched on a shelf, well away from poking fingers.
Once there and safe from unwanted movement, I added a Quicksilver expansion board and sound card, together with an additional modulator to add the sound to the TV signal. I also hacked around with the internal 1K of memory, mapping it into a different address in the memory when the RAM pack was installed so it could be used, and also added a second 1K of static memory on the ULA/ROM side of the bus isolation resistors which allowed me to use it as a programmable character map by manipulating the I register that was used to hold the page address of the base of the character table.
I never had any problems with it until my (homebrew) power supply popped its bridge rectifier and fried the RAM pack!
Silverlight was always intended to be an infrastructure lock-in by Microsoft, designed to lever more OS sales and damage the viability of other operating systems/platforms.
Microsoft's collaboration with the Mono team on Moonlight was just lip service. Moonlight was always going to be sufficiently far behind Silverlight to prevent it being a realistic proposition.
I stand by every word I said. I do not think that your post is as clear as you think it is.
You cannot protect from stupidity, and setting world write on both the files and the directories (necessary to delete a file) is something that you only do if you can accept the scenario you outlined. Just because you have "experienced" developers does not mean that they don't follow bad practice ("developers" often play fast and loose with both good practice and security, claiming that both "get in the way" of being productive). And giving world write permissions to files and directories is in almost all cases overkill. Restrict the access by group if you want to share files, and give all the users appropriate group membership. It's been good practice for decades.
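The group-based alternative is a one-liner or two. A minimal sketch (a temp directory stands in for a real project path, and in real life you'd chgrp it to a project group such as a hypothetical 'devs'):

```shell
# Replace world-writable sharing with a setgid group directory.
dir=$(mktemp -d)/shared
mkdir -p "$dir"
chmod 2770 "$dir"            # rwxrws--- : group can write, 'other' gets nothing;
                             # the setgid bit makes new files inherit the directory's group
touch "$dir/build.log"
chmod 660 "$dir/build.log"   # rw-rw---- : group-writable, not world-writable
ls -ld "$dir" | cut -c1-10   # prints drwxrws---
```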
You did say "Frankly, if it had been running as root it would probably have trashed (and crashed) the test system before too much external harm was done", but this is probably not true. You did not actually point out that root would not traverse the mount point of the NFS mounted files, but you did say "starting at a root that encompassed the whole NFS-automounted user home directory", implying that it was not the root directory of the system that was being deleted, but just the NFS mounted filesystems.
From personal experience, I have actually seen UNIX systems continue to run damaging processes even after significant parts of their filesystems have been deleted. This is especially true if the command that is doing the damage is running as a monolithic process (like being written in a compiled language or an inclusive interpreted one like Perl, Python or many others) and using direct calls to the OS rather than calling the external utilities with "system".
Many sites have home directories mounted somewhere under /home, so if it were doing an ftw in collating sequence order from the system root, it would come across and traverse /home before /usr (the most likely place for missing files to affect a system), so even if it did run from the system root, enough of the system would continue to run whilst /home was traversed. Not so safe.
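The ordering point is easy to check: in the C/POSIX collating sequence, /home simply sorts before /usr, so a walk that visits directory entries in sorted order reaches home directories first.

```shell
# /home sorts before /usr in the C collating sequence, so a sorted
# depth-first walk from / hits home directories before /usr.
printf '/var\n/usr\n/home\n/etc\n' | LC_ALL=C sort
# prints:
# /etc
# /home
# /usr
# /var
```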
And the problem here is typified by your statement 'could only delete the files that had suitable "other" permissions'.
Teach your users to set reasonable permissions on files! It goes back to my statement "too many people do not understand the inherent multi-user nature of UNIX-like operating systems".
With regard to running the script as root. You're not that familiar with NFS are you?
If you are using it properly, you will have the NFS export options set so that client root access is not honoured as root (it should be the default, which you have to override), which is there to prevent exactly this sort of problem. This maps any attempt to use root on the test system into the 'nobody' user on the server, not root. Anybody who sets up a test server to have root permissions over any mounted production filesystem deserves every problem they get!
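On Linux nfsd this is the root_squash export option, and it is indeed the default. A sketch of what it looks like (hostnames and paths here are made up for illustration):

```
# /etc/exports : root_squash maps uid 0 on the client to 'nobody' on the server.
# It is the default; you have to ask for no_root_squash explicitly.
/export/home  testbox(rw,sync,root_squash)
/export/prod  prodbox(ro,sync,root_squash,sec=krb5)
```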
There are people who have been using NFS in enterprise environments for in excess of a quarter of a century. Do you not think that these problems have been addressed before now?
Traditionally, in the UNIX world where you normally have more than one user on the system, you backup the system as root. Tools like tar, cpio and pax then record the ownership and permissions as they create the backup, and put them back when restoring files as root. This also allowed filesystems to be mounted and unmounted in the days before mechanisms to allow user-mounts were created.
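As a minimal sketch of the permissions-preserving round trip (temp directories stand in for real filesystems; run as root, -p also restores ownership):

```shell
# tar records mode bits in the archive; extracting with -p
# (--preserve-permissions) puts them back.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/secret"; chmod 600 "$src/secret"
tar -C "$src" -cf "$dst/backup.tar" .   # archive the source tree
tar -C "$dst" -xpf "$dst/backup.tar"    # restore, preserving permissions
ls -l "$dst/secret" | cut -c1-10        # prints -rw-------
```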
The problem is that too many people do not understand the inherent multi-user nature of UNIX-like operating systems, and use them like PCs (as in single-user personal computers). To my horror, this includes many of the people developing applications and even distro maintainers!
There is nothing in UNIX or Linux that will prevent a process from damaging files owned by the user executing the process. But that is not too different from any common OS unless you take extraordinary measures (like carefully crafted ACLs). But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.
GitS is all about the balance between artificial and natural consciousness. It's the main theme of both the film and the TV series, although it's more difficult to see in the original manga.
There are AIs that aspire to be 'human', with the Tachikomas and Project 2501, and AIs masquerading as humans, as with Proto. And then you've got cyborgs who wonder whether they still count as human, Motoko and Batou, with side stories of clones, ghost-dubbing onto both clones and artificial bodies, and what being human actually means.
I've not seen this yet, but I seriously doubt that it really brings much more to the subject than what's in fiction already. It will likely be an aspirational story about wanting to be human and the trials it involves, like Blade Runner, The Bicentennial Man, Demon Seed or even, in some respects, Disney's Little Mermaid. But I will look forward to seeing it when it hits Sky or the like.
It strikes me that it is not feasible to do anything reasonable in real-time.
Chances are the amount of processing to identify an instruction from this information would require a processor much faster than the one being analysed. And even if you know the instruction, you don't know the data that it is operating on.
I suppose that if you could know the sequence of instructions used to encrypt the data, you may, in time and given enough examples of the calculation being performed, be able to reverse engineer it, but as most cryptography algorithms are available, the only thing I think you could work out is which method is being used.
So you can hack the region coding of a DVD or Bluray player like this, but this is nothing like being able to see everything that a computer is doing by its emissions.
No, that was probably to comply with the FCC emissions regulations for consumer devices in the US, which were a real problem to the early home computer manufacturers.
Different manufacturers came up with different solutions. Some made their computer's case out of metal. Some put full metal enclosures around the electronics inside a plastic case, and others used conductive paint sprayed onto, or metal foil bonded to, the inside of the plastic case.
I believe this is the main reason why many UK manufacturers had difficulty selling their systems in the US, because our emission regulations were much less strict.
OK. You're right.
But eventually things have to change. Putting in a way to keep things enough the same to satisfy people like you (and me - I do echo your statement about just using it which is why I use Gnome Flashback), whilst allowing adventurous souls to move forward allows a stepping-stone migration of the sort that Windows 8 did not allow.
This was what I meant by choice.
But I wanted to point out that although Unity on Ubuntu looks like they were following the same approach as Microsoft (take the new interface or don't use Ubuntu), sanity prevailed, and a user can still choose something a little more familiar.
I have two family members for whom a new and different UI is completely inappropriate, but who have to stick with Windows because of software issues. One is my 85 year old father, who is comfortable with the WinXP/7 UI, and would find it too onerous to change (he would probably just stop using the computer), and the other is my wife, and I don't do anything to rock the boat there, for fear of the repercussions!
It's possible to get something akin to the traditional Gnome 2 interface with the gnome flashback (previously called fallback) UI that is in the main repositories. It's not quite the old interface (it's actually a Gnome 2 UI built in Gnome 3).
And Cinnamon is in the Ubuntu repositories now.
And it is also perfectly possible to use Xubuntu (community Ubuntu distro) or Lubuntu mainstream release if you don't even want Unity installed.
This is what people wanted all the time. Choice. If Microsoft had provided the ability to select a 'traditional' desktop, maybe they would not have had too many people choosing it initially, but there would have been a slow conversion, and they would not have alienated their customer base.
It's not 'passive', it just doesn't have a battery.
There is an inductive loop in the pen which picks up power from the tablet. I disassembled my daughter's Graphire4 pen (the nib pressure sensor tends to stick if you leave it pressing on a surface for an extended time), and there's a substantial board with chips on it inside.
You missed out a piece of technology. Google "Wacom", and particularly their product "Cintiq".
Graphic tablets and evolutions of them have never gone away. They've just been targeted at the people who really appreciate them, people like graphic designers and illustrators.
Whilst true, that comparison is not fair to Google.
As pointed out in the article (and in several other places), Google do not have control over the deployment of new fixes to the core OS in the Android space. They may well publish a fix, and also issue new releases of Android that would technically work on a multitude of devices, but the devices are tweaked and locked by the manufacturer of the device, and sometimes by the service provider who ships it, neither of whom has any interest in allowing users to extend their use by installing patches.
I would love to see some legislation which required manufacturers, and particularly service providers, to free up boot loaders and other locking mechanisms a fixed time after their final update/patch is made available, so devices could take a generic release of Android. Maybe something under the waste-reduction legislation.
But it is difficult. Unfortunately, many Android devices rely on binary blobs, pieces of closed-source code included in their Android release to handle communication, multimedia or other components in the device. There is nothing Google can do about this, short of changing the licensing model of Android. So even if the devices could take later versions of Android, unless regression-tested versions of the blobs are released (or open-sourced!), some devices cannot move to new releases of generic Android and remain fully functional.
Pretty much all racks have castors now, including supercomputers and mainframes.
I can check, but I think that all of the IBM P7 775 and z196 and the Cray XC40 frames that I can see in the machine room here have castors.
They also have wind-down feet and load-spreader bars when they are in their final position, so that they don't move.
A similar lift story from after I left IBM, but not as interesting.
We had a Power 4 system delivered in a T42 rack to a site I was working at in Poole, and to keep it under the weight limit for the lift and to get it through the doors (it was too high for the lift doors) we stripped the drawers out of the frame in the loading bay, tipped the frame on its side, and then re-installed the drawers in the frame once it was on the machine room floor. All without telling the IBM hardware engineers!
The only problem we had was that the SPCN (Sequenced Power Control Network) cables were put back in the wrong locations, which gave us problems with the I/O drawer identification for the remaining life of the systems, even after they were connected correctly.
You got me thinking back more than 25 years to my training on Amdahl's Multiple Domain Facility (MDF) that I talked about in my last post, and I realised that back then, hypervisors did not really virtualise I/O.
What early hypervisors would do was to segregate memory and access to I/O channels (literally in the IBM mainframe world, but I suppose analogous to a set of disks or other devices hung off of a single adapter in more modern thinking), and provide a time-slice scheduler between partitions for the CPU.
All handling of I/O was performed natively by the hosted OS, including boot block requests, and it was only in very rare situations (such as extended I/O interrupts) that the hosted OS even knew it was running in a virtualised environment.
What this meant was that a hosted OS had to have complete and exclusive access to a string of disks, or indeed any other device, and all the hypervisor had to do was check that a hosted system did not try to access disks or other devices that were not presented to it.
The most difficult part of slicing a machine up like this was making sure that device interrupts were handled by the correct hosted OS, the one that had initiated the I/O operation.
There was virtualised addressing for each LPAR, so each hosted OS ran as if it had its own contiguous address space starting at 0 and running up to the memory address configured. Additional protection was provided by memory having access keys attached to each page: a hosted OS had to have the correct key to access a page, and each LPAR was only given its own memory key. I think this memory keying was a hangover from the early versions of IBM VM, which did not have a fully virtualised addressing scheme.
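The memory-keying idea can be sketched in a few lines. This is purely illustrative Python of my own (the names and numbers are made up, and real S/370 keys were small bit fields that also carried extras such as a fetch-protection bit, which I've ignored):

```python
# Toy sketch of storage-key protection: every page of real memory carries a
# small key, each LPAR runs with its own key in the PSW, and key 0 is
# reserved for the hypervisor, which may touch anything.
# Names and addresses are illustrative, not from any real firmware.

HYPERVISOR_KEY = 0

def access_allowed(psw_key: int, page_key: int) -> bool:
    """May a program running with psw_key touch a page tagged page_key?"""
    return psw_key == HYPERVISOR_KEY or psw_key == page_key

# Two LPARs, each assigned a distinct key for its own pages.
page_keys = {0x1000: 1, 0x2000: 2}               # page address -> storage key

assert access_allowed(1, page_keys[0x1000])      # LPAR 1 reaches its own page
assert not access_allowed(1, page_keys[0x2000])  # ...but not LPAR 2's
assert access_allowed(0, page_keys[0x2000])      # the hypervisor reaches all
```

The point is how little machinery is needed: a per-page tag and one comparison are enough to keep hosted systems out of each other's memory, which is why these early hypervisors could be so small.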
It's only since you have shared virtualised I/O to the hosted OSs that hypervisors have become particularly sophisticated.
Yes, that's quite true, but if you look at PR/SM, the IBM Power Hypervisor, or Amdahl's MDF (the bare-metal hypervisors I've had experience with), they are deliberately very limited in function. The name Hypervisor (derived from an old alternative name for an operating system, the Executive Supervisor) was coined to indicate that it was a supervising program that was not an operating system. It was very deliberate to not call the hosting environment an Operating System.
It's only relatively recently that you've had Type 2 or 'hosted' hypervisors that sit on top of what one would describe as a normal operating system like Linux or Windows. Examples include the original incarnation of VMware, KVM and Parallels. I understand that HP's Integrity VM sits on top of HP-UX, although I have no experience of it.
And then you have things like VMware ESXi, which is classed as a Type 1 bare-metal hypervisor, but which grew out of ESX with its Linux-based service console stripped away, leaving only what is required to host other systems. Mind you, IBM's Power Hypervisor is similarly minimal, and is so deeply embedded in the firmware of Power systems that it's easy to forget it is software at all.
Complicating it still further are Oracle/Sun's Solaris Containers (Zones) and IBM's AIX WPARs, which are not true VMs but still give you many of the advantages of partitioning.
It's all getting complicated.
What has always worried me is: if you have a legitimate set of data in a form that is not recognised by the security services, what's to stop them assuming (wrongly) that it is encrypted and demanding the non-existent key?
There is no key, so it can't be provided, and in the UK that is enough for someone to be detained.
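Part of the problem is that there is no reliable test: random or merely unfamiliar binary data is statistically indistinguishable from ciphertext. A quick Python illustration of my own (a standard Shannon entropy calculation, not anything the authorities actually use):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Random bytes (a stand-in for ciphertext, which looks statistically the
# same) sit near the 8 bits/byte ceiling; ordinary text sits well below it.
print(shannon_entropy(os.urandom(65536)))   # close to 8
print(shannon_entropy(b"the quick brown fox jumps over the lazy dog " * 100))
```

An examiner who sees high entropy cannot tell compressed, random and encrypted data apart, which is exactly the worry: the absence of a key proves nothing either way.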
I was thinking more along the lines of buying the rights and open-sourcing it. Probably can't happen as SCOG never controlled the ownership of the rights, and I guess that Attachmate will allocate some value to them.
As I cut my teeth on Bell Labs Version/Edition 6 and 7, BSD was never 'true' UNIX to me. And although it ultimately failed, the AT&T lawsuit against the Regents of the University of California over proprietary code meant that the current BSD releases are only really related to 'true' UNIX (what I tend to call genetic UNIX) by old code (V7 and before) and some APIs.
The current BSDs cannot even use the term UNIX, because (rightly or wrongly) that trademark has to be licensed, and any OS wanting to use the term must be certified by the Open Group against a verification suite, one which the BSDs would probably fail.
In some senses, SunOS came back into the fold with SunOS 5 (Solaris 2), which refactored its code base around SVR4.
Biting the hand that feeds IT © 1998–2019