The bidding process itself is enough to put most SMEs off.
The time and the cost are just not something that a company which doesn't turn over millions can risk without some expectation of winning the contract.
...and stored on recordable optical media for longevity!
I think the 5.25" floppy disk was a flawed medium anyway.
You relied on the disk remaining flexible enough to spin inside a case that was not flat, stiff enough not to crease while it was being spun up and moved over the heads, and on the glue remaining stable enough to keep the rust particles attached to the disk while it was scoured by the head and the 'soft' material lining the case. And you also had the problem of the way the drive's clamp grabbed the plastic of the disk itself and, over time, damaged the edge of the hole.
From my experience, all 8" and 5.25" floppies are now of questionable use, regardless of manufacturer. Certainly, I have Verbatim disks that still read, and BASF, Nashua and 3M disks that don't.
The 3.5" and 3" disks were slightly better because at least the case was rigid, and was less likely to rub the surface of the disk while it rotated.
Optical disks are much less likely to suffer, because there is significantly less physical contact (ideally none) anywhere on the disk except where the clamp grabs the disk. I have 30 year old audio CDs that still play, and some CD-R disks that I recorded before the millennium that still can be read.
My personal feeling is that if the media is rated for that long, there will be some effort made to make sure that, at least in the medium term, the media can be read.
Couple this with the fact that it piggy-backs on consumer-level technology and should be readable on any BD-XL drive, and there's a higher chance that devices that can read it will continue to be made into the near future (decades). I know that it is not really a good comparison, but CDs are still readable in current generation BluRay readers, so that shows that a medium with sufficient market penetration can still be readable nearly four decades later.
I know that this is not anything like the 1000 years specified for this media, but it is suitable for medium term (decades) archive of financial data in a way that current generations of disk/tape technology is not.
I'm not going to suggest that we should stop using durable physical media for the intellectual riches of our society, however, because when the technology fall happens, anything that is not readable by eye will be useless anyway!
And do what, once they are out of the seat?
Have you actually tried to do anything using just the minimal set of buttons on the telly itself?
You're normally limited to buttons for power, channel up and down and volume up and down. If you're lucky you may have an input selector and sometimes a menu button. And if you're really lucky, there may be a physical power switch somewhere you can find it.
Whilst checking a Sharp telly I was given (without a remote), I tried to get it to re-scan the DTV channels after I had done a reset. Turns out you can't do it at all without the remote. Fortunately, I came across a code for one of my universal remotes that provided the "DTV menu" button needed. I also think that my main living room telly can't select HDMI as an input from the buttons on the telly.
Somewhere in the house I have a Maplin catalogue from about 1981. I know that this is a few years after they started (I first ordered from them in about 1976, but from an advert in ETI). This was the time that Radio Spares and Farnell would not sell to you unless you had a business account.
I'm pretty certain that even back then they sold gadgets and gimmicks like RC cars, clocks et al. There just weren't as many things available (remember that back then, digital watches were a pretty neat idea), and basic things like digital multimeters, calculators, breadboards with TTL and LEDs and electronic ignition modules satisfied the techno-lust of the geeks of the day, while other men (sorry, the time was just more sexist) wanted motorbikes, powerful cars and season tickets to their football team.
I never saw a US Radio Shack store, having only seen the UK Tandy stores. I was never impressed by them because they were too small to stock enough of the available Radio Shack range to be useful.
But on one of the previous articles, someone posted a link to a blog that contained a link to the old catalogues site for Radio Shack which shows a proud history of actually selling useful things.
I can see that some older people might look back fondly on the past, but today is a different time. I miss bricks and mortar shops of all types, there's nothing like browsing and being actually able to see what you are buying, but I can see that they can't stock the ranges or match the prices of internet sellers. Unless you live in a large city, it's much easier nowadays to order online and just have the stuff delivered.
People in the UK compare Maplin to Radio Shack, but I wonder just how many of those people remember that Maplin first became big by selling mail order rather than having many (any?) physical stores.
Bearing in mind that my EETV box (which I did not ask for, it just arrived one day between Christmas and the New Year) does not appear to offer any EE specific content, it will probably still work fine when the EE brand disappears, unless they ask for it back (which they can do, according to the Ts&Cs). But I can't see BT wanting it back after the takeover. It'll cost them more than it's worth.
I'm a Sky TV customer (for longer than I've been an EE customer) as well, using EE for my mobile, broadband and phone (the Sky broadband offer was pants in my area, and EE offered a significant multi-play discount). The EETV box is a nice to have but not actually used that much. It's quite nice (4, count them, 4 freeview HD tuners) and can record three programs off-air while watching a fourth. Not got too many add-on services yet (no Amazon Prime, Netflix, ITV Player or 4OD, only a service called Wuaki, and I've not even used the free tokens that came with the box yet).
I think that they sent it to me so that they could claim me as a 4-play customer, in the same way that they offered me a discount to switch from Orange to EE to claim me as a 4G customer, even though there is no 4G provision in my home area!
I just hope that they continue the discounts until the end of my contracted period.
I keep looking, and I still can't make it work unless it is not purely 2D, and/or the 'units' are more like squashed hexagons than rectangles. Maybe I need to see a fully rendered model that I can rotate.
I had not spotted that the bonds were different colours (probably a problem with my monitor and the ambient light), so I suppose that the dark grey/black bonds are double bonds, and the lighter grey bonds are single. At least that makes the valence correct.
I hope that the structure is more regular than the picture in the article!
At the left-most end, we've got rectangular blocks of 6 silicon atoms in a 2x3 pattern, with adjacent blocks overlapping so that the middle of one of the x3 rectangles forms the corner of the adjacent rectangles.
At the right-most end, we appear to have pairs of silicon atoms forming the corners of a 2x2 square structure.
In the middle, it's all a bit of a mess, with some 'bonds' looking longer than others. I've not counted the bonds properly, but the fact that silicon has a valence of 4 (the same as carbon) makes it look wrong. Maybe my chemistry is too rusty!
I suppose that it could be a problem with the projection, but I've looked hard, and I think the atoms are in the wrong place for it to be some form of aspect correction.
UEFI is a BIOS replacement. It will always be in the ROM/Flash memory as a first stage bootstrap.
If you have UEFI hard-coded so that it only allows booting a cryptographically signed OS from the media (and this is what WindowsRT mandated; it would not boot if UEFI was configured to be more relaxed), then you've got a chicken-and-egg situation where you can't break in to run another OS.
Microsoft insisted that WindowsRT systems were locked down like this because they did not want someone buying a Surface, and showing how well Android would run on the rather nice hardware.
As discussed before on these forums, the consensus is that one of the distro owners should provide a UEFI-compliant, cryptographically signed Grub that could be booted to break the straitjacket that was being planned by the Trusted Computing Group, or whatever it was last called.
A locked UEFI on a RiPi would be a complete disaster.
Plain and simply, no it doesn't.
Same issue. The CD/DVD is just a local extract of the repository. And I'm not too sure how many distros have Apache on the install CD/DVD. Desktop releases of Ubuntu don't.
To put this in context, LibreOffice is on most distro media, and that is not part of Linux. Similarly Firefox.
You've still not understood what Open Software is about.
The cited defect in the Linux kernel is actually a privilege escalation issue.
Now I know that I don't know the full details of the way that this was used, but I would suspect that it is not a remote vulnerability. Looking at it, it appears that in order to exploit it, you need to be able to have a local user session on the system, which implies that the first point of security has already been breached. Looking at the stats, this is probably because of lax user or password administration or issues with input validation of data in web pages.
Indeed, the quoted stats. appear to show that the highest vector for attack is a file inclusion, with the second highest being an attack against the administrator like password stealing or sniffing.
So if web site owners tightened up their code and administration practices, even if the bug still existed, it would not be nearly as important.
Anyway, the public aspects of the Zone-H web site appear to show that it is not frequently maintained (only two news items in 2014), although there may be more information for logged-in users, so it's probably not that credible a source of information.
I think that you are deliberately blurring the distinction between an operating system and an application in a repository, particularly in the Open Source world.
Just because something appears in the repository for a particular OS does not mean that it forms part of the operating system! If it did, then you could imply, by applying reductio ad absurdum, that everything in the Apple App store or Google Play is part of iOS or Android respectively.
What Red Hat, Canonical, SuSE, Debian et al. do when creating a repository is take a package which has an open or permissive licence and compile it to run on their distribution. They take ownership of the port and packaging, but pass any security, functionality or performance problems upstream to the package owner. And in some parts of the repositories, there are community maintained packages where the distro maintainer does even less!
So in the case of Apache, problems that have nothing to do with the build process will be passed to the Apache Software Foundation, not owned by the Distro organisation.
You were correct in pointing out that my analogy with IIS was actually not a good one though, because with IIS, the owning organisation is the same as the owner of the OS.
I don't think that was your intention, however!
"Just look at website defacement stats" - this old chestnut again.
You're looking at the wrong thing. Websites may run on computers running Linux, but the code that delivers the web site is not Linux in the same way that IIS is not Windows, and a website defacement is not the same as an OS exploit. There may be some overlap, but it's very far from an exact match.
I thought we had educated all the AC trolls that cannot distinguish between the OS and the applications running on the OS.
None of the Sinclair machines were built as modular systems. In the case of the ZX80 and 81, the card edge expander was effectively just the naked CPU buses with one or two added lines, and the cases were not produced with an eye to adding additional equipment apart from the RAM pack. Even the Interface 1 for the Spectrum was only just fit for purpose.
What happened is that capable and inquiring people found ways of using this 'expansion' bus to do things that it was not intended for. Indeed, if you could see a ZX81 with the Quicksilver expansion board on it, you would marvel at the fact that it worked at all!
I spent time and money on my ZX81 mainly because I could (and I was waiting for my BBC micro to be delivered - about 6 months IIRC). Whilst it was fun, the benefit was very minor (the number of games that made use of the QS sound card was tiny) except for the satisfaction of doing it - exactly the definition of a hobby.
I used to attend local computer user group meetings, and I took it as a challenge to make my '81 appear to do as much as the more capable systems like Acorn Atoms. This was before the days of colour computers, when what you saw at these meetings was Commodore Pets, Apple ][s (normally with a black and white TV or monitor because the color (sic) system was not PAL) and Atoms, with the occasional UK101 or Nascom system.
At one such meeting, we had a demo of a prototype BBC micro (with a serial number below 10), and that sold me on spending the equivalent of a month's pay (I could afford this because I lived with my parents for a year after leaving University) to order one as soon as they opened the order line. I still have it, it's got an issue 3 board, and I think the serial number is somewhere around 7000.
I still have my '81 as well, but unlike the BEEB, it no longer works.
I think that Memotech were the first people to do a shaped RAM pack that conformed to the shape of the rear of the case. A nice piece of kit that was made with a metal case.
They also had a pass-through bus, so that you could plug things in behind the RAM pack. Eventually they produced bank-switched 32 and 64K memory packs, and other 'slices' that could be stacked one next to another for other things like high resolution graphics, and RS232 and Centronics printer ports.
By the time you had bought a number of these, you might as well have bought a more capable machine!
I added an external keyboard (adapted from a Tandy keyboard by repainting the conductive tracks on the flexible membrane and keyboard legends cut out from a magazine picture stuck on to the top of the keys with clear tape), together with a power switch on the keyboard. The ZX81 then sat untouched on a shelf, well away from poking fingers.
Once there and safe from unwanted movement, I added a Quicksilver expansion board and sound card, together with an additional modulator to add the sound to the TV signal. I also hacked around with the internal 1K of memory, mapping it into a different address in the memory when the RAM pack was installed so it could be used, and also added a second 1K of static memory on the ULA/ROM side of the bus isolation resistors which allowed me to use it as a programmable character map by manipulating the I register that was used to hold the page address of the base of the character table.
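As a sketch of why pointing the I register at RAM gives a programmable font (the addresses here are from memory, so treat the exact layout as illustrative rather than definitive):

```python
# Toy model of how the ZX81 video logic locates glyph data.  The I
# register supplies the high byte (page) of the character table's base
# address; each 6-bit character code selects an 8-byte glyph, and the
# line counter selects one row of it.  Details recalled from memory --
# treat the exact bit layout as illustrative.

def glyph_byte_address(i_register: int, char_code: int, row: int) -> int:
    """Address of one row of a character's 8x1 bitmap slice."""
    base = i_register << 8                      # I register = page
    return base + ((char_code & 0x3F) << 3) + (row & 0x07)

# With the stock ROM, I = 0x1E, so the table sits at 0x1E00 in ROM.
# Re-pointing I at a page of static RAM makes the font redefinable.
print(hex(glyph_byte_address(0x1E, 0x01, 0)))   # prints 0x1e08
```

The second 1K of static RAM on the ULA/ROM side of the bus isolation resistors is what makes this usable, because the video circuitry has to be able to fetch from whatever page I points at.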
I never had any problems with it until my (homebrew) power supply popped its bridge rectifier and fried the RAM pack!
“History is written by the victors."
Oft used quote, possibly Winston S Churchill, and maybe similar sentiments by others.
Well, I've got two optical drives and a laser pointer and a mouse.
If I count the supercomputers I look after two floors down, there's about 88,000 (fag packet calculation) individual lasers driving the optical interconnect!
Has anybody heard anything about proposed mission dates for Lohan yet? I'm half expecting another kickstarter request to top up the funds because of the time it is taking.
Silverlight was always intended to be an infrastructure lock-in by Microsoft, designed to lever more OS sales and damage the viability of other operating systems/platforms.
Microsoft's collaboration with the Mono team on Moonlight was just lip service. Moonlight was always going to be sufficiently far behind Silverlight to prevent it being a realistic proposition.
I stand by every word I said. I do not think that your post is as clear as you think it is.
You cannot protect from stupidity, and setting world write on both the files and the directories (necessary to delete a file) is something that you only do if you can accept the scenario you outlined. Just because you have "experienced" developers does not mean that they don't follow bad practice ("developers" often play fast and loose with both good practice and security, claiming that both "get in the way" of being productive). And giving world write permissions to files and directories is in almost all cases overkill. Restrict the access by group if you want to share files, and give all the users appropriate group membership. It's been good practice for decades.
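The group-based approach can be sketched in a few lines (paths here are throwaway temporaries; a real setup would also put the collaborating users in a common group):

```python
# Sketch: share files via group permissions instead of world-write.
import os
import stat
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "shared.txt")
with open(path, "w") as f:
    f.write("shared data\n")

# rw for owner and group, read-only for everyone else (0o664),
# rather than the world-writable 0o666 that invites accidents.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR |
               stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                    # prints 0o664
assert not (mode & stat.S_IWOTH)    # no world write bit set
```

The same idea applies to the directories: group write lets collaborators create and delete files, without handing that ability to every user on the system.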
You did say "Frankly, if it had been running as root it would probably have trashed (and crashed) the test system before too much external harm was done", but this is probably not true. You did not actually point out that root would not traverse the mount point of the NFS mounted files, but you did say "starting at a root that encompassed the whole NFS-automounted user home directory", implying that it was not the root directory of the system that was being deleted, but just the NFS mounted filesystems.
From personal experience, I have actually seen UNIX systems continue to run damaging processes even after significant parts of their filesystems have been deleted. This is especially true if the command that is doing the damage is running as a monolithic process (like being written in a compiled language or an inclusive interpreted one like Perl, Python or many others) and using direct calls to the OS rather than calling the external utilities with "system".
Many sites have home directories mounted somewhere under /home, so if it were doing a ftw in collating sequence order from the system root, it would come across and traverse /home before it would /usr (the most likely place for missing files to affect a system), so even if it did run from the system root, enough of the system would continue to run whilst /home was traversed. Not so safe.
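The collating-sequence point can be checked directly: a tree walk that visits top-level directories in sorted order reaches /home well before /usr, so the home directories go while enough of the system is still intact for the process to keep running.

```python
# A walk in collating-sequence order visits /home before /usr, so
# user data is destroyed while the binaries and libraries under /usr
# that keep the process alive are still untouched.
top_level = ["usr", "var", "home", "etc", "tmp"]
visit_order = sorted(top_level)
print(visit_order)   # prints ['etc', 'home', 'tmp', 'usr', 'var']
assert visit_order.index("home") < visit_order.index("usr")
```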
And the problem here is typified by your statement 'could only delete the files that had suitable "other" permissions'.
Teach your users to set reasonable permissions on files! It goes back to my statement "too many people do not understand the inherent multi-user nature of UNIX-like operating systems".
With regard to running the script as root. You're not that familiar with NFS are you?
If you are using it properly, you will have the NFS export options set to prevent root access as root (it should be the default that you have to override), which is there to prevent exactly this sort of problem. This maps any attempt to use root on the test system into the 'nobody' user on the server, not root. Anybody who sets up a test server to have root permissions over any mounted production filesystem deserves every problem that they get!
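A minimal sketch of what that looks like on the server (the export path and client name are made-up examples; `root_squash` is the default on Linux NFS servers):

```
# /etc/exports on the NFS server -- path and client name are examples.
# root_squash (the default) maps uid 0 arriving from the client to the
# anonymous user, so root on a test box cannot act as root on the export.
/export/home  testbox(rw,root_squash)

# The dangerous override that hands the client root access -- exactly
# what you should NOT do for a production filesystem:
# /export/home  trustedbox(rw,no_root_squash)
```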
There are people who have been using NFS in enterprise environments for in excess of a quarter of a century. Do you not think that these problems have been addressed before now?
Traditionally, in the UNIX world where you normally have more than one user on the system, you backup the system as root. Tools like tar, cpio and pax then record the ownership and permissions as they create the backup, and put them back when restoring files as root. This also allowed filesystems to be mounted and unmounted in the days before mechanisms to allow user-mounts were created.
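The mechanism is visible in the archive formats themselves: each member carries its ownership and mode in the header, which is why a restore run as root can put everything back exactly. A sketch using Python's tarfile module (the file name and ids are illustrative):

```python
# Sketch: tar records ownership and permissions per member, so a
# backup taken and restored as root preserves them exactly.  The
# member name, uid/gid and mode here are illustrative.
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"payroll records\n"
    info = tarfile.TarInfo(name="finance/payroll.dat")
    info.size = len(data)
    info.uid, info.gid = 1042, 100   # recorded in the archive header
    info.mode = 0o640                # rw owner, r group, nothing else
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmember("finance/payroll.dat")
    print(member.uid, oct(member.mode))   # prints: 1042 0o640
```

A non-root restore can read the data back but cannot chown the files to their original owners, which is exactly why multi-user backups were traditionally a root job.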
The problem is that too many people do not understand the inherent multi-user nature of UNIX-like operating systems, and use them like PCs (as in single-user personal computers). To my horror, this includes many of the people developing applications and even distro maintainers!
There is nothing in UNIX or Linux that will prevent a process from damaging files owned by the user executing the process. But that is not too different from any common OS unless you take extraordinary measures (like carefully crafted ACLs). But at least running as a non-root user will prevent bad code like this from damaging the system as a whole.
GiTS is all about the balance between artificial and natural consciousness. It's the main theme of both the film and the TV series, although it's more difficult to see in the original manga.
There are AIs that aspire to be 'human' with the Tachikomas and Project 2501, and AIs masquerading as humans as in Proto. And then you've got cyborgs who wonder whether they still count as human, Motoko and Batou, with side stories of clones, ghost dubbing onto both clones and artificial bodies, and what being human actually means.
I've not seen this yet, but I seriously doubt that it really brings much more to the subject than what's in fiction already. It will likely be an aspirational story about wanting to be human and the trials it involves like Blade Runner, The Bicentennial Man, Demonseed or even in some respects, Disney's Little Mermaid. But I will look forward to seeing it when it hits Sky or the like.
It strikes me that it is not feasible to do anything reasonable in real-time.
Chances are the amount of processing to identify an instruction from this information would require a processor much faster than the one being analysed. And even if you know the instruction, you don't know the data that it is operating on.
I suppose that if you could know the sequence of instructions used to encrypt the data, you may, in time and given enough examples of the calculation being performed, be able to reverse engineer it, but as most cryptography algorithms are available, the only thing I think you could work out is which method is being used.
So you can hack the region coding of a DVD or Bluray player like this, but this is nothing like being able to see everything that a computer is doing by its emissions.
No, that was probably to comply with the FCC emissions regulations for consumer devices in the US, which were a real problem to the early home computer manufacturers.
Different manufacturers came up with different solutions. Some made their computer's case out of metal. Some put full metal enclosures around the electronics inside a plastic case, and others used conductive paint sprayed onto or metal foil bonded to the inside of the plastic case.
I believe this is the main reason why many UK manufacturers had difficulty selling their systems in the US, because our emission regulations were much less strict.
Locked UEFI bootloader maybe?
OK. You're right.
But eventually things have to change. Providing a way to keep things similar enough to satisfy people like you (and me - I do echo your statement about just using it, which is why I use Gnome Flashback), whilst allowing adventurous souls to move forward, allows a stepping-stone migration of the sort that Windows 8 did not allow.
This was what I meant by choice.
But I wanted to point out that although Unity on Ubuntu looks like they were following the same approach as Microsoft (take the new interface or don't use Ubuntu), sanity prevailed, and a user can still choose something a little more familiar.
I have two family members for whom a new and different UI is completely inappropriate, but who have to stick with Windows because of software issues. One is my 85 year old father, who is comfortable with the WinXP/7 UI, and would find it too onerous to change (he would probably just stop using the computer), and the other is my wife, and I don't do anything to rock the boat there, for fear of the repercussions!
It's possible to get something akin to the traditional Gnome 2 interface with the gnome flashback (previously called fallback) UI that is in the main repositories. It's not quite the old interface (it's actually a Gnome 2 UI built in Gnome 3).
And Cinnamon is in the Ubuntu repositories now.
And it is also perfectly possible to use Xubuntu (community Ubuntu distro) or Lubuntu mainstream release if you don't even want Unity installed.
This is what people wanted all the time. Choice. If Microsoft had provided the ability to select a 'traditional' desktop, maybe they would not have had too many people choosing it initially, but there would have been a slow conversion, and they would not have alienated their customer base.
Make the mouse a Microwriter or CyKey as well, and you could cover all the bases.
A full computer with keyboard equivalent and mouse that you could hold and use in one hand. Need to work out some display that could be used while mobile. Maybe Glass or another HUD system.
It's not 'passive', it just doesn't have a battery.
There is an inductive loop in the pen which picks up power from the tablet. I disassembled my daughter's Graphire4 pen (the nib pressure sensor tends to stick if you leave it pressing on a surface for an extended time), and there's a significant board with chips on inside.
You missed out a piece of technology. Google "Wacom", and particularly their product "Cintiq".
Graphic tablets and evolutions of them have never gone away. They've just been targeted at the people who really appreciate them, people like graphic designers and illustrators.
Whilst true, that comparison is not fair to Google.
As pointed out in the article (and in several other places), Google do not have control over the deployment of new fixes to the core OS in the Android space. They may well publish a fix, and also issue new releases of Android that would technically work on a multitude of devices, but the devices are tweaked and locked by the manufacturer of the device, and sometimes by the service provider who ships the device, neither of whom has any interest in allowing users to extend their use by installing patches.
I would love to see some legislation which mandated manufacturers and particularly service providers to free up boot loaders and other locking mechanisms a fixed time after their final update/patch is made available so they could take a generic release of Android. Maybe something from the reduction of waste legislation.
But it is difficult. Unfortunately, many Android devices have binary blobs, which are pieces of closed source code included in their Android release to handle communication, multimedia or other components in the device. There is nothing Google can do about this, short of changing the licensing model of Android. So even if the devices could take later versions of Android, unless the regression tested versions of the blobs are released (or open-sourced!), some devices will not take new releases of generic Android and remain fully functional.
Pretty much all racks have castors now, including supercomputers and mainframes.
I can check, but I think that all of the IBM P7 775 and z196 and the Cray XC40 frames that I can see in the machine room here have castors.
They also have wind-down feet and load-spreader bars when they are in their final position, so that they don't move.
A similar lift story from after I left IBM, but not as interesting.
We had a Power 4 system delivered in a T42 rack to a site I was working at in Poole, and to keep it under the weight limit for the lift and to get it through the doors (it was too high for the lift doors) we stripped the drawers out of the frame in the loading bay, tipped the frame on its side, and then re-installed the drawers in the frame once it was on the machine room floor. All without telling the IBM hardware engineers!
The only problem we had was that the SPCN (Sequenced Power Control Network) cables were put back in the wrong locations, which gave us problems with the I/O drawer identification for the remaining life of the systems, even after they were connected correctly.
You got me thinking back more than 25 years to my training on Amdahl's Multiple Domain Facility (MDF) that I talked about in my last post, and I realised that back then, hypervisors did not really virtualise I/O.
What early hypervisors would do was to segregate memory and access to I/O channels (literally in the IBM mainframe world, but I suppose analogous to a set of disks or other devices hung off of a single adapter in more modern thinking), and provide a time-slice scheduler between partitions for the CPU.
All handling of I/O was performed natively by the hosted OS, including boot block requests, and it was only in very rare situations (such as extended I/O interrupts) that the hosted OS even knew it was running in a virtualised environment.
What this meant was that a hosted OS had to have complete and exclusive access to a string of disks, or indeed any other device, and all the hypervisor had to do was check that a hosted system did not try to access disks or other devices that were not presented to it.
The most difficult part of slicing a machine up like this was making sure that device interrupts were handled by the correct hosted OS, the one that had initiated the I/O operation.
There was virtualised addressing for each LPAR, so each hosted OS ran as if it had its own contiguous address space starting at 0 and running up to the memory address configured. Additional protection was provided by memory having access keys attached to each page; a hosted OS had to have the correct key to access a page, and each LPAR was only given its own memory key. I think this memory keying was a hang-over from the early versions of IBM VM, which did not have a fully virtualised addressing scheme.
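The key mechanism can be illustrated with a toy model (this is a sketch of the idea only, not of how PR/SM or MDF actually implement storage keys):

```python
# Toy model of mainframe-style storage protection keys: each page of
# real memory carries a key, and a partition can only touch pages
# whose key matches its own.  A sketch of the concept, not of the
# real PR/SM or MDF implementation.
class ProtectedMemory:
    def __init__(self, pages: int):
        self.page_key = [0] * pages      # key 0 = hypervisor-owned

    def assign(self, page: int, key: int) -> None:
        self.page_key[page] = key        # done by the hypervisor

    def access(self, page: int, caller_key: int) -> None:
        if self.page_key[page] != caller_key:
            raise PermissionError(
                f"key {caller_key} cannot access page {page}")

mem = ProtectedMemory(pages=8)
mem.assign(3, key=2)            # page 3 belongs to the LPAR with key 2
mem.access(3, caller_key=2)     # matching key: allowed
try:
    mem.access(3, caller_key=5) # another LPAR's key: blocked
except PermissionError as e:
    print("blocked:", e)
```

The point is that the check is per page and per key, so it works even without fully virtualised addressing: a partition with the wrong key simply cannot touch the page.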
It's only since you have shared virtualised I/O to the hosted OSs that hypervisors have become particularly sophisticated.
Yes, that's quite true, but if you look at PR/SM, the IBM Power Hypervisor, or Amdahl's MDF (the bare-metal hypervisors I've had experience with), they are deliberately very limited in function. The name Hypervisor (derived from an old alternative name for an operating system, the Executive Supervisor) was coined to indicate that it was a supervising program that was not an operating system. It was very deliberate to not call the hosting environment an Operating System.
It's only relatively recently that you've had Type 2 or 'hosted' hypervisors that sit on top of what one would describe as a normal operating system like Linux or Windows. Examples include the original incarnation of VMware, Xen, KVM and Parallels. I understand that HP's Integrity VM sits on top of HP-UX, although I have no experience of it.
And then you have things like VMware ESXi, which is classed as a type 1 bare metal hypervisor, but is really a canned Linux stripped of all functions that are not required to host other systems. Mind you, you could probably say the same about IBM's Power Hypervisor, but that is so deeply embedded in the firmware of Power systems that it's relatively difficult to see that it is Linux at heart.
Complicating it still further are Oracle/Sun's containers and IBM WPARs, which are not true VMs but still allow you many of the advantages of partitioning.
It's all getting complicated.
Fortunately, the incidence of significant earthquakes in Basingstoke is very low.
I know the innards are probably different, but the 9125-F2C, which from the picture looks like it uses the same frame, weighs 3.5 tonnes per frame when full.
The z13 won't weigh quite so much, but the racks themselves are pretty substantial.
It should be noted that IBM pretty much invented virtualization with the 370 mainframe systems in the early 1970s. About the same time, Intel were making 4-bit microprocessors and TTL chips.
The virtualization will be performed either by the PR/SM type 1 (hardware) hypervisor or z/VM.
Read up on Type 1 hypervisors. There does not have to be a host OS, at least not as I think you understand them.
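As an aside: on Linux x86 guests you can often tell that you're under *some* hypervisor (Type 1 or Type 2) because the CPUID 'hypervisor present' bit shows up as a flag in /proc/cpuinfo. A minimal sketch, assuming a Linux guest; the parsing helper is my own, not from any particular tool:

```python
def under_hypervisor(cpuinfo_text):
    """Return True if any 'flags' line advertises the CPUID hypervisor bit."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if "hypervisor" in flags.split():
                return True
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("hypervisor detected:", under_hypervisor(f.read()))
    except FileNotFoundError:
        print("no /proc/cpuinfo (not Linux)")
```

Note this only tells you a hypervisor is present, not which kind; on a true bare-metal partition (PR/SM LPAR, say) you'd have to ask the platform firmware instead.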
... I planned the installation of a full height 9076 SP/2 into a normal office space in an IBM building in Basingstoke.
When installed, it was about half full, and did not quite exceed the floor loading limit.
After I left, I heard it had been filled. I had visions of it descending through the 11th floor, then the 10th, the 9th and on to the ground!
Probably, if you think you have the space.
These look like the same racks that the 9125-F2C P7 775 system is packaged in (they're both products from IBM Poughkeepsie, NY), and if so, this is two racks side-by-side, with each rack over 2.10 metres tall and 1.8 metres deep. Both racks together would be around 2 metres wide.
In addition, they will not take standard 19" wide rackmount devices without some additional mounting hardware as the 'gap' is 26" IIRC (sorry, I realise I've mixed measurement units).
IBM actually have some quite fancy doors available for their standard T-series racks, if you want to pay for them!
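To put some rough numbers on the floor-loading worry: taking about 3.5 tonnes per full frame and a footprint of very roughly 1.0 m x 1.8 m per rack (my assumed figures from the dimensions above, not IBM's spec sheet), a back-of-envelope sketch:

```python
# Back-of-envelope: does a full frame exceed a typical office floor rating?
# Assumed figures: ~3.5 t per full frame, footprint ~1.0 m x 1.8 m per rack.
mass_kg = 3500.0
footprint_m2 = 1.0 * 1.8
g = 9.81  # m/s^2

load_kpa = mass_kg * g / footprint_m2 / 1000.0  # kN/m^2
print(f"load under the frame: {load_kpa:.1f} kN/m^2")

# Office floors are commonly designed for roughly 2.5-5 kN/m^2 of
# distributed imposed load, so a frame like this wants a plant-room
# floor or load-spreading plates, not an ordinary office slab.
office_rating_kpa = 5.0
print("exceeds typical office rating:", load_kpa > office_rating_kpa)
```

Which comes out around 19 kN/m^2 directly under the frame, several times a typical office rating, so the vision of it descending floor by floor isn't entirely fanciful.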
What has always worried me is: if you have a legitimate set of data in a form that is not recognised by the security services, what's to stop them assuming (wrongly) that it is encrypted, and demanding the non-existent key?
There is no key, so it can't be provided, and in the UK that alone is enough for someone to be detained.
I was thinking more along the lines of buying the rights and open-sourcing it. Probably can't happen as SCOG never controlled the ownership of the rights, and I guess that Attachmate will allocate some value to them.
As I cut my teeth on Bell Labs Version 6 and 7, BSD was never 'true' UNIX to me. And although it ultimately failed, the AT&T lawsuit against the Regents of the University of California over proprietary code meant that the current BSD releases are only really related to 'true' UNIX (what I tend to call genetic UNIX) by old code (V7 and before) and some APIs.
The current BSDs cannot even use the term UNIX because (rightly or wrongly) that trademark has to be licensed, and any OS wanting to use the term must be certified by the Open Group against a verification suite, one which the *BSDs would probably fail.
In some senses, SunOS came back into the fold with SunOS 5.0 (Solaris 2.0), which rebased its code on SVR4.
You and tens or hundreds of thousands of other people.
In the event of a breakdown in cities, you need to get out fast, and with as many guns as possible to deprive the people already in the country, and to stop others following and taking what you took first!
There is a semi-rational reason why US isolationists build defendable enclosures.
Just converted my youngest son to the merits of IBM buckling-spring technology (I graciously allowed him to use my 1990-built Model M for a few weeks).
Bought him a Unicomp USB 'IBM Classic' keyboard for Christmas.
They're still about as heavy as the originals.
I obviously look at this differently from many of the people here, although I don't believe that the OP was stating anything about why he purchased the Sony or the size of the TV he purchased. I presumed he did it because of the failure of his previous one, bearing in mind he did not want any of the shiny features.
I buy primarily on price. At the moment, if I were in the market for one, I would prefer to buy a £250 1080p TV now, and another one (probably better) in a couple of years should I feel the need (read on), rather than a £500+ one with a five year guarantee now.
I can see that there is a difference in quality, but not one that I feel is worth the extra money. And quite often, the cheaper ones can be 'life-extended' by capacitor replacement or board-swap maintenance. My current TV (a Digihome bought from Tesco, in case you wanted to know) is seven years old, and has had a power supply capacitor replacement and a T-con board at a total additional cost of about £20 plus a little of my time. It lasted the best part of four years without any work, and I can see it lasting another couple of years, although it may be relegated to another room at some point.
Maybe I've been lucky. I feel the picture on this one is good enough, although the blacks could be blacker. The upscaler on non-HD content is good, and I do not suffer from block-decoding artefacts or noticeable high-speed smear, although I will admit that the best-quality signal it gets fed is from a 3rd-generation Sky HD box. I've certainly seen some stinkers (I also have a similarly aged 32" Sanyo TV, bought second-hand for the kids' game consoles, which is pretty bad).
But what I have is 'good enough'. I've seen many big-name TVs (but not necessarily from THE big names in TVs) that are no better, and were much more expensive than my current one. I'll certainly not be buying a TV for more than about £350 absolute maximum any time soon, and I suspect that there is a huge segment of the buying public who will think the same.
Maybe not. Maybe the author has not realised that One Dimension Direction (I'm sorry, that's how I read 1D, which they use as an abbreviation) are all now over 20, and have pretty much shed their 'boy band' image (they're often seen sporting stubble and other 'adult' stylings).
Mind you, Take That are still occasionally called a 'boy band' even though they are all over 40.