1700 posts • joined 15 Jun 2007
Re: @AC 14:58
I'm not doing the GNU/Linux 'drivel' as you call it. I'm just pointing out that SSH is as much a part of Linux as Audacity or LibreOffice, or a host of other Open Source projects. They're part of most distros, sure, but not a part of Linux itself. I suggest that you just don't understand what a Linux distribution is.
As an analogy, would you claim that Apache or VMWare player or even Skype is part of Windows if a particular machine vendor chooses to pre-install it on the systems that they sell?
It's not even the case that OpenSSH is the only SSH implementation out there. F-Secure have their own completely separate SSH implementation, as have SSH Communications Security, and there are also other free SSH implementations like LSH and PuTTY (client).
@AC re slipstream SSH datastream
Yes it would be, especially if it could be done from outside the SSH client/server communication stream. But this does not appear to be what has happened. This is hijacking one end or the other, and intercepting/injecting the data at one end of the secure pipe as it were.
Just to point out that SSH is *NOT* part of Linux. It's not in the kernel, nor part of the GNU toolchain, and although it is in the repositories of most distributions, it's also available for most UNIX systems, and also for Windows and probably any other network enabled operating system as well. It's a cross-platform tool. What is important is how and by what vector it was compromised.
So there is a vector (possibly OS specific) that was used to break into SSH, and SSH itself is a vector to compromise whatever OS is being used. Which may be Linux.
Symantec writeup very poor
I know it's difficult to publish information about a vulnerability without providing a means of using it, but the Symantec write-up is pants! I mean, what does "Rather, the backdoor code was injected into the SSH process" actually mean?
Was it added to the binary before it was run, was it added to one of the run-time libraries, was one of the in-core runtime libraries hacked, or was the running instance of the process altered?
It also does not state whether this is a ssh server attack or an attack via the ssh client.
I can think of several ways of compromising the client side of things (each ssh session has its own instance of the ssh client process), and these can be attacked using well-known PATH and LD_LIBRARY_PATH attacks without needing privileged access to the client system, or the on-disk binary or the libraries can be attacked and altered if you have access to a privileged account.
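The PATH half of that attack fits in a few lines. A minimal sketch (the trojan here is a harmless stub, and the directory names are made up):

```python
import os
import shutil
import tempfile

# Sketch of a PATH attack: an attacker-writable directory placed ahead
# of /usr/bin makes the shell's lookup resolve 'ssh' to a trojan.
with tempfile.TemporaryDirectory() as evil_dir:
    trojan = os.path.join(evil_dir, "ssh")
    with open(trojan, "w") as f:
        # A real trojan would log keystrokes, then exec the real client.
        f.write('#!/bin/sh\nexec /usr/bin/ssh "$@"\n')
    os.chmod(trojan, 0o755)

    # Prepend the writable directory to the search path...
    search_path = evil_dir + os.pathsep + "/usr/bin"
    # ...and command lookup now finds the trojan first, no root needed.
    print(shutil.which("ssh", path=search_path) == trojan)  # True
```

LD_LIBRARY_PATH works the same way one level down: a writable directory searched before the system ones lets a doctored shared library shadow the real one for any dynamically linked binary the user runs.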
Once into the client process, you will have access to all of the private key information on the current system (although you may already have access to that anyway), but I can see how you could catch and re-use key and password information as it passes through the compromised client process. You would also be able to subvert any and all stream traffic, including fixed passwords, SSH passphrases, sudo passwords etc. for any session that is run through the SSH client (using the client as a keylogger). About the only thing that you would not be able to do would be to compromise one-shot authentication devices.
Injecting arbitrary commands would be a minor trick, although hiding them is more difficult.
And if the SSH key management is lax (same key used for multiple servers and user identities, especially if some of them are privileged), then you have a recipe for system compromise on a massive scale.
But don't blame this on the Linux security model. Any system with some form of trusted remote execution could be compromised in a similar way.
Re: Shills @Bill
You need not be a MS shill, just part of a system where one supplier can control a market, compelling ordinary people like yourself to defend the indefensible. Microsoft want you to not have an alternative.
There is no reason why Linux cannot become as good a gaming platform as Windows, or better. It's only market penetration that makes gaming companies develop on Windows. It's possible that the Steam effort or Crossover may just change things.
Re: Shills @Ken
But that's the point. Nobody can become famous posting as an AC. They just merge into the crowd.
I'm not saying that the Reg should remove the ability to post AC, hell, I use it myself when commenting on something that may upset someone in my acquaintance. It's just that I'm so pissed off trying to work out who is who when they are making such cowardly accusations.
Why is so much of the muck-slinging, accusing everybody of being shills, being done by ACs?
Really, folks. If you want your comments to be taken seriously, at least post them with an identifiable handle, even if it is not your real name!
In case you forget, it's not possible to differentiate one AC from another except by content.
In its true meaning...
...I'm sure that if you really do, you don't need a tool to grok the information!
Never impressed by the 380Z
I always regarded the 380Z as a bit of a lemon, mainly because I did not see one until 1982, after I had my own BBC Model B. I guess that if I had seen it earlier, I might have had a different opinion, although I'm not sure, having first used UNIX in 1978.
It always struck me as slow (especially with the high resolution graphics board), but I did appreciate that it ran CP/M, and thus had a large library of software, provided that you could get it on the slightly unusual disk format (not that there was a standard disk format at the time).
The one I had control of used to be used mainly by one member of staff who wanted to use Wordstar and the QUME Sprint 5 daisy-wheel printer. There was one postgrad who had a strange project to try to connect it up to the Newcastle Connection (aka UNIX United!) as a client machine over RS232 - there being no Cambridge Ring hardware for the 380Z (daft really, as the filesystem API was too different between CP/M and UNIX). He never completed the project, because it turned out that he was a draft dodger from his home in Greece, and he went home to see his family, and was promptly arrested as he stepped off the plane! It did mean that I got to see the UNIX United! source code, as I had to add it to 'my' V7 UNIX PDP11.
I used to be all for DAB when it really was new. Over the years, I've bought two mains powered DAB radios, a car DAB radio which re-transmitted on FM, now no longer made, a pocket DAB radio, and an add-on for an iPod.
Slowly, all of the interesting stations I used to listen to have dropped off DAB, or gone low bit-rate/mono (really - Planet Rock in MONO!).
And to cap it all, there are vast swaths of no DAB reception where I live.
I still keep the DAB radio in the car, but only for Radio 4 Extra. None of the others even get turned on any more. Instead, I normally listen to Radio 4 on FM or occasionally Radio 2 for some of the ex-Radio 1 DJs, and sometimes ClassicFM or Radio 3 when I'm in a classical mood. Other than that, it's music and podcasts stored on my phone.
It's a technology that has failed, and should either be turned off or re-launched in a form common with the rest of the world.
Re: DAB is pointless @Ben
A two word retort to your Internet access in cars comment - usage caps.
Re: 6502/6809's rool btw... @ Jamie
The problem with many of the complex instructions on the Z80 was that they took so many T-states to execute. This meant that on paper, a 4MHz Z80 looked like it should outperform a 2MHz 6502, but as the average Z80 instruction took 3.5 T-states, a 6502 clocked at half the speed, with an average of 1.5 T-states per instruction, could run more instructions in the same time.
This meant that with careful programming, it was often possible to get functionally identical code running faster on the 6502 than on a Z80. It was horses for courses, of course, but many of the sorts of things that these processors would be running would be integer, simple data handling or block memory problems that did not need the more powerful instruction set of the Z80 anyway. I've commented on this with a worked example before here
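The arithmetic behind that claim, using the rough averages quoted above:

```python
# Instructions per second = clock speed / average cycles per instruction,
# using the averaged figures quoted above (not exact per-instruction timings).
z80_ips = 4_000_000 / 3.5    # 4MHz Z80, ~3.5 T-states per instruction
m6502_ips = 2_000_000 / 1.5  # 2MHz 6502, ~1.5 cycles per instruction

print(f"Z80:  {z80_ips / 1e6:.2f} million instructions/s")   # ~1.14
print(f"6502: {m6502_ips / 1e6:.2f} million instructions/s") # ~1.33
```

So the 6502, despite running at half the clock rate, gets through around 17% more instructions in the same time, which is why hand-tuned 6502 code could beat a nominally faster Z80.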
But this comes back to the crux of the article. In order to get the best out of the machines back then, it was necessary to know the instruction set very well. And this is what is missing in today's programmers.
Re: 6502/6809's rool btw... @Steve
What made Page 0 really special on the 6502 was the ability to treat any pair of bytes as a vector, and jump through it, via a short indirect jump, to anywhere in the system's address space very quickly. Because this was used extensively in the BBC Micro OS for almost all OS calls (see the Advanced BBC Micro User Guide), it meant that you could intercept the OS call and do something else instead (it was called re-vectoring).
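A toy model of the re-vectoring idea (WRCHV is a real BBC OS vector name; the Python structure is purely illustrative of jumping through a replaceable pointer):

```python
# OS calls jump through a table of vectors rather than to fixed ROM
# addresses, so replacing a table entry intercepts every call.
vectors = {}

def default_wrch(ch):
    return f"screen:{ch}"        # the ROM routine: write char to screen

vectors["WRCHV"] = default_wrch

def oswrch(ch):                  # the OS entry point: indirect via the vector
    return vectors["WRCHV"](ch)

# Re-vectoring: save the old vector, then point it at our own code.
old_wrch = vectors["WRCHV"]
def spooling_wrch(ch):
    return "printer+" + old_wrch(ch)   # do something extra, then chain on
vectors["WRCHV"] = spooling_wrch

print(oswrch("A"))  # printer+screen:A
```

The key property is that well-behaved intercepts save the old vector and chain on to it, so several pieces of software can stack on the same OS call.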
I used this many times. For example, in Econet 1.2, all file I/O (but not loading programs) across the network was done a byte at a time (very slow, and crippled the network, which only ran at around 200Kb/s anyway). I wrote a piece of intercept code which would re-vector OSREAD and OSWRITE so that they would buffer the file a page (256 bytes) at a time (IIRC I hijacked the cassette file system and serial buffers to hold the code and the buffered page), which sped things up hugely. Could only do one file at a time, but would handle random access files correctly.
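The buffering trick can be sketched like this (hypothetical names and API, not Acorn's actual code; the real version re-vectored the OS byte-level file calls):

```python
# Serve single-byte reads from a 256-byte page cache so only one
# network transfer is needed per page instead of one per byte.
PAGE = 256

class BufferedFile:
    def __init__(self, data):
        self.data = data           # stands in for the remote Econet file
        self.pos = 0
        self.page_start = None
        self.page = b""
        self.fetches = 0           # count of network round-trips

    def _fetch_page(self, start):
        self.fetches += 1          # one Econet transfer per 256-byte page
        self.page_start = start
        self.page = self.data[start:start + PAGE]

    def read_byte(self):           # byte-level call, like the OS's own
        start = (self.pos // PAGE) * PAGE
        if start != self.page_start:
            self._fetch_page(start)
        b = self.page[self.pos - start]
        self.pos += 1
        return b

f = BufferedFile(bytes(1000))
while f.pos < 1000:
    f.read_byte()
print(f.fetches)  # 4 page fetches instead of 1000 byte transfers
```

Random access still works because each read recomputes which page it needs from the current position, fetching a new page only when it crosses a page boundary.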
When used with the Acorn ISO Pascal ROMs, it sped up compiling a program from disk from a couple of minutes to seconds, and meant that it was possible for a whole class to be working in our 16 seat BBC Micro lab at the same time.
Talking about ISO Pascal, which came on 2 ROMs, I also re-vectored the Switch ROM vector (can't remember its name) so that I could load the editor and runtime ROM into sideways RAM, edit the Pascal program, issue a compile command (which would switch to the compiler ROM), have it overwrite the editor/runtime ROM image with the compiler ROM image, compile the code, and then switch back after the compile was finished. Great fun! Infringing copyright, of course, but it meant that I could work in Pascal on my BEEB that did not have the ROMs installed!
Re: peek? @Simon
OK, I accept that the Atom (and probably System 1 and System 2) had them first.
BTW, my BBC Micro was mine. I paid for it, not my parents. I ordered it on the day that they opened the ordering process to the public; it's got an issue 3 (an early) board, has a serial number in the 7000s, came with OS 0.9 in EPROM, and the last time I powered it on, 18 months ago, it still worked.
I had an advantage that I knew C, PL/1 and APL before I got my BEEB.
Re: I'm kinda conflicted...
The 6502 was an elegant and orthogonal machine code, spoiled by the gaps in the instruction set for instructions that didn't work in the original MOS Technology silicon.
By the time the 6510 came along (as well as some of the later 6502B and C chips) many of these missing instructions would work, but nobody used them because of backward compatibility.
6809 was probably a more capable and complete machine code and architecture (it benefited from being a later chip), but I still have a fondness for 6502 (and PDP11).
Oh no. Peek and Poke.
I prefer ? and !
<smug>Guess what machine I had</smug>
Re: Really? @dogged
If the screen is damaged, then there is a good chance that the front glass/touchscreen will also be damaged anyway.
And the rest is really just a plastic moulding, so won't add significantly to the cost.
When I've replaced the screen on a couple of phones, I've always decided to replace the glass as a matter of course. If you're going to take the effort to dismantle a phone, replacing the glass seems like a minor extra expense.
Re: VMware Snapshots? For real?
I do not know VMware Snapshots, but I'm assuming that they work like other snapshot systems.
Blockwise filesystem snapshots can have a place in regular backups, but only really if the furthest back you ever need to recover to is covered by the number of snapshots you keep. And that is determined by the amount of change in your systems and the amount of storage (usually disk) that you are prepared to set aside for the snapshots. In addition, they are probably useless for disaster recovery, unless you are maintaining cross-site snapshots (I don't actually know if you can do this, but I would guess that if you had cross-site mirroring, it would also be possible to keep snapshots on your other site).
If your backup requirements are longer term, or require recovery of individual files, then an agent based backup scheme is about the only way you can satisfy the requirements, IMO. This is especially true if you have a heterogeneous environment.
Of course if you are backing up the C: drive of all of your identical virtualised Windows boxes, then there are probably huge benefits in just backing up one copy of a de-duplicated, shared image at the de-dup'd level, rather than agent based backups of each system. But that is a particular system deployment method that does not match all requirements.
It might be to weaken the case that Google is selling in the UK. The 'sales' staff roll up in a barge moored in the Thames, host all their junkets, sell all their advertising, and then sail away.
HMRC and the Parliamentary Select Committee will not be able to express incredulity at Google reporting so little UK business.
Re: Cloud analogy @ribosome
That's as much a pun as an analogy!
Re: "calling into question how effectively Redmond has partitioned its service"
I'll upvote you for once. It's stupid that by default Linux distros only create a single filesystem. But you do get asked whether you want to create other partitions during a normal install (and in a more guided way than Windows 7 does), and most experienced Linux admins do it as a matter of course (me - I come from a UNIX background and expect to have at least /, /usr, /var, /tmp, and /home as separate filesystems, with other filesystems set up according to the use of the system).
The problem here is that the MSDOS partition table format, which was the default up until Windows XP (SP1?), only allows 4 primary partitions, plus extended partitions inside one of the primary partitions, which many boot loaders will not allow you to boot from (I know GRUB does - I'm talking historically).
This meant that when writing a distro installer intended to co-exist with other OSs, unless you were prepared to probe the partition table type, you took the option of using only one of the primary partitions, to be as unintrusive as possible.
Unfortunately, although the world has moved on, bad habits die hard, and most installers take the same decisions as they have always done.
I must learn more about the more recent partition table formats to bring myself up-to-date. Although I've installed Windows 7 from scratch twice, I've never created a dual-boot system with Linux (I've done a dual boot XP and Win7 system). All my systems tend to not have any Windows on at all!
Re: That graph suggests @AC 12:38
I suspect that this is the same AC who always says this, but when challenged provides references to statistics on Web defacements.
There are vulnerabilities in Linux. Many are discovered and posted as a result of code examination (when people started looking for memcpy calls on unbounded buffers a few years back, there was a huge jump in the number of vulnerabilities reported against Linux, even though many of them were unlikely to be exploitable). We just don't know how many of these are present in Windows.
But as a basic desktop box, the protection that UAC provides on Windows Vista+ has pretty much always been there on Linux since it became popular. And as a result it is axiomatic that Linux is more secure for day-to-day use. And out-of-the-box, Linux is much safer to connect to the Internet because fewer services are turned on by default. This is something Microsoft have taken on board in recent Windows releases.
Of course, there are still exploits that take advantage of the wetware, but they will be present on any OS unless it is so locked down that the users cannot do anything.
I could not have put it better myself!
Re: Projection displays @John Smith 19
Troff (Typesetter roff), not nroff. Nroff used a fixed character set described in a tmac file, and did not have the ability to scale characters to different character sets.
One of the interesting things is that most people who used nroff assumed that it could only handle fixed-width font devices, because that is all they saw it driving (typically dot-matrix printers). It actually did allow partial character spacing, and I wrote a tmac file to use nroff with a HPLJ compatible OKI laser printer with the advanced character set option, that allowed nroff to produce right-margin justified proportional spaced text using micro-spacing.
It could not handle pic or grap output, although I got tbl to produce nice solid-box outlines for tables. I believe that it could also do some basic eqn as well.
Re: No, please not 7bit AScii @Steve Davies 3
OK, I was using 7-bit ASCII as it allows upper and lower case characters (one of the requirements). 6-bit ICL code only contains upper case characters, although I understand (I only briefly used an ICL 1904 machine in the late '70s, and never got to grips with the available character set) that one of the characters was used as a shift, to provide lower-case characters.
I admit that using an American standard was a bit low, but I could not think of a suitable non-US one. In any case, it would have to have been invented, because ASCII did not exist before 1960. If you wanted it to be authentic Steampunk, you would probably have to use the Cooke and Wheatstone telegraph system!
Re: Forgot editing
I think basic electricity use was discovered in the same general timeframe as steam. Michael Faraday was credited with inventing the electric motor in 1821.
Nixie tubes are much later. Wikipedia suggests 1955.
So I contend that basic electricity (not electronics, mind) is totally consistent with Steampunk.
Re: Just to tighten up the parameters a little...
The etch-a-sketch would be like a mechanical version of a Tektronix Storage tube terminal (Tek 4010 or 4014).
Re: Here's one someone made earlier, out of Lego
All you need is some way of changing the strowger selector to a particular position, and then rotating the split-flap character to the new character.
I'm sure that there is a 'return to space' operation that can be applied to all character positions at the same time to clear the display.
I thought this would be difficult, until I realised that you could use a pulse-coded dial (like an old-fashioned telephone dial), linked up to something similar to a Strowger exchange and a Solari split-flap display.
I know I'm cheating a bit using pulse-encoders and electric motors, but I'm sure that you could use ratchets, rotating shafts and slip-clutches everywhere that the modern displays use electric motors.
If we take 7-bit ASCII as the character set, that would mean 96 different displayable characters, which include all upper and lower case English characters and numbers, plus sufficient punctuation. This could be encoded using a 32-place dial like a rotary telephone dial, together with two shift keys shifting to different ratchets to generate upper and lower case, numbers and punctuation. These work well with Strowger-type gear, and all you would need to do would be to pulse to each successive position in the split-flap.
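The encoding arithmetic works out exactly: three shift states times 32 dial positions covers the whole range. A sketch (the mapping onto ASCII codes is made up, but the counting is the point):

```python
DIAL_POSITIONS = 32   # like a telephone dial, but with 32 places
BANKS = 3             # unshifted, plus two shift keys

# One (bank, position) pair per character: 3 * 32 = 96 combinations,
# mapped here onto 7-bit ASCII starting at 0x20 (space).
def to_code(bank, pos):
    return 0x20 + bank * DIAL_POSITIONS + pos

codes = {to_code(b, p) for b in range(BANKS) for p in range(DIAL_POSITIONS)}
print(len(codes))          # 96 distinct dial/shift combinations
print(chr(to_code(1, 1)))  # 'A': bank 1, position 1 lands on 0x41
```

Conveniently, 0x20 plus one full bank of 32 is 0x40, so in this made-up layout the second bank falls exactly on the upper-case letters.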
The difference in the sandbox approach is that it denies access to resources by checking what they are doing at the API boundary of the sandbox, rather than allowing the underlying OS to control access.
Any suitably designed OS should already have controls to contain rogue actions (like the permissions system on the filesystem and IPC resources, and Role Based Access Control), and many do. But things like Windows up to XP, whilst they had the underlying technology, were so compromised by the way that the systems were implemented (users running as an Administrator by default, and too many critical directories having write access for non-administrator accounts) that it became necessary to add the extra 'sandbox' to protect the OS!
Unfortunately, the way that OSX deploys applications is fundamentally flawed (they've added an application deployment framework into user-space so that you don't need to be root to install an application, or it was this way the last time I looked at OSX), and this opens it up to applications being altered by other applications without requiring additional privilege. The OS remains protected, but the applications are vulnerable. This is the reason for implementing a sandbox.
Anyway, sandboxes are not new. On UNIX systems since seemingly forever (certainly since Version 7 in 1978), you've had chrooted environments that you can use to fence particular processes into controlled sub-sets of the system.
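A minimal chroot sketch (it needs root to run, so this is purely illustrative; a usable jail also needs the binaries, libraries and device nodes the fenced process depends on copied inside, plus privilege dropping):

```python
import os

def run_jailed(new_root, func):
    """Fork a child, confine its filesystem view to new_root with
    chroot, run func() inside the jail, and wait for it to finish."""
    pid = os.fork()
    if pid == 0:                  # child: enter the jail
        os.chroot(new_root)       # '/' now refers to new_root
        os.chdir("/")             # drop any handle on the old root
        try:
            func()                # paths func() opens resolve inside the jail
        finally:
            os._exit(0)
    os.waitpid(pid, 0)            # parent: wait for the jailed child
```

The chdir("/") after chroot matters: without it the process keeps a working directory outside the new root, which is a classic escape route.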
Re: It says EE on my phone
I think it is that when Orange and T-Mobile merged, they spun off the business of operating the infrastructure to a separate legal entity that is EE. Orange and T-Mobile manage the customers, and 'rent' access to the infrastructure from EE.
It is not clear to me whether Orange/T-Mobile own EE, whether EE is now also a holding company with Orange and T-Mobile as subsidiaries, or whether they are completely separate companies.
I'm sure that it makes sense to somebody, but I'll bet there's some international shenanigans about where the profit is declared!
Re: Won't be sad
At the time, the 480Z with the network and file/print server option appeared good (though expensive), because it ran CP/NOS (a network-capable CP/M compatible; CP/M being the industry standard OS when the 480Z came out), and allowed files to be stored centrally so students did not need personal media or to work on a specific machine all the time.
Unfortunately, CP/M completely dropped out of favour when the IBM PC was launched.
I actually preferred the BBC Micro with Econet and an Econet Level 2 hard-disk server. Back in about 1983, the Poly I worked at built 2 similar computing labs, one by the Computer Unit, and one by the academic Computing School. There were similar budgets, and both were installing 16 seats, networked with a file-server and printer.
The 480Z lab (Computer Unit) had 16 computers, with screens, a fileserver and printer, and a basic productivity package. And that was pretty much it.
The BBC Micro lab (mine) had 16 computers with screens, a fileserver and printer. It also had basic productivity packages, but it also had light pens for all computers, and a selection of other hardware items including CAD software and hardware (BitStik and 2 different digitizers), teletext and speech synthesis hardware, speech recognition hardware, 2 types of digital camera, robot arms, touch screens, and a pen-plotter. And on the software side, it had a full ISO pascal compiler for all of the systems, together with a selection of other languages including Forth and Lisp.
My BBC lab was built to show people who did not know what a computer was the vast range of things computers were capable of, in an affordable way. It could also be used for teaching the computing students programming and networking (sticking an oscilloscope onto the Econet was a great way of demonstrating what a network was), and it was great fun building it.
Re: The promise of automation @Semaj
Good luck with your promotion prospects! It seems at the moment that only those who are prepared to go the extra yard are even considered for advancements.
Whilst I agree that it should not be this way, I am increasingly upset by the divisive nature of the appraisal system that most companies use now to measure performance. It now appears to be used as a tool to get high-skilled, expensive people out of the door because of arbitrary 'poor performance', rather than a mechanism to reward people good at their jobs.
"like Microsoft Office ...... allowing people to work at much faster speeds"
I find Microsoft Office has always slowed me down compared to other, pre-WYSIWYG tools! The use of inappropriate tools (like Excel rather than a database for storing and parsing data, or a proper document preparation system for technical reports) is IMHO one of the biggest productivity blocks around!
Re: how much?
This bothered me. I have an original EeePC 701 4G, ~700MHz single core Celeron. It's slow, but not remarkably slow. It runs Linux at least as well as low end Pentium 4 systems with the same OS, and can do basic browsing (with FlashBlock and NoScript), play streaming and local media well enough to be useful even now (although 4GB is a major hurdle).
I've played with Intel Atom netbooks, and they are not hugely better, although the battery life is generally longer.
I'm sure that using benchmark comparisons, the difference is evident, but from a usability basis, doing what netbooks were originally designed to do, the original concept was sound (only damaged by the stupid Linux distributions that were originally shipped, allowing Microsoft a chance to ruin [IMHO] the concept with a cut-down WinXP).
So I wonder whether the Atom based devices in the £400/$500 mark were actually a real improvement, or more a means of the manufacturers being able to sell more expensive kit.
Re: Bricked or somewhat confused.
I like "borked". The Urban Dictionary describes it as "To have totally f**ked something up. Usually by doing something stupid. Specifically used to describe technology that is broken."
Re: Bricked or somewhat confused. @Khaptain
OK, so if a device has no user recovery path, but the manufacturer can re-flash the internal memory using the initial manufacturing process, is that not bricked?
Or how about they can rescue your data, and replace the main system board, but use the same case, serial number etc. Is that not bricked?
I agree with the sentiment of what you are saying, and in this case the term is not justified, but I don't believe that there is really any hard-and-fast definition of bricked; whether a device is bricked or not depends on the device state and the resources available to the user.
If you were on a desert island, with a satellite Internet feed and no other computing devices, unpacking a zip file onto a USB stick to boot from would not be possible, so this would effectively be a bricked device.
Rhetorical question. Is Ubuntu still Linux?
The answer to that question should be easy, but try finding out from the www.ubuntu.com web site.
I was watching the Nixie Pixel channel on YouTube (no, not just for the eye-candy factor, she has an interesting perspective on OpenSource as well as other notable points... ummm, oops - maybe it is the eye-candy after all), and one of her recent videos pointed out that "Linux" has been expunged from the Ubuntu website, at least from all of the first level pages and those they link to.
I've said it here before, and I'll say it again, I think that Shuttleworth is trying to set Ubuntu up as a competitor to OSX, something based on open source, but differentiated and at a distance from Linux. If this turns out to be true, and Ubuntu gets divorced from the perception that it is a Linux distro, then I hope that he is prepared for many of the long-term Ubuntu advocates jumping ship.
Re: @Peter Gathercole (novels and editing)
It's been done, but not using Open Source software!
Re: novels and editing
My wife was looking at self-publishing on Kindle, and all of the tutorials assume that you use Microsoft Word for one of the last steps in the formatting process (just prior to submitting the resultant file for final conversion by Amazon).
When I pointed out that LibreOffice could write .doc and .docx files, she showed me various reports that somehow the resultant files would be unreadable on a Kindle, all written by people who appeared to know very little about fonts, typefaces and styles from what I read, so I don't totally believe that you can't use other tools. It seems like there is either some (unspecified) fundamental lock-in, or a lot of misinformation washing around, which creates a feeling of FUD amongst writers.
Eventually I ended up putting one of the limited number of Home and Student 2007 licences we have onto her machine (licence obtained to satisfy our local secondary school, for whom using Impress to create assessed presentations would result in the work not being marked!) just to stop her complaining.
Maybe I ought to write something myself and see what the process really is. If I can get it working and document it, it may enable self-published ebook writers to escape from the Microsoft hegemony.
Yes, but not sleeping in the same bed means that you effectively have to make an appointment for sex. It also removes the opportunity for a quickie when neither of you can sleep, or when you both wake up early.
I certainly don't get nearly as much since my wife decided she could not stand being disturbed by my early starts to get to work. At least that's the reason she gave for wanting to sleep in another room....
There are lots of TV channels that offer live streams of their broadcast programs off of their web sites. This includes the BBC.
"Live TV" means that it is possible to watch it over the Internet while it is still being broadcast. An example of "Live" would be a 30 minute sit-com that is available over the internet 20 minutes after it starts on a broadcast medium (overlapping by 10 minutes). For things like football matches, this means that it must not be available on-line before the broadcast program finishes, i.e. at least 1:45 (and maybe longer) after the match starts.
The distinction about whether you watch live TV online or not is very blurred. iPlayer, for example, allows both archived (catch-up) and live viewing, so how do you prove that your use of iPlayer is only for catch-up? It all looks a bit murky to me.
I wonder if the TV licensing people are able to get a court order to request your browsing history from your ISP?
I still wonder whether an analogue TV without a digital tuner still counts as TV receiving equipment. In my view, it shouldn't.
Re: Tape for Longterm Storage
Are you sure about "you can generally interface a disk drive to a machine for a long long time"? Have you tried to connect a SCSI disk (anything earlier than UltraSCSI, but try a SCSI-1 SE disk for a real challenge) to a modern server? And older interfaces like ESDI, ST506, MASBUS or ESMD are long dead.
Even more modern technologies like IBM SSA are now dead. IDE and EIDE interfaces no longer appear on modern motherboards. Even when older HBAs are still available, they will be PCI adapters, and these are being eliminated in newer systems.
I believe that the SAS technologies are expected to be N+2 compatible, i.e. a first generation SAS disk will work with a SAS 3 adapter, but there is no guarantee that they will work with later adapters. Given the speed of evolution of such things, I expect disks to remain portable to current machines for 5 or so years after manufacture, and after that, you will have to rely on legacy hardware to be able to read them. Does not encourage me to use disk as a long-term archive medium.
This is partly by design, as the disk and system manufacturers want to continue selling systems, and they have built-in obsolescence.
My personal thought is that you would have more success reading a 2400' 1/2" NRZI mag. tape at 800bpi recorded using ar or tar from 30 years ago than you would an early SCSI disk, especially if it came from a NetWare, VAX, PRIMOS or other proprietary OS.
Longevity of SSD as a medium
Has anybody published figures about how long an SSD will hold data in a cold state? I get the impression that most rely on the wear leveling capability of the drives to re-compute damaged data from checksum information during the re-write of the data. And this requires the drive to be powered.
I'm really not too confident about putting a flash-memory (or any other static electronic) device on the shelf, and coming back to it in a few years time and expect the data to be still there. I would be confident with a tape.
Memory technologies are moving along so fast that there is no chance to time-test any of them before they are obsolete. And accelerated environment testing that vendors claim to have done is not really a good indication of the data retention capability of a medium.
But then I would probably recommend carving the data on granite tablets if you want it to last millennia.
If you remember, the swimming pool moved under the patio when TB1 took off or landed, so he did not have that option.
The torus-shaped building was called 'the roundhouse'. Quite what you use such an unusually shaped building for, I don't know.
I'm sure that the few times they show the clip of TB3 landing, they've even managed to collect the smoke again. That's really environmentally aware!
Re: Awesome soundtrack
It's the flame licking back up the lower part of the rocket that makes me wonder whether Derek Meddings was prescient or had a way of looking into the future (he was a model making technological genius after all).
I always thought 'it would never look like that' when I watched TB1 and TB2 land, but I was so obviously wrong!
Damn. You beat me to it.
Although I can say that I handed in my first year University computing project written using roff and printed on a DECprinter (a faster DECwriter II without a keyboard).
I once wrote a document handling system in make, troff, tbl, pic, grap, and to top it off, integrated SCCS to do the versioning, including the version history dynamically added to the document at extract time.
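For anyone curious how such a make-driven troff pipeline hangs together, here is a minimal sketch. The file names, the -ms macro package and the use of prs for the history section are my assumptions, not necessarily what the original system looked like:

```make
# Illustrative make rules for a troff document kept under SCCS.
# Target and file names (handbook, *.ms) are hypothetical.
DOC   = handbook
SRCS  = intro.ms body.ms

# Classic preprocessor order: grap feeds pic, pic feeds tbl,
# tbl feeds troff itself.
$(DOC).ps: history.ms $(SRCS)
	grap history.ms $(SRCS) | pic | tbl | troff -ms > $@

# Dynamically generated revision history: 'prs' prints each
# file's SCCS delta log, wrapped here as a troff section.
history.ms: $(SRCS)
	{ echo ".SH"; echo "Revision history"; \
	  for f in $(SRCS); do prs s.$$f; done; } > $@
```

Make's built-in SCCS rules will also retrieve intro.ms from s.intro.ms automatically when only the s. file is present, which is what makes the versioning integration largely free.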
Re: It ain't that damn hard - Ummm
From your figures, it looks like the estate you are using is 3000 seats. So. $3,700,000/3000 gives us, um, $1,233 (rounded) per seat. You really think this is not a lot?
Even if you do have a payment plan (and I'm betting that Microsoft would prefer a subscription plan rather than a deferred payment plan), that is still loading the business with costs that they may not have if they opted to stick with XP.
And the majority of those costs are in license fees, which you may not have if you can find an open-source solution that is adequate.
You've also not factored in any testing, specific business related software costs, or loss of productivity or training costs. If you are doing 3000 seats over a 6 month period, that's 500 a month, or about 25 a day (assuming that you're doing most of the estate during the working week). That's a tall order for 1-3 admins, even assuming you do across-the-network upgrades in place (which is disruptive to the users). Of course, if you have a homogeneous estate, you could do a replace, upgrade, replace rolling operation which is less disruptive to users, but you will need spare kit to do that, and will need the time to physically move the kit around.
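As a sanity check on the arithmetic in this thread (the cost and seat figures are the ones quoted above; the 21 working days per month is my assumption):

```python
# Back-of-envelope check of the quoted XP migration figures.
total_cost = 3_700_000   # quoted project cost, USD
seats      = 3000        # estimated estate size

per_seat = total_cost / seats
print(f"per seat: ${per_seat:,.0f}")         # roughly $1,233

months       = 6
working_days = 21                            # assumed working days/month
per_month    = seats / months                # 500 machines a month
per_day      = per_month / working_days      # about 24 a working day
print(f"{per_month:.0f}/month, {per_day:.0f}/day")
```

Which is where the "about 25 a day" burden on a handful of admins comes from.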
Your earlier comment about a dual-core system with 4GB of memory is interesting. I'm sure that many, many business users of XP will have the majority of their estate running on P4 systems running with <2GB of memory. Places like call-centres do not regularly replace working systems, and the demands of filling in screen forms is such that you don't need much oomph.
For those users, dropping new kit in may not only be essential, but possibly cheaper as well.
These are all pretty old devices. I have a DI-524 on my network (and a DI-604 lying around somewhere spare), and they both have firmware issues that really mean that anybody still using them must have a masochistic streak, or not care (which may include the majority of users, unfortunately).
There has not been a firmware update for something like 8 or 9 years, and it is not possible to set the date on them (either manually or by pointing it at an SNTP server) to any date after December 2008 if I remember correctly. I would expect that most people would have tossed theirs whenever they updated their broadband package.
Just in case anybody was tempted to try hacking into mine, I'm not using the WAN side at all, merely using it as a WiFi router on one of my wireless zones behind my Linux firewall.
Disks have more moving parts than tape, and tape itself has no electronics at all (with one or two exceptions), so tapes should be more durable. True, you do get tape failures, with the tape sometimes breaking, but as you say, you have at least two copies, as you would know if you worked with large backup/archive/DR systems, as some of us do. Tape is also more portable, and probably keeps data safer during transport. Removable disk, whilst possible, is probably more complex to manage. There have been attempts at disk libraries in the past, and I believe that the encasing of the disks and the connectors proved problematic.
Don't get me wrong, there is a place for data replication on disk, and systems like HPSS and TSM can use both disk and tape in a hierarchy to achieve high capacity as well as good speed for recently copied data. But I just don't see any serious manager responsible for large amounts of data drinking the disk Kool-Aid and ditching tape any time soon.