
* Posts by Peter Gathercole

1816 posts • joined 15 Jun 2007

Microsoft hires Pawn Stars to shaft Google

Peter Gathercole
Silver badge

Re: The overwhelming message I get from these ads

Microsoft also had an advertising campaign in Japan based around the recent Ghost in the Shell: Arise anime.

It showed various members of Section 9 doing what they do holding and passing around a Surface, which contained some important data or something.

The problem with this, as anybody familiar with GITS knows, is that the people shown (I remember at least Motoko and Batou in the ads) are cyberised, i.e. they are cyborgs with cyber-brains (implanted computers) together with some kick-ass comms. They had absolutely no need for a Microsoft Surface to do the things they were supposed to be using it for!

Just showed that either MS or their advertising agency really did not know what they were doing. I suspect that the animators probably felt a bit dirty to have done the ads, but only until the money hit the bank!

I think that the ads are still knocking around on YouTube if you want to see them yourself.

9
0

We flew our man Jack Clark into Facebook's desert DATA TOMB. This is what he saw

Peter Gathercole
Silver badge

Re: Expansion @John Tserkezis

The Dark Lord was commenting about moving new kit into the machine room (normally this involves rolling it across the floor, which on a suspended floor would cause significant vibration, certainly more than having the disks powered up and the heads moving).

I would actually have thought that the main reason why the disks were powered down was because of power consumption and temperature, rather than vibration. Disks are not that fragile.

480 drives in a rack is not that dense. 384 disks in a 30" rack-mounted 4U enclosure is a much higher density (I have a rack with 5 of these disk enclosures in each of the HPCs I look after, totalling 1,920 disks in 20U of space - about half a rack), and all of these are spinning all the time.

0
0
Peter Gathercole
Silver badge

@Destroy All Monsters

My knowledge of telco machine rooms may be rather dated, but it used to be that almost all telephone exchanges put the kit directly on a concrete floor because of the weight (a practice evolved from having vast and very heavy mechanical exchanges). With a solid floor for load bearing, it made sense to take the cables up to the ceiling. Old habits die hard, and many modern exchanges were installed in old buildings.

It may be that modern electronic exchanges more closely resemble computer machine rooms, but in this case, you can see from the picture that it is a solid floor, with the cabling to the ceiling.

2
0
Peter Gathercole
Silver badge

Re: Expansion @The Dark Lord

These are telco style machine rooms, no suspended floor and wiring from above.

The floors are solid sealed concrete, so probably don't vibrate too much.

2
1

Decades ago, computing was saved by CMOS. Today, no hero is in sight

Peter Gathercole
Silver badge

@another_vulture

And there you have PERCS. If you look at an IBM 9125-F2C (Power7 775 cluster), they are very dense, are water cooled (CPU, I/O Hub, memory and power components) with integrated electro/optical network interconnects eliminating external switches, and storage moved into very dense arrays of disks in separate racks.

When the site where I work moved from Power 6 575 clusters (which were themselves quite dense), they kept to approximately the same power budget, increased the compute power by between three and five times, and doubled the disk capacity, all in about one third of the floor footprint of the older systems. And to cap it all, the new systems actually cool the ambient air in the machine room.

But these systems proved to be too expensive for most customers, and IBM was a bit ambitious about the delivery timetable. Take this with a contraction in the finances of many companies, and IBM failed to sell enough of them to keep them in volume production. But they are very impressive pieces of hardware.

Replacing them with a 'next' generation of machines is going to be hard.

0
0

XBOX ONE owners rage as HDMI SNAFU 'judders' Brit and Euro tellies

Peter Gathercole
Silver badge

Re: Tellies can handle 60Hz input @AC

I understand what you are saying, but you've not understood what I've said. Anyway, using linear light introduces an additional motion-blur component, as you've effectively got to interpolate intermediate frames that do not exist at the re-sample point, and those frames will always be, one way or another, a guess. Also, doing it in near real time may require more compute power for HD video than the Xbox has.

What I said would still work, although as I also said, it is impractical.

0
0
Peter Gathercole
Silver badge

Re: Tellies can handle 60Hz input @Mage

I'm assuming that it was you who down-voted me.

I was not suggesting frame conversion. I was suggesting that you used a frame rate between the Xbox and the TV that allowed exact timing of both frame rates to prevent the need to re-sample. This is why I chose 300 fps, as that is an integer multiple of both 50 and 60. This allows an EXACT number of frames for each of the different video sources. For the 50 fps source, you would leave the image up for 6 of the 300 Hz frames, and for the 60 fps source, you would leave an image up for 5 of the 300 Hz frames. A perfect fit, with no resampling, allowing both videos to be side-by-side at their native frame rates.

Of course it's completely impractical as well, and would only work for these two frame rates (or other divisors thereof).

If you assume both videos are interlaced, you could probably take that down to 150 fps, but that is a big assumption.
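Purely to illustrate the arithmetic (my own sketch, nothing the Xbox actually does), a few lines of Python show why 300 drops out as the magic number:

    from math import gcd

    # Lowest common multiple of the two source rates = the exact-fit output rate.
    sources = (50, 60)
    out = sources[0] * sources[1] // gcd(*sources)   # 300
    print(f"output rate: {out} fps")

    for fps in sources:
        # Each source frame is simply held for this many output frames - no resampling.
        print(f"{fps} fps source: hold each frame for {out // fps} output frames")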

0
1
Peter Gathercole
Silver badge

Re: Tellies can handle 60Hz input @Brangdon

Well, that's it. The Xbox is trying to harmonise the frame rates for two different video sources. It's not really a surprise that one or the other will be affected.

In order to be able to simultaneously display a 50 fps and a 60 fps picture perfectly, you would need to output from the Xbox to the TV at 300 fps (so the 50 fps image would appear on 6 consecutive frames, and the 60 fps image would appear on 5 consecutive frames).

This would be beyond most TVs, even modern ones.

0
2
Peter Gathercole
Silver badge

@SpeakTruth

Apart from the obvious power cable (and check the voltage ratings on the label on the back of the telly as well, although most European countries use between 220 and 250V), you have the problem of the DVB-T format, although most European tellies do DVB-T2, which is backward compatible with DVB-T.

You may have to tell it to scan different frequencies, and sometimes this is in a hidden menu. It depends where you are coming from.

If you are just using external video sources (DVD, consoles, set-top boxes etc), things should just work.

1
0
Peter Gathercole
Silver badge

Don't understand this!

Modern flat-panel televisions just do not have the old mains-frequency lock or the 'flyback' frequency problems that old CRTs had.

I very much doubt that there is any difference in the hardware for a Korean or Chinese television destined for the UK or for the US.

Tellies have a frame buffer (or two). The frame buffer is painted, and the picture is displayed. This can be asynchronous from any other timing signal external to the TV. As long as the hardware can keep up with the fastest frame rate, it should be able to sync with any slower rate without any difficulty.

However, if the Xbox is re-sampling the frame rate of an external video source as it passes through, then this could conceivably cause duplicated or dropped frames (50Hz -> 60Hz means some source frames will be shown twice). Anybody who has played around with frame rates when transcoding video will have experienced this, although I suspect that most people who believe they have done this probably used 'canned' settings rather than really experimenting.
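To see where the repeats come from, here is a rough sketch (mine, not a description of what the Xbox actually does) of the simplest nearest-frame resample from 50 fps to 60 fps:

    # Map each 60 fps output slot back to the 50 fps source frame it would show.
    src_fps, out_fps = 50, 60
    mapping = [(slot * src_fps) // out_fps for slot in range(out_fps)]  # one second's worth

    repeated = sorted({f for f in mapping if mapping.count(f) > 1})
    print(mapping)
    print(f"{len(repeated)} of the {src_fps} source frames are shown twice each second")

Ten source frames per second get shown twice, and it is that uneven cadence that shows up as judder.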

So I suspect that the Xbox is re-sampling at 60Hz, or possibly screwing around with the de-interlace settings (Sky broadcasts HD at 1080i), rather than it being a problem between the Xbox and the TV.

5
0

Romance is dead: Part-time model slings $1.5bn SUEBALL at Match.com

Peter Gathercole
Silver badge

Re: Hmmm. Extract from the lawsuit.

It was the scale of the claim, "billions of images" scanned "near instantaneously", that I was mocking.

I'm sure that there are tools which will look at images and spot similarities, but I'm also sure that they're not instant. Let's assume the images are 100KB each, and there are "a billion" of them. That's 1x10^14 bytes (hey, lookie what a silver badge allows me to do!), or approximately 100TB of image data. If they can read that and process it "near instantaneously" then they have a better system than the top-100 HPC system that I'm looking after at the moment.
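For anyone who wants the back-of-the-envelope sums, here they are in Python, using the 100KB-per-image figure above and a made-up (and generous) aggregate read bandwidth:

    images = 1_000_000_000          # "a billion" of them
    bytes_per_image = 100 * 1000    # 100KB each (my assumption above)
    total_bytes = images * bytes_per_image
    print(f"{total_bytes:.0e} bytes, roughly {total_bytes / 1e12:.0f} TB")

    read_bandwidth = 100e9          # assume 100 GB/s aggregate - HPC territory
    print(f"just reading it all once: about {total_bytes / read_bandwidth / 60:.0f} minutes")

Even at an assumed 100 GB/s you are looking at the best part of a quarter of an hour just to read the data, never mind comparing it against anything.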

6
2
Peter Gathercole
Silver badge

Hmmm. Extract from the lawsuit.

Claim 29: ...which can scan billions of images nearly instantaneously......

Gosh. I really could do with some of these systems that Match.com must have. Near infinite disk bandwidth, and very sophisticated image hashing and analysis tools.

With that technology, I wonder why they're in the dating business. They ought to be coining it in from the application of this technology.

6
1

What's wrong with Britain's computer scientists?

Peter Gathercole
Silver badge

Re: New universities

I totally agree with your comments about 'New' Universities/Polytechnics. I think that giving them the option of becoming Universities was the worst thing that could possibly have happened to the Polytechnic sector.

I agree that most Polys had a big chip on their collective shoulders, but I worked at Newcastle Polytechnic for 6 years, and I met people there who knew what the Polys were for, and understood how to represent them. But I remember at the time how surprised the ministers were that all the Polytechnics decided to convert when given the chance.

Older established Universities are academic. They turn out people with a largely theoretical slant on most science and technology subjects. Polys were set up to be practical and skills-based. They could take students and equip them to take on high-skill practical work. You could see them as an alternative to business-led apprenticeships, leading to BTEC HNC and HND qualifications. Both were valuable but different facets of the education system in the UK.

Generally, academically orientated students with the highest 'A' level results (in the days when 'A' levels could be used to differentiate between students) gravitated to Universities; those with adequate results could go to a Poly and still get highly useful qualifications, just not necessarily degrees.

But there was also a difference in teaching methodologies.

'Old' Universities were more likely to drop students in at the deep end with comparatively little support, and if they sank, throw them out. Those who swam were self-motivated, with sufficient discipline to actually get the work handed in and pass the exams despite the distractions, so when they graduated an employer knew that they could resist the temptations of student life and still get the job done.

Polytechnics, on the other hand, used to offer better support to the students. The staff-to-student ratio was higher, and there was more emphasis on making sure that the students were coping (at least this was what I saw at Newcastle). This meant that Polys were a better bet for kids who were still in the 'school' mindset.

In the Computer Studies area, Newcastle Poly offered HNC and HND courses in Computer Studies, but not a Computer Science degree, which was catered for by Newcastle University. The one computing degree the Poly did offer was a business-orientated degree, specialising in COBOL as the programming language (we're talking the 1980s here), with business-oriented methodologies, systems analysis and case studies, together with crossover courses from Business Studies so that the students would have an understanding of Data Processing and where it fitted into a business.

The HNC and HND CS courses turned out people whose skills meant they knew enough about computer systems to program effectively, but who had a less deep understanding of the fundamentals of a computer than their University contemporaries.

With the generally useless 2-year 'foundation' degrees replacing many of the BTEC qualifications, I really don't know what the split is now, and I think that employers have a similar lack of understanding.

4
0

Dude, relax – it's Just a Bunch Of Disks: Our man walks you through how JBODs work

Peter Gathercole
Silver badge

60 disks in 4U!

I admit they are special racks (they are nearer 30" wide and goodness knows how deep), but in the IBM P7 775 supercomputer disk enclosures, you can get 384 2.5" disks in 4U of vertical space.

On more mainstream systems, and having used dual-connected SAS drives for about the last 5 years, I will say that the biggest problem here is the repair of a failed expander card in the disk drawer. Although the expanders are redundant, so the loss of one does not stop the service, the repair action is not normally concurrent. This means that you have to take an outage in order to restore full resilience, even if you have the drawer connected to dual servers, unless you have the data moved or mirrored to disks in another unit. The saving grace is that you can plan the outage, but you have to be careful if you are wanting very high availability.

I learned this the hard way when planning for service work in what had been delivered as a totally redundant system. A bit embarrassing when you end up having to stop all of the workload on a top 500 HPC system just to carry out the work for a single expander card (no, I was not responsible for the design, I only help run it, and it could have been mitigated with a bit more thought).

By the way, this dual connectivity is not a new thing. IBM's SSA disk subsystem also had dual connectivity for both disks and servers back in the mid 1990's. Very popular for HA/CMP configurations, and allowed for 48 disks in 4U of space.

1
0

Codd almighty! How IBM cracked System R

Peter Gathercole
Silver badge

Ingres and 2BSD

Ingres was available for free (or at media and postage costs) to Universities and Colleges who had a UNIX source code license. I believe that it was on any 2 BSD add-on tape (it was certainly on the 2.6 BSD tape I had in 1982).

The University I was at (Durham, UK) was using Ingres to teach relational database in 1978, and I came across it in my second year in 1979.

I must admit that I could not stand it as a subject, because the lecturer was using set theory to try to teach relational algebra, and my maths was beginning to look a little shaky by that time, but when I ended up actually doing real work, I found QUEL quite usable. It took quite an effort to switch to SQL when I had to work on Digital's Rdb, Oracle and DB/2.

I don't count databases as a current skill now, but I still regard the experience I gained as invaluable.

6
0

Linux backdoor squirts code into SSH to keep its badness buried

Peter Gathercole
Silver badge

Re: Symantec writeup very poor @Gorbachov

And that is the point of my OP. The writeup is so vague that we're all guessing.

I admit that the client side attack I sketched out requires access as a user on the client system, but that is a lot easier to get than breaking privileged access. All the usual vectors of Java, side-jacking and social manipulation etc could end up with a process owned by the user in question, which would have whatever access the user has on the client system (but no special privilege). This would mean that it could execute a series of shell commands as the user, run an SSH client program itself, read the user's public key and any private keys stored on the client system, and if the keys are passphrase-less keys, use them to gain access to other systems.

Here is a scenario, possibly far-fetched, and I've not worked it all through, but the malicious process could set LD_LIBRARY_PATH in somewhere like .bashrc so that a local directory appears before some of the system library directories. It then looks at the Linux distro, fetches a specific hacked SSL or other library (including libc, I suppose) for that distro off the internet, and puts it in that directory.

Following this, every legitimate program the user starts, including SSH client sessions, could be running with malicious code from the bogus library. If it replaced the right routines, you would have a key-logger, and that key-logger would be able to capture passphrases as they are typed, giving access to all of the user's private SSH keys. It could also capture any passwords that are typed for remote systems.

OK. No breach of privilege required so far. Everything has been done as the user in question.

So, say the user is an admin who foolishly has, in their keystore, the private key of a remote account that has some privilege. The malicious code then has access to the remote system with sufficient privilege to attack that system.

Or, say, the admin has sensibly used a non-privileged account to access the far system, but then uses sudo to issue commands on that system via a compromised SSH session. The compromised client can then capture the password that the user uses with sudo, and again has access to the remote system with sufficient privilege to attack it (unless sudo is really locked down hard).

In both cases, it could inject commands, or even start its own SSH client session using the captured credentials.

How safe do you feel?

Please note that this attack could be used on almost any OS that allows dynamic binding of libraries at runtime, and provides an over-ride of the default system paths to the libraries. I've sketched it out as a Linux/UNIX attack as that is what I know best, but I seriously suspect that similar attacks are possible on other OSs.
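If you want to see the library-search mechanism for yourself, harmlessly, something like this will do on a typical Linux box with glibc's ldd and an OpenSSH client installed (the extra directory is hypothetical and empty, so nothing actually changes unless a library with a matching name is put there):

    import os
    import subprocess

    def shared_libs(binary, extra_dir=None):
        """Ask the dynamic loader (via ldd) which libraries a binary would load."""
        env = dict(os.environ)
        if extra_dir:
            # Prepend a user-writable directory, exactly as a hostile .bashrc could.
            env["LD_LIBRARY_PATH"] = extra_dir + ":" + env.get("LD_LIBRARY_PATH", "")
        return subprocess.run(["ldd", binary], capture_output=True,
                              text=True, env=env).stdout

    print(shared_libs("/usr/bin/ssh"))
    print(shared_libs("/usr/bin/ssh", extra_dir="/home/someuser/.cache/lib"))

If a file with the same soname as, say, libcrypto were sitting in that directory, the second run would show it being resolved in preference to the system copy, which is the whole attack in miniature.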

Eternal vigilance is called for, especially for admins, regardless of the platform they are using.

0
0
Peter Gathercole
Silver badge

Re: @AC 14:58

I'm not doing the GNU/Linux 'drivel' as you call it. I'm just pointing out that SSH is as much a part of Linux as Audacity or LibreOffice, or a host of other Open Source projects. They're part of most distros, sure, but not a part of Linux itself. I suggest that you just don't understand what a Linux distribution is.

As an analogy, would you claim that Apache or VMWare player or even Skype is part of Windows if a particular machine vendor chooses to pre-install it on the systems that they sell?

It's not even the case that OpenSSH is the only SSH implementation out there. F-Secure have their own completely separate SSH implementation, as have SSH Communications Security, and there are also other free SSH implementations like LSH and PuTTY (client).

8
0
Peter Gathercole
Silver badge

@AC re slipstream SSH datastream

Yes it would be, especially if it could be done from outside the SSH client/server communication stream. But this does not appear to be what has happened. This is hijacking one end or the other, and intercepting/injecting the data at one end of the secure pipe as it were.

Just to point out that SSH is *NOT* part of Linux. It's not in the kernel, nor part of the GNU toolchain, and although it is in the repositories of most distributions, it's also available for most UNIX systems, and also for Windows and probably any other network enabled operating system as well. It's a cross-platform tool. What is important is how and by what vector it was compromised.

So there is a vector (possibly OS specific) that was used to break into SSH, and SSH itself is a vector to compromise whatever OS is being used. Which may be Linux.

8
0
Peter Gathercole
Silver badge

Symantec writeup very poor

I know it's difficult to publish information about a vulnerability without providing a means of using it, but the Symantec write-up is pants! I mean, what does "Rather, the backdoor code was injected into the SSH process" actually mean?

Was it added to the binary before it was run, was it added to one of the run-time libraries, was one of the in-core runtime libraries hacked, or was the running instance of the process altered?

It also does not state whether this is a ssh server attack or an attack via the ssh client.

I can think of several ways of compromising the client side of things (each ssh session has its own instance of the ssh client process), and these can be attacked using well-known PATH and LD_LIBRARY_PATH attacks without needing privileged access to the client system; or the on-disk binary or the libraries can be attacked and altered if you have access to a privileged account.

Once into the client process, you will have access to all of the private key information on the current system (although you may already have access to that anyway), but I can see how you could catch and re-use key and password information as it passes through the compromised client process. You would also be able to subvert any and all stream traffic, including fixed passwords, SSH passphrases, sudo passwords etc. for any session that is run through the SSH client (using the client as a keylogger). About the only thing that you would not be able to do would be to compromise one-shot authentication devices.

Injecting arbitrary commands would be a minor trick, although hiding them is more difficult.

And if the SSH key management is lax (same key used for multiple servers and user identities, especially if some of them are privileged), then you have a recipe for system compromise on a massive scale.

But don't blame this on the Linux security model. Any system with some form of trusted remote execution could be compromised in a similar way.

7
0

How to relieve Microsoft's Surface RT piles problem

Peter Gathercole
Silver badge

Re: Shills @Bill

You need not be an MS shill, just part of a system where one supplier can control a market, compelling ordinary people like yourself to defend the indefensible. Microsoft want you to not have an alternative.

There is no reason why Linux cannot become as good a gaming platform as Windows, or a better one. It's only market penetration that makes gaming companies develop on Windows. It's possible that the Steam effort or Crossover may just change things.

3
0
Peter Gathercole
Silver badge

Re: Shills @Ken

But that's the point. Nobody can become famous posting as an AC. They just merge into the crowd.

I'm not saying that the Reg should remove the ability to post AC, hell, I use it myself when commenting on something that may upset someone in my acquaintance. It's just that I'm so pissed off trying to work out who is who when they are making such cowardly accusations.

1
0
Peter Gathercole
Silver badge

Re: Shills

Why is so much of the muck-slinging, accusing everybody of being shills, being done by ACs?

Really, folks. If you want your comments to be taken seriously, at least post them with an identifiable handle, even if it is not your real name!

In case you forget, it's not possible to differentiate one AC from another except by content.

9
1

Who's hogging Amazon's cloud CPUs? I'll kill 'em ... oh, look, it was me

Peter Gathercole
Silver badge

In its true meaning...

...I'm sure that if you really do, you don't need a tool to grok the information!

0
0

The micro YOU used in school: The story of the Research Machines 380Z

Peter Gathercole
Silver badge

Never impressed by the 380Z

I always regarded the 380Z as a bit of a lemon, mainly because I did not see one until 1982, after I had my own BBC Model B. I guess that if I had seen it earlier, I might have had a different opinion, although I'm not sure, having first used UNIX in 1978.

It always struck me as slow (especially with the high resolution graphics board), but I did appreciate that it ran CP/M, and thus had a large library of software, provided that you could get it on the slightly unusual disk format (not that there was a standard disk format at the time).

The one I had control of used to be used mainly by one member of staff who wanted to use Wordstar and the QUME Sprint 5 daisy-wheel printer. There was one postgrad who had a strange project to try to connect it up to the Newcastle Connection (aka UNIX United!) as a client machine over RS232 - there being no Cambridge Ring hardware for the 380Z (daft really, as the filesystem API was too different between CP/M and UNIX). He never completed the project, because it turned out that he was a draft dodger from his home in Greece, and he went home to see his family, and was promptly arrested as he stepped off the plane! It did mean that I got to see the UNIX United! source code, as I had to add it to 'my' V7 UNIX PDP11.

2
0

NO! Radio broadcasters snub 'end of FM' DAB radio changeover

Peter Gathercole
Silver badge

Re: Technoluddites

I used to be all for DAB when it really was new. Over the years, I've bought two mains powered DAB radios, a car DAB radio which re-transmitted on FM, now no longer made, a pocket DAB radio, and an add-on for an iPod.

Slowly, all of the interesting stations I used to listen to have dropped off DAB, or gone low-bit rate/mono (really - Planet Rock in MONO!).

And to cap it all, there are vast swaths of no DAB reception where I live.

I still keep the DAB radio in the car, but only for Radio 4 Extra. None of the others even get turned on any more. Instead, I normally listen to Radio 4 on FM or occasionally Radio 2 for some of the ex-Radio 1 DJs, and sometimes ClassicFM or Radio 3 when I'm in a classical mood. Other than that, it's music and podcasts stored on my phone.

It's a technology that has failed, and should either be turned off or re-launched in a form common with the rest of the world.

4
0
Peter Gathercole
Silver badge

Re: DAB is pointless @Ben

A two word retort to your Internet access in cars comment - usage caps.

2
0

Your kids' chances of becoming programmers? ZERO

Peter Gathercole
Silver badge

Re: 6502/6809's rool btw... @ Jamie

The problem with many of the complex instructions on the Z80 was that they took so many T-states to execute. This meant that on paper, a 4MHz Z80 looked like it should outperform a 2MHz 6502, but as the average Z80 instruction took 3.5 T-states, a 6502 clocked at half the speed, with an average of 1.5 T-states per instruction, could run more instructions in the same time.

This meant that with careful programming, it was often possible to get functionally identical code running faster on the 6502 than on a Z80. It was horses for courses, of course, but many of the sorts of things that these processors would be running would be integer, simple data handling or block memory problems that did not need the more powerful instruction set of the Z80 anyway. I've commented on this with a worked example before here.
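Taking the averages quoted above at face value (they are ballpark figures, and real code varies a lot by workload), the instructions-per-second sums come out like this:

    # Crude throughput comparison using the rough averages quoted above.
    z80_hz, z80_avg_tstates = 4_000_000, 3.5
    m6502_hz, m6502_avg_cycles = 2_000_000, 1.5

    print(f"4 MHz Z80:  {z80_hz / z80_avg_tstates / 1e6:.2f} million instructions/s")
    print(f"2 MHz 6502: {m6502_hz / m6502_avg_cycles / 1e6:.2f} million instructions/s")

So on those figures the half-speed 6502 comes out ahead on raw instruction count, even before you argue about what each instruction achieves.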

But this comes back to the crux of the article. In order to get the best out of the machines back then, it was necessary to know the instruction set very well. And this is what is missing in today's programmers.

1
0
Peter Gathercole
Silver badge

Re: 6502/6809's rool btw... @Steve

What made Page 0 really special on the 6502 was the ability to treat any pair of bytes as a vector, and jump using a 2-byte instruction (one byte for the op code, the other for the address in page 0) to anywhere in the system's address space very quickly. Because this was used extensively in the BBC Micro OS for almost all OS calls (see the Advanced BBC Micro User Guide), it meant that you could intercept the OS call and do something else instead (it was called re-vectoring).

I used this many times. For example, in Econet 1.2, all file I/O (but not loading programs) across the network was done a byte at a time (very slow, and crippled the network, which only ran at around 200Kb/s anyway). I wrote a piece of intercept code which would re-vector OSREAD and OSWRITE so that they would buffer the file a page (256 bytes) at a time (IIRC I hijacked the cassette file system and serial buffers to hold the code and the buffered page), which sped things up hugely. Could only do one file at a time, but would handle random access files correctly.

When used with the Acorn ISO Pascal ROMs, it sped up compiling a program from disk from a couple of minutes to seconds, and meant that it was possible for a whole class to be working in our 16 seat BBC Micro lab at the same time.

Talking about ISO Pascal, which came on 2 ROMs, I also re-vectored the switch-ROM vector (can't remember its name) so that I could load the editor and runtime ROM into sideways RAM, edit the Pascal program, issue a compile command (which would switch to the compiler ROM), and have it overwrite the editor/runtime ROM image with the compiler ROM image, compile the code, and then switch back after the compile was finished. Great fun! Infringing on copyright, of course, but it meant that I could work in Pascal on my BEEB, which did not have the ROMs installed!

2
0
Peter Gathercole
Silver badge

Re: peek? @Simon

OK, I accept that the Atom (and probably System 1 and System 2) had them first.

BTW. My BBC micro was mine. I paid for it, not my parents. I ordered it on the day that they opened the orders process to the public, and it's got an issue 3 (an early) board, has a serial number in the 7000's, came with OS 0.9 in EPROM, and last time I powered it on 18 months ago, still worked.

I had an advantage that I knew C, PL/1 and APL before I got my BEEB.

1
1
Peter Gathercole
Silver badge

Re: I'm kinda conflicted...

6502 was an elegant and orthogonal machine code, spoiled by the gaps in the instruction set for instructions that didn't work in the original MOS Technology silicon.

By the time the 6510 came along (as well as some of the later 6502B and C chips) many of these missing instructions would work, but nobody used them because of backward compatibility.

6809 was probably a more capable and complete machine code and architecture (it benefited from being a later chip), but I still have a fondness for 6502 (and PDP11).

2
0
Peter Gathercole
Silver badge

Re: peek?

Oh no. Peek and Poke.

I prefer ? and !

<smug>Guess what machine I had</smug>

4
1

Google Nexus 5: So easy to fix, it's practically a DIY kit - except for ONE thing

Peter Gathercole
Silver badge

Re: Really? @dogged

If the screen is damaged, then there is a good chance that the front glass/touchscreen will also be damaged anyway.

And the rest is really just a plastic moulding, so won't add significantly to the cost.

When I've replaced the screen on a couple of phones, I've always decided to replace the glass as a matter of course. If you're going to take the effort to dismantle a phone, replacing the glass seems like a minor extra expense.

3
0

Want a unified data centre? Don't forget to defrag the admins

Peter Gathercole
Silver badge

Re: VMware Snapshots? For real?

I do not know VMware Snapshots, but I'm assuming that they work like other snapshot systems.

Blockwise filesystem snapshots can have a place in regular backups, but only really if how far back you ever need to go is covered by the number of snapshots you keep. And this is determined by the amount of change in your systems and the amount of storage (usually disk) that you are prepared to set aside for the snapshots. In addition, they are probably useless for disaster recovery, unless you are maintaining cross-site snapshots (I don't actually know if you can do this, but I would guess that if you had cross-site mirroring, it would also be possible to keep snapshots on your other site).

If your backup requirements are longer term, or require recovery of individual files, then an agent based backup scheme is about the only way you can satisfy the requirements, IMO. This is especially true if you have a heterogeneous environment.

Of course if you are backing up the C: drive of all of your identical virtualised Windows boxes, then there are probably huge benefits in just backing up one copy of a de-duplicated, shared image at the de-dup'd level, rather than agent based backups of each system. But that is a particular system deployment method that does not match all requirements.

0
2

Google barge erection hypegasm latest - What's in the box?

Peter Gathercole
Silver badge

Re: Showroom?

It might be to weaken the case that Google is selling in the UK. The 'sales' staff roll up in a barge moored in the Thames, host all their junkets, sell all their advertising, and then sail away.

HMRC and the Parliamentary Select Committee will not be able to express incredulity at Google reporting so little UK business.

3
0

Windows Azure Compute cloud goes TITSUP PLANET-WIDE

Peter Gathercole
Silver badge

Re: Cloud analogy @ribosome

That's as much a pun as an analogy!

0
0
Peter Gathercole
Silver badge

Re: "calling into question how effectively Redmond has partitioned its service"

I'll upvote you for once. It's stupid that by default Linux distros only create a single filesystem. But you do get asked whether you want to create other partitions during a normal install (and in a more guided way than Windows 7 does), and most experienced Linux admins do it as a matter of course (me - I come from a UNIX background and expect to have at least /, /usr, /var, /tmp, and /home as separate filesystems, with other filesystems set up according to the use of the system).

The problem here is that the MSDOS partition table format, which was the default up until Windows XP (SP1?), only allows 4 primary partitions, plus extended partitions inside one of the primary partitions, which many boot loaders would not let you boot from (I know GRUB does - I'm talking historically).

This meant that when you wrote a distro installer intended to co-exist with other OSs, unless you were prepared to probe the partition table type, you took the option of only using one of your primary partitions, to be as unintrusive as possible.

Unfortunately, although the world has moved on, bad habits die hard, and most installers take the same decisions as they have always done.

I must learn more about the more recent partition table formats to bring myself up-to-date. Although I've installed Windows 7 from scratch twice, I've never created a dual-boot system with Linux (I've done a dual boot XP and Win7 system). All my systems tend to not have any Windows on at all!

2
0

Win XP? Your PLAGUE risk is SIX times that of Win 8 - NOW

Peter Gathercole
Silver badge

Re: That graph suggests @AC 12:38

I suspect that this is the same AC who always says this, but when challenged provides references to statistics on Web defacements.

There are vulnerabilities in Linux. Many are discovered and posted as a result of code examination (when people started looking for memcpy calls on unbounded buffers a few years back, there was a huge jump in the number of vulnerabilities reported against Linux, even though many of them were unlikely to be exploitable). We just don't know how many of these are present in Windows.

But as a basic desktop box, the protection that UAC provides on Windows Vista+ has pretty much always been there on Linux since it became popular. And as a result it is axiomatic that Linux is more secure for day-to-day use. And out-of-the-box, Linux is much safer to connect to the Internet because fewer services are turned on by default. This is something Microsoft have taken on board in recent Windows releases.

Of course, there are still exploits that take advantage of the wetware, but they will be present on any OS unless it is so locked down that the users cannot do anything.

8
0

IBM to shutter SmartCloud, move customers to SoftLayer

Peter Gathercole
Silver badge

Huzzah!

I could not have put it better myself!

0
0

A steam punk VDU ?

Peter Gathercole
Silver badge

Re: Projection displays @John Smith 19

Troff (Typesetter roff), not nroff. Nroff used a fixed character set described in a tmac file, and did not have the ability to scale characters to different character sets.

One of the interesting things is that most people who used nroff assumed that it could only handle fixed-width font devices, because that is all they saw it driving (typically dot-matrix printers). It actually did allow partial character spacing, and I wrote a tmac file to use nroff with an HPLJ-compatible OKI laser printer with the advanced character set option, which allowed nroff to produce right-margin-justified, proportionally spaced text using micro-spacing.

It could not handle pic or grap output, although I got tbl to produce nice solid-box outlines for tables. I believe that it could also do some basic eqn as well.

0
0
Peter Gathercole
Silver badge

Re: No, please not 7bit AScii @Steve Davies 3

OK, I was using 7-bit ASCII as it allows upper and lower case characters (one of the requirements). 6-bit ICL code only contains upper case characters, although I understand (I only briefly used an ICL 1904 machine in the late '70s, and never got to grips with the available character set) that one of the characters was used as a shift, to provide lower-case characters.

I admit that using an American standard was a bit low, but I could not think of a suitable non-US one. In any case, it would have to have been invented, because ASCII did not exist before 1960. If you wanted it to be authentic Steampunk, you would probably have to use the Cooke and Wheatstone telegraph system!

0
0
Peter Gathercole
Silver badge

Re: Forgot editing

I think basic electricity use was discovered in the same general timeframe as steam. Michael Faraday was credited with inventing the electric motor in 1821.

Nixie tubes are much later. Wikipedia suggests 1955.

So I contend that basic electricity (not electronics, mind) is totally consistent with Steampunk.

0
0
Peter Gathercole
Silver badge

Re: Just to tighten up the parameters a little...

The etch-a-sketch would be like a mechanical version of a Tektronix Storage tube terminal (Tek 4010 or 4014).

2
0
Peter Gathercole
Silver badge

Re: Here's one someone made earlier, out of Lego

Wow!

That's awesome!

0
0
Peter Gathercole
Silver badge

Forgot editing

All you need is some way of moving the Strowger selector to a particular position, and then rotating the split-flap character to the new character.

I'm sure that there is a 'return to space' operation that can be applied to all character positions at the same time to clear the display.

1
0
Peter Gathercole
Silver badge

I thought this would be difficult, until I realised that you could use a pulse-coded dial (like an old-fashioned telephone dial), linked up to something similar to a Strowger exchange and a Solari split-flap display.

I know I'm cheating a bit using pulse-encoders and electric motors, but I'm sure that you could use ratchets, rotating shafts and slip-clutches everywhere that the modern displays use electric motors.

If we take 7-bit ASCII as the character set, that would mean 96 different displayable characters, which include all upper and lower case English letters and the digits, plus sufficient punctuation. This could be encoded using a 32-place dial like a rotary telephone dial, together with two shift keys selecting different ratchets to give upper case, lower case and numbers, along with the punctuation. These work well with Strowger-type gear, and all you would need to do would be to pulse each successive position in the split-flap.
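Just to show the three-banks-of-32 idea adds up, here is a sketch in Python (the bank layout is my own, simply following the ASCII columns; it is not any historical encoding):

    BANKS = ("figures/punctuation", "upper case", "lower case")   # one per shift position

    def dial_code(ch):
        """Map one of the 96 characters from 0x20 upwards to (shift position, dial place 1-32)."""
        code = ord(ch) - 0x20
        if not 0 <= code < 96:
            raise ValueError("not one of the 96 characters")
        bank, place = divmod(code, 32)
        return BANKS[bank], place + 1

    for ch in ("A", "a", "9", "!"):
        print(repr(ch), "->", dial_code(ch))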

0
0

Naughty Flash Player BURIED ALIVE in OS X Mavericks Safari sandbox

Peter Gathercole
Silver badge

@Def

The difference with the sandbox approach is that it denies access to resources by checking what applications are doing at the API boundary of the sandbox, rather than allowing the underlying OS to control access.

Any suitably designed OS should already have controls to contain rogue actions (like the permissions system on the filesystem and IPC resources, and Role Based Access Control), and many do. But things like Windows up to XP, whilst they had the underlying technology, were so compromised by the way the systems were implemented (users running as an Administrator by default, and too many critical directories being writable by non-administrator accounts) that it became necessary to add an extra 'sandbox' to protect the OS!

Unfortunately, the way that OSX deploys applications is fundamentally flawed (they've added an application deployment framework into user-space so that you don't need to be root to install an application, or at least it was this way the last time I looked at OSX), and this opens it up to applications being altered by other applications without requiring additional privilege. The OS remains protected, but the applications are vulnerable. This is the reason for implementing a sandbox.

Anyway, sandboxes are not new. On UNIX systems since seemingly forever (certainly since Version 7 in 1978), you've had chrooted environments that you can use to fence particular processes into controlled sub-sets of the system.
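A minimal sketch of the chroot idea (the jail path is hypothetical, it has to be populated with a shell and its libraries beforehand, and you need root to call chroot at all):

    import os

    # Everything after this point sees /srv/jail as the root of the filesystem.
    os.chroot("/srv/jail")
    os.chdir("/")                  # make sure the old working directory is not reachable
    os.execv("/bin/sh", ["sh"])    # the confined process: /bin/sh from inside the jail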

1
0

Call me maybe: Orange loses a segment as competition bites

Peter Gathercole
Silver badge

Re: It says EE on my phone

I think it is that when Orange and T-Mobile merged, they spun off the business of operating the infrastructure to a separate legal entity that is EE. Orange and T-Mobile manage the customers, and 'rent' access to the infrastructure from EE.

It is not clear to me whether Orange/T-Mobile own EE, whether EE is now also a holding company with Orange and T-Mobile as subsidiaries, or whether they are completely separate companies.

I'm sure that it makes sense to somebody, but I'll bet there's some international shenanigans about where the profit is declared!

1
0

PC addict RM finally quits its building habit, plans to axe 300 jobs

Peter Gathercole
Silver badge

Re: Won't be sad

At the time, the 480Z with the network and file/print server option appeared good (though expensive), because it ran CP/NOS (a network-capable CP/M-compatible OS - the industry standard when the 480Z came out), and allowed files to be stored centrally, so students did not need personal media or to work on a specific machine all the time.

Unfortunately, CP/M completely dropped out of favour when the IBM PC was launched.

I actually preferred the BBC Micro with Econet and an Econet Level 2 hard-disk server. Back in about 1983, the Poly I worked at built 2 similar computing labs, one by the Computer Unit, and one by the academic Computing School. There were similar budgets, and both were installing 16 seats, networked with a file server and printer.

The 480Z lab (Computer Unit) had 16 computers, with screens, a fileserver and printer, and a basic productivity package. And that was pretty much it.

The BBC Micro lab (mine) had 16 computers with screens, a fileserver and printer. It had the basic productivity packages too, but it also had light pens for all the computers, and a selection of other hardware including CAD software and hardware (BitStik and 2 different digitizers), teletext and speech synthesis hardware, speech recognition hardware, 2 types of digital camera, robot arms, touch screens, and a pen-plotter. And on the software side, it had a full ISO Pascal compiler for all of the systems, together with a selection of other languages including Forth and Lisp.

My BBC lab was built to teach people who did not know what a computer was the vast range of things computers were capable of, in an affordable way. It could also be used for the computing students, to teach programming and networking (sticking an oscilloscope onto the Econet was a great way of demonstrating what a network was), and it was great fun building it.

0
0

Nice job, technology. Now we have to work FIVE TIMES HARDER

Peter Gathercole
Silver badge

Re: The promise of automation @Semaj

Good luck with your promotion prospects! It seems at the moment that only those who are prepared to go the extra yard are even considered for advancements.

Whilst I agree that it should not be this way, I am increasingly upset by the divisive nature of the appraisal system that most companies use now to measure performance. It now appears to be used as a tool to get high-skilled, expensive people out of the door because of arbitrary 'poor performance', rather than a mechanism to reward people good at their jobs.

3
0
Peter Gathercole
Silver badge

Really?

"like Microsoft Office ...... allowing people to work at much faster speeds"

I find Microsoft Office has always slowed me down compared to other, pre-WYSIWYG tools! The use of inappropriate tools (like Excel rather than a database for storing and parsing data, or a word processor rather than a proper document preparation system for technical reports) is IMHO one of the biggest productivity blocks around!

7
0