* Posts by Peter Gathercole

2100 posts • joined 15 Jun 2007

Rackspace in Crawley: This is a local data centre for local people

Peter Gathercole
Silver badge

Encryption @Lusty

If you were just using cloud storage, such that the data was being encrypted as it left your site, and decrypted as it entered your site, this may work.

Unfortunately, if you actually processed any data in a cloud service, it would need to be able to decrypt and encrypt the data as it was used, requiring the encryption keys to be on the cloud servers themselves, and thus just as vulnerable to being snaffled as the data itself!

So, unfortunately, encryption is not the answer to all the issues.
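To make the storage-only case concrete, here is a minimal sketch in Python, assuming the third-party "cryptography" package (the data and key handling are invented purely for illustration):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # generated and held on your own site only
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"sensitive records")  # encrypt before upload
    # ... ship only the ciphertext to the cloud store ...
    plaintext = cipher.decrypt(ciphertext)             # decrypt after download

The moment the provider has to process the data rather than just store it, the key (or the decrypted data) has to exist on their servers, which is exactly the weakness described above.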

0
0

Windows 10 Device Guard: Microsoft's effort to keep malware off PCs

Peter Gathercole
Silver badge

Re: IOMMU? @Bronek

My recent career focus, AIX on IBM Power servers, has been providing virtualised I/O for close on a decade, with the hypervisor doing all of the basic device manipulation and the communication from the hosted OS being handled by virtual devices (the main features were implemented in Power 5 systems running AIX 5.3, although basic LPARs and mapped/guarded device control were in earlier hardware and versions of AIX), so I do understand how a hypervisor can sanitise device access.

I also understand service Virtual Machines, and quite a lot about how I/O MMUs and the associated CPU MMU features work, including how nested page tables and hardware protection rings are implemented. There may be some novel aspects of controlling access to particular adapters/busses at a hardware level that are unique to Intel hardware, and although that appears to be the main function of Device Guard, that was not how the article was presented.

I was working on Virtual Machines using a hardware hypervisor on Amdahl mainframes (running UNIX) with device and memory page level hardware protection back in the late 1980s, so very little of this is new to me.

It is not me that is confused, except possibly about the way that the article was written.

1
0
Peter Gathercole
Silver badge

Re: Kernel has control

In machines running type 1 hypervisors (I'm going to use HV because I'm tired of typing "hypervisor"), the kernel very rarely "gets the rest". Once you start slicing and dicing with a HV, you can have as many OS images as the HV and the hardware MMU supports, and each OS only sees the bits it's given access to by the HV.

This is the very nature of Virtual Machines. In some implementations, the OS does not even have to know it's running in a VM, as it's given what it thinks is real-mode access to its own virtual address space, so it does not even know that other VMs and OS images exist on the same hardware, let alone be able to see or tamper with their memory.

3
0
Peter Gathercole
Silver badge

IOMMU?

I'm sure that there are aspects of this that I haven't appreciated, but from the Minix paper on IOMMU, I really cannot see how this specific feature provides the protection.

IOMMU is not a new concept. It's there to allow bus-attached devices controlled access to the real memory address space of the machine for DMA-type transfers. The first feature I came across that implemented this was the Unibus I/O address mapping system (the Unibus map) in 16-bit PDP11 computers with the 18- and 22-bit addressing extensions, back in the 1970s. The basic concept is to allow an I/O adapter controlled access to part of the main system memory in a way that does not allow access to anything outside of that control.

In that implementation, the OS set up the Unibus map for the I/O (most Unibus devices were only 16-bit capable, so they needed a translation mechanism to be able to write outside of the first 64KB of memory), and the DMA then occurred (it was more simplistic then, because there were no overlapped I/O operations, so deferred I/O operations requiring the state of the Unibus map to be saved through context switches were not an issue). The protection offered was actually a side effect of the mechanism. This gave protection from rogue Unibus DMA transfers, but left control in the hands of the OS.

This is what is described in the IOMMU Minix paper, nothing else.

In order to implement something like this to provide protection from the OS itself, it is necessary to have the checking code in a higher protection ring than the OS. This is normally reserved for type 1 hypervisors, and the capabilities for this have existed for many years. It would have been perfectly possible to add this type of function to the hypervisor, or to a service VM running parallel to the OS, so that the OS makes a hypervisor call to check the validity of, well, pretty much anything at all, including checking the cryptographic signature of new code. In this way, running Device Guard as a service VM controlled by the hypervisor rather than the OS means that it cannot be tampered with by anything in the OS. This is what I think Device Guard actually is, supported by the statement "with its own minimal instance of Windows". Make the hypervisor and Device Guard also verified by UEFI secure boot, and it's pretty difficult to tamper with the system as a whole.

Of course, VM segregation requires an MMU and an appropriate security protection ring, and it is possible that this is why there is some confusion about which part of the MMU is providing the protection, but IMHO, it's not the IO function of the MMU described by the Minix paper, more the general features of a VM capable Memory Management Unit. It's probably the Extended Page Tables feature that is actually required for Intel processors.

This is the type of thing that IBM has been doing in its mainframe operating systems running under VM (the mainframe hypervisor product) or PR/SM for many years. As I understand it, the RACF security system runs in a separate VM to provide additional security.

6
0

The data centre design that lets you cool down – and save electrons

Peter Gathercole
Silver badge

Re: Dealing with the waste heat

When I was at University in the late 1970s, the heat generated by the S/360 and S/370 was fed into the heating system for Claremont Tower in Newcastle.

Nothing is really new any more.

0
0
Peter Gathercole
Silver badge

Re: Sooo out of date!

I don't understand the issues with water cooling and humidity.

The water is totally contained in sealed pipes, so there is no chance of it entering the data centre atmosphere.

In the case of the PERCS systems, there are actually two water systems: one internal to the frames, which is a sealed system with the requisite corrosion inhibitors and gas quenching agents, and the other a customer water supply, with heat-exchangers between them.

The only time water can get into the air is if there is a leak. Where I work did have a leak at one time, caused by cavitation erosion to the case of one of the pumps, but that is one minor leak in the six years I've worked here.

1
0
Peter Gathercole
Silver badge

Re: Heat pipes @AC

If you were referring to 'fabric' chips in my earlier comment, they are a little bit like what you might describe as "northbridge" or "southbridge" chips in older Intel servers (although only in concept, not in the detail). They provide the copper and optical interconnect to glue the components together into a cluster (both external network, and internal processor-to-processor traffic), and also the PCIe and other peripheral connections.

I could have called them Host Fabric Interconnect (HFI) or maybe Torrent chips, but that would probably have been even less meaningful.

Heat pipes are not ideal. Because of the way they are constructed, they are very sensitive to leaks, which, because of the critical partial pressure within the pipe, render them useless almost immediately once one happens. I also think that the distance they can move heat is limited.

I've seen far too many laptops that rely on heat pipes overheat whenever they've been on for any length of time because the heat pipes no longer function properly.

Oh. By the way. Proper mainframes don't run Windows!

3
0
Peter Gathercole
Silver badge

Sooo out of date!

Put some water provision in the data centre. Water is a much better medium than air for extracting heat, and it is much more efficient to scavenge heat from water for things like the hot water in the handbasins in the restrooms than it is from air (although it does depend on the exit temperature of the water).

Use water-cooled back doors. They take significant amounts of the heat away before it even enters the airspace. Even better, put them on both the front and back, so the air enters the rack cooler than the ambient temperature, and has any heat that is added taken out as it leaves the rack.

I know I've said this before, but look at the IBM PERCS implementation. Water cooling to the CPUs, DIMMs, 'fabric' chips, and also in the power supplies. There is still some air cooling of the other components, but from experience, I can say that these systems actually return air back to the general space cooler than it went in!

There are some really innovative things happening, much more than just the decades old hot-cold aisles, hanging curtains and under-floor air ducts.

1
0

DRONE ALONE: US Navy secretary gives up on manned fighters

Peter Gathercole
Silver badge

And thus were The ABC Warriors born...

I can't actually remember any quotes. Must dig out my original collected editions.

1
0

Rand Paul puts Hillary Clinton's hard drive on sale

Peter Gathercole
Silver badge

Re: Never came across SASI.

80MB of disk! Luxury.

The first UNIX system I was sysadmin for had 2 x 32MB SMD disks and 1MB of memory (although the disks were short-stroked, and we eventually persuaded the engineers to remove the limit, doubling the available disk space).

The first UNIX system I used was a PDP11/34 with 2 RK05s (2.5MB removable disks) and a Plessey-badged fixed disk of about 10MB. When I first logged on in 1978 it had 128KB of memory, although that was later max'd out to 256KB. It ran UNIX Edition/Version 6 originally, although V7 (with the Calgary mods to allow it to work) was installed later, and it supported 6 Newbury Data Systems glass teletypes (not screen addressable, so no screen editors) and 2 Decwriter II hardcopy terminals. And it supported a community of about 60 computing students, and was permanently short on disk space!

1
0
Peter Gathercole
Silver badge

Before SCSI, I was using ESDI and (E)SMD disks. Never came across SASI.

0
0
Peter Gathercole
Silver badge

Re: Email servers - @Peter Gathercole

Yes, you're right. I was looking through rose-tinted glasses. Life was much simpler then (as long as you didn't have to configure sendmail rules by hand), and I really miss those days.

Most users at that time would probably be using their modem-attached microcomputers as terminals to either their work place or a bulletin board.

User data on the multi-user systems was also normally backed up (users tend to get a bit irate if a system failure wipes out their files, including their mail), so control of their data was never totally in their hands. Even if they deleted the mails, they might still exist on backup tapes, and most users had absolutely no idea how long the backup regime would keep copies of their files.

At one point I was a system owner as defined by the original UK Data Protection Act. I was petrified of a request to amend all copies of some incorrect data, because I had no idea how to edit the backup tapes that I kept for significant amounts of time. I was told that there was provision for this in the Act, but nobody told me what it was!

1
0
Peter Gathercole
Silver badge

Re: Email servers

It depends whether you mean an MTA, MDA or MUA.

Really traditionally (in the days of UUCP mail), the MDA and the MUA were often the same system, quite frequently a multi-user UNIX system, and the mails often remained on the system in people's own mail folders. It was only the MTA that kept just a transient copy of the mail, and in the very early days, a single server was often MTA, MDA and MUA all rolled together.

The first time I really encountered what would be regarded as a pure MTA was a system called IHLPA at AT&T Indian Hill, Chicago, which seemed to act as a UUCP mail router for pretty much the whole world. If you remember routing UUCP mail, you can't have failed to notice ...xxx!ihlpa!xxx... somewhere in the mail route.

But that was a long time ago.

0
0

Radio 4 and Dr K on programming languages: Full of Java Kool-Aid

Peter Gathercole
Silver badge

Re: This is exactly the problem

Thumbs up for Earth Story. It's an excellent example of a cross-discipline scientist (Aubrey Manning, a zoologist who was sufficiently interested to learn about geology, and how changes to the Earth conditioned life) who has very good presentation skills.

I particularly like the description of the Long Term Carbon Cycle in one of the later episodes, which comes to the conclusion that on geological time scales, our knowledge of climate is pretty much informed guesswork.

I really wish there were more TV series like this.

5
0

Welcome to the FUTURE: Maine cops pay Bitcoin ransom to end office hostage drama

Peter Gathercole
Silver badge

Re: Wouldn't fly in my office @Crazy

Um. How would this have helped in this case?

Presumably, all the users must have access to the file servers in order to copy the files there. And I'm guessing that these shares are mapped all the time.

So the malware follows every path it has access to, and encrypts all of the files it finds. This includes the files on the hot file server.

How is this the fault of any individual (apart from the person clicking the link)?

Having on-line copies on permanently mounted shares is no protection from this type of malware unless one of the following is true:

1. The copy is made by a high-privilege task that puts the copies in an area of the file servers that general users who may run the malware cannot write to.

2. The copy is made to WORM (write once, read many) devices, which do not allow files to be overwritten or deleted, just new versions created.

Even having the backups done by a high-privilege task is not perfect unless multiple versions are kept in some form, as it may be overwriting good data with bad. You've still not prevented the problem: you said there is an (singular) offline replica, and that the server is continuously wiped and rebuilt from the backups, which implies that if the problem goes undetected, one backup-and-restore cycle later, you're still screwed.

It strikes me that there is a general failure of file sharing in many organisations. There ought to be a much finer-grained permissions system, where a user only has permission to write to the parts of the file store that they need for their job. This would not completely solve the problem, but it would prevent wholesale encryption of the data.

Couple this with a proper off-line backup system (where the malware cannot overwrite the media, because it's not writeable by ordinary processes, either by permission or because the media is physically unavailable), which keeps copies of various ages (daily copies kept for a week, 1 copy per week kept for 6 weeks, 1 copy per month kept for an extended period, for example). Or use a managed backup solution with offline media that keeps multiple versions (TSM, Arcserve, Amanda etc.).
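To make that retention rule concrete, here is a minimal sketch in Python; the exact windows (7 dailies, Sunday weeklies kept for 6 weeks, first-of-month copies kept for a year) are just assumptions matching the example above:

    from datetime import timedelta

    def backups_to_keep(today, backup_dates):
        # Return the subset of backup dates the rotation would retain.
        keep = set()
        for d in backup_dates:
            age = today - d
            if age <= timedelta(days=7):                          # daily, kept for a week
                keep.add(d)
            elif d.weekday() == 6 and age <= timedelta(weeks=6):  # weekly, Sundays
                keep.add(d)
            elif d.day == 1 and age <= timedelta(days=365):       # monthly, extended period
                keep.add(d)
        return keep

Everything outside those windows gets recycled, so an encryption event would have to go unnoticed for months before the last clean copy expired.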

In the medium and large systems environment, this is a well established process. I'm sure I'm preaching to the converted here, but the lesson just does not seem to sink in with some SAs.

I know that the amount of data being kept is now quite huge, even for relatively small organisations, but it seems to me that some of the current IT world have totally ignored the best practices of previous generations.

This may be, of course, because the management and bean counters are allowed to squash the required good practice because of cost, and override any suggestions from their experienced technical administrators (or engineer them out of the company), in which case they (the management) should be held entirely responsible.

Oh. And seriously control the ability of users to run any code, trusted or untrusted, directly from web pages or emails. At least make it a two-stage process where they have to download it first, and then explicitly execute it. It's not much protection, but it will prevent casual click attacks, and as it's an explicit action, it makes it easier to discipline the culprit. This should extend to scripts in any language.

0
0

Microsoft, Getty settle image snatch 'suit

Peter Gathercole
Silver badge

Re: Eddor meets Ploor

The Eddorians were the top of the bad pile. They were introduced in the earlier books because the whole premise of the story arc was that the struggle between the Arisians and the Eddorians was of necessity fought by proxy through the subordinate organisations each of them created, because neither the Arisians nor the Eddorians could defeat the other directly.

The struggle was basically between the side that would hold on tightly to the reins of power, and the one which would hand on control to those who were more capable, whose creation was necessary to completely defeat the other.

It is necessary to think in terms of the entire story line, from Triplanetary through Children of the Lens (forget The Vortex Blaster; that really wasn't part of the story line, and was a great disappointment when reading the last books in sequence for the first time, because at the end of "Children..." you thought that you still had one more super-epic story to go).

The point that Smith was trying to make was that The Evil could not see its own limitations, whereas The Good embraced their own limits, even when it would lead to their own demise. The fact that each layer on the bad side thought that it was the top suited the episodic nature of the books, and allowed the story to get progressively more epic with each book. I still feel that it would be possible to produce films based on the books that would suit the effects-led film industry that we have.

In my view, the sequence of films should start with the story in Galactic Patrol, with possibly more than one film per book, and the stories in Triplanetary and First Lensman interwoven as 'prequel' films.

In my formative years, the concept of the stories seemed so simple. It's a shame "the real world" is not like this.

2
0

Tech troll's podcasting patent blown out of the water by EFF torpedo

Peter Gathercole
Silver badge

TLDR - properly.

I've just skimmed the patent, and it really is a load of guff.

The text talks about everything from the storing and indexing of the files on the source server, through transmitting them to a media device, and down to the level of describing "prev", "next" and menu keys on the device.

It's so difficult to work out what is novel in the patent, and what was not prior art, that the person filing it should really have been made to strip it down to the real nature of the innovation. You have to wonder about the thought process of the person who accepted the patent in the first place, but then again, I don't really know the process.

0
0

+5 ROOTKIT OF VENGEANCE defeats forces of gaming good

Peter Gathercole
Silver badge

@jake

My comment was not aimed at you, more at the Perl developers who wrote the "pop" function referred to by the AC, who suggested it as the origin of the meme.

I never had any doubt that you know what a stack is!

0
0
Peter Gathercole
Silver badge

Re: "kernel driver providing a rootkit-like functionality to hide activity"

I'm seriously losing faith in the people that work in computing.

Who in their right mind would take a concept like 'push/pop', which has traditionally been used to work on a stack or a FIFO or other buffer-like construct (maybe), and then apply it to an array?

Looking at the Perl document referenced, it looks like it is used on a one-dimensional array, like an argument vector, but that still appears to me to be a serious misuse of a previously well-defined term!
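For what it's worth, Python's list type does exactly the same thing, so the usage is now thoroughly entrenched. A quick sketch of the conflation:

    stack = []             # just a one-dimensional dynamic array underneath
    stack.append("a")      # the "push"
    stack.append("b")
    top = stack.pop()      # the "pop" -> "b", so LIFO, stack-like...
    front = stack.pop(0)   # ...but pop(0) takes from the front, treating the
                           # very same object as a FIFO. Hence the muddle.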

10
1
Peter Gathercole
Silver badge

Confusing paper

I'm a little confused. I understand that this is a client-side attack on the games, and as such, it's pretty obvious that it is possible to modify the client machine, which is totally in the cheater's control, to do all sorts of things to manipulate the game and prevent the anti-cheat code operating. After all, with this level of access, you could do anything, including (for open systems) running their own kernel. There ain't no way that a user-land anti-cheat system is going to prevent that.

But looking at the paper, at one point they are talking about Direct3D and DLLs, which is mainly Windows terminology, and then they dive off to describe a Linux attack. Maybe they are trying to show that the problem spans OSs, although I did not see a reference to that.

There is another way of preventing this type of attack, although it brings back something that I was hoping was dead.

If the hardware/OS/games are created using the generally hated (at least here) concepts proposed by the Trusted Computing Group (previously known as the TCPA, and before that the Microsoft Palladium project), it would be possible to implement a hardware and software stack that would prevent client-side privileged access to the system unless it was signed by a recognised key. This would at a stroke prevent almost all of this type of client-side attack, but at the same time would wrest almost total control of a machine from its owner, making it a data appliance rather than a PC.

Because the detail in the paper is so scant, it looks to me like a scaremongering piece to bring security back into focus, to try to allow vendors of software to take more control of the PC away from its owners.

Where's the tin-foil hat? I think I need it now.

8
0

Saudis go ape, detain Swedish monkeys at border

Peter Gathercole
Silver badge

Re: Thank gawdess the poor little critters didn't get shipped! @Jake

The Al Saud regime actually want to be more moderate. According to a recent article on the BBC Radio 4 "Today" programme, the Al Saud family have no control over the judiciary in Saudi Arabia, which is controlled by a council of qadis (religious clerics), totally independent from the King. This body implements and maintains Sharia law in the country, and is generally understood to be the main reason why Saudi Arabia has widely publicised harsh sentencing for certain crimes.

This situation has come about because of concessions King Abdulaziz Ibn Saud (the founding father of Saudi Arabia) made with the Islamic clerics of the day in order to maintain control of the tribes he conquered in the early part of the last century to found the country.

The only area where they overlap is in the final appeal process, which goes to the King. But it is generally accepted that the King has limited leeway in overturning any judgements of the courts, because of the fear that the Royal Family could be ousted from their current position by the rest of the government, particularly the judiciary. And there is a looming problem in that they are running out of sons of King Abdulaziz to become King (the title has moved sideways through one generation of the family by prior nomination from the recently deceased King, rather than down through the younger generations like most Royal Houses). When the current King dies, there may well be a dispute about the next King.

If there is a dispute, the situation in the Middle East could get so much worse, because, thanks to the House of Al Saud, Saudi Arabia is one of the few stable western-leaning countries in the region, even if it does have some undesirable aspects.

As in so many things, the situation is not as simple as portrayed by the media, particularly that in the US.

It's a shame that the lessons of a century of marginal British colonial policy in the Middle East have been ignored by western governments since the second world war, as it was clear at the end of the Victorian era that the best thing that could be done was to stop interfering and accept that these people would find their own form of government. If that had been allowed to happen, we would probably have a much more stable and moderate region that wanted to co-exist with western countries, rather than the fragmented, reactionary religious mess that we currently have, which wants to tear down and conquer The West and its allies in any way possible.

7
0

IBM claims new areal density record with 220TB tape tech

Peter Gathercole
Silver badge

Re: HSM

Yes, it's true that HSM has been around for ages, but it's much better integrated now, with the arrival a few years ago of LTFS, which enables the tapes to be standalone (file metadata is stored on the tape with the files themselves) and portable between systems, while still being able to form part of an HSM solution.

It's actually quite a cool innovation, if you can work out how to use it.

Of course, it does not prevent you using recent generation LTO tapes and drives as raw data storage under the control of, say, TSM.

1
0

iOS, OS X apps sent into infinite dizzy DoS by this one weird kernel bug

Peter Gathercole
Silver badge

Re: Documentation has always been iffy for unix system APIs @boltar

TCP includes flow control, which should prevent this on a connection-by-connection basis. It is possible that you could create so many connections and fill them up to exhaust mbuf or other buffer use, but there are normally mechanisms that refuse connections if the target system is running short of resource.

I can't see OOB data being handled any differently.

There are also time-to-live timeouts on the packets, which normally mean that stale packets are discarded once they reach a certain age, to prevent the never-read scenario.
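Flow control is easy to demonstrate from user land. A minimal sketch in Python, with a deliberately tiny receive buffer so the stall happens quickly (the addresses and sizes are arbitrary):

    import socket

    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # shrink the window
    listener.listen(1)

    sender = socket.socket()
    sender.connect(listener.getsockname())
    receiver, _ = listener.accept()   # note: recv() is never called on this socket

    sender.setblocking(False)
    sent = 0
    try:
        while True:
            sent += sender.send(b"x" * 4096)
    except BlockingIOError:
        # The peer's advertised window and the local send buffer are both full;
        # the kernel refuses to queue more, throttling the sender, not the host.
        print("flow control stalled the sender after", sent, "bytes")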

0
0
Peter Gathercole
Silver badge

Re: Documentation has always been iffy for unix system APIs @boltar

It depends how far back you go.

If you look at the UNIX Version/Edition 7 man pages (admittedly much simpler than modern UNIXes), then they documented the behaviour of the system much more completely.

More recently (the last thirty years or so), features have been added to each UNIX implementation (often reimplemented from another UNIX) without the correct emphasis on the documentation. And don't get me started on the appalling state of Linux documentation, especially the complete abortion that is the man page/info split, which suggests that the documentation has been written when, in many cases, it hasn't.

Only yesterday, I realised that both the AIX and Linux man pages for tail include, in neither the prototype nor the description, the tail -200 filename style of operation (it's deemed obsolete in POSIX.2, which prefers tail -n 200 filename, but it still works, and I've been using it without thinking for 30+ years). This obsolete use is described in the Linux info article on tail invocation, but not in the man page (RHEL 6.5).

What books allow are practical examples of how an interface is used, so that (lazy) programmers can crib other people's work without having to find out for themselves.

Not that I think that is a bad idea! I think that a lazy streak (of the right kind) is an essential feature of a good programmer or system admin. It encourages finding out how to do things efficiently, saving time and effort later.

2
0

Operation Redstone: Microsoft preps double Windows update in 2016

Peter Gathercole
Silver badge
Headmaster

Re: Subscription model?

Just to make sure people know my position so they can get their whinges out of the way first: I'm almost completely Microsoft-free on my own systems. But that does not mean I do not have to consider Windows, as my wife and children all have Windows systems that I do not use, but which I am expected to help with (and pay for, in the case of my wife). I'm also playing devil's advocate, because I am completely speculating here, and am hopefully completely wrong.

I still see nothing that conflicts with there being a subscription model license in the pipeline. Take the statement that Windows chief Terry Myerson made in February (lifted from here)

"we will continue to keep it current for the supported lifetime of the device – at no additional charge"

Notice what it says and does not say. It says no additional charge, not no charge. And also note the supported lifetime of the device.

So, you've got a subscription model license for, say, £60 per year. You're going to get updates and upgrades without paying another penny. But you are still paying the £60 per year. His statement is correct, but leaves sufficient wiggle room for a subscription model.

It also does not rule out there being a one-time-purchase option, either.

Let's look at it another way. Say you roll up to PC World next time you want a new PC. On offer are two Windows options for a particular machine. One is a subscription model, initially free for 6 months and then £60 per year; the other is a £300 up-front purchase (equivalent to 5 years of subscription) for a non-transferable license (not transferable to another system, or another owner) for the lifetime of the system, on top of the price of the hardware. Hmm. Interesting choice.

I can see many, many people opting for the subscription model merely so they can get the system home with the minimum outlay. It's the same reasoning behind the £40+ monthly contract to get the latest shiny phone.

I chose 5 years in this example because it would probably be expected that devices will not last more than 5 years and still be usable. If they wanted to seriously skew it in favour of the subscription, it could be made longer than that. Of course, you would then have the question of how much a device can change and still be the same device, although the expected movement to more integrated systems with fewer upgrade options could easily close that off.

I think there are some very carefully worded statements coming from Microsoft. As I say, I hope this doesn't happen.

Mind you, if you take the subscription model machine, dump Windows completely and do not follow up on the subscription, then we (the Linux community) will finally have got rid of the Windows Tax, and maybe MS will have lost the lever that stops Linux being installed by the system builder.

0
0

Can you recover your data if disaster strikes? Sure?

Peter Gathercole
Silver badge

Whilst I agree with you, and don't condone cloud services myself, it is becoming quite clear that the cloud pundits are singing a song that the beancounters of this world want to hear, even while not understanding it.

It is inevitable that the steam-roller of this technology will flatten a large part of the corporate IT world, whether we want it to or not. It is happening quicker than I am comfortable with, because it is affecting my livelihood in such a way that I will need to change what I do, something I don't relish at my age.

But the article does make some quite valid points. If you find yourself working in a cloudy environment, then a lot of the advice in the article makes a lot of sense.

I hope that some of the wisdom gets as far as the people holding the purse strings.

Is there a "Cloud services for Dummies" yet, because we sure as hell need one.

0
0
Peter Gathercole
Silver badge

Ultimately, though...

...whether you know that it won't come up, even if your arse is covered by secured but unheeded warnings from you to the Management, you still have to piece it all together using whatever is available.

Unless, of course, the first thing that will happen after a disaster is your resignation hitting the temporary desk of your manager.

Which is exactly what a number of sysadmins at a major UK financial institution told me, a number of years ago, would happen if their primary data site was destroyed. They knew the plan would work (it had been tested pretty well, though piecemeal), but they did not fancy the long nights, the disruption of relocation, the bickering about what sequence the business workloads needed to come back in, and the almost complete inability to fail back to the primary site if it was resurrected.

Of course, professional system administrators would not do this, would they?

0
0

Samsung's bend blame blast: We DEMAND a Galaxy S6 Edge do-over

Peter Gathercole
Silver badge

Re: Glass is not meant to be bendy

In my view, the 6510 was smaller than the 8210. It was thicker, but was slightly less wide.

I lost mine on a night out in Swansea. I think it fell down the back of a seat in one of the establishments on Wind Street, but I can't remember the evening that well.

0
0

Is this what Windows XP's death throes look like?

Peter Gathercole
Silver badge

Re: Windows 365? @P. Lee

I can see that reinforcing a monoculture may be what MS are after, but I can also see the "hard times" putting pressure on MS to move to a different revenue stream, one that is on a per-user basis, rather than a per-purchase basis.

The software vendors have often looked enviously at the way that IBM managed its mainframe software model, which leads to it being one of the most consistent and regular income items on IBM's books. MS have already done Office 365, and I cannot see why they would not consider offering the same model for all of their software, including Windows. They've already trademarked "Windows 365".

One of the interesting things that they have revealed is that in order to be considered for the Win10 free upgrade, a system has to be able to connect to the Internet, and has to have Windows Update turned on. Whilst this could be to prevent the offer being abused, it could also be trying to make sure that Win10 has a vector for license enforcement. This is me being speculative, but it's sometimes interesting to bat possibilities around.

3
1
Peter Gathercole
Silver badge

@Cristopher Lane

OK, so you have a user who has bought MS Office to install on their home machine, and has tried to install it on a Linux system?

The reason why people don't get it is because of the virtual mono-culture that Microsoft have managed to evolve by preventing vendors shipping machines with other OSs.

Let's make the playing field level, shall we? Make all the system vendors ship and charge for a full retail license for Windows, and take away the lever Microsoft can use to prevent vendors shipping systems with Linux installed.

Salesperson: Hello. I can offer you this machine with Windows at £500, or you can buy the same machine with Linux, for £450.

Customer: Can I browse the Internet, play games and watch media on the Linux machine?

Salesperson: Yes, with some restrictions, although the vendor has paid for all the licences to allow it to do all the normal things, and it's still cheaper.

Customer: And can I write letters and spreadsheets?

Salesperson: Yes, although you will have to use Libre Office, but then again, that is included, rather than costing you an additional £70.

Customer: So I can save over £100 for the same machine! Where do I sign?

Of course, it won't be quite like this, but if Microsoft had been prevented from blocking Linux 10 years ago, we would have been in a different place now.

And don't quote Netbooks at me as a Linux failure. The versions of Linux that were shipped, and Microsoft effectively giving XP away after it was withdrawn from support, poisoned that market.

7
3
Peter Gathercole
Silver badge

Re: Windows 365?

Please, Mr AC, post details of the exact Ts & Cs that Microsoft have committed to. If you like, you can also point to other posts in these forums where you have accused me of being wrong. Or why not come out from behind that AC veil?

Here is what Microsoft currently (I mean, like 5 minutes ago) say about the details from the Windows 10 upgrade page.

"We will be sharing more information and additional offer and support terms in coming months.".

And that is all they have to say about the Ts & Cs. I have looked, and there is nothing that I can find that Microsoft have said that would conflict with them making it an upgrade to a subscription model license. Please post a link that definitively rules this out. I will not accept any interpretation of the announcement from the technical journals without some corroborating evidence, because I think certain members of the press have been taken in by the statements as much as us mere users, and have jumped to the conclusions they want to hear.

Yes, I have no direct evidence, and you can class what I am saying as FUD. But until the final details are published, jumping to the opposite conclusion about how generous this is, is just as bad. Remember, this is Microsoft we are talking about, with a proven track record of uncompetitive business practices and lack of concern for their customer base.

Oh. By the way. You might want to look at this.

6
5
Peter Gathercole
Silver badge

Re: I see desktop OS's similar to TV dinners...

How many times do we need to say this!

Please try Ubuntu, Mint, Debian or maybe Fedora or SuSE. Install and run using the GUI just like any other OS, completely without using the command line. (OK, I accept that some of the restricted packages need to be added using the package manager, but hey! it tells you what you need to do, and that is only needed because of the restrictive licensing imposed by other parties on certain components).

The only difference is that you can't buy systems with it already installed from any of the normal channels, and you have to actually install something yourself, but that's pretty easy, even for novices.

8
3
Peter Gathercole
Silver badge

Re: Timing

I would love to see a breakdown of home/SOHO vs. commercial use of XP.

I'm still of the opinion that XP will remain on a whole slew of home machines until those machines either break and are replaced, or can no longer be used because of being OS/browser blacklisted by sites on the Internet.

3
0
Peter Gathercole
Silver badge

Windows 365?

There's enough conjecture in various forums to make it credible that what Windows Vista and 7 users will be offered is a year's subscription to the currently unannounced Win10 "Windows 365" pay-as-you-use offering, tying people into a subscription model by the back door.

Microsoft just will not give users of their old OSs something for free!

6
10
Peter Gathercole
Silver badge
Unhappy

Re: "Microsoft will never realize another penny from this household"

I'm sorry to say that all the time Microsoft get royalties for the FAT patents, or any of the patents owned by MPEG LA or any of the other patents Microsoft defend but will only tell people about under an NDA, you cannot definitively make that statement.

9
0

SPY FRY: Smart meters EXPLODE in Californian power surge

Peter Gathercole
Silver badge

@Jim 59 re:"A huge gov IT project"

Are you sure? Just what are the Government doing to implement this?

All I can see is them mandating that the power companies implement it through legislation and regulation.

Get this straight.

This is being done by the power companies, paid for through the bills, by us, the customers!

This is no Government IT project.

2
0

Nuclear waste spill: How a pro-organic push sparked $240m blunder

Peter Gathercole
Silver badge

@MrDamage

And the first, as you pointed out, is Science.

There was lots of bad science in all of the Gerry Anderson works, and they were all set in the near future, so they could not really play the radical new technology card.

Does it detract from the tremendous stories, the strong characters (even though most were plastic or plasticine), or the fantastic achievements of AP Films and Century 21 Productions in the field of special effects? No it doesn't.

I am a huge fan of all of Gerry Anderson's work (well, Dick Spanner was a bit strange, and Terrahawks was below par IMHO), but that never stopped me cringing sometimes at the "Science", even when I was a child (my formative years were during the original runs of the "classics" in the 1960s and 70s; I am of the Century 21 Productions generation, and am almost exactly the same age as Joe 90 would be).

(P.S. Answer me this: why do Thunderbird 1 and 2 come to a dead stop in the air, and only fire their landing jets when they want to descend?)

1
0
Peter Gathercole
Silver badge

Re: Fast Integral Reactor. @otto

As a story-teller and realiser on the small screen, Gerry Anderson was pretty good. As a scientist, not so hot.

There were plenty of plot holes. Like why was the moon able to avoid being captured by the stars/planets it passed close to. And where did they get their energy, especially in a form suitable for the Eagles. And how about the seemingly unending supply of Eagles when they were destroyed. And how come they could cross interstellar space so fast, but still slowly enough to allow planetary exploration missions. And why the moon was not torn apart by tidal forces when it passed within the Roche limit of planets and even the black hole it went through. And how come so many Earth spaceships found the moon. And how they managed to get enough Sinclair pocket TVs to make their communicators 10 years after most of them had broken.

And, to cap it all, why was there so little furniture in the Control Centre that everybody had to stand around, punching buttons on the walls!

Still, the first season was a good romp, although I thought widening the plot in the second season to include metamorphs and the like was going a bit too far.

10
1
Peter Gathercole
Silver badge

Re: Fast Integral Reactor. @Hadvar

That's so 1999!

10
0

It's the FALKLANDS SYNDROME! Fukushima MELTDOWN to cause '10,000 Chernobyls' in South Atlantic

Peter Gathercole
Silver badge

Best to read it all!

I read the headline and the first paragraph or so and thought that it was a bit superficial, because if the cores descended through the crust, then gravity would hold them at or near the centre of the Earth.

I then read on, and realised that this is all covered, at least to a non-scientist's view. It's all preposterous really, but detailed enough to appear serious, and very well done.

Congratulations on making an obvious April Fool's article worth reading.

5
0

Snakes on a backplane: Server-room cabling horrors

Peter Gathercole
Silver badge

If you think that is bad....

...you don't remember the bad old days of RS232, co-ax and twin-ax terminal cables.

I wish I had taken a picture of it, but when an IBM building was decommissioned and we left it in the early '90s, I had a chance to look around, as I had access to help move the kit I was responsible for.

I found the comms room that contained the 3270 comms controllers for about half of the desks in the building. I kid you not: the 3174s, which were supposed to be floor-standing devices, were on shelves stacked three high, with co-ax cable about 2 feet deep cascading from them to the ground, and then on to the patch panels and the risers to be distributed around the building. It was far, far worse than the pictures in the article.

Like another poster said, the reason for the cables, some of which were clearly not plugged in, was that they were so laced into the mish-mash that they could not be removed without risking breaking some of them, so they just left them there.

Good cable management is possible, and in the long run will save both time and money. It's a shame that most projects do not have ongoing supportability as one of their deliverables.

0
0

Smart meters are a ‘costly mistake’ that'll add BILLIONS to bills

Peter Gathercole
Silver badge

"just there to remind you to actually replace and turn off powered stuff"

But that's exactly the point.

In my case, it gave me the impetus to actually find these things, so I would judge that it was more valuable than a placebo. It was also a useful demonstration to the other family members that what they leave on has an effect on the household consumption, and that was tremendously valuable on its own. If it gave over-inflated readings, then that made it more valuable still!

As I understand it, recent equipment with a CE mark has to have a power factor close to 1, so this type of meter will become more relevant as older devices age out of the house.
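The power factor point is just arithmetic: a cheap clamp-on meter effectively multiplies volts by amps (apparent power), while the billing meter records real power. A worked example in Python, with numbers invented for illustration:

    volts = 230.0        # UK mains
    amps = 0.5           # what the clamp meter measures
    power_factor = 0.6   # a poor, older switch-mode supply

    apparent = volts * amps         # 115 "watts" as the cheap meter displays it
    real = apparent * power_factor  # 69 W, which is what you are billed for

    print(f"{apparent:.0f} VA apparent vs {real:.0f} W real")

So a house full of older kit can read well above its billed consumption, which, as noted above, only strengthens the nagging effect.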

0
0
Peter Gathercole
Silver badge

Re: short term benefit @John 48

You have a very good point, but that is not what Tezfair was talking about.

What he said was that it enabled him to reduce his consumption. OK, he might have replaced some things that did not need replacing because the reactive load was not being taken into account, but he still reduced his consumption. That is what was important to him, not checking the accuracy of the billing system.

What he wrote mirrors my experience exactly. I too used one of the cheap clamp-on meters to monitor instantaneous use, and I spotted a number of things I could do to reduce consumption, and managed to drop my base load as measured by the meter by about 45% (although peak use is still about the same because of high-current devices like washing machines and tumble driers). My bills have gone down (or at least they did not go up as fast), and I no longer keep my meter running either. It achieved its aim.

I would probably not benefit particularly from having a 'smarter' meter, apart from not having to provide meter readings.

2
0

Belgium to the rescue as UK consumers freeze after BST blunder

Peter Gathercole
Silver badge

Re: Don't away with BST, don't blame farmers @Richard Jones 1

Time is not quite arbitrary. The definition currently accepted around the world is based on the rotation of the Earth on its axis in relation to the Sun, and is intricately associated with popular angular measurement.

Putting aside the discussion on units of time, my view is that noon should be when the Sun is highest in the sky. There's no particular reason it should be so, I just think that this should be the case.

I'm not (quite) suggesting that we go to completely local time measured solely by the Sun, but quantized into hour-wide zones with some geographical adjustments for national reasons seems like a reasonable compromise to me.

6
0
Peter Gathercole
Silver badge

@Mine's a Guinness

Funnily enough, most people I know get the switch the wrong way round.

Most people I know think that the clock change is to adjust time to suit the sun in winter. This is the argument some people use to try to prevent children going to school in the dark, but that argument is bogus, because winter has the clocks aligned with mean solar time.

So all abolishing BST will do is make the light evenings an hour shorter in the summer months. At one time, when people worked in the fields, this may have made a small difference, but with the reduction in manpower required to run farms now, most farmers will be pretty indifferent to it. They get up when required, and often work the fields by floodlight to extend the working day into the evening.

I have no problem with abolishing BST, but I do object to aligning the clock to BST permanently, which some people suggest, or even to aligning the UK to CET/CEST, which some business leaders want (blooming Gallophiles)!

11
1

TOP500 Supers make boffins more prolific

Peter Gathercole
Silver badge

Re: Chemists are... @Michael Wojcik

I appreciate the information, and I'll put my cards on the table: I am only talking in generalisations here, as I am not an HPC coder myself, but I do know several people doing serious work on existing HPC environments.

They obviously have time and effort invested in the existing systems and the languages that they currently use, but many times I have seen these people working at the generated machine code level, trying to sort out bit comparison errors and convergence errors in large HPC programs.

The fact that they don't have layers of code generation and optimisation tools between the source language and the code being executed is a real benefit. These are people who care enough about the code and the specific machine they are working on to successfully unroll loops manually and apply other hand-hacks in order to reduce execution time.

Reading up on one of the examples you quote for efficient code generation, I find that Julia is still pretty immature for the largest problems out there. If you look at the top 100 supercomputers, it is essential that you have a message-passing paradigm that scales across multiple machine images.

The standard currently being used is MPI, with implementations such as MPICH and vendor-tuned variants that the likes of Cray and IBM support (OpenMP covers the complementary shared-memory side within a node). These are very important because of the need to be integrated with the interconnect in these systems.
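To show what the message-passing model looks like (as opposed to shared memory), here is a minimal sketch using the mpi4py Python bindings; real HPC codes would do this from Fortran or C++, and the workload here is a made-up placeholder:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id, possibly on another node
    size = comm.Get_size()   # total processes in the job

    if rank == 0:
        # Nothing is shared: every partial result arrives explicitly over
        # the interconnect, which is why MPI/interconnect integration
        # matters so much at the top end.
        total = sum(comm.recv(source=src) for src in range(1, size))
        print("combined result:", total)
    else:
        comm.send(sum(range(rank * 1000, (rank + 1) * 1000)), dest=0)

Launched with something like mpirun -n 4 python sketch.py, each rank is a separate process, potentially on a separate machine image.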

I came across a recent thread (still going on) on Google Groups between various people associated with Julia about how to get it working with MPI, and the problems they are having. This means that although Julia may be suitable for generating efficient code, it's still very immature for anything that does not use a shared memory model visible to all threads.

I found a quote in an article on Ars Technica from just under a year ago, where the team that created and is developing Julia (namely Stefan Karpinski, Viral Shah, Jeff Bezanson, and Alan Edelman) said:

"Our general goal is not to be worse than 2x slower than C or Fortran"

So while you make very valid points, the consensus at the moment is that Fortran (and to a lesser extent C++) is, and will remain for some time, the preferred language for the largest real problems in the HPC world.

0
0
Peter Gathercole
Silver badge

Re: Chemists are... @YAAC

I was writing in the present tense, so I was commenting on what I've seen, not on what should be done.

But I seriously doubt that you are correct. If you get some computer scientists on board, they will want to write in something like Python if they've just coded as part of their degree; C++, Java, or derivatives if they've been taught formal programming languages; or something obscure like Haskell or Erlang if they are working in the field of functionally correct programs.

Like it or not, writing efficient HPC code is still best done in a relatively simple language like Fortran, because you can get so close to the machine code actually being executed that, if necessary, you can tweak it at the assembler level to wring out the last few clock cycles in critical parts of the code. Depending on which HPC segment you're looking at, owning an HPC is normally not just about running your code fast, it's about running it as fast as you possibly can.

Don't believe any hype that, for this type of programming, an IDE is ever going to generate more efficient code than something closer to the bare metal. And I don't think that you will get any computer scientist seriously considering Fortran as a language to work in, unless they are already involved in the HPC field.

I am involved in the field myself at the moment (as a mere system admin), and I talk to people involved in solving big problems using HPCs, and this is what I am told by people actively writing for such systems.

0
0

What is HPC actually good for? Just you wait and see

Peter Gathercole
Silver badge

Re: it's time

Is that Timothy Prickett Morgan behind that AC, touting his current, Register-associated venture by any chance?

0
0

Part of CAP IT system may be scrapped after digital fail – MPs

Peter Gathercole
Silver badge

Re: The last time I was involved in paper maps for field registration...

The problem was not getting the maps, it was getting the maps at the same time as all of the other farmers in the area doing the same! You've never seen such a group of grizzled, wind-burnt old codgers outside of a farm deadstock sale.

Of course, all of the large farmers just sent one of their workers to queue, or got their farm agent to do it for them. As an IT specialist, I felt most out of place, not being able to talk of field yields, soil heaviness, milk quotas, lambing figures and the myriad other bits of farming jargon.

It did make me think how unintelligible we must be to other people sometimes!

1
0
Peter Gathercole
Silver badge

The last time I was involved in paper maps for field registration...

...what ended up happening was very long queues at the local Ordnance Survey offices of people trying to purchase the relevant maps to send off as the deadline approached.

You could not use any of the popular and readily available scales; you had to use the 1:10,000 scale (about 6 inches to the mile), which shows field boundaries, and which was only available in person from an OS local office. The queues were incredible. I spent over 8 hours in one trying to get three sheets for my father-in-law's farm.

That was some time ago, but it was a real pain. I hope that what they've introduced now is better, because I understand that the OS local offices are no longer there!

2
0
