Re: Supplies for Download festival !!!! @PhilBuk
+1 for the "Cities in Flight" reference.
Your dream line-up can only be a dream.
Jack Bruce, Rick Wright, John Bonham (OK, officially replaced), Jon Lord, Cozy Powell, Peter Banks (OK, he left Yes a long time ago), Dio, Kelly Groucutt (and Mike Edwards, although they both left ELO), Eric Carr, all gone.
And these are just the ones I found from the bands you mention!
I was supposing that the concession operators' balances could be accessed in a timely manner. If that is not the case, then they and their customers are effectively giving the organisers a free, high-risk credit facility for no return. I would not accept the risk.
But surely that should come under some form of regulation, because the festival could be seen as operating as a bank without a banking licence?
Many retail tags are 'burned out' at the till (ever wondered why they are waved over a plastic pad on the counter that warns you not to put bank cards or mobile phones on it?).
This is supposed to permanently deactivate the tags by destroying them (the tags are inductively powered, and these pads deliver so much power that a fuse in the tag blows).
Doesn't always work, though.
Completely removing the tag would be better, but it's been suggested that many retailers would like to hide the tag so that you don't even know where it is in order to remove it.
Technology like the RFID cards can be used for many purposes, both good and bad.
From the organisers' perspective, using the RFID tag as a payment method reduces the amount of cash on site, so it will probably reduce some of the petty theft that happens at these events. This will benefit the customers, the people who run the concessions (they don't need to maintain cash floats), the organisers (who don't need to have cash handling facilities for the concession operators), and the police, who will probably see a reduced number of theft reports, especially if the RFID has a second factor (a PIN?) to authorise payment.
It also makes sure that people only have access to what they have paid for, making it more likely that people at the festival actually pay the right price to see what they want.
Tracking people who move around is something completely different. While it will happen as a side effect of the entitlement checking, it's no different to having a barcode on a ticket, which is frequently used at attractions.
The only time it may be intrusive is if they have silent, unmarked RFID scanners scattered throughout the event, not just at the gates.
I think if I was there (which is not going to happen, partly because of these concerns), I would probably want to take a foil-lined pouch or tin to keep the tag in (depending on whether it is a wrist-band or a dog-tag, both terms are used in the article), and only take it out when it is necessary.
I don't really agree with facial recognition, but as that can be applied both in real time and in retrospect to captured CCTV video, there's not really much point in objecting to that, because it will happen anyway.
It's not that I'm paranoid (well, not that much), rather that I object to the concept of there being the ability to track me.
When I learnt it, it was FORTRAN. It's only since that trendy upstart Fortran 90 came along, with its free-form input, long variable names and pointers (amongst other corruptions), that the capitalisation changed.
I've not written much in the last 25 years, so it's still FORTRAN to me.
But please note, a lot of the Unified Model is still written in FORTRAN 77 and earlier, so the point is moot!
Blind benchmarks would work if you took identical code and ran it on two separate machines.
Unfortunately, when comparing different languages, the way that the problem is coded is partly conditioned by style, fashion (you don't think fashion is present? Wait until you've been around a while; accepted standards for writing code change over the years) and personal preference. This makes direct comparisons more difficult, because different people will code the same problem in the same language differently, and some differences can have quite an effect on performance.
Beautifully written code is not always the fastest!
It's not so easy to determine what is right and wrong when the forecast gives probabilities, not yes/no answers.
What you are talking about is the forecaster's attempt to turn a complicated forecast into something that numpties like you have some chance of understanding, all in the space of three minutes or less. It's always going to be wrong for someone, because the weather for a whole region over a space of hours will never be the same across that whole region.
What you are complaining about is the generalisations that you hear on the radio or TV not being detailed enough for where you are. Look at the more detailed local forecasts on the BBC or Met Office web sites or apps, and you will find it is quite a bit closer to what actually happens.
There's another thing, though. When you have ensemble runs (you run the forecast repeatedly with slightly different parameters, which is why there are up to 3,000 runs per day), it is quite likely that at least one member of the ensemble will actually be right!
I had this discussion with a couple of FORTRAN programmers some time back, and actually wrote some test code to see what they were talking about.
The problem with C-derived languages is that everything is done by reference (essentially pointers), and this adds an indirection that needs to be resolved on almost every structured data reference, particularly arrays (very commonly used for this type of work).
On the other hand, FORTRAN works much more directly with the addresses of data in memory (once it has the base address of an array, for example, it can do direct arithmetic on the index rather than having to resolve the pointer again). This means that it is much easier for the data prefetch mechanism to spot consecutive addresses so as to fill the cache. This is helped by the FORTRAN standards dictating the way that certain structures have to be laid out, which has actually conditioned processor design in the past, and which enables the FORTRAN programmer to make some intelligent decisions about which rank of a multi-dimensional array to traverse to maximise the effect of the cache.
I can't remember the figures exactly, but I found that FORTRAN code actually ran faster than its similarly written C equivalent. You had to write some very unusual C to narrow the gap. There was not a lot in it, but when you are trying to get as much as you can from a system, every clock cycle counts.
This is comparing FORTRAN and C. If you want to add the OO overheads of C++ to the equation, then things get even worse! And the discrepancies are not always fixed by optimising compilers.
The problem is twofold.
Firstly, the Unified Model needs to have quite large sets of data per cell. Currently, the systems are sized with ~2GB per core, and each core is at any one time calculating one cell on the grid. This is to do with the way that the information is arranged, and although current GPUs can address large(ish) amounts of memory, they cannot manage to provide enough memory per core for a "few thousand compute units on a single card". Until the GPUs have the same level of access to main memory that the CPUs and DMA communication devices have, this will always be a block.
Secondly, all of the time steps are lock-stepped together, and at the end of each time step, results from each cell are transferred to all of the surrounding cells in three dimensions (called the 'halo'). As I understand it, the halo is being expanded so it is not just the immediately neighbouring cells, but the next 'shell' out as well. This makes weather modelling more of a communication problem than a computational one, and one of the deciding factors in the choice of architecture was not how much compute power there was, but how much bandwidth the interconnect had.
To do this work on a system using GPUs for some of the computational work would require significantly more memory than can conveniently be addressed in the current GPU models, and because there are different GPU-to-main-memory models around with each generation of hybrid machine, getting the data into and out of the GPUs is not generic, and currently has to be written specifically for every different model. There are also no standardised tools to assist.
Personally, I feel that the current GPU hybrid machines are a dead-end for HPC, as were the DSP assisted systems 30 years ago (nothing is new any more), but what we will see is more and more different types of instruction units added to each core, making what we see as GPUs today just another type of instruction unit inside the core (think Altivec crossed with Intel MIC if you like).
I can't comment on this attack, but if you have a processor with a different instruction set, then many of the stack smashing and buffer overrun vulnerabilities disappear, at least until the malware becomes clever enough to identify the processor architecture before dropping machine code into the target system.
The issue here is that we are fast approaching a monoculture, with x86-64 processors becoming ubiquitous, so there is only one processor target. Granted that different OSs give some protection, but however you do it, if you can get some valid machine code injected and executing on a system, then many things are possible.
Obviously, x86-64 machine code is not valid on, say, a system with an ARM or POWER or Z processor, so this type of attack becomes invalid in the short term. But this only remains the case until another processor type is sufficiently widely deployed to make it worth attacking, whereupon you have the existing problems, just with some additional wrinkles.
Maybe schools should not be allowed to get extra income by allowing cell towers to be built on the school buildings. After all, what is all the EM radiation doing to our kids!
The film works on railway carriages because they are mostly metal, and the windows are the only place the signal can get in or out.
The same is not true for schools. Brick, cinder block, low density concrete blocks, curtain wall on steel skeletons, terrapin huts (sorry, showing my age there) are all porous to mobile phone signals. You'd have to line the whole room with the film.
Maybe there should be a tongue-in-cheek icon as well as a joke icon.
I meant this in a very light-hearted way, and it was actually addressed at other people than you. You had already demonstrated with your comments about another network that you were far from the average person who just plugs in a router and leaves it with its default settings.
If I had actually addressed it at you, I would have done it in the same way as I have here, by actually referencing your handle.
I meant no offence.
OK, it should not be that simple for an IoT device to join a network. Presumably, you've got WPA2-PSK set up as a minimum for your Mum's network.
So, a new device entering the house cannot even join the network.
So, nothing to do.
Of course, if you've got WPS enabled, then every time you press that button on the router, all your IoT devices that have been denied access to the network so far have an opportunity to jump onto it. But you don't use WPS, and have it turned off, haven't you?
Wait. What! You haven't... And you're allowing UPnP as well!!!!!
Excuse me, I've somewhere else to be.
Universities are a completely different kettle of fish to your normal company network. They are defined by BYOD, because the universities are not capable of providing the number of devices needed by the students.
Basic security in a University is that you have a number of relatively untrusted networks (normally by location) that the devices attach to with fairly basic security (registered MAC address, normally), with island networks containing all of the main University servers with strong firewalls on the borders of the islands that only allow a small number of trusted services through. Within each untrusted network you will have some routing and maybe print services, but any file repository will be in the islands.
Any special access to departmental servers for specialist services is controlled on a device-by-device basis, with increasing levels of control requirements, registration and mandatory patching to allow this access.
In addition, most Universities (AFAIK) operate a blacklist policy where, if a device is found to be seriously affecting other users (viruses, deliberate intrusion attempts etc.), it is prevented from connecting to any of the networks until the issue has been resolved to the satisfaction of the University techies, normally for a fee.
So the networks that the students connect to are much more like the guest networks that companies operate (with a little more security), and the island networks are more like a core company network.
This makes the analogy much clearer, and probably puts the break between the networks in a bit more context.
But the flip side of this is that some of the things asked for in a business suggest that the person asking is unable to make sensible choices because of lack of knowledge.
The problem here is that IT don't always appear to understand business, and the Business doesn't understand what is necessary to operate IT safely and securely. That used to be the reason why they set up an IT department in the first place.
It's a balance, but at the moment, IMHO, it's skewed too far to the Business.
That's true, policies do not protect you from these things by themselves. Good people who you know and trust who apply sane access control policies can.
But if things go wrong, at least you can knock heads together, and if necessary sack people if you employ them, rather than having to claim against a contract that will probably end up in an expensive court case before it offers any redress.
If you go down a managed service route, then your protection is only as good as the people your service provider employs, and you have no control over that.
That's all well and good until one of the "poor choices" lands the company with some regulatory failure, loss of data, a successful hacking event or, in extreme cases, an inability to function after an unforeseen event (like a disaster).
Even if a company goes down the route of an alternative service provider, it is essential that they keep some IT expertise, even if it is only at an architectural level. Otherwise the remaining managers who get to choose whether to switch to another alternative supplier (in the case of dissatisfaction with the first one they chose) either run the risk of being bamboozled by whatever marketeers they speak with, or end up having to pay for external consultants, who may (because of self-interest) not recommend what the company actually needs!
I agree that IT departments are an endangered species, and not because they do anything wrong, but because they're not saying what the non-technical managers think they should be hearing. Too often, influential managers in companies are more prepared to listen to the salespeople trying to sell snake-oil rather than their own IT people.
I doubt that any of these companies run their own cloud, so you are vulnerable to any of the companies or their cloud suppliers going bust, pulling out of the market or increasing their price model once they have your data.
I would not bank even on the 10 year retention period.
I reckon I've bought (new) two large TVs, three smaller TVs, and at least four CRT monitors in that time.
Mind you, I'd have terrible trouble finding the receipts now!
It must have been a fair few years back!
Firstly, re. nappies and wet wipes.
You young things got it lucky! My oldest was a child when disposable nappies were, well, crap, and it was still fairly normal to use towelling nappies with nappy pins and rubber pants. It would not surprise me if some of the older commenters were to tell me that even rubber pants were too new.
And wet wipes. Well, a flannel and a bowl of water, which had to be changed after each nappy change was the order of the day. Wet wipes were a huge leap in convenience, so just think yourself lucky!
Secondly, if the technology is disposable like the nappies, is there a problem with exhaustion of MAC addresses here? If they piggyback on WiFi, then there is a finite number of MAC addresses available, so unless they intend to re-use or rotate MAC addresses, this could lead to problems in the future. You would also want to make the things self-destruct once used, to make sure that your bin full of dirty nappies doesn't give false readings. Maybe the wood-derived chips that were in a story last week?
If they're serious, they really ought to use some re-usable sensing and sending technology which was added to a nappy, but this would make it less of an IoT story.
Out of interest, what is your printer?
Is it a Lexmark, or one of the WinPrint laser printers that were popular a while back?
I had problems getting an HP LaserJet 1000 working on one of the older Ubuntu flavours (I think it's Lubuntu - the netbook I'm re-purposing as a print server has a Celeron processor that doesn't support PAE), and ended up having to use the HP provided binary blob.
Dreadful printers outside of Windows, but I wanted a laser printer for a particular purpose, and I grabbed it for very little at a car boot sale!
It's not Earthworm Jim that worries me. Let me know if Queen Slug-for-a-Butt or Professor Monkey-for-a-Head are in the capsule!
The Department of Transport's Driving manual states that a driver should always drive within the stopping distance for their vehicle.
It does not matter if you are driving on a dual carriageway, a motorway or a single track country road, the onus is always for the driver to be able to stop if an unexpected hazard appears.
This means that the police's insistence on killing the cow because it was a danger to traffic is really bogus.
If there was really an issue with this, then they should be out culling all of the deer, wild ponies and badgers that are very, very frequently seen on the roads I drive on in Somerset, including A roads and dual carriageways. The number of times I have to take avoiding action, especially at twilight, is almost uncountable.
This was clearly a gross over-reaction by police in a rural area who should really have known better!
That's interesting. I knew about the data bus isolation resistors, but did not know what their real function was. I assumed that the Z80 was effectively idling or halted during the display cycle, and also that the ULA was driving the address lines, but what you say makes a lot of sense, and would make the ULA much less complex.
Presumably at the beginning of every display frame or line, there was some form of context save so that the Z80 could resume where it left off after taking an excursion to walk through the display memory addresses.
I'm still a bit puzzled, though. The ZX81 display file was not bit-mapped, so each character position would have to be read 8 times, once for each of the 8 horizontal scan lines of the character row. What was being addressed was the index into the character generator table, with an offset to get the correct line; the display hardware would then have to look up that line in the CG table to get the 8 bits to serialise out to the modulator. I suppose the ULA may have been able to buffer 32 characters and serialise them. I'll probably never know!
For any of you who didn't understand the bit of halting at the end of the line, when the ZX81 did not have a RAM pack, it used a collapsed display 'file' (this is what it was called) that only contained the characters that had been written on the screen. Any trailing whitespace (well, trailing unwritten character locations really) on a line did not have memory allocated, so the actual display could be held in as few as 25 bytes if there was nothing on the screen.
In this case, each line of characters on the display would be empty, with just the "end of line" character that Mr Coder mentions. As the cursor was always one character, at least one of the last two lines normally had a single character, leading to the 25 bytes (23 empty lines with just the EOL character, and one line with a single character plus the EOL character).
It was amusing to see a ZX81 without a RAM pack struggle to reorganise memory every time you added to the display.
Of course, with a RAM pack, the display file was always its full size.
There was nothing particularly cheap about the modulator (well, actually it was probably the cheapest Sinclair could source, but it was the same as fitted in any number of other home computers), and audio crosstalk was not unusual in any of the machines of this generation, especially when displaying 'busy' pictures on cheap black and white televisions.
I bought an add-on (effectively a second modulator) which remodulated the signal to include sound, which I fed from a Quicksilver sound board (it has an AY-3-8910), controlled by a number of memory-mapped locations. I used it, along with a programmable character set mod (remember what I said about the I register) to get mine to display and play music. Unfortunately, in slow mode, ZX Basic was more than a bit slow, so there was an appreciable delay in displaying the notes and them playing. But it kept me occupied on dark nights back in 1981 while I waited for my BBC Micro to arrive.
I must admit that I'm a bit puzzled as well.
This appears to be creating line 10, which contains the code "GOTO 10", and doing it repeatedly on a single line (the line parser allowed multiple BASIC statements on a single line, separated by colons, and I think this also worked when entering lines of code).
It looks weird, so I think that if it's doing something unusual, there must be a bug somewhere. I don't remember any sort of bug like this in the BEEB, but then again, I didn't do anything like this.
Alternatively, he might be using the arrows and copy key, recursively copying the characters from the line until the maximum line length was reached, at which time the BEEB would sit there beeping at you for as many copy keystrokes as were buffered in the keyboard buffer. But that would not be white noise, which would not actually be that disruptive in a shop, even if it were at maximum volume.
The sort of stupid things that people did were to reprogram F10 (the break key) to contain "OLD:RUN", so that it was more difficult to stop (especially if ESC was also trapped). [CTRL-Break got it back].
My favourite was on a ZX81: writing an approximately 4-byte machine code program in a REM statement in the first line of the program (and thus at a fixed address) that put a value in the Z80 I register (which was used to contain the high byte of the address of the character generator table), which led to the screen becoming scrambled. You could tell something was there; everything worked OK, except that you could not read any of the characters.
I had a Radionics set when I was about the same age. I don't remember it being made by Philips, though.
I don't actually know what happened to it. It's probably still buried in a box in my Father's loft. I remember that I used to burn out the transistors, and soon became proficient enough with a soldering iron (while repairing the component blocks) to no longer need the kit! So a double whammy learning experience.
I doff my hat to you, sir.
That is clearly a much better backronym, and makes my suggestion pale into insignificance.
I was actually thinking something like Decluttered Operating System. There's something about MS DOS as an acronym. I can't quite put my finger on it.....
And then you could also have the client OS for phones and tablets. I don't know. Something like Phone Compatible Decluttered Operating System.
With no GUI, we should really stop referring to it as Windows.
Visio is a stunning example of how a large organisation manages to adversely and unduly influence rival systems.
There is nothing inherent in Linux that would prevent it having a Visio replacement. And there is nothing that would prevent someone from producing commercial software to run on Linux. What prevents it is the self maintaining mantra that "Visio is not on Linux, so Linux is not suitable for the desktop; without Linux on the desktop, commercial software for Linux is not economically viable to produce because it has no penetration".
It is not necessary for a suitable piece of software to be written for free by volunteers, and LGPL is sufficiently relaxed that you can use most of the application development tools without being bound by the full GPL.
The problem that Microsoft exacerbates is that they deliberately make it very difficult to write software that is file and feature compatible with Visio, and they support this by pushing Visio as necessary software in office packages.
But please ask yourself this. How many Visio licenses does an organisation have? Probably relatively few, as it's quite expensive. The people who use it are the only people for whom Visio is a show-stopper. Everyone else does not have this excuse not to use Linux.
Of course, there are plenty of other Windows packages that you can make the same argument about. But answer me this. How many times is a monoculture (or a monopoly, if you want to put it a different way) actually a good thing? If there was no chance that another OS could take over from Windows, can you actually believe that Microsoft would not start gouging their customers more than they currently do?
I wonder how much the cost of The Ribbon interface, or the switch from XP/Vista/7 to Windows 8/8.1/10 was/will be for users and administrators? As much as the switch to Linux? Who has actually costed out the full impact of Microsoft changes to business?
I'm not sure you've understood what I've suggested.
"Multiple users running SAMBA on the same host"??? This is not what I suggest (and I would certainly not make them automount for each user)
If you have your shares arranged in a suitable manner, you have one (or a small number) of shares mounted and 'shared' between the multiple concurrent users of your large Linux machines, and let the normal file permissions secure the files. In terms of the SMB server, it's probably less demanding to have one share per several users, rather than one per user, and almost certainly less resource hungry on the client side.
I'll accept your point about the Internet and public cloud. My suggestion is really all about private infrastructure, not public.
VNC is not actually that much better than X in terms of network use (it's swings and roundabouts; some things are more efficient, some are drastically less efficient), and it is probably much heavier on the shared Linux server, as it has to maintain multiple virtual X servers, one per user, rather than just the clients it would need if it were using the remote X server on the desktop machine. And when using VNC, I often find it much slower and full of display artefacts compared to native X11.
I'm not suggesting using Linux VMs on a per-user basis. I'm proposing a single large Linux system (or a small number of them, possibly VMs, but better on separate hardware for resilience), with multiple users using them at the same time.
Bearing in mind that UNIX/Linux and X11 has always been network capable, I have to ask Why? but from a different perspective.
Configure your humongous server as a single Linux machine (or a small number of large machines). Put a thin deployment Linux distro on the desktop machines, running XDMCP or a modern alternative. Configure for the X11 sound extensions on the thin clients. Manage the single system for multiple users.
You have multiple thin clients with no user local storage and a single system image on the large server to maintain. And none of the Citrix infrastructure or costs.
I know I'm playing devil's advocate here, but this is the traditional way of managing shared UNIX systems.
If you were just using cloud storage, such that the data was being encrypted as it left your site, and decrypted as it entered your site, this may work.
Unfortunately, if you actually processed any data in a cloud service, it would need to be able to decrypt and encrypt the data as it was used, requiring the encryption keys to be on cloud servers themselves, and thus as vulnerable to being snaffled as the data itself!
So, unfortunately, encryption is not the answer to all the issues.
My main recent career focus, AIX on IBM Power servers, has been providing virtualised I/O for close on a decade, with the hypervisor doing all of the basic device manipulation and the communication from the hosted OS being handled by virtual devices (the main features were implemented in Power 5 systems running AIX 5.3, although basic LPARs and mapped/guarded device control were in earlier hardware and versions of AIX), so I do understand how a hypervisor can sanitise device access.
I also understand service Virtual Machines and also quite a lot about how I/O MMUs and the associated CPU MMU features work, including how nested page tables and hardware protection rings are implemented. There may be some novel aspects of controlling access to particular adapters/busses at a hardware level that is unique to Intel hardware, but although that appears to be the main function of Device Guard, it was not how the article was presented.
I was working on Virtual Machines using a hardware hypervisor on Amdahl mainframes (running UNIX) with device and memory page level hardware protection back in the late 1980s, so very little of this is new to me.
It is not me that is confused, except possibly about the way that the article was written.
In machines running type 1 hypervisors (I'm going to use HV because I'm tired of typing "hypervisor"), the kernel very rarely "gets the rest". Once you start slicing and dicing with a HV, you can have as many OS images as the HV and the hardware MMU supports, and each OS only sees the bits it's given access to by the HV.
This is the very nature of Virtual Machines. In some implementations, the OS does not even have to know it's running in a VM, as it's given what it thinks is real-mode access to it's own virtual address space, so it does not even know that other VMs and OS images exist on the same hardware, let alone be able to see or tamper with their memory.
I'm sure that there are aspects of this that I haven't appreciated, but from the Minix paper on the IOMMU, I really cannot see how this specific feature provides the protection.
The IOMMU is not a new concept. It's there to allow bus-attached devices controlled access to the real memory address space of the machine for DMA-type transfers. The first feature I came across that implemented this was the Unibus I/O address mapping system (the Unibus map) in 16-bit PDP-11 computers with 18- and 22-bit addressing extensions, back in the 1970s. The basic concept is to allow an I/O adapter controlled access to part of the main system memory in a way that does not allow access to anything outside of that control.
In that implementation, the OS set up the Unibus map for the I/O (most Unibus devices were only 16-bit capable, so they needed a translation mechanism to be able to write outside of the first 64K of memory), and the DMA then occurred (things were more simplistic then: there were no overlapped I/O operations, so deferred I/O operations requiring the state of the Unibus map to be saved through context switches were not an issue). The protection offered was actually a side effect of the mechanism. It gave protection from rogue Unibus DMA transfers, but left control in the hands of the OS.
This is what is described in the Minix IOMMU paper, nothing else.
In order to implement something like this to provide protection from the OS itself, it is necessary to have the checking code in a higher protection ring than the OS. This is normally reserved for type 1 hypervisors, and the capabilities for this have existed for many years. It would have been perfectly possible to add this type of function to the hypervisor, or to a service VM running parallel to the OS, so that the OS makes a hypervisor call to check the validity of, well, pretty much anything at all, including checking the cryptographic signature of new code. In this way, running Device Guard as a service VM controlled by the hypervisor rather than the OS means that it cannot be tampered with by anything in the OS. This is what I think Device Guard actually is, supported by the statement "with its own minimal instance of Windows". Make the hypervisor and Device Guard also signed by UEFI, and it's pretty difficult to tamper with the system as a whole.
Of course, VM segregation requires an MMU and an appropriate security protection ring, and it is possible that this is why there is some confusion about which part of the MMU is providing the protection. IMHO, though, it's not the I/O function of the MMU described by the Minix paper, more the general features of a VM-capable memory management unit. On Intel processors, it's probably the Extended Page Tables feature that is actually required.
This is the type of thing that IBM have been doing in their mainframe operating systems running under VM (the mainframe hypervisor product) or PR/SM for many years. As I understand it, the RACF security system runs in a separate VM to provide additional security.
When I was at University in the late 1970s, the heat generated by the S/360 and S/370 was fed into the heating system for Claremont Tower in Newcastle.
Nothing is really new any more.
I don't understand the issues with water cooling and humidity.
The water is totally contained in sealed pipes, so there is no chance of it entering the data centre atmosphere.
In the case of the PERCS systems, there are actually two water systems: one internal to the frames, which is a sealed system with the requisite corrosion inhibitors and gas quenching agents, and the other a customer water supply, with heat-exchangers between them.
The only time water can get into the air is if there is a leak. Where I work, we did have a leak at one time, caused by cavitation erosion of the casing of one of the pumps, but that is one minor leak in the six years I've worked here.
If you were referring to 'fabric' chips in my earlier comment, they are a little bit like what you might describe as "northbridge" or "southbridge" chips in older Intel servers (although only in concept, not in the detail). They provide the copper and optical interconnect to glue the components together into a cluster (both external network, and internal processor-to-processor traffic), and also the PCIe and other peripheral connections.
I could have called them Host Fabric Interconnect (HFI) or maybe Torrent chips, but that would probably have been even less meaningful.
Heat pipes are not ideal. Because of the way they are constructed, they are very sensitive to leaks: the working fluid is held at a critical partial pressure, so a leak renders them useless almost immediately. I think the distance over which they can move heat is also limited.
I've seen far too many laptops that rely on heat pipes overheat whenever they've been on for any length of time because the heat pipes no longer function properly.
Oh. By the way. Proper mainframes don't run Windows!
Put some water provision in the data centre. Water is a much better medium than air for extracting heat, and it is much more efficient to scavenge heat from water for things like the hot water in the handbasins in the restrooms than from air (although it does depend on the exit temperature of the water).
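A back-of-envelope calculation shows why water wins. Using textbook property values (specific heat of water ~4186 J/kg·K, of air ~1005 J/kg·K, air density ~1.2 kg/m³), and a hypothetical 30 kW rack with a 10 K coolant temperature rise:

```python
# Back-of-envelope: coolant flow needed to carry a given heat load.
# Property values are standard textbook figures; the 30 kW rack and
# 10 K temperature rise are assumed for illustration.

def flow_needed(load_w: float, cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) to carry load_w watts: m = Q / (cp * dT)."""
    return load_w / (cp_j_per_kg_k * delta_t_k)

LOAD = 30_000.0                       # assumed 30 kW rack
DT = 10.0                             # assumed 10 K coolant rise
CP_WATER, CP_AIR = 4186.0, 1005.0     # J/(kg*K)
AIR_DENSITY = 1.2                     # kg/m^3 at room conditions

water_kg_s = flow_needed(LOAD, CP_WATER, DT)   # ~0.72 kg/s, i.e. ~0.72 L/s
air_kg_s = flow_needed(LOAD, CP_AIR, DT)       # ~3.0 kg/s
air_m3_s = air_kg_s / AIR_DENSITY              # ~2.5 m^3/s of air
```

Less than a litre of water per second versus roughly 2.5 cubic metres of air per second for the same load, and the warm water comes out in a form that heat recovery can actually use.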
Use water-cooled back doors. It takes significant amounts of the heat away before it even enters the airspace. Even better, put them both in the front and back, so the air enters the rack cooler than the ambient temperature, and gets any heat that is added taken out as it leaves the rack.
I know I've said this before, but look at the IBM PERCS implementation. Water cooling to the CPUs, DIMMs, 'fabric' chips, and also in the power supplies. There is still some air cooling of the other components, but from experience, I can say that these systems actually return air back to the general space cooler than it went in!
There are some really innovative things happening, much more than just the decades old hot-cold aisles, hanging curtains and under-floor air ducts.
I can't actually remember any quotes. Must dig out my original collected editions.
80MB of disk! Luxury.
The first UNIX system I was sysadmin for had 2 x 32MB SMD disks and 1MB of memory (although the disks were short-stroked, and we eventually persuaded the engineers to remove the limit, doubling the available disk space).
The first UNIX system I used was a PDP11/34 with 2 RK05 drives (2.5MB removable disks) and a Plessey-badged fixed disk of about 10MB. When I first logged on in 1978 it had 128KB of memory, although that was later max'd out to 256KB. It originally ran UNIX Edition/Version 6, although V7 (with the Calgary mods to allow it to work) was installed later, and it supported 6 Newbury Data Systems glass teletypes (not screen-addressable, so no screen editors) and 2 Decwriter II hardcopy terminals. And it supported a community of about 60 computing students, and was permanently short of disk space!
Thumbs up for Earth Story. It's an excellent example of a cross-discipline scientist (Aubrey Manning, a zoologist who was sufficiently interested to learn about geology, and how the change to the Earth conditioned life) who has very good presentation skills.
I particularly like the description of the Long Term Carbon Cycle on one of the later episodes which comes up with the conclusion that in geological time scales, our knowledge of climate is pretty much informed guesswork.
I really wish there were more TV series like this.
Um. How would this have helped in this case?
Presumably, all the users must have access to the file servers in order to copy the files there. And I'm guessing that these shares are mapped all the time.
So the malware follows every path it has access to, and encrypts all of the files it finds. This includes the files on the hot file server.
How is this the fault of any individual (apart from the person clicking the link)?
Having on-line copies on permanently mounted shares is no protection from this type of malware unless one of the following is true:
1. The copy is made by a high-privilege task that puts the copies in an area of the file servers that general users who may run the malware cannot write to.
2. The copy is made to worm devices, which do not allow files to be overwritten or deleted, just new versions created.
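Option 2 above can be sketched in a few lines. This is a minimal illustration of the WORM idea, not any particular product: every write creates a new immutable version, and there is deliberately no overwrite or delete operation for ordinary callers.

```python
# Minimal sketch of WORM-style versioned storage: writes only ever
# append a new version; nothing can be overwritten or deleted.
class WormStore:
    def __init__(self):
        self._versions: dict[str, list[bytes]] = {}

    def write(self, name: str, data: bytes) -> int:
        """Append a new immutable version; return its version number."""
        self._versions.setdefault(name, []).append(data)
        return len(self._versions[name]) - 1

    def read(self, name: str, version: int = -1) -> bytes:
        """Read a version (default: latest). No delete method exists."""
        return self._versions[name][version]

store = WormStore()
store.write("report.doc", b"original contents")
store.write("report.doc", b"ENCRYPTED GARBAGE")   # the malware 'overwrite'
assert store.read("report.doc", 0) == b"original contents"  # still recoverable
```

Even after the malware "encrypts" the file, every earlier version is still there, which is exactly why the WORM condition defeats this class of attack.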
Even having the backups done by a high-privilege task is not perfect unless multiple versions are kept in some form, as it may be overwriting good data with bad. You've still not prevented the problem. And you said "an offline replica" (singular), with the server continuously wiped and rebuilt from the backups, which implies that if the problem goes undetected for one backup-and-restore cycle, you're still screwed.
It strikes me that there is a general failure of file sharing in many organisations. There ought to be a much finer-grained permissions system, where a user only has permission to write to the parts of the file store that they need for their job. This would not completely solve the problem, but it would prevent wholesale encryption of the data.
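The least-privilege point can be made concrete with a trivial check. The usernames and share paths here are invented for illustration; a real deployment would use filesystem ACLs or group policy rather than application code, but the containment effect is the same:

```python
# Illustrative least-privilege write check for a shared file store.
# Users and paths are invented; real systems would use filesystem ACLs.
WRITE_ACL = {
    "alice": ("/shares/accounts/",),
    "bob":   ("/shares/engineering/", "/shares/scratch/"),
}

def may_write(user: str, path: str) -> bool:
    """Allow writes only under the prefixes this user's job requires."""
    return any(path.startswith(prefix) for prefix in WRITE_ACL.get(user, ()))

# Malware running as bob can trash engineering shares, but the blast
# radius stops there: it cannot touch accounts.
assert may_write("bob", "/shares/engineering/specs.doc")
assert not may_write("bob", "/shares/accounts/ledger.xls")
```

Malware inherits the permissions of the user who ran it, so shrinking those permissions directly shrinks what one careless click can destroy.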
Couple this with a proper off-line backup system (where the malware cannot overwrite the media, because it's not writeable by ordinary processes, either by permission or because the media is physically unavailable), which keeps copies of various ages (daily kept for a week, 1 copy per week for 6 weeks, 1 copy per month kept for an extended period, for example). Or use a managed backup solution with offline media that keeps multiple versions (TSM, Arcserve, Amanda etc.)
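The retention schedule suggested above (daily for a week, one per week for 6 weeks, one per month thereafter) can be expressed as a simple rule. The choice of Sunday as the weekly copy and the 1st of the month as the monthly copy is an assumption for the sketch:

```python
# Sketch of the grandfather-father-son retention rule described above:
# daily copies for a week, weekly for 6 weeks, monthly beyond that.
# Sunday/1st-of-month as the kept copies are assumptions for the example.
from datetime import date

def keep(backup_day: date, today: date) -> bool:
    age = (today - backup_day).days
    if age <= 7:
        return True                          # daily copies for a week
    if age <= 42:
        return backup_day.isoweekday() == 7  # one copy per week (Sundays)
    return backup_day.day == 1               # one copy per month thereafter

assert keep(date(2024, 1, 1), date(2024, 1, 5))    # 4 days old: kept
assert keep(date(2024, 2, 1), date(2024, 6, 1))    # monthly copy: kept
```

Run against the full set of media each cycle, this tells you which tapes to recycle, and it guarantees there is always a restore point older than any plausible detection lag.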
In the medium and large systems environment, this is a well-established process. I'm sure I'm preaching to the converted here, but the lesson just does not seem to sink in with some SAs.
I know that the amount of data kept is now quite huge, even for relatively small organisations, but it seems to me that some of the current IT world has totally ignored the best practices of previous generations.
This may be, of course, because the management and bean counters are allowed to squash the required good practice on grounds of cost, and override any suggestions from their experienced technical administrators (or engineer them out of the company), in which case they (the management) should be held entirely responsible.
Oh, and seriously control the users' ability to run any code, trusted or untrusted, directly from web pages or emails. At least make it a two-stage process where they have to download it first, and then explicitly execute it. It's not much protection, but it will prevent casual click attacks, and because it's an explicit action, it makes it easier to discipline the culprit. This should extend to scripts in any language.