1859 posts • joined 15 Jun 2007
"Although the fact Lessig retracted his counter-notice..."
I think the post by vagabondo above sums it up very well. Go and read it.
Although Lawrence Lessig is a Harvard professor, he's probably not in a position to defend a copyright infringement claim without somebody backing him up financially (American law is very expensive, and although he is a professor of law, he's probably not qualified to defend himself in court without professional representation). If this had been a free lecture for Harvard, then the University might have done so, but this was for Creative Commons, and they are probably not cash-rich enough to assist.
What gets me is the fact that Liberation Music believe they have any chance of winning. If anybody understands the DMCA and copyright law in America, it has to be Professor Lessig. If I were LM, I would be running away very fast, and trying to settle as quickly as possible, not that Professor Lessig will allow that now he has decided to get a precedent set.
My guess is that LM will lose, pull up the drawbridge to the US, and never pay the damages. I only hope that if this happens, the US court will attempt to extradite the board of LM to the US. If that happened, a little bit of my rapidly diminishing respect for the US court system would be restored.
Re: Just when you thought it couldn't get any worse..@AC 11:23
and then review your comment.
It's now 10 years old, but lays down what Trusted Computing means to Microsoft and other vendors.
Re: 32K- the BBC Micro's most annoying limitation
The basic limitation of the BBC Micro was the way the memory map was laid out. 32KB of the address space was reserved for ROMs: normally 16KB for the OS, and 16KB for Basic, or whatever sideways ROM you were using. This was at a time when Sinclair had all of their OS and Basic in a single 16KB ROM. It left only 32KB for RAM without some address trickery.
The segregation of the OS and sideways ROMs was a great feature for speed and for keeping the OS separate from other packages, and really allowed you to do a great deal. The architecture allowed you to have 'service ROMs', essentially add-ons to the OS to handle interrupt-driven hardware (the OS could bank-switch the ROMs to handle interrupts), which meant that you could add things like floppy disk drives, mice, teletext adapters, software sprites (Acorn's Advanced Graphics ROM), sophisticated music hardware and even networks and hard disks relatively easily.
With one of the ROM positions populated by static RAM (there were several side-ways RAM boards, mine is an ATPL board with a write-protect switch) you could even (dare I say it) load ROMs from disk. I got the Acorn ISO Pascal Compiler (two ROMs, one an editor and runtime, and the other the compiler) running in a single 16KB bank of RAM by re-vectoring the OSCLI ROM bank switch vector, and loading the compiler from floppy or Econet and then swapping back at the end of the compile.
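For anyone who never met the machine, the bank-switching idea above is easy to sketch. This is a hedged toy model in Python, not Acorn's actual hardware: the class name, the 16-bank layout and the ROMSEL latch behaviour are illustrative, though they follow the scheme described.

```python
# Toy model of the BBC Micro's sideways ROM scheme: reads in the
# 0x8000-0xBFFF window come from whichever of 16 banks is currently
# selected via the ROMSEL latch; a bank marked writable models a
# sideways RAM board (like the ATPL board mentioned above).
class SidewaysMemory:
    ROM_BASE, ROM_SIZE = 0x8000, 0x4000   # 16KB window for paged ROMs

    def __init__(self):
        self.ram = bytearray(0x8000)                      # 32KB main RAM
        self.banks = [bytearray(self.ROM_SIZE) for _ in range(16)]
        self.writable = [False] * 16                      # True => sideways RAM
        self.romsel = 15                                  # bank currently paged in

    def select(self, bank):
        """Model a write to the ROMSEL latch: page in another bank."""
        self.romsel = bank & 0x0F

    def read(self, addr):
        if self.ROM_BASE <= addr < self.ROM_BASE + self.ROM_SIZE:
            return self.banks[self.romsel][addr - self.ROM_BASE]
        return self.ram[addr]

    def write(self, addr, value):
        if self.ROM_BASE <= addr < self.ROM_BASE + self.ROM_SIZE:
            if self.writable[self.romsel]:                # ROM ignores writes
                self.banks[self.romsel][addr - self.ROM_BASE] = value
        else:
            self.ram[addr] = value
```

Loading a "ROM" from disk into a writable bank, as described above, is then just selecting the bank and writing the image into the window.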
The BBC OS was a masterpiece of good software engineering, and with the associated Advanced User Guide, which mapped the OS and rest of the system out like a blueprint (and even contained a board schematic), enabled magical things to be done.
When the B+ came along, Acorn copied what Solidisk and Watford had done as add-ons, and moved the 20K graphics screen and some of the low memory pages used by the sound, floppy disk and other queues into "shadow" memory, bank-switched into the address space normally occupied by the ROMs and OS. This allowed the low 32KB above 0x700 (I believe; it was 0xE00 on a normal model B without additional filesystems) to be used for programs. The Master 128 took this even further by adding bank-switched ROM images as standard. "Shadow" screen memory generally broke programs that directly manipulated the screen bitmap without using the OS.
Of course, if you wanted the full 64KB of memory, then you could have bought the 6502 Second Processor, which not only gave you a lot of mode-independent memory, but ran at a screaming 3MHz. Playing Elite on a BEEB with a second processor and a Bit-Stick attached gave you not only smoother full mode-1 four-colour graphics (without the screen-tearing divide between the two-colour mode 4 and four-colour mode 5, something the Electron version could not do because it was missing the interrupt timer used to switch modes at the appropriate position), but also incredible control of the ship!
I always thought that the 64KB claim of the Commodore 64 was a swizz, because the OS and Basic ROMs overlaid the RAM, effectively leaving you with only about 38KB (if I remember properly) for any Basic programs. It also did not have the high-resolution modes (640 pixels wide) that allowed you to do 80-column text, which enabled us to use the BBC as a terminal to the minicomputers at the Polytechnic where I worked at the time. On the C64, you could use something approaching the full 64KB, but only if you wrote the whole thing in machine code and disabled Basic.
The BBC Micro also had a ULA, so Acorn were not treading new ground. It appeared to be a troublesome technology, because as far as I am aware, everybody who used them had production problems.
The ULA on my issue 3 BEEB always overheated on warm days (cue the freezer spray), and I noted that on issue 4 and onward, passive heat-sinks started appearing on both the ULA and the Teletext chip.
It seems strange nowadays to have a system that did not have a single fan in the case, and as a consequence would have been silent if it had not been for the incessant buzz of crosstalk interference from the speaker. I suppose the silent end of computing has gone to tablets. At least they owe a legacy to these machines.
Re: persistability Doesn't Matter @Matt re:foreign subsid
The point here is that the subsidiary is subject to the law of the country it is established in, as are all of the employees working for the subsid (even those who are US citizens, while they are in that country). The US owners may be free from the effects of the local law, but the local employees certainly aren't.
If this wasn't the case, companies like IBM and HP would never get UK Government contracts with organisations like the MoD or GCHQ, yet they obviously do.
Having worked on Government projects with personal data involved, I know that data security is drummed into the local workers, as is the fact that they personally are liable to prosecution should data in their control be leaked. I'm sure as hell I would prefer the wrath of an employer rather than a jail sentence if I were asked to copy data across a national border.
Ban 3D printers?
And vacuum and injection moulding machines as well?
It's well within the realm of your average senior school metal/woodworking/craft shop to fabricate some very convincing and well made devices without using a 3D printer. It's just a bit slower, needs some reasonable skill, and you can't distribute the model data over the Internet as data. But you can still send the dimensions and blueprints.
Re: @DougS: @M Gale @AC 17th Aug 17:52
The way that 40-bit addressing works on a 32-bit ARM is by the use of segment registers, allowing you to offset the virtual address space of a process into more than 4GB of memory. It's not new technology, and has been a cornerstone of processor instruction sets since the mid-1970s.
The first architecture I saw address extension done on was the 16-bit PDP-11, which had its address space stretched from 16 to 18 and then to 22 bits in different models. I do not know the ins and outs of Intel's PAE, but I suspect it is something similar. The Power processor family also does something similar for its virtual address space, although it does not need it to stretch the address space. Most other modern processors (those designed in the last 30 years) do something similar to support virtual addressing (but not necessarily for address extension).
The basic method involves breaking up the virtual address space into chunks called segments, and then adding a real-address offset to the base address (normally designated as a page number) in the address-decoding hardware. This allows a process to see a linear address range scattered over a larger, possibly non-contiguous address space. The impact on the code-writer is ZERO. There is nothing that needs to be done for a user-land process to cope with this technique. All multi-tasking OSs have done this for what seems like forever.
It does make the OS do a bit more work every time you start or context-switch a process (it has in some way to manipulate the segment registers - it's different in different architectures), but it's well understood what needs to be done, and has been a standard technique. And it is perfectly possible to write the OS itself to work in a virtual linear address space (an example was the 32-bit AIX kernel running on 64-bit RS64 and later Power processors), where the OS is in control of manipulating the segment registers for itself, as well as for all of the other processes. The 32-bit kernel could manage 64-bit processes, with more than 4GB of real memory on the system, which when I explained it used to puzzle people for whom the 32-bit to 64-bit migration in Windows seemed like a huge deal.
The major limitation to this is that although the system may have more memory than an address can span, it can only be used in chunks determined by the width of an address. So for example, an individual process on an ARMv7 with 40-bit LPAE can only address 4GB of the address space, even though the architecture will support 1TB of real memory. But of course, you can have more than one process, allowing you to utilise all the available memory. And as a side effect, you have the ability to share pages across multiple processes for in-core shared libraries, shared memory segments, and memory-mapped files.
This is not even a problem for the OS, because all the writers have to do is to keep at least one segment free, and then manipulate the segment register to allow the OS to see any of the real memory. Of course, it can't see all of memory at the same time, but it can get access to any of the memory.
The issue of whether 64-bit addresses will add any more inefficiency over 32-bit addresses is all to do with whether half-word aligned loads and stores can be done natively. On some architectures, performing a half-word operation (for example a 32-bit load or store on a 64-bit machine) requires loading an entire 64-bit word, and then masking and shifting to obtain the required half-word value. This may be microcoded, but on some architectures it had to be done by the program itself. This is slower, and on some architectures the decision about whether to 'waste' 32 bits of memory versus the performance cost of half-word operations was a difficult one.
I would have to research the ARMv7 and ARMv8 ISA to know whether this is the case, although I would welcome someone in the know to provide an answer.
Whether floating point load or store operations can be done in units other than the word-length is different from architecture to architecture. For example in Power 6, it was necessary to load a floating point value through a GP register (or two in the case of a double-word FP value), and then move it to a floating point register. For Power6+ and Power7, it is possible to directly load from memory to a floating-point register, allowing you to do double-word FP loads (128 bits) in a single load operation. This decouples the FP processor from the natural word size of the CPU.
Cadbury used to produce a bar called "Bar 6", which was a similar confection, but with 6 "bars" rather than fingers. Terry's also produced a two-fingered chocolate-covered wafer bar called Riva.
There have also been numerous supermarket lookalikes for ages, of both the 2- and 4-fingered variety.
I was sad when the writing on the top of each finger changed from Rowntrees to KitKat, although recently I was happy that Cadbury returned the Chocolate Cream confection to the Fry's banner again. Just waiting for the same to happen to the Crunchie.
Was the recent limited edition 5 fingered KitKat an attempt at a trademark landgrab, I wonder?
Re: I'd buy that @ObSolutions
Tell me, how do you attach an ST-506 or ST-412 drive to a modern machine? You can't even plug the ISA controller card into any machine built in the last 10 years or so.
I mean, even EIDE and SCSI are disappearing rapidly.
Re: Is there a Gerry Anderson fan around? @Annihilator
So you are saying that the physics of aerodynamic flight is not natural?
Is there a Gerry Anderson fan around?
If you ignore the lack of a vertical tail fin, the configuration looks uncannily like Zero X, or maybe a little like Fireball XL5.
Were the model designers at APF prescient, or knowledgeable beyond their time, or is there some plagiarism involved? Or maybe there is just a natural way of doing these things.
Just put in a boundary network device with a sniffer, something like a Linux firewall. It allows you to record all IP traffic flowing through it using something like tcpdump. It doesn't need any antenna.
I kept reminding my kids that I could, if I wanted, see the URL of every web page they visited and a lot of their other traffic over the household LAN (I can't do much about 3G, but that's another story). It made them much more 'net aware. I know that they could obscure the data using encryption or a VPN, but that would actually achieve part of what I want, which is for them to understand what it is to be on-line safely.
For the record, although a lot of the URL and connection data is kept on the firewall for months, I've never felt it necessary to snoop on them (although I have used the data to prove I could, and also to resolve bandwidth contention between them). It's amazing what being open about what you could do can achieve, without actually doing it.
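As a sketch of the sort of summary such a box makes possible, here is some illustrative Python that tallies destinations per internal host from `tcpdump -n` style one-line output. The exact line format varies between tcpdump versions, so treat the regex (and the function name) as an assumption.

```python
# Summarise "who talked to whom" from tcpdump-style log lines, e.g.
#   12:00:01.000000 IP 192.168.1.10.51000 > 93.184.216.34.443: Flags [S] ...
import re
from collections import Counter

LINE = re.compile(r"IP (\d+\.\d+\.\d+\.\d+)\.\d+ > (\d+\.\d+\.\d+\.\d+)\.(\d+):")

def destinations_by_host(lines):
    """Count destination ip:port pairs per source address."""
    counts = {}
    for line in lines:
        m = LINE.search(line)
        if m:
            src, dst, port = m.groups()
            counts.setdefault(src, Counter())["%s:%s" % (dst, port)] += 1
    return counts
```

This is the "prove I could" level of visibility: no packet payloads needed, just the connection metadata the firewall already sees.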
Re: If I were to get a phone for my kid
My phones all filter down to the kids as I upgrade, although I gave each of the kids their first phone at whatever age they started spending significant non-school time out of the house, normally their early-to-mid teens. First phones were always low-end, low-value ones, often hand-me-downs on PAYG, to give them a means of calling home, never as a means of keeping tabs on them.
Currently my daughter is waiting for me to replace my Sony Xperia Neo so she can have it, and her Samsung Galaxy (one of the low end ones which was my first Android phone) will then go down to my youngest son, which will replace his Nokia clamshell. This will mean that everybody in the household except my wife will have smartphones, and she just doesn't want a mobile at all.
Eventually, the low-end phones just end up sitting in a drawer as 'spares' (like my old Nokia 7110, now really only kept as a curio). The exception is my Palm Treo 650, which I am keeping as my active spare (with a PAYG SIM in it), because it's not that desirable to anybody who was not a Palm user, and I like it too much.
The only real thing that bugs me is how soon service-provider-locked Android phones stop being updated by the service providers. My oldest son noticed this, and as a result always buys SIM-free phones (he's old enough to have his own money to spend) that get the updates direct from the phone manufacturer or Google, not waiting to see whether the service provider is prepared to package the updates. Maybe SPs should be forced to admit that they will never update old phones, and allow them to be un-branded so stock ROMs can be installed on the phones without hacking them.
Maybe I have a pedantic mind, but when I read Monty's comment, I immediately thought US protectionism, and had to read it carefully in order to get any other meaning. So, no, I don't think it was obvious what he meant, let alone what he implied.
The full context in the original comment is "and this could have been a real kick in the nuts for Apple that possibly could have costs jobs and affected real people"
There is an implication here that the subject of the potential kick would be Apple, and by association, that the jobs and real people that would be affected would similarly be associated with Apple. I agree that this could mean the S. Korean and Chinese workers, but bear in mind that any displacement of product would probably have meant that another brand made in South East Asia benefited, possibly more than if the Apple product was sold. So maybe a blow to some workers, but a benefit to others.
I don't feel at all guilty not worrying about US jobs at the moment, as I believe that most US based multinationals are currently screwing over their non-US subsidiaries for jobs and profit, and I'm not in the US.
Re: Correct descision, even if the taint lingers @Monty Burns
Does the implication in your statement "affected real people" mean that Samsung employees in S. Korea or even China don't count as "real people"?
We on this site...
...tend to be informed and technoliterate.
Older members of my extended family still watch mostly the first 5 channels of Freeview, because that is what they know, and they know where they are with "1", "2", "3", "4", and "5" on the remote. Channels under 10 do have a real premium when it comes to people who are used to press just one button per channel.
I've tried and tried to make them more aware of the +1 variants of ITV and Channel 5, to no avail. I just have to assume that they are too set in their ways to change, or maybe that they cannot read the programme guide on the screen!
It's a while since I did any education on the power factor, but it is quite clear from the reading I've been doing over the last few weeks that the whole power factor issue is much more complex now than it used to be.
Back in the days of simple inductive loads, the power factor was mainly due to a phase shift caused by the load (and in fact, devices that use significant amounts of power nowadays have to have additional components to correct the power factor to close to unity before they can get a CE mark in Europe).
But since such simple times, the increased use of switched-mode power supplies, used because they are much smaller and more efficient, has led to the current waveform being not only phase-shifted, but distorted so that it is no longer anything resembling a sine wave. I still cannot get my head around what is needed to work out the real power use in this case. I'm sure it is all factored in, but without further research, it's beyond me.
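For what it's worth, the sample-based calculation handles both cases: real power is the average of instantaneous volts times amps, apparent power is Vrms times Irms, and the power factor is their ratio, whatever the cause of the mismatch (phase shift or harmonics). A sketch in Python, with illustrative names and idealised simultaneous sampling:

```python
# Real power, apparent power and power factor from simultaneous
# voltage and current samples over a whole number of mains cycles.
import math

def power_figures(volts, amps):
    """volts, amps: equal-length lists of simultaneous samples."""
    n = len(volts)
    real = sum(v * i for v, i in zip(volts, amps)) / n          # watts
    vrms = math.sqrt(sum(v * v for v in volts) / n)
    irms = math.sqrt(sum(i * i for i in amps) / n)
    apparent = vrms * irms                                      # volt-amps
    return real, apparent, real / apparent                      # PF = W / VA
```

This also hints at why two clamp-on meters can disagree: a meter that only measures Irms and multiplies by an assumed voltage reports apparent power, while one that samples both waveforms reports real power.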
All I know is that the two clamp-on power meters I have rarely agree on how much power is being used in the house, but they still are a good indicator of when the consumption goes up and down.
Re: @Amorous Cowherder
Fortunately, the smart meter will not be able to power individual appliances down until either smart sockets are installed, or the appliances start implementing remote control. Both features are in the pipeline, but not generally here yet.
As I understand it, when you do get remote control from the meter, you will be able to assign certain devices (like fridges or freezers) a higher priority, so that other devices will be powered down first.
The savings I would get would be minimal, because I am already using a power consumption monitor on the house as a whole, and a plug in consumption meter to measure the power of individual devices.
"Never underestimate the bandwidth of a truck full of tapes".
Over the net is fine as far as it goes, but it does not have to be the only mechanism used. That's why most large datacentres use tape with offsite storage pools for their DR plan.
Not too shabby
The CentOS version problem and not storing the VM definitions in both sites should not have happened, but I would not beat yourself up over the sendmail config.
Sometimes it is not enough to do a restoration test. For some services, it's necessary to actually run for a period of time in your alternate location. I suspect that any number of 99%-tested DR plans may be harbouring something like your sendmail problem.
This is normally because of the high cost of a full DR test. As a result, 5 minutes after the last DR test has been concluded successfully, an apparently minor change somewhere in the depths of the environment may invalidate it!
Of course, if you do run from your alternate location for enough time to make sure that you've got most of the bugs, it introduces another problem, that of fail-back. This is something that many, many administrators just do not think about. If you run from your alternate location for any length of time (to rattle any connectivity problems out), you have to have a procedure to revert back to your primary site. And it's not always a reverse of the DR plans, because these are often asymmetric.
The background to this is that most businesses don't think beyond restoring the service. One bank I worked for acknowledged (or at least their DR architect did) that it would be almost impossible to revert back to the primary site if they invoked their full site disaster plan for their main data centre. The services would be back up, but vulnerable to another failure.
Walk the iPhone Shuffle
Is this because they stuff everything else they keep in their pockets into the one they don't put the iPhone in, just so that it doesn't scratch or mark the phone?
I read the first sentence, and was preparing to flame, until I realised that you were being ironic!
Re: @J.G.Harston - again
You may log on to a system, but there is a HUGE difference between a system and the network, and I say again that if you do not understand the difference, you should not be commenting on stories like this.
You really don't log in to a home network, not unless you have implemented domain level accounts and an authentication server, in which case you are really logging into the domain. I strongly suspect that you haven't, although I do admit the possibility.
On all Windows systems I've administered outside a company environment, the network settings are set up on a per-system basis, not a per-account basis. This means that once logged in to a system with any account, all network access is the same. And it is normally not possible for a web site to know what user account is in use on a particular PC (that's why they go to so much trouble putting cookies in your cache, so they can track who you are). So to the ISP's web site that the popup comes from, there is no way of knowing whether the account is Tarquin's or Dad's. That level of information is just not available to the web site.
What the ISPs may end up doing is directing you to a site where you have to log in using credentials that were set up when the broadband account was opened. This would do what they need, but would render the entire home network unusable until the account owner was available. And I suspect that many users (like me) do not use that account, so may not remember the user id and password for the site.
I suspect that I have been locking down my Windows PCs so that most users are not using Admin for longer than you. My background is 30+ years of administering UNIX systems, so privilege separation is ingrained in my psyche, and I learned how to do it for my PCs (together with a mechanism of relaxing it for those STUPID programs that need admin rights) almost as soon as I got an NT-based system in the house, which was after I started putting Linux on all my PCs.
Re: Gesture politics at its worst @Peter Gathercole
I was not advocating it. I was just suggesting it as an alternative to DPI or a simple DNS lookup which are either too complex or too naive to be considered.
And as I said, I am not claiming to be any wizard, although I do believe I have a working knowledge of DNS and IP. I'm sure that the ISPs will do something much more complex.
I understand about shared servers serving many sites. I must admit that I had not fully considered this while drinking my tea, but were I really designing this, I would have spotted it, I'm fairly certain. But the majority of most site visits are probably to servers that do not serve more than one service, (Google, Facebook, YouTube, Ebay, Twitter, the TV channels), or if they do, the sites are closely related, so it would work for a sizeable proportion of users.
Anyway, my point was that it does not have to be DPI, and in fact DPI is probably exactly the wrong way of trying to block porn, as you would have to assemble a complete picture or frame of a video, and then subject it to image analysis to try to determine what the image was. This is clearly more than the ISPs will be prepared to do.
Re: Gesture politics at its worst @N000dles
It does not have to be DPI. All they have to do is reverse-lookup the IP addresses of the initial TCP session setup packets, then see whether the name or domain is on the blacklist. For UDP services (which do not include web browsing) you may need to look up every packet.
And if the lookup does not return an FQDN at all, then they block it anyway as a precaution. It could be a dark network!
This gets around all of the alternate DNS workarounds, but would not stop proxies via systems that are not blacklisted.
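The back-of-an-envelope scheme above can be sketched in a few lines. This is a hedged illustration: the function name and blacklist shape are mine, and the resolver is injected so it can be exercised without real DNS (in practice you would use `socket.gethostbyaddr`, which raises an `OSError` subclass when there is no PTR record).

```python
# Reverse-lookup blacklist check as described above: block if the PTR
# name falls under a blacklisted domain, or if the address has no
# reverse mapping at all (the cautious "dark network" default).
def should_block(ip, blacklist, reverse_lookup):
    """Return True if traffic to `ip` should be blocked."""
    try:
        fqdn = reverse_lookup(ip)          # e.g. socket.gethostbyaddr(ip)[0]
    except OSError:
        return True                        # no FQDN: block as a precaution
    fqdn = fqdn.rstrip(".").lower()
    return any(fqdn == d or fqdn.endswith("." + d) for d in blacklist)
```

Note this also exhibits the shared-server weakness discussed above: every name on a multi-site IP stands or falls by whatever its single PTR record says.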
I've thought this up over a cup of tea. I'm sure that people much better than I can think of even better ways of implementing this!
Your post makes no sense. Individual users on a normal shared home network do not 'log on' to the network (even security-conscious people such as I do not operate a RADIUS server at home). ADSL connections are almost always permanently on, logged in using stored credentials in the ADSL router; individual machines just connect to the network (using a pre-shared key), get a DHCP address (if that is how they are configured), and off they go. Your post shows a remarkable lack of understanding.
What was being said on the radio this morning was that the first time a user from a household connects after the control is turned on, they will be presented with the pop-up, which will prevent further web access until the level of filtering has been selected. The way I understood it, it would appear on whatever device attempts to access the web first. This could be one of the kids' computers, logged in under their own account on the system.
In this day and age, people do not share a single computer. I have (believe it or not) more than 30 devices in the house that can connect to the network and browse the net (computers, laptops, phones, tablets and consoles), and on a regular basis, I would expect to see at least 15 connect on a daily basis (7 active computer users in the house, each with more than one device).
It is possible that it could be made per-device, but that would need something like cookies, and would thus only affect browser traffic. It would not work anyway, because I regularly clear out the cookies on my systems, and it would also mean that the kids' computers would be allowed to set their own policy.
In my case it is mostly academic. The youngest member of my household is 17, so strictly speaking does currently count as a child, but they will be 18 by the time these controls are likely to kick in. But in a household with a scattering of laptops and tablets, the kids will often have their own devices, and could see the request to set the filtering first.
I was listening on Radio 4 this morning, on the way to work, to a discussion of the 'pop-up' or 'splash screen' that would come up.
Neither of the people interviewed who were supporting it said anything about how they were going to make sure that it was the account holder who clicked 'allow'. What if the kids saw it first?
I like my internet unfiltered, and I would love to see how the ISPs intend to implement this. I suspect a DNS filter, plus reverse IP lookup with a subsequent check in a content filter at the ISP (which gets around using alternate DNS servers), and direct blocking of specific known IP addresses. Extend this to IP addresses that do not reverse-resolve (just to be on the safe side), and it would be possible to do what is being talked about.
But all of this is very intrusive, and will probably rely on blacklists in order to work. And it will have to be stateful in order to be remotely efficient. This means that over and above what the ISPs already keep, there will be mine-able information, and also there will be the ability to control what the country sees by controlling the blacklist.
If you can't differentiate between the OS and an application that runs on the OS (the forum software), then I suggest that you go and do some education.
Any application that runs its own authentication mechanism, regardless of the OS it runs on, has the same degree of vulnerability.
I have an account on that site, but it is using the lowest grade of password that I use, so a breach of any site that shares that password is probably not going to have any serious consequences for me.
Re: Bomb Proof @plrndl
That may have been how it was designed, but that does not mean it is the way it now works.
The current Internet has a number of very serious pinch-points, where disruption would not necessarily destroy connectivity altogether, but would cripple performance. Certain organisations and particular buildings around the world are regarded as hubs, and carry a disproportionate amount of the connectivity for a region, a country or international traffic.
But that is not what this article is about. If you are a stock or futures trader, and either your systems or the systems you need to talk to on t'internet are DDoSed, then you may be unable to trade. If this happens, and the news leaks, then your share price may take a tumble, and you may also end up losing company value as well as revenue. Ditto any company that relies on connectivity to trade or operate, and there are a large number of those.
"most likely be configured to perform boring, tricky tasks like parking"
I thought there were cars that pretty much did this already.
Re: There was technology to maximise hardware usage before virtualisation
Generally completely agree with you.
But there are situations where it is useful, and also where it is essential.
It's useful to allow two different operating systems to run on the same hardware. Back in the late 1970s, the University I was at turned off their IBM 360/65 running OS/360, and migrated the workload onto a proto-VM on their 370/168. Normally the 370 ran MTS (look it up), but by using a VM, it could also do the legacy OS/360 work at the same time.
Currently, you might do the same to run Windows next to Linux on the same system.
In addition, many enterprise OSs running today were initially designed more than a couple of decades ago. Back then, 2 CPUs in a system was novel outside of the Mainframe world, so the same OS facing a machine with 1024 CPUs may struggle. OK, the OS should have been updated, but when these OSs were written, people probably did not foresee such large systems (640KB, anybody?), and built in serious limitations that require a lot of work to overcome. Unfortunately, these OSs are often becoming legacy for the vendors, so it seems unlikely that the necessary work to overcome the limitations will be done. So often, it makes sense to divide up your workload into separate OS instances, and stick each into its own VM.
Re: DNS look up @Irongut
They can knobble this as well. All they have to do is block TCP and UDP to port 53 on any systems other than their DNS servers in either the router they supply to you, or within their infrastructure.
Would be hugely unpopular with most of the readers of this site, but would make no difference to the majority of their customers.
Re: Don't forget X
I have no knowledge of Netware myself, but if you are talking X11, then it's UNIX, not Linux. Linux had X11 servers and clients (of course), but X11's home was UNIX (and to an extent, some proprietary OSes like VMS).
If it was X11, then what it gave you was the ability to run the GUI administration client programs remotely on any workstation with an X11 server (if you are unfamiliar with it, the server controlled the screen, keyboard and mouse, and programs that attached to this X11 server were clients, wherever they ran), meaning that you would have the ability to remotely administer the Netware server, long before RDP, VNC, or Citrix were on the scene.
X11 servers were available for UNIX and Linux workstations, OS/2 and even Windows NT and later systems, as well as thin clients from people like NCD and Tektronix, so there were a wide variety of workstations that you would have been able to use.
People tend to forget what an enabler X11 was.
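A minimal illustration of that client/server split (hostnames are made up): a program running on the remote server draws on the local workstation's X server, steered by the DISPLAY variable.

```shell
# On the workstation running the X11 server (it owns screen/keyboard/mouse),
# permit the remote host to connect, then start a client there.
xhost +admin-server              # allow X11 connections from admin-server
ssh admin-server                 # log in to the machine that runs the client
export DISPLAY=my-workstation:0  # point X11 clients back at the workstation
xclock &                         # the clock window appears on the workstation
```

The client could just as well have been a Netware administration GUI; the point is that it ran on the server but displayed anywhere.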
Re: No, No,Thrice No
I was involved in reviewing and updating part of the platform security standards at a large UK bank, and I can tell you that the IT department are the police, not the legislators.
What happens is that a security policy is defined by either an IT security department, or by specialist consultants. This states things in very broad language, such as controlling user access and data flow between security zones. They don't specify technologies, protocols or methods.
The IT department gets this deliberately woolly and poorly defined policy (by definition, as it will be architecture independent), and then has to try and implement it.
Security people are all about saying no to things that they don't understand. The business people want to be able to do anything without restrictions. There is a natural and totally understandable conflict here.
The IT department has to work out what the business users really need, rather than what they want, and then convince the IT security people, who always have a veto, that it is safe. This normally means that the IT architects are caught between an irresistible force and an immovable object. And always, one end of the process thinks that the IT department have failed.
Having come up with a design that they have fought tooth and nail to be able to implement, and done so at the lowest cost possible, often in completely unreasonable timescales, the IT department then have to defend the decisions taken to the users, who rarely give any thought to why security is there, other than to stop them doing their job.
Unfortunately, the group with the most influence are the people who feel that they earn the money for the company, even though they are the least qualified.
It's a no win situation.
Re: I came close re. MS Office Home and Student
used to allow three installs.
The current incarnation only allows one, and is more expensive.
Full tests are good
I did most of the technical design for the backup/recovery and DR of UNIX systems at a UK Regional Electricity Company back in the late '90s.
The design revolved around having a structured backup system based around an incremental forever server and a tape library.
One of the requirements of getting the operating license for the 1998 deregulated electricity market in the UK was passing a real disaster recovery test. A representative of the regulator turned up on a known day, and said "Restore enough of your environment to perform a transaction of type X". The exact transaction was not known in advance.
We had to get the required replacement hardware from the recovery company, put it on the floor, and then follow the complete process to recover all the systems from bare metal up. This included all of the required infrastructure necessary to perform the restore.
First, rebuild your backup server from an offsite OS backup and tape storage pool, and reconstruct the network (if necessary). Then rebuild your network install server using an OS backup and data stored in the backup server. Then rebuild the OS on all the required servers from the network install server and data from the backup server. All restores on the servers had to be consistent for a known point-in-time to be usable. Then run tests, and the requested transaction.
And where possible, do this using people other than the people who designed the backup process, from only the documentation that was stored offsite with backups, using hardware that was very different from the original systems (same system family, but that was all).
Apart from one (almost catastrophic) error in rebuilding the backup server (the install admin account for the storage server solution had been disabled after the initial install), the process worked from beginning to end. The inspector was informed, but allowed us to fix it and continue, because we demonstrated while he was there that we could make a change that permanently overcame the problem. Much running around with tapes (the kit from the DR company did not have a tape library large enough!), and a frantic 2 days (the time limit to restore the systems), but it was good fun and quite gratifying to see the hard work pay off. I would recommend that every system administrator goes through a similar operation at least once in their career.
We were informed afterwards that we were the only REC in the country to pass the test first time, even with my little faux pas!
When supply and distribution businesses split, we used the DR plan to split the systems, so having such good plans is not always only used in disasters, and I've since done similar tests at other companies.
Re: Point 3 is wrong
My view is that it depends entirely on how much has changed in the OS since it was installed, and that is probably determined by the function of the system being backed up.
I've worked in an environment where every server in the server farm is a basic install with scripted customisations, with all the data contained in silos that can be moved from one server to another (the bank I used to work for had been doing this on a proprietary UNIX since the turn of the century, before Cloud was fashionable). These systems can be re-installed rather than restored.
I've also worked in environments where each individual system has a unique history that is difficult to replicate or isolate. These systems need to be restored.
One example of this latter category is the infrastructure necessary to reinstall systems in the former category!
There just is not one fixed way of doing things. Each environment is different.
Re: Don't blame Microsoft but... @ShelLuser
Bloody bloody. I must be slipping.
I actually read the whole of Section 9 of the service agreement policy to see the link with GiTS before the obvious smacked me in the face!
Re: Keeping the beaurocracy alive... @beck13
I was the one who brought up Tax discs, and I did refer to the Post Office being used to obtain Tax discs, although I did not sufficiently discriminate between the Post Office and Royal Mail. My mistake.
My other points about the Post Office in rural areas still stand IMHO.
If it were profitable for TNT et al. to put a last mile delivery service in, they would. They don't, so it can be assumed that they have judged that it is not worth it. IIRC, Royal Mail originally said that they would at best break even doing the last mile (although that is really not descriptive of what is done), and would more likely end up doing it at a loss. Unfortunately, they were forced to do this in order to allow other companies to break the total monopoly that Royal Mail had for many years.
It is probable that residents of most medium sized or larger towns could live without a local Post Office day-to-day. It is similarly likely that rural areas need Post Offices more. But I would bet that many of the people who say that they can live without it probably do not know what they could use it for. They are for far more than just buying stamps.
"There is no such thing as a Tax disk" @David Cherry
You might like to tell the DVLA and the gov.uk websites that.
Re: Keeping the beaurocracy alive... @Me
Damn. Bloody Americanisms. Of course I meant disc.
Re: Keeping the beaurocracy alive... @AC 8:13
If you can live without a mail service, then I suspect that for you the Post Office is irrelevant.
But I also suspect that when you need your next car tax disk (assuming you drive), you may find one of the Post Office and Royal Mail services useful, either to collect in person or to deliver the disk. And if you don't drive then you are not typical, and your comment is irrelevant.
Or you want your next bank card to be securely delivered, or that job application that the employer wants documentary evidence for and you want to be tracked, or any number of things for which a physical delivery is required.
What you may not realise is that people like TNT and DHL (I think) and others actually use the Royal Mail for last-hop delivery, because they can't be bothered to raise the money to put a national delivery mechanism in place for themselves. If there was no Royal Mail to do this, these alternative services would become much more expensive.
And for many people, particularly in rural areas, Post Offices fulfil the function of bank, basic shop, newsagent and social hub, when no other shop would remain open.
Royal Mail and the Post Office are not perfect organisations (especially in light of this report), and their role is definitely diminishing, but if they were to disappear overnight, you, along with everybody else, would notice at some point.
You're missing the fact that these are not single networks, but networks of networks, with fenced links between them, and at arms length from the core University networks. The only really complex part is the distributed user authentication that allows access to the core systems.
It really is a case of divide and conquer.
Re: Does this really count as BYOD? @John H
If you look at large corporate BYOD programs, one of the conditions is often that you surrender a lot of control of your own device. This normally means purchasing hardware from a list, installing company supplied tools like VPN, encryption and AV, and also surrendering some control (having additional administrator accounts created). Certainly challenges the idea of it being your device.
What most Universities do is to have an open(ish) student network (or, in fact, many of them, often firewalled from each other and the main University campus network), together with a portal or gateway on each that allows them restricted access to the central file servers and other facilities of the core University networks. In addition, there is firewalled access to the Internet.
I don't see why that model cannot be used by business. It keeps your core network safe, while providing much of the access that is required by the user.
My kids were always told that it was their responsibility to make sure that their systems were adequately secured, and the only assistance given by the college was to perform standalone virus scans. If the system failed the scan, they were offered one of the free AV packages, and told to either install and run it, or get someone to do it for them. Their machines/accounts were blacklisted until it had been proved to be virus free.
Re: Increased energy density leads to increased risk @Craigie
But in order to liberate that energy from a chocolate bar, you need to oxidise (i.e. burn) it in one way or another, and you need atmospheric oxygen, so you ought to take the mass of that into account as well.
Chocolate can be made to burn if you try hard enough, but I'd love to see you 'recharge' your burnt chocolate bar.
But the nature of a battery means that you cannot take the cheap route of just setting light to it. I suspect that the calorific value of oxidising the components of a battery may be even higher than the rated re-usable capacity of a battery.
In short, you're not comparing like figures.
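To put rough numbers on that comparison (both figures are ballpark public values, not measurements, and the chocolate one only counts once — you can't recharge it):

```python
# Back-of-envelope comparison of chocolate vs. Li-ion specific energy.
# Assumptions: ~2,300 kJ per 100 g for chocolate (a typical nutrition
# label figure) and ~250 Wh per kg for a modern Li-ion cell.

chocolate_kj_per_100g = 2300                              # assumed value
chocolate_mj_per_kg = chocolate_kj_per_100g * 10 / 1000   # -> 23.0 MJ/kg

liion_wh_per_kg = 250                                     # assumed value
liion_mj_per_kg = liion_wh_per_kg * 3600 / 1e6            # -> 0.90 MJ/kg

ratio = chocolate_mj_per_kg / liion_mj_per_kg
print(f"chocolate: {chocolate_mj_per_kg:.1f} MJ/kg")
print(f"Li-ion:    {liion_mj_per_kg:.2f} MJ/kg")
print(f"chocolate holds roughly {ratio:.0f}x more energy per kg, once burnt")
```

On these figures chocolate wins by a factor of about 25 per kilogram, but only as a one-shot fuel, and only after adding the oxygen needed to burn it.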
Do I spot a supplier tie-in?
In order to use this, you have to be an Office365 registered user?
OK, this is currently just for UK Government employees and information partners, and I know that I have to temper my dislike of Microsoft's business practices, but this feels like Microsoft just having to wait for all UK Government on-line services to use this mechanism before signing up the entire UK adult population on a subscription service.
Where's the openness, fairness and competition?
Re: Hang on a sec
The difference is that while a Linux update will reboot a system once, there is a good chance that if you are updating Windows with other components (like hardware drivers), Windows will reboot more than once, sometimes many more times. It's got better than it used to be, but.....
Updating a kernel of any operating system on-the-fly is difficult, regardless of whether it is a desktop or a server system.
The problem is that the kernel is more than just another programme, and is being used all the time by running processes, and one of the things the kernel does is to track and allocate resources to the running processes. In theory it is possible to replace the kernel while it is running without disrupting the processes that it is controlling, but to get it right under all circumstances is difficult, time-consuming to test and thus costly.
A micro kernel implementation may be easier to update, but that assumes that you can re-bind running processes to new instances of a service on-the-fly. But even if you can do this, it is likely that there is one or more components that will require a system re-start if they are updated (the thread scheduler is one example).
With modern on-the-fly service migration, it may be possible to boot the new kernel in a different VM, and then migrate processes into the new VM, but most people just put up with losing their system for 10 minutes.
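On Linux, kexec gets part of the way there: it jumps straight into a replacement kernel without going back through firmware and POST, although running processes are still lost. A sketch (the image paths are assumptions for a typical distribution):

```shell
# Stage a new kernel image for kexec, reusing the current kernel's
# command line, then cleanly shut down services and jump into it.
kexec -l /boot/vmlinuz-new --initrd=/boot/initrd-new.img --reuse-cmdline
systemctl kexec
```

That trims the reboot from minutes to seconds, but it is a faster restart, not the on-the-fly kernel replacement described above.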
I used to drive past there every day for months without even knowing what it was!
Re: Router Costs @Why Not?
That's one of the reasons why I always provision my own router. It's a cost I bear, but one I believe is reasonable to maintain independence from any ISP.
I don't trust them not to put some nasty spying functions in their firmware to leak information about my network and the devices installed on it.