Re: And the 755 is not water cooled.
Since I posted the last two comments, TPM has corrected the article without adding a correction note. Just saying, to explain what I was commenting on.
The 755 is what Watson was built with, and is a cluster of slightly altered P7 750 nodes with InfiniBand gluing it all together.
The Blue Waters machine would have been a P7 775 cluster, not a 795, which is the large commercial system.
Back in the day, you had a cabinet or two for the processor(s), at least one for the memory (especially if it were core), another for each disk string controller, and then more for the disks themselves, and then additional cabinets for front-end processors, tape drives and any other ancillary devices.
It was perfectly possible to add and remove memory, disk controllers and strings of disk without having to replace the computer as a whole. Or you could replace the processors, and leave the rest of the system untouched.
I remember one weekend in 1985 when I went home on a Friday night, after using NUMAC's crusty old IBM 370/168, which was collapsing under the strain, and came back on Monday morning to find the same system replaced by an Amdahl 5860 that, to the user, was identical, just a lot faster.
Professor Harry Whitfield (director of the computing laboratory at Newcastle University at the time) wrote the following in his annual report for the year:
"The installation of the Amdahl 5860 in late September 1985 and its introduction into service in early October must be regarded as the major event of the year. The whole process went so smoothly (and unannounced) that users 'merely' noticed that the system had suddenly become much more responsive and five times faster."
I admit the analogy is not perfect, but there are serious similarities.
It's the model BT has used for telephone lines forever. For metered services (like telephones used to be), it made absolute sense for BT to split out the maintenance and equipment cost from the usage cost, so that they still got money to provide the service even if no calls were made.
Nowadays with everybody offering packages with inclusive calls, it makes less sense, apart from the ability for the provider to hide some charges in the headlines of the advertising ;-)
For people asking for no line rental, which would they prefer: £13 a month for broadband plus £14.60 line rental, or £27.60 a month for broadband with no line rental? Because that is the choice they would get.
It does not matter how it is charged: the ISP (possibly through BT) has to pay for the upkeep of the wires/fibre from the exchange to the premises, the exchange itself, and the equipment in the exchange. It will either be in the line rental, or added to the package cost. Assuming that taking the line rental out would leave the package costs unchanged is just loose thinking.
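The comparison above is just arithmetic; here is a sketch using the illustrative figures from the comment, worked in pence so the totals are exact integers:

```python
# Illustrative figures only (from the comparison above), held in pence
# so the totals are exact integers rather than floating-point pounds.
broadband = 13_00      # £13.00/month broadband with separate line rental
line_rental = 14_60    # £14.60/month line rental
bundled = 27_60        # £27.60/month "no line rental" package

split_total = broadband + line_rental

# Either way the upkeep of the line gets paid for; only the labelling differs.
assert split_total == bundled
print(f"split: £{split_total / 100:.2f}, bundled: £{bundled / 100:.2f}")
```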
For the specific statement 'Telcos in other countries are happy to provide a "dry-pair" for the DSL without voice services' that would be true if there were really separate bits of kit in the exchange for the analogue phone line and the DSL link, but I suspect that in modern digital exchanges, that is not the case. Even if the line was not used for voice, I suspect that the kit would be the same.
Why? For normal users who do not provide internet visible services, but only use client services, the change will be almost completely invisible. Outbound connection requests will still be given ephemeral port numbers, just like they are at the moment, and these will be recorded by the NAT server to allow packets to be routed back correctly.
In fact, if you have a cable or ADSL router/modem, you are almost certainly running NAT already.
It is only if you offer inbound services to your network that you are likely to notice anything at all, and if you are, you probably already know how to get around any problems. And it's not like they are not telling you what is happening.
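The ephemeral-port mechanics described above can be seen with a couple of sockets. This is a local sketch (both ends on 127.0.0.1), but the OS-assigned source port it prints is exactly the value a NAT box records in its translation table to route reply packets back:

```python
import socket

# Minimal illustration of ephemeral source ports: every outbound TCP
# connection is given a local port automatically, and a NAT device keys
# its translation table on exactly this (address, port) pair.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
server.listen(1)
server_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
ephemeral_port = client.getsockname()[1]   # OS-assigned source port

print(f"outbound connection given ephemeral source port {ephemeral_port}")

client.close()
server.close()
```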
IPv4 or IPv6 addressing is largely irrelevant to most internet users. DNS, and stateless address autoconfiguration or DHCPv6, take the pain out of knowing IP addresses.
Let me ask you. Do you know, off the top of your head, any IP addresses of servers on the Internet?
And do you care what addresses the systems on your private network have?
For most home users, the answer to both of these is no, in which case, apart from the pain of switching your router and systems over to only use IPv6, the change will be almost entirely unnoticed.
Of course, some of us (and I am in this category) do care, and I am dreading the switch, because I want fixed addresses for certain systems in my network (no uPNP for me, no sir). I have some learning to do to find out what changes I'll need to make, and I'm not looking forward to that.
If Plusnet give a fixed IP and port number(s), then it is still possible to do port forwarding even in a double NAT environment. You just have port forwarding on both NAT devices.
I would be quite happy to be given a range of ports (say 16) for input services on a fixed IP address, as long as I knew what the external port range was, and what ports each would map to when presented to the local NAT device. This would be preferable to me than having all the ports available on an indeterminate IP address, and having to use a dynamic DNS solution to find my servers on the Internet.
A more complex setup, but I'm fairly certain that the people who want it are the ones most likely to understand how to set their side up.
Alternatively, you could run your ADSL/cable router in bridge mode, and have them map directly to your servers (only having ISP run single NAT in this case), but that is not a configuration I would want as the ISP would then have sight of your private network unless you put another firewall in.
"NAT makes it impossible for anyone on the internet to establish a connection to a computer behind it"
Not true. You just have to include port information in the address, and set up an inbound port redirect on the device doing the NATing. So outside, you advertise, say, port 2080 for your web server, and have the NAT device redirect inbound packets received on the 'RED' side port 2080 to port 80 on the private address of the device on your 'GREEN' or 'ORANGE' network. All of the devices that I have used that provide NAT have this functionality, so I'm sure that an ISP could deploy it.
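As a sketch of what that inbound redirect does, here is a minimal user-space equivalent in Python. A real NAT device rewrites packet headers in the kernel rather than proxying a connection like this, and the 'RED'/'GREEN' port numbers are just the example ones from above:

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes one way until the sending side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # pass the EOF along
        except OSError:
            pass

def forward(listen_port, target_host, target_port):
    """Accept one connection on the 'RED'-side listen_port and splice it
    to the 'GREEN'-side server, e.g. forward(2080, "192.168.0.10", 80)."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", listen_port))
    listener.listen(1)
    inbound, _ = listener.accept()
    outbound = socket.create_connection((target_host, target_port))
    # One thread per direction, so traffic can flow both ways at once.
    reply = threading.Thread(target=pump, args=(outbound, inbound), daemon=True)
    reply.start()
    pump(inbound, outbound)
    reply.join()
    inbound.close()
    outbound.close()
    listener.close()
```

Point a client at port 2080 on the outside address and the server on internal port 80 answers, which is all the NAT redirect achieves (albeit per-packet rather than per-connection).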
In case anybody does not understand, a valid URL can include a port number, so you can have a URL like www.mywebsite.co.uk:2080/home.html
It works, but there are caveats, particularly for URLs that refer to other pages on the same site. But it works very well indeed for single-port services such as SMTP, as long as the non-standard port is known to both ends.
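The standard library will pick the port straight out of such a URL; note that a scheme (`http://`, absent in the shorthand above) is needed for the parser to recognise the network location:

```python
from urllib.parse import urlparse

# The example URL from above, with an explicit scheme added so that
# urlparse treats "www.mywebsite.co.uk:2080" as the network location.
parts = urlparse("http://www.mywebsite.co.uk:2080/home.html")

print(parts.hostname)  # www.mywebsite.co.uk
print(parts.port)      # 2080 (an int; None if no port is given)
print(parts.path)      # /home.html
```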
IIRC, DNS has support for providing port information as well as IP addresses for name lookups (SRV records); it's just rarely used.
Cable only covers cities and large towns. Once you get into the sticks, cable is almost non-existent.
I'm surprised by central London, though.
I had not considered using NOPs to make the return address less critical, nor the fact that you could find the absolute address of the stack frame relatively easily (although it is compiler specific). That stack_smashing paper is dynamite.
Each exploit has to be tailored to the OS and processor, but I guess that Wintel is a big target.
Is this PDF safe to read in Foxit?
In this case, it cannot be the kernel stack.
OK, I've read the page, and it falls into the "change the return address" scenario that I mentioned. Having read that, and done what I should have done before and worked out the way stacks are stored, it looks as if most systems grow their stack down (from higher to lower addresses), and I admit that the return address will be stored in memory at a higher address than the buffer, so could be overwritten.
But I still think, for several reasons, that this is more likely to cause a DoS than a remote code execution problem in this case.
I'm always a bit sceptical about the danger of this type of bug. Sure, it will cause unpredictable errors, but let's look at what could happen.
As they talk about stack overflows, I'm presuming that the URL is being copied into a variable stored on the stack, i.e. a local variable. When the exploit runs, the memory locations after this variable will contain data that is under the attacker's control.
So: the memory locations after the variable will hold another variable, or possibly a stack frame header containing the return address and perhaps some saved register contents.
If it's another variable or saved register contents, then the previous contents will be lost, and/or some unpredictable behaviour might happen when the variable is used. It might be a pointer, which may mean that some other data address could be clobbered later in the code. It could be a vector (pointer to some code), but in order to exploit this, you'd have to understand the rest of the code really well. If it's a stack frame (and I've not checked the direction of stack growth so don't know whether it will be the frame for this function or another), then the return address may be damaged, which could be used to control where the code returns to.
The comment from Paul Ducklin of Sophos, re. "The crash, which is a side-effect of a stack overflow, pretty much lets you write to a memory location of your choice," seems an over-reaction, as you can only overwrite addresses following this buffer, within the page the stack is in or a contiguously mapped later page. Anything beyond that will generate a segmentation or address violation as soon as the copy touches an unallocated address. To me, this is not the same as "a memory location of your choice".
You've potentially got some executable code (if that is what the URL contains) stored in a memory location you should not have access to, but it's not in the program text, and I've not yet seen a method described of triggering that code (the return address in a stack frame header is the only one I can see which would affect the execution stream). This does not appear to be a practical means of injecting code; it is much more likely a DoS attack against the user running Foxit.
So it is important (all bugs should be regarded as such), and I'm sure there may be some special cases I've not spotted, but on casual inspection it can only be described as a DoS vulnerability with a 'potential' remote execution problem. Saying any more would be FUD.
Possibly someone could educate me if I am wrong.
I hope that you are only using usenet for your content, because if you watch 'live' TV over the Internet (yes, it's a bit of an ambiguous definition, but I believe it means material that is broadcast over the Internet at the same time as it is broadcast to air, even if delayed by a few minutes), then you still need a TV licence. Your computer becomes TV receiving equipment under the terms of the law.
But if you are using usenet, expect a letter from your ISP accusing you of copyright infringement.
What's not clear is whether the fact that you could watch Internet-broadcast TV but don't is enough to remove the requirement for a licence.
It's not quite that straight forward.
Shareholders are already on the hook, as they are unlikely to get the money they invested back. They are just as much creditors as the workers who are owed pay.
In the case of a company that is negligently driven into large debts, especially if money is owed to HMRC (in the UK), then the directors can be sued for corporate negligence, which can result in them being banned from becoming a director for a period of time, personally heavily fined, and in some cases, sent to prison, especially if fraud can be proved.
Limited Liability companies do not offer complete protection, but I admit that there are ways of extracting value from such a company and walking away without the debts.
There are two points here.
One is that it allows companies to hire more people, in order to select the ones worth keeping after three or six months. The other is that companies are so petrified by the redundancy conditions that they do not take people on until absolutely necessary, for fear of the cost of making them redundant later if there is a downturn in their business. This is also why so many companies prefer to use agency staff until they know an increase in headcount is really justified: it avoids the redundancy packages.
In both cases, if it were easier to shed workers, companies might be prepared to employ more people.
I personally would prefer to work for a company knowing that it could shed staff more easily if that stops the business going bust and everyone being made redundant, even if it did lessen my job security. Some safeguards are needed, but giving a company in difficulty the choice between going bankrupt because it keeps too many employees on and has to keep paying them, or going bankrupt because it cannot afford the redundancy packages it has to offer, is no real choice at all. Both drag the company down.
What is happening here is that the Indian government is retrospectively changing the tax rules, and then expecting foreign companies to just roll over and pay more tax for years they already thought were closed. It is a policy that is specifically designed to extract more money from non-Indian companies that are operating in India.
It's within what a government can do, but is clearly not going to make companies operating in India happy.
I think that it depends on whether you are a form-follows-function person or not.
Thinkpads are functional. There is little wasted weight or space, the screens and keyboards are/were the best in the business, they are not too bulky, and they will withstand the day-to-day wear and tear that a road warrior puts them through. And there is nothing in their design that makes them unpleasant to use. The lips and edges you talk about are all deliberately engineered so that when the lid is shut, they lock together and not too much strain is put on the hinges. Seen many Thinkpads with broken hinges? No, I didn't think so.
Add to this an engineering, maintenance and warranty strategy that means they can and will be fixed if they break in warranty, full maintenance manuals available so third-party maintainers can fix them when they are out of warranty, and a large pool of donor systems for parts, and they have an extended second- and third-user lifetime: you will still see 6-7 year old Thinkpads in regular use (my T30 has a manufacturing date of 2005, and the A20 which runs as my Linux firewall is even older).
I'm sure that if you look, you will still be able to buy brand new OEM batteries from one of the auction sites for any Thinkpad built this century. Try that for a decade-old Dell or HP.
Of course, if style is more important, then a Sony Vaio or any of the Ultrabooks will do the job, but don't expect them to have the same life expectancy. But if you are after style, it does not matter if it breaks after 12 months, because you will probably be replacing it for the latest 'shiny' toy anyway.
It may have been that way in the US, but I was involved with a customer still installing new TR kit beyond Y2K. I admit it was mainly because the customer had a large investment in it, but when the organization split, the bit I went with dumped TR, and jumped straight to 100baseT.
In a lot of commercial organisations, being able to use a Premises Distribution System to organise your cabling for TR (and twisted-pair Ethernet, phone and RS232 terminal traffic) was a real benefit, and one that 10base2 thinwire Ethernet could not take advantage of. Thus Token Ring persisted.
I saw the benefit of a PDS when I saw 1Mb/s AT&T StarLAN installed for the first time in the late '80s.
You're confusing the physical MAC layer with IP.
Token Ring and Ethernet are comparable. IP can run over either, and many more physical networks as well. Although it does not directly follow the 7 layer OSI network model, it is a layered protocol (MAC, IP, TCP/UDP, application protocol), and provided it meets some basic requirements any physical layer can be used to transport IP.
Token Ring is exactly as routeable as Ethernet when running IP. Routing has nothing to do with the MAC layer, except for very simple protocols such as IPX or NetBIOS.
I have worked at numerous locations where there were multiple networks using Token Ring, Thinwire (10base2) Ethernet, Twisted pair (10baseT) Ethernet, ATM, FDDI and even SLIP and PPP, all routed together using Layer 3 routers.
What made Token Ring better than 10base2 or 10base5 bussed Ethernet is that it did not use CSMA/CD to arbitrate use of a network segment, so it worked much better at high utilisation rates. As soon as 10baseT switched Ethernet came along, that was no longer enough of an advantage, and Token Ring died.
If you look at network topologies, those with multiple tokens or a slotted ring (such as the Cambridge Ring) could carry much more data than Token Ring, but were more complex to set up.
If you had ever had to debug a token ring implemented with MAUs, when one system was running at the wrong speed and causing lost beacons (or beaconing), then you will be glad that TR eventually died!
I was only thinking this morning, as I connected my tablet to the cable that had fallen down behind the table and had to be fished out again (the one my wife complains about whenever she vacuums), how useful it would be to just put the tablet down in the same place and know that it would charge.
Ditto all the cables in the car.
So yes, wireless charging would be a good thing. Even better if there were a standard, and I could have a couple of them scattered around the house, charging all the phones, remotes, media players and other gadgets wherever I wanted to be in the house.
Yes, there was a terrific dynamic tension between these two characters that persisted throughout the entire show.
I see Andreas Katsulas on other shows (from the past, obviously), and when I do, I can't help seeing him as G'Kar.
Off topic, I know, but....
I much preferred Michael O'Hare as the commanding officer of B5. With Sinclair being the reincarnation of Valen, and being involved with a part-human Delenn (as would probably have been the case), it would have led to an interesting dynamic. Having 'The Scarecrow' dropped in at the beginning of Series 2, even if he was introduced as the 'Starkiller', lost some of the world-weary ordinariness (quite remarkable for an SF series set in the future) that Series 1 had.
Series 1 did not really start the main story arc (although there were plenty of forward references that only became important later on, such as B2); it set the back-story for the way Babylon 5 operated that was necessary in the later stories. None of the ST franchises managed to achieve the same level of detail, although DS9 probably came closest.
I really would like to have seen how Series 4 would have turned out if JMS had not had to shoehorn in the Shadow Wars conclusion and compress the Earth liberation storyline into the same series. The Telepath Wars storyline for Series 5 was too weak (especially after seeing what happened in 'Endgame' in Series 4), and the loss of Commander Ivanova and Marcus, together with the changed role for Michael Garibaldi meant that there was too little continuation in the last series.
I must admit that I was a bit tearful the first time I saw the final episode "Sleeping in Light", especially seeing B5 finally destroyed, and again when doing a frame-by-frame on the easter-egg cast and crew video at the end of the closing credits. Makes me a bit of a sad geek really.
One last question: whatever happened to Lennier? I know, I've read what the Lurker's Guide and Wikipedia have to say, but I'm sure there is an interesting story in there somewhere.
Point well made, but I'm sure that Xerox (Star), MIT (X Windows), Sun (SunTools), Digital Research (GEM) and even Apple (Lisa) were using the term "window" and its plural form in relation to computer systems a long time before MS Windows version 1 went to market.
Unlike us (because of the NDA that is part of the settlement), Samsung know which patents Microsoft have hit them with over Android in the past. If they avoid those patents, they may be able to avoid having to pay the license fee, which may save them dollars per phone. They will also have some control in order to avoid the Apple ones as well.
I suspect that the main ones MS roll out frequently are the FAT patents, some of which will expire shortly, but I believe we've never found out the full set.
WRT the FUD claim and the links to URLs that you claim will affect iOS and Android.
Question. Do you understand the application deployment model in either iOS or Android?
In both cases, the way applications run is handled by a layer ABOVE the OS. So when you talk about it 'rooting' the OS, that is almost certainly not the correct terminology. Rooting by definition means getting access to the root account on UNIX-like OSs.
What has been compromised here is the application framework, *NOT* the underlying OS. In both cases, the underlying OS will be untouched. In terms of what a user sees, the result may appear to be superficially the same, but if you are going to make such claims, it is vitally important that you understand what you are talking about. Anything else is FUD, especially if you are spreading fear as a result of your uncertainty and doubt.
These specific issues are rather analogous to a Facebook application or account being hacked or a vulnerability in IE or other browser, while the underlying OS, whatever that is, remains untouched (unless, you run the browser from an admin account of course, in which case all bets are off).
This is one of the historical differences between UNIX-based OSs and Windows. Unless you take specific actions, you will *NOT* be running applications as a privileged user on UNIX, BSD or Linux. This was not the case on Windows before Vista, where many people's normal accounts had full Administrator privilege. This has changed, for which I say Hurray! but it took a long time for MS to recognise this (although NT was designed with a good security model from the ground up, even though it was rarely used to full potential).
I say again, this time to RICHTO. Read the article you link to.
This statistic is for defaced websites, not OS vulnerabilities. If you don't know the difference, then you should probably not be taking part in these discussions.
I'm also not sure about the data from Zone-H. The stats you point to are for 2010, and looking at the dates on the news pages (latest, September 2012, total news items posted in 2012, 2, total posted in 2011, 5), it looks like it is a site in decline.
Read and comprehend the article you point to.
It is talking about what the rootkit does once it is installed, and you are right, it does look quite sophisticated, and unpleasant.
But there is nothing in the article about how the rootkit gets onto the server, and this is where the strength of the OS security model comes into play.
As long as an OS has some privileged mode that allows the OS to be changed, it can be compromised. This is true about all currently deployed OSs around at the moment, and is necessary in order to be able to install patches. If you look at it from another angle, there is little difference between a rootkit and an OS patch, apart from the fact that one is supposed to improve the system, and the other is not.
If you were to look at compromised Linux systems, and work out how they were compromised, I'm certain that most of them will have been initially infected as a result of human error rather than a deficiency in OS security. You know, something like an administrator using the same password or SSH key for multiple accounts, or having trusts set up from untrusted to trusted systems. And I also think that I am on safe ground in saying that if you were to look at the ratio of compromised systems to the total number of systems of a certain type, Windows would show a higher rate of infection than Linux.
It is true that Windows AV solutions are able to detect rootkits and other persistent infections once they are present, but this article is talking about zero day detection rates. I would much prefer to use a system that is less vulnerable but which had poorer detection tools, than one that let malware in but detected most of it sometime after the infection.
It should be seen as axiomatic that AV software is a market that only exists because of poor OS security in the past. There is no market for Linux or OSX AV because there is no history of significant infections on those platforms. If there were, there would be credible AV solutions for them.
What the AV software vendors have to accept is that in an ideal world, their comfortable little niche should disappear as OS security gets tighter. This is currently why they need to spread FUD in order to protect their income stream, and the tone of some of the comments here add to this.
I'd noticed the fact that there were videos missing from the Android YouTube app compared to the same search on a desktop. This also appears to be the case on the YouTube support incorporated into Blu Ray players and SmartTVs.
I think that it is the case that if the YouTube app does not think the correct container or codec is installed on the device, it won't display the video in the search.
I think it is possible to get YouTube in a browser to tell you the format of any video, but I can't remember how, and I can't check as YouTube is blocked/filtered at work.
I remember when I got my first self-winding watch about 40 years ago (it was also a cheap bit of crap, made by Timex), and I remember thinking how bulky it was compared to the cheap Ingersoll I had before it.
Some people just can't wear heavy watches.
I still prefer slimmer watches, even though I'm now wearing a lump of stainless steel that must weigh close to 100g, because I cannot find something durable and lighter that does everything I think I want (although the last time I used the stopwatch was months ago) at a price I'm prepared to pay. When did watches become so expensive?
There is a school of thought that suggests that some errors are introduced intentionally by the publishers, and are used to identify the original source of copies of printed works.
This is particularly said of music manuscripts, so that if someone hand-copies sheet music still in copyright into Sibelius or Rosegarden to produce 'clean' copies, supposedly free of copyright, the publishers can still identify the source and take appropriate action.
I keep reminding members of the choir I sing with what they can't do when it comes to music copyright. All I can say is thank heaven for the library service in the UK, who can loan/rent out multiple copies of music to choirs and orchestras at reasonable rates to reduce the temptation to buy one copy and just photocopy it.
Do you really believe that surveillance satellites work like those in films such as "Behind Enemy Lines"?
Put a Linux-based firewall running on anything with a Pentium 3 or later in as a boundary firewall. Almost free, and perfectly capable of doing this. Use something like Smoothwall with some of the community mods.
"In other words, parents living with their children must remember to click on "no thanks" to filtering, otherwise their internet access will be restricted accordingly to block supposedly harmful material."
So. Am I going to get a mail from my ISP asking whether I have children living with me? Or are they going to look to see whether anybody is visiting moshimonsters? And how are they going to contact me? (I rarely use the mailbox provided by my ISP because the mail name is crap).
I really think that all MPs should be made to attend compulsory "How the Web works" training, sit an exam to show they've understood, and if they fail it, be barred from taking part in debate or votes on laws affecting Internet access. Internet access is becoming so essential to daily life that the people agreeing legislation have to understand enough to stop suggesting stupid, unworkable laws.
Ultimately, I agree this is about private cloud, so data security is still your/their responsibility.
I do not understand why companies would be prepared to put their data, which for many companies defines them, onto a public infrastructure where they have no control over who can access it, and in many cases cannot even tell which geographical location it resides in (which can be important if they don't want the FBI and DHS trawling their data).
It's all very well saying that the cloud providers will ensure that your data is secure, but that is about as trustworthy as a bank saying that its traders do not try to influence LIBOR. They may not even know what individual employees are doing. At least if one of YOUR employees leaks data, you can take appropriate action without a commercial contract standing between you and the guilty party.
I know that you could put cryptography in place to make the stored data not useful to a third party, but that may not give you the security that you expect if your data is stored in certain territories which require key escrow or disclosure.
I can see that some services may be suitable for deployment in a public cloud, but there are, and will remain, many that are only suitable for a private cloud or within controlled boundaries, requiring physical data centres.
IMAP is a protocol, and does not impose any structure on the way mailboxes (folders) are set up and named.
The problem is that now that people use a hybrid of reading their mail on a web-enabled mail server and downloading mail to a local mail client, you need some structure on the server, something that IMAP was never explicitly written for. There is code to handle it, mainly by treating folders as separate mailboxes, but there is no standard structure defined, and nor should there be in a protocol standard.
As Gmail does not really support folders (from what I remember, one of the design criteria was that it would not use folders; anything that looks like a folder is really a set of mails indexed using tags), this probably adds difficulty to communication with another mail server that does use folders. Add this to a protocol that does not embrace folders in the first place, and it is clear that it will never be smooth; how well it works is probably more down to the mail server and mail clients than to IMAP.
I would say that typing them in from a magazine was damn good practice. You either learned to debug other people's code (and your own typos), and could become a software engineer, or gave up, and became merely a user.
If the administrators had no way of making sure they got paid (i.e. by being a preferential creditor), then they would not do it unless forced to by legislation.
...then why get clocks that auto adjust!
If you are talking about computers, both UNIX and Windows can be told not to adjust for DST, so I don't see your problem.
But be careful what you ask for. DST is applicable in summer, not the winter (which is what most people assume). We are lucky in the UK, because normal time (GMT) ~= UTC, so it is very clear to us which should be 'normal' time, and which is DST, but it is not so clear cut for any other timezone.
I would not object to losing DST, but only if time were set such that the sun was at its highest in the sky at noon.
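The GMT ~= UTC point above is easy to check with Python's zoneinfo (assuming the usual IANA tz database is available on the system): Europe/London sits at UTC+0 in winter and only moves to +1 when DST (BST) applies in summer.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+; needs the IANA tz database

london = ZoneInfo("Europe/London")

winter = datetime(2023, 1, 15, 12, 0, tzinfo=london)  # GMT: 'normal' time
summer = datetime(2023, 7, 15, 12, 0, tzinfo=london)  # BST: DST applies

print(winter.utcoffset() == timedelta(0))        # True -- GMT ~= UTC
print(summer.utcoffset() == timedelta(hours=1))  # True -- the summer shift
```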
This is one of the security 101 things to check on any UNIX-like OS. The fact that it was allowed to happen indicates that there are too many people creating these systems without the requisite knowledge and/or experience.
It is not uncommon to come across UNIX or Linux software that creates world-writable files, but that does not excuse such stupidity. What makes this worse is that it appears to be the primary interface to the memory system, which will negate all other security measures.
I'm sure that somewhere in the package documentation you got there was an alternative dial-up service that you can use when ADSL is not working.
What do you mean! You no longer have a V90 modem?
I stopped the car at the darkest point on the journey (Exmoor can be really dark), and got out of the car.
The sky was a jewelled spectacle, and I said a farewell to Patrick with a heart both sad and joyous at the same time.
Google Maps and Navigation are only any good if you get a data service.
I recently had to go a long way out of my way to get home from work because of a combination of weather and several accidents. I turned on the data service on my phone and got... zilch. And, of course, I had not maintained a paper map book in the car. As it turns out, the switch from Orange to EE was not as smooth as it was supposed to have been.
I reckon that I probably drove at least 10 miles further than I needed because of the stupid road signs that I had to rely on to get me back to somewhere I knew (this was in Devon, UK, where even major roads can be quite small, poorly lit and badly signposted), and I've vowed to never rely solely on Google Navigation again.
In that, a representative of the NHS (I can't remember who, and the summary transcript is not on the BBC Web site yet) stated that the genome of (specifically) cancer sufferers would be taken if the patient consented, with a view to identifying what factors in a person's DNA make-up control how a cancer develops once they have the condition. The data would be anonymised, so that summary data released to research organisations would not contain information that could identify individuals. The fact that it is going to be restricted to people who already have a cancer diagnosis makes the information less useful to the insurance industry.
I know that collecting the data at all (and building the "data infrastructure" to hold it) could only be the tip of the iceberg, but it certainly did not sound like a wholesale sequencing of the entire population. I am as worried about this type of information becoming available to other parties as the next person who gives-a-damn, but from what I heard, it should not yet ring the alarm bells.
When you consider it, it would be perfectly possible for the NHS to sequence the DNA of any patient who gave any form of blood or tissue sample, but that is not what they were talking about. I'm not even sure whether that would be illegal, because personal medical notes probably already contain blood sub-grouping and other information that could be used to identify an individual or their susceptibility to certain conditions.
You know, being a sysadmin can be seductive. When I was faced with remaining a techie, or crossing the divide to become something else, I decided... to go contract.
I've been calling myself a system administrator/system integration/support specialist (there is really not much difference if you are good at it) for 30+ years, and I still enjoy it.
What is one such as I to do? Where do I go and still expect to enjoy working? Certainly not into a supervisory or management role. I possibly could have become a system architect, but the opportunity did not present itself.
I cannot see myself changing what I do before I retire, unless I have to.