1808 posts • joined 15 Jun 2007
Re: Please share my medical details, far and wide. @MrsC
...and you do realise that opting-in to Care.Data won't help prevent you being given the wrong anaesthetic at all.
As you say, it may help a company develop a new one that won't trigger your problem, but Care.Data does not make your data any more readily available within the NHS than it ever was.
The loading of your data into a Summary Care Record would be something that you would not want to opt-out of, but that's completely separate from Care.Data.
This illustrates how even well-informed people can misunderstand the mess that the NHS has got itself into.
Re: This is all thanks to those ... @Eradicate...
It's funny that they never do factor in the number of expensive minutes lost having skilled people gathering up and throwing away their coffee cups and other rubbish compared with the cheap minutes of the cleaners.
I'm not trying to belittle cleaners, but there is a 3:1 or greater cost ratio between trained, skilled IT professionals and (often minimum-wage) cleaners.
Re: Publishing the code?
Only code that is covered by the GPL and has been modified would have to be published anyway.
Most of the application development tools and library runtimes are published under LGPL, so it is perfectly possible to add the controlling layer as an application that sits on top of Linux linking to LGPL code without having to provide the source to anybody, even the people who buy the binaries.
If you extend this point about modified code to the previous comments about stripping Linux down to stop housekeeping, the stuff that is likely to affect performance is all in user space, and can be configured out by modifying the runtime configuration. Similarly, any parts of the kernel that are not required can be stripped out at kernel build time by configuration. The configuration files for the kernel build and runtime daemon configuration are not covered by the GPL, so would not have to be published.
This perception that anything that runs on Linux has to be covered by the full GPL is just crap, and the sooner more people understand this, the more likely it is we will see commercial applications appear to run on Linux, something that is definitely required for Linux to be perceived as a viable full alternative to other operating systems. The opportunity for Linux to take the desktop is past (unless it's Android!), but I'm still hoping that it can achieve sufficient traction that it does not die as a desktop OS.
Re: Given its a stealth designed aircraft
The U2 was never really a 'stealth' plane. When it was designed, its main benefits were its high operational altitude (above the reach of Russian surface-to-air missiles and fighters), which lulled the Americans into a false sense of its safety, and the high endurance that allowed it to overfly most of the Soviet Union. In the years before surveillance satellites, this was the main method of identifying what the Russians were doing.
That's why Gary Powers being shot down was such a shock!
The SR71 added some stealth features, along with very high speed, which enabled the Americans to continue surveillance operations.
I've been saying for a long time that most users really dislike change for valid reasons.
I know that there have been layout changes, but the Windows interface introduced in Windows 95 is still recognisable in WinXP SP3, and even to a certain extent in Win7. This needs to be recognised by the "change for change's sake" people. Whilst they can rationalise the changes themselves, they really should take their target audience's opinions into account more.
I am finding the same with the most recent re-skinning of Firefox. I'm just waiting for my father to ask me how to find some of the things that have moved around.
On the subject of URLs and DNS names, it is perfectly normal to configure DNS to resolve a name into a number of IP addresses, in order to spread the load across multiple machines. The DNS server can be configured to rotate around the list of possible systems in a variety of different ways, and there were also ways to set up a dynamic DNS to allow the service state of the accessed systems to be reflected in the returned results.
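As a toy illustration of the rotation (not a real resolver; the addresses are RFC 5737 documentation examples, not real servers), the round-robin idea can be sketched like this:

```python
from itertools import cycle

# Addresses a DNS server might return for a single name
# (RFC 5737 documentation addresses, used purely as placeholders).
addresses = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

# Successive lookups rotate around the list, spreading clients
# across the machines: the essence of round-robin DNS.
picker = cycle(addresses)
lookups = [next(picker) for _ in range(5)]
print(lookups)
# ['192.0.2.1', '192.0.2.2', '192.0.2.3', '192.0.2.1', '192.0.2.2']
```

Real DNS servers offer more sophisticated policies than a strict rotation, but this is the basic behaviour a client sees.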
If you think something like 18.104.22.168 (one of the IP addresses that Google responds on) is a problem, try typing in http://2915189354 as a URL!
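The single-number form is just the four octets of a dotted-quad address packed into one 32-bit integer, which most browsers will still accept in a URL. A quick sketch (the specific Google address shown is an assumption, chosen because it packs to the number quoted above):

```python
def ip_to_int(ip):
    """Pack a dotted-quad IPv4 address into a single 32-bit integer."""
    n = 0
    for octet in ip.split("."):
        n = n * 256 + int(octet)
    return n

print(ip_to_int("173.194.66.106"))  # 2915189354
print(ip_to_int("127.0.0.1"))       # 2130706433
```

So http://2915189354 and http://173.194.66.106 name the same host, as far as the address is concerned.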
Re: Those were the days (and acronyms)
I have a copy of the book "A Programming Language" from the 1970s (I bought it second-hand in 1978), which defined both the language and its name!
Of course there could have been an "Atlas Programming Language", but that's not the APL we know now.
Re: More fun with the Oric
When there were display ZX81s in WH Smiths, I would type into a REM statement in the first line of a program (using the Sinclair special block characters) a small piece of assembler that put a different value in the Z80's I register, which Sinclair re-purposed to point to the page number of the first page of the character generator table in the ROM, and then call the code at the relevant address. I often added this to the program that was already loaded, and then ran the program.
What this resulted in was a screen of garbage. You could see that there was something there, and it would respond to all of the right commands like LIST, but the text was unreadable. If I remember correctly, the funniest thing was to use a value that was the base page of the character table offset by one. This had the effect of shifting the displayed characters along by a fixed number (32?) of character codes, so the result was effectively a shift cipher of the program listing.
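The visual effect can be mimicked in a few lines. This is only an illustration of the shift-cipher idea in ordinary ASCII, not ZX81 code, and the offset of 32 is illustrative:

```python
OFFSET = 32  # illustrative; the real shift depended on which page was chosen

def shift_display(text, offset=OFFSET):
    # Every character renders as the one a fixed number of codes away,
    # wrapped within a 96-character printable range for this sketch.
    return "".join(chr(32 + (ord(c) - 32 + offset) % 96) for c in text)

scrambled = shift_display("10 REM HELLO")
print(scrambled)                               # unreadable garbage
print(shift_display(scrambled, 96 - OFFSET))   # shifting back recovers the text
```

The program itself is untouched; only the mapping from codes to glyphs has moved, which is why LIST and the other commands still worked.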
Re: BBC BASIC FTW @Teiwaz
It was probably an Econet network. And the login screen was in BBC BASIC anyway. We told users to do a <Ctrl>-<Break> before logging on to prevent this type of thing.
I ran a Level 3 Econet network back in the 1980s. If only the security had been better enforced (the concepts were good, but it was trivial to get around), it would have been a great low-cost network for file and print sharing. But there was no concept of a privileged mode in the BBC Micro OS, so it was simplicity itself to set the bit in the Econet page in memory that gave you privilege at the network level. And once you had this, you did not need a user's password to be able to get at their files.
Still, I suppose that you can't have everything in a single-user 8-bit micro. But I agree, the BASIC was good, with the exception of it not having a while-wend construct.
@Fibbles re: Pascal
Pascal was created as a teaching language. Its primary goal was to be highly structured, with a very concise syntax that encouraged students to think in the way that matched the good programming practices of the time (highly structured, functional and procedural programming). It generally succeeded in these aims.
It is quite clear that someone who learned Pascal could convert to other scientific languages (like Fortran or Algol) relatively easily, and I know lots of people who moved to C with little difficulty.
But as a language, it was strongly disliked by students. Because of the precise syntax and strict type checking, it was a very pedantic language to write in. In other languages at the time, you might get a successful compile but have a completely broken program because of a syntax error that slipped through.
Now, Pascal would never force you to write programs that worked, but it would protect you from some of the pitfalls that other languages might allow. The repeated compile/fix cycles without a run caused many colourful moments in the classes I was involved in. But I'm not sure whether that was preferable to a compiler incorrectly attempting to fix simple errors, like PL/C, the PL/I subset teaching compiler in which I learned formal programming.
The other drawback of strict Pascal implementations (and here I am explicitly excluding all Borland/Zortech and other 'extended' products) was that there was comparatively little support for some operations that were needed to cope with real-life problems. Files with multiple record types were complex (you had to use a construct like variant records to do this), and the very strong data typing had no equivalent of a cast operation (I'm still talking strict Pascal here), which made some of the tricks you can do in other languages difficult or impossible. There was also no variable-length string construct (only fixed-length character arrays), and as a result almost nothing you would describe as string operations. This meant that you quite often had to code some comparatively simple operations yourself. And there were no form-filling or screen-handling features at all. But at least that was not unique to Pascal; almost none of the high-level languages of the time had that built into the language itself (it was normal for these to be added by library routines, the most obvious example being curses for C).
This meant that kids who learned BASIC on 8-bit micros at home regarded Pascal as a backward language that restricted what they could do, whereas people from a formal teaching environment regarded it as very good language for precisely the same reason!
The other reason kids had difficulty with any compiled language was the fact that it was not interactive. The whole compile thing compared to just running it seemed wrong to them.
Re: Its my data, not yours... @bigtimehustler
The Data Protection Act talks about personally identifiable data, and defines it as being about someone, not belonging to them.
It has always been an exemption under the Act that information stripped of the identity of the person it is about is no longer covered.
The problem lies in what counts as identifiable data. Obviously name, address and telephone number all count. But hair colour, height, the route you travel to work, and even things like salary are not unique enough by themselves to be considered identifying data. Where this breaks down is that several pieces of data which individually do not identify a person might, in combination, be enough to provide a key that links all the data in a particular record to an individual.
This is a problem that has come about because of the increase in the power of computers and the increased sophistication of the analytical software that processes the data. This was the crux of the arguments against care.data: so-called anonymous data is rendered identifiable.
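A toy example (entirely made-up data) shows how fields that are individually anonymous can act as a linkage key between datasets:

```python
# Two "anonymised" datasets. No single field names anyone, but the
# combination (postcode district, birth year, occupation) is unique
# enough to join a medical record back to a named individual.
health_records = [
    {"postcode": "TA1", "birth_year": 1961, "job": "engineer",
     "condition": "anaesthetic allergy"},
]
public_register = [
    {"name": "J. Smith", "postcode": "TA1", "birth_year": 1961,
     "job": "engineer"},
]

def quasi_key(record):
    # The quasi-identifiers, used here as a join key.
    return (record["postcode"], record["birth_year"], record["job"])

reidentified = [
    (p["name"], h["condition"])
    for h in health_records
    for p in public_register
    if quasi_key(h) == quasi_key(p)
]
print(reidentified)  # [('J. Smith', 'anaesthetic allergy')]
```

With real datasets the join is noisier, but the more attributes each record carries, the more likely some combination of them is unique.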
On the subject of ownership: my house has an address. This has personal relevance to me because I currently live there. But the fact that I live there does not mean that the text that makes up the address is itself owned by me. I cannot ask the Royal Mail to remove it from their postcode database. I have no control over it. I do not 'own' the text of the address.
I totally agree with what Jason Bloomberg said in a follow up comment to my original. Jason. Have a thumbs-up from me.
Re: Its my data, not yours... @TopOnePercent
I'm not sure I agree.
Data is data. It may be information about you, but you probably cannot claim to own it. In the case of HMRC, they could be the custodian of the data, but even then, the only reason they could claim to own it is that they have gone to the effort of collecting it.
But not everything they know about you is provided by you. Your employer is under a legal obligation to provide data to HMRC (as indeed you are). They may also have data about what benefits you have received, and if you have been under any form of tax investigation, they may have been given access to other data kept by other parties about you. I'm not saying that they don't have an obligation to keep the information private, nor am I saying that other people knowing it could not put you at a disadvantage, but don't claim ownership.
The only data that you can truly claim to own is that which you create yourself. If you do something like write, then what you write (assuming that it is not done under any pre-agreed contract) is probably yours, and you can claim ownership. If it is data about you, then you did not explicitly create it, and you cannot claim to own it.
This is my opinion, and not based on any legal knowledge. I would be interested to hear what other people think.
Re: Menus in windows @DrXym
What you call the frame area, which I believe corresponds to what I called the menu bar, is rendered by whatever toolkit you are using from inside the application (and, critically, the application's process space). The application can totally control what appears on that bar, although it will normally use standardised toolkit routines to do it.
This does not make it the Window Manager that is providing that menu bar. The Window Manager controls the encapsulating frame, and all of the widgets it uses to do this are outside the application's process space (but note that they may not be in the Window Manager's process space either; it could defer these to other processes under the way X11 is structured).
Do not confuse the Window Manager with the widget runtime shared object/library. They are not the same thing.
I see what you are getting at, however. The toolkit routines that create the menu bar are normally in shared objects/libraries that are dynamically bound in to the executable at run time. By providing a compatible but different set of routines at runtime, I can see that compliant programs could have their behaviour changed by the system, so that it would indeed be possible for the runtime to intercept and alter some of the expected behaviour.
But note that I said compliant programs. What about those that do not use the Gtk and Qt runtimes to manage their menu bar? What if they do, but have statically linked the routines available at compile time? What if they are so old that they use Xt or the Andrew Toolkit, or Motif, or CDE? Or, heaven forbid, coded all of the menu bars themselves!
If the modification is done at the runtime-call level (and this could be the bit I was unable to see when I wrote my earlier posts), it would be necessary for Canonical to patch each and every dynamically bound widget toolkit, and they would totally fail to manage statically bound binaries.
Regarding your comments on Wayland, remember that Canonical is not implementing Wayland. Their alternative to X11 is Mir, but this is not in current releases of Ubuntu. We are not talking about Wayland.
If, however, Wayland is making it the responsibility of the application to draw all window decorations, then I can see problems ahead when applications hang or crash. Having things like the "Close" button handled by another process, to allow mis-behaving process to be closed, is such a good idea that I wonder about the sanity of the Wayland developers in throwing this away. I have often wondered whether their drive to eliminate the overheads of X11 will end up throwing the baby out with the bath water. X11 may be old, but the concepts it introduced were mostly very sound, with the possible exception of the poor security model.
@ Hans 1 re: Puppet
Puppet does indeed look interesting, but it is not like AD because it is layered on top of Linux, rather than being a part of the Linux infrastructure in the way that AD is integrated into Windows.
MS chose to use a registry for many or all of the important Windows and application settings, and then plugged AD into this to allow any program which used the registry to instead get the settings from AD. It's elegant and well thought out, something that I don't say about Microsoft very often.
Puppet relies on discrete 'modules' to perform specific functions. This means that every time you need to control a new application, it will probably be necessary to obtain or write a new module. This is very flexible, but ultimately more technically involved.
I am not currently running an environment that requires this degree of control (the problems in supercomputers with no system disks is not a problem that needs this type of solution), but I would certainly look at Puppet if I were in control of an environment that needed that level of control.
Re: Menus in windows @DrXym
I don't actually agree that the Linux desktop normally puts any menus inside the application window frame. The menu bar (the one that often starts with "File") is part of the application window. The Title bar is not.
The title bar, which normally contains the close, minimise and maximise buttons and the title of the window, is outside the application window, as are any resize handles (which are normally the 'border' that surrounds a window). They sit in an encapsulating window which is inserted into the window hierarchy, surrounds the application window, and is its 'parent'. This is normally larger than the application window itself, and is owned by the window manager. The window manager acts on any events that occur in the encapsulating window, not the application. If the action is something like a resize, minimise or maximise operation, the window manager will then instruct the application, through the X11 protocol, to perform the action. In a resize operation, the window manager will resize the encapsulating window, and then tell the application to resize its own window accordingly and redraw the contents.
This is the trick that allows the window manager to run on a separate system to both the X11 server and the application itself, and is a fundamental feature of X11 that too many people overlook.
Unity appears to be fiddling with the window handling behaviour, possibly by telling the application to use a non-rectangular window, and I worry that this may not work well with some older X11 applications.
Re: That may be a bit of a deal-breaker for me @Pookietoo
You've not followed what I've said. The default setting of a global menu bar is absolutely a deal-breaker. I've tried using it both in Unity and in OS X, and I don't care how much people quote Fitts's Law at me: I find it more difficult to move to the top left of the screen every time I want to do something with a window, especially if I am using an incremental pointing device like a TrackPoint or trackpad.
Putting the window controls back on each window is what I want to do, but what Canonical have provided is a half-way house that may be enough, but may not.
I will have to try it before I can decide.
Re: @Peter Gathercole Menus in windows @Vic and frank ly
Hmm. I'd not even considered that the window title has gone.
That may be a bit of a deal-breaker for me when I have multiple terminal/xterms on local and remote systems. Like you, I use the title to differentiate the windows from each other.
I still think that the biggest difference between those who can work with Unity and those who can't is whether they use multiple windows visible on the screen at the same time. With multiple overlapping windows, it's a struggle to use Unity. With a single window or a maximised whole-screen application, it really does not matter too much. Actually, thinking about it, those who use whole-screen windows probably do not notice the difference in the placement of the window control buttons. As far as they are concerned, the buttons are always at the top left, whether attached to the window or at the top of the screen.
I use multiple overlapping windows, so you can see my preference.
Re: @AC, whatever. (was: whatever.) @AceRimmer
You have a point about Ubuntu being parasitic, but I believe that there may be more people committed to maintaining the Ubuntu repositories than there are the Debian ones. I think that it is a two-way thing, and Canonical pass any changes they make back into the Debian tree, so it is not as parasitic as it first appears. I'm not sure the same can be said of Mint.
I do mention LMDE (which I called Mint Debian) in my post. Check back at it to see why I have difficulty recommending it.
Re: Menus in windows @Vic and frank ly
There is a difference. Look at a normal Gnome or KDE system. Look where the maximise and minimise buttons are compared to the normal set of File, Edit etc.
They are on different bars.
What this release of Unity does is to put all of these on the same bar. So on the left, going from left-to-right, you've got the Unity Close, Minimise, Maximise buttons, and then on the same bar, you have File, Edit, View etc.
I'm sort of interested in how it does this. Normally, the menu bar is totally the responsibility of the application, and the window decorations are an encapsulating window or windows that enclose the whole application window and also provide the resize bars around it (that is one of the reasons you have a window border: to enable you to 'grab' the resize handles, which are managed by the window manager). This is achieved using a parent/child relationship between all of the windows, which allows a window closer to the root window to either grab and process events, or pass them on to the application.
By combining the two bars, Unity appears to alter or even break the normal X11 way of doing things. I presume it is using the shape extension to try to dictate to the application a non-rectangular window. I am wondering whether there will be any unpleasant clashes between some old-style X11 applications and Unity somewhere down the line.
Re: @AC, whatever. (was: whatever.) @h4rm0ny
I used to feel this. When Unity appeared to be the 'way forward', I lost that feeling as Canonical appeared so dogmatic about it, but although I've tried, I cannot find another distribution that ticks all the boxes.
I thought that Mint might be a possibility, but it feels too parasitic to be regarded as a distribution in its own right. This is true of all the Ubuntu-derived distributions. They rely on Canonical being there to exist. Sure, they could fork the entire source and repository trees, but most of them are shoe-string operations that would not have the resources if they could not leech off Canonical. I worked with Mint Debian, but the installation process and the associated huge initial update (it's a rolling release, and the installation media available is now so old that the update process downloads more than the original size of the installation) mean that it is not suitable as a consumer distribution.
The same can be said of any of the Red Hat-derived distributions. Indeed, I seem to remember a story from about two years ago about CentOS nearly winding itself up because one of the critical maintainers dropped off the radar.
Fedora is too fast moving, and RHEL has too many barriers put up by Red Hat for either to be considered to be consumer releases.
OpenSUSE appears to be a little media-unfriendly, although it does tick the box for support.
Slackware and Debian, being long-established distributions, are not going away and are totally usable, if a bit of a hair-shirt experience (especially Slackware), but they are generally considered too slow-moving and staid for a modern desktop distribution.
So I come back around to Ubuntu. Currently, rather than using Unity, I'm using a combination of Gnome Classic and Xfce (depending on the size of the system) on LTS releases for my own use, and do not have a current recommendation for people who ask me what they should use. I welcome the fact that Canonical have at least listened to some of the criticisms, and I will evaluate 14.04 on my primary laptop with and without Unity sometime in the next few weeks.
Who knows, it may win me back. I'll try to keep an open mind.
Re: "Less so for organisations running Ubuntu on lots of PCs and moving to 14.04"
Group policy is one of the strongest advantages for Windows on the desktop. It is something that Linux distributions have not so far replicated (although I'm fairly certain that there have been attempts that have not gained enough traction to become generally accepted). Providing a configuration method like the registry that can be abstracted into a remote directory at the system level was a very clever move by Microsoft, although not their idea (LDAP, Kerberos and DNS were around before Active Directory became common).
But it always used to be that any changes you wanted to roll out system-wide on a UNIX-like system could be scripted and rolled out through some privileged remote execution method, something that UNIX-like operating systems excelled at (Kerberos came from UNIX-like systems). Coupled with the ability to completely segregate ordinary users from privileged users, this meant that you could roll out and configure UNIX-like desktop systems, and keep the configuration locked down and secure (at least from idle tampering). I have been doing this for 25 years or so, with UNIX and UNIX-like operating systems.
Nowadays, with so much of the Linux desktop and the associated infrastructure using things like XML configuration files, it is no longer an easy job to set some of these options. I've always had significant difficulty with, and as a result a lot of scorn for, XML configuration files. I know it is possible to deploy parsers that can interpret and change these files from scripts, but it strikes me that this is very difficult if you do not have the schema for the options in the file.
I'm old-school, so can probably be laughed at by the younger generations, but I have tried to write my own parsers in shell and awk (still my tools of choice, because I absolutely know that they will be there on all but the most restricted UNIX-like system), and I just can't seem to do it. I know I'm not as sharp as I used to be, but it appears a non-trivial problem, even though XML is a well-defined language.
Coming across a new tool or desktop component, and not knowing or being able to find the available options, which may be missing from the XML file if the defaults apply, means that it is a hopeless task without much research. And all too often these oh-so-clever new tools, which work great, have not been documented in enough detail to enable you to script such changes. Sometimes, a tool comes with a CLI application to manipulate the settings, but you have to know what it is, and how to drive it. And the next tool along might have a completely different configuration tool. It's a nightmare.
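For what it's worth, a language with an XML library in its standard distribution makes the mechanical part tractable where shell and awk struggle. This sketch uses a made-up settings fragment and option names, and it also illustrates the remaining problem: you still have to know the element and attribute names in advance.

```python
import xml.etree.ElementTree as ET

# A made-up settings fragment of the sort many desktop components use.
CONFIG = """
<settings>
  <option name="show-menu-bar" value="false"/>
  <option name="theme" value="default"/>
</settings>
"""

root = ET.fromstring(CONFIG)

# Flip one option. Parsing is easy; knowing that the option is called
# "show-menu-bar", or that it exists at all, is the hard part.
for opt in root.iter("option"):
    if opt.get("name") == "show-menu-bar":
        opt.set("value", "true")

print(ET.tostring(root, encoding="unicode"))
```

Note that an option left at its default may simply be absent from the file, in which case even this approach cannot discover it; you would have to insert the element, which requires the schema.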
Forgive me for ranting, but in this respect I think that all Linux distributions have lost their way, and this is probably the flip-side of one of their strengths: choice.
Re: Backwards compatability @MJI
It depends on how you link it. If you resolve all external dependencies and statically link all library routines, and do not rely on any runtime services (like dbus etc), then it is perfectly possible for a binary compiled today to run on any Linux system as long as it is the correct processor type and the kernel API doesn't change.
In fact, looking at it, I would expect that many Linux programs compiled 15 years ago would still run, as many programs that old may well not have been linked against shared object files, and certainly would not have used dbus, dcop, bonobo et al. Possibly more of them than of those compiled 5 years ago.
The dependency on dynamically linked shared objects and runtime services is, in my view, one of the worst things that ever happened to Linux. It makes building programs that you want to keep working in the future, without having to recompile, more difficult than it needs to be.
Interestingly, but on a different note, I picked a binary of one of my tools off of one of my archives from a 32 bit AIX 4.1.4 system from about 1998, and successfully ran it without re-compiling it on an AIX 7.1 64 bit system.
Re: Sorry Neil McAllister.
I have several dozen 5.25" floppy disks that were created on my BBC Micro ~30 years ago, and I am finding that a significant number of them now have difficulty being read. The main problem is that the adhesive holding the oxide to the Mylar disk is breaking down, so each time I read a disk, I have to clean the drive!
The disks I have are mainly BASF, with some Verbatim and Nashua.
This is probably because they are truly 'floppy', and were not protected as well as the 3.5" hard-cased disks that the Amiga used.
I tried to embark on a process of capturing the disk images, but stopped when I had difficulty finding any new blank double sided double density floppies.
I now need to look at either reading them on a BEEB and squirting them over an RS-232 link (I think I still have a PC with one of those, and I came across the strange 5-pin DIN to 25-way D cable that I used to use, although I'll probably have to find a 9-to-25-way converter and a null modem).
The alternative is finding a 5.25" DSDD floppy disk drive for a PC!
Re: @ Fazal Majid
You also have to factor in that IBM develops POWER and Z series processors in parallel. Much of the technology in chip design (and quite a bit more under the covers) is common between the two families of processor. So POWER does have a high revenue earning sibling to help it out.
They also have some history in the embedded processor market. POWER chips are not as common as ARM, but they did get some traction in NAS and set-top boxes, and although they lost out in the most recent generations of consoles, the Xbox 360, Playstation 3, and Wii all used PowerPC processors, and the WiiU still does.
I call fake!
That's not the original EeePC picture.
That should be an EeePC 701. Whatever lapbook (sic) that is has clearly been Photoshopped in, as she appears to have lost the ends of the fingers on her left hand.
If people have been around here long enough, they will remember that we had decided in the comments that the hands were photo-shopped in to the original picture anyway.
Re: Sheer naked greed...
This is why they like putting additional material on the DVDs. Even though the original film may be out of copyright, the 'extras', being recently made, are not. So they feel they are justified in charging that amount.
Whereas I would like just the film. No extras, maybe some scene selections, and very little more.
After detecting the transmission of copyright material...
...FACT and the MPAA are already preparing the legal papers to get the data feed turned off, and trying to calculate the damages for transmitting the films to the significant part of the Universe covered by the spread of the lasers. They are a little uncertain about the number of entities who may have received it, as they don't want to ask the people they are acting against, but are believed to be erring on the high side "just to be safe".
Fortunately, the sums involved, with the added punitive damages, have overflowed their 8 digit calculators.
GPFS is an old-school product. It's been around for a long time (I first heard about it as mmfs about 20 years ago), and as such it is configured like an old-school product.
But I would say that it seriously benefits from not being set up by a point-and-click GUI. It is a very high performance filesystem, and really benefits from the correct analysis of the expected workload to size and place the vdisks and stripe the filesystems accordingly. It's just one of those systems that is traditionally deployed in high-cost, high function environments where the administrators are used to/prefer to work using a CLI. If it were to appear in more places, it may need to change, but then that is what I thought SONAS was supposed to provide.
I have been working with GNR and the GPFS disk hospital for the last two years on a P7IH system, and now that the main bugs have been worked out (which were actually mostly in the control code for the P7IH disk enclosure, which provides 384 disks in 4U of rack space, although it is a wide and deep rack), it really works quite well, although like everything else in GPFS, it's CLI-based. But to my mind that's not a problem. It is very different, though, and takes a bit of getting used to, and it could be integrated with AIX's error logging system and device configuration a bit better.
Re: Was it a MITM or what?
This does seem very specific. For them to know positively that the data was leaked via Heartbleed, they would have had to log the outbound packets, and I seriously doubt that they have that level of logging enabled.
I also find the term 'removed' a bit strange, because to me, that means that they disappeared from the source. Maybe I'm being a bit too literal, but I find it strange.
Re: What's new? @Dave 126
It is only a matter of time before the 'current' landfill Android phone will have Bluetooth LE. But you could get a Samsung S3 with android 4.3 for £200-300 and still be quids in.
The article was not emphasising the self-contained hearing-aid aspect of this device, but the remote microphone aspect. I agree that a complete device offering the combined features may be desirable, but I said 'similar behaviour', not 'the same'. But three grand for what is a digital hearing aid with Bluetooth and an app seems quite a lot.
Re: My wife...
It's acquired selective deafness. You just get desensitised to her voice!
I actually fail to see what is new here. OK, the specific hearing aid is different from a bluetooth headset, and probably less intrusive, but the noise cancellation is part of many phones, and adjusting the tone balance to give maximum boost is just a high precision tone control or graphic equaliser.
I would have thought that it would be possible to get similar behaviour at a fraction of the cost using an app and a normal high quality bluetooth audio device designed for listening to music, rather than phone calls.
Of course, it would not function as a normal hearing aid without the phone, but at this cost, you could also buy a landfill android phone built specifically with a small screen and a large battery to be used all the time, and still be thousands better off.
Re: Like they care @Me
Grrrr. Pedantic Grammar NAZI against myself! Did not spot lose/loose mistake until after the edit period had expired!
Re: Like they care
Yes, I was rather lose in my description of the PIN being stored on the card. It's a complicated issue where the PIN is not actually stored, but a hash of the PIN and some information unique to the card is stored, so that the PIN you type in is hashed with the card-specific information, and is then compared with the stored hash to determine whether the PIN was correct. It's a one-way hashing process, so even if the information on the card could be read, the PIN cannot easily be determined.
But the point is that it is completely on the card (as is the cryptographic processor that computes the hash - I'll bet you did not know that your bank card had a processor on-board). This is how the calculator-type authentication devices can work in isolation from any data connection, as all the authentication device is doing is providing the PIN to the card, and initiating the hash/compare.
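The hash-and-compare idea described above can be pictured with a toy sketch. To be clear, this is a loose illustration of the principle, not the real EMV scheme: the function names, the use of SHA-256, and the 16-byte card secret are all my own assumptions for the sake of the example.

```python
# Illustrative sketch only: on-card PIN verification by hash comparison.
# The real chip-and-pin mechanism differs; this shows the principle that
# the PIN itself is never stored, only a hash bound to a card secret.
import hashlib
import hmac
import secrets

def personalise_card(pin: str) -> dict:
    """Done once when the card is issued; the PIN itself is never stored."""
    card_secret = secrets.token_bytes(16)   # unique per card, never leaves the chip
    stored_hash = hashlib.sha256(card_secret + pin.encode()).digest()
    return {"secret": card_secret, "hash": stored_hash}

def verify_pin(card: dict, candidate: str) -> bool:
    """The chip recomputes the hash from the typed PIN and compares."""
    attempt = hashlib.sha256(card["secret"] + candidate.encode()).digest()
    return hmac.compare_digest(attempt, card["hash"])
```

Because the comparison happens entirely on the card, nothing upstream ever needs to see the PIN, which is why the calculator-type authenticators can work offline.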
It should not be the case that the card-issuing authority knows the PIN, because that would break the personal secret that the bank claims ties a transaction down to you, and as a result absolves them of any responsibility for card fraud.
In the UK, all bank issued cards, whether credit, debit or charge cards use the same mechanism for chip-and-pin, although it is different from other countries. Your point about the magnetic stripe is interesting, because UK cards do actually still have mag-stripes, so that they can be used abroad.
That does suggest that the card issuer does have to know the PIN.
Re: Virtualisation @Steve Todd
I'm not sure that the 360/168 was a real model. The Wikipedia article does not think so either.
As far as I recall, the only /168 model was the 370/168, one of which was at Newcastle University in the UK, serving other Universities in the north-east of the UK, including Durham (where I was) and Edinburgh.
They also still had a 360/65, and one of the exercises we had to do was write some JCL in OS/360. The 370 ran MTS rather than an IBM OS.
Re: Unix philosophy @Christian Berger
I cannot upvote you enough for this statement. I thought I was the only person left who thought along these lines.
I've been working at source level on UNIX on and off for 30+ years, and I'm finding the complexity of what is being added to Linux bewildering. I thought it was time to start thinking about retiring, but knowing that there are other people out there who think the same refreshes me.
Re: The dinosaurs live
OK. 2e2 were an outsourcer, but it is not the viability of the provider nor the moving of the service that I was mainly commenting on; it is the copies of the data that end up out of your control that I was trying to highlight.
2e2 were an example. It is unlikely that Amazon or Microsoft would go out of business, but could IBM choose to ditch their cloud services in five years time if it does not return the projected revenue, or some of the smaller players decide that the margins are just too slim?
It always puzzles me how you keep any dynamic application that is hosted by two separate cloud providers in sync. Do you pay to have dedicated bandwidth between the suppliers with some geographical lock-in? Do you have explicit cables laid between them? Virtual circuits or VPNs through established telco infrastructure or the Internet? Or do you run it as a distributed application, with both installations processing data?
All these questions can be answered, I'm sure, but how many people really think things through to this depth before deciding to go down the cloud route? I'm sure that there are customers who are already there who are considering their DR strategy for a cloud provider failure with some trepidation.
Whilst I don't mind being called a bit of a dinosaur, I have lived through the Mainframe->VAX/UNIX->Windows/Linux transitions, and what this has taught me is that the latest Kool-Aid being served up by the marketing boys is never as simple or cost effective as the projections. Let's just call it the result of experience!
I take on board everything you say about the security of the data centre.
But the difference is that if you host your applications in your own data centre, the security is entirely within your control. How good you make it is up to you.
If it is in a service providers, you trust that their security is as good as they say and as good as your contract with them. It's that same trust question that you sensibly query when it comes to availability.
Similarly, you trust the barriers they construct between your service and all of the other services running in the data centre, and you trust that they will not move the data/service outside of the region you've specified. It's almost certainly good enough, but if that trust breaks down, what comeback have you got from the provider? Check the penalty clauses in the contract.
You've also missed out a vital question: what happens if the service provider has a change of state, if they get bought or, heaven forbid, fold (like 2e2)? You need to consider where and how to move your service and all its data, and also whether there is a residual risk of your data being left in various stages of protection on equipment, including the backup solution, that may be bought in toto or appear on the broker market.
Encryption may seem like an obvious solution, but if your service actually processes data rather than just serves it out, there will be the means to decrypt the data present on the systems that you may no longer have any control over.
Re: "Windows XP is a thirteen year old operating system .." @Hans 1
I've tried to get her to use Linux (strong Linux advocate here - see my other posts). Indeed, when she uses Firefox on my laptop, she barely notices the difference.
But if I suggest that I put it on her machine (actually it's already there, I installed it as a dual boot system before I gave it to her), she's irrationally negative. She is one of those people who absolutely knows that what someone else (especially me - what does that tell you about trust?) tells her is a good idea is some nefarious plot. She's the same with advice from the Doctor, Vet or Financial Advisor, but trusts that the news on local commercial radio is more accurate and informative than the BBC!
Re: "Windows XP is a thirteen year old operating system .."
The worrying thing is that the issues they are patching now may have been in Windows for over a decade. We just don't know how long some of these vulnerabilities have been exploited without us having been told about them.
We remained happy in blissful ignorance of the problems, even though they could have been exploited. And how many more are there that are either currently unknown, or are known about but not published?
I am expecting Security Essentials for XP, which has had its life extended for a while more, to start issuing dire warnings about every little thing it finds, just to increase the fear and uncertainty amongst the remaining XP users, to encourage them to change.
I am not planning to change my Wife's XP system that sits behind the house firewall, as long as she keeps using and updating Firefox and Libre Office. She does little else on the system (not even email), so I am pretty sure that she is unlikely to be affected by new vulnerabilities, and has nothing of any real value on the system even if it does get compromised. Must remind her to keep it backed up, however.
Re: Windows 7 upgrade advisor anyone?
One thing that constantly annoys me is that with windows, there does not appear to be anything like a generic driver for a particular device's chipset.
With Linux, as long as the device IDs are listed against the correct generic driver, there is a great chance that it will work. You end up with about a dozen drivers installed that will cope with 95% of all devices available.
With Windows, even though you may have a driver for the same chipset as that on the card you've got, you can't make it work without the specific driver from the card's manufacturer.
This was brought home to me years ago with Belkin CardBus WiFi devices, where the v1, v2 and v2.1 versions of a particular numbered model of WiFi card needed different drivers, and it was very difficult at the time to identify which driver was needed, because they were not well labelled (why could they not just change the model number?).
Putting any of the devices into my Linux instance on the same machine worked immediately, without further action.
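The ID-to-generic-driver matching described above can be sketched as a simple lookup table, much like the device ID tables that kernel modules carry. This is an illustrative Python sketch, not kernel code; the vendor/device IDs and driver names are examples of the kind of mapping involved.

```python
# Sketch of a device ID table: one generic driver claims every card built
# around its chipset, whoever badges and sells it. IDs are illustrative.
ID_TABLE = {
    ("14e4", "4318"): "b43",    # Broadcom BCM4318-based WiFi cards
    ("168c", "0013"): "ath5k",  # Atheros AR5212-based WiFi cards
}

def match_driver(vendor_id, device_id):
    """Return the generic driver for a (vendor, device) pair, or None."""
    return ID_TABLE.get((vendor_id, device_id))
```

On a real Linux system the same idea is visible with `lspci -nn`, which prints the numeric [vendor:device] pair for each PCI device that the kernel matches against driver ID tables.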
Re: " toughest substance in the known universe"
Yeah, had Meccano as well, but I really did not like the square nuts that had sharp poorly formed corners that would scratch the enamel off the coloured metal panels (that probably puts a date on the sets, as more recent Meccano had hex. nuts). Never had an electric motor, but did have the clockwork motor. This was really my older brother's, not mine.
I moved on to building control-line aircraft instead of building things from Meccano!
Re: " toughest substance in the known universe"
The early Lego bricks were almost exclusively just red and white, with no yellow, black or grey. The bases were a cream colour and, instead of a circle pattern to lock the 'knobs' of the bricks in, had square holes in the bottom that the 'knobs' would fit into. Additionally, the 2x1, 6x1 and 10x1 narrow bricks did not have pins to help hold them on, but had cross-wise narrow divisions to pull the sides in just enough to grab the knobs of the lower bricks. It did not work as well, and often a complex model would be difficult to build because it just would not lock together.
The plastic of Lego from 50 years ago is different to modern bricks; I think it was a styrene-based plastic, and a bit brittle (yes, I am talking real Lego here, not Betta Bilda and similar copies, which we also had). Consequently, it would break on occasion. My older brother and I used to build models, and then use spring-powered suction dart guns (with the suckers removed - never be allowed these days) to 'blow' the models up, in scenes reminiscent of Stingray and Thunderbirds. Every now and then, we would break a brick. (Side note: in the film Thunderbirds Are Go, some of the houses that Zero-X crashes into are clearly made from Lego if you frame-step the DVD!)
There used to be completely different windows and doors, with glazing in as well. I remember the garage bases with up-and-over doors, which were the right size to allow you to build a garage for a Matchbox sized car. The garage door auto-opened (it was weighted) and was held down by a flap that caught the bottom of the door. 'Drive' a car up, and over the flap to press it down, and the door would open. Push the car in, and close the door, and then trigger the door, and the car would roll out because it parked on a shallow ramp that formed the base of the garage.
Originally, the roof bricks were steep, almost 45 degrees so that a 2x4 roof brick had 1x4 knobs on the top to allow you to build the roof.
There were also wheel bricks, with wheels with rubber tyres (originally white/beige, but replaced quite quickly with black) that had metal pins which would push into the wheel brick. If you stood on one of the wheels that was pin-up, you really knew about it! This was extended to train tracks and special flanged wheels (we originally used the wheels with the tyres taken off), complete with electric motors.
Things started getting different in about 1968: different plastic, curved bricks, specialist fence, tree and flower pieces, less steeply raked roof bricks, additional colours and clear bricks, and more brick sizes. And then they introduced models with special parts made only for a particular model, which would always go missing. People started building the models and leaving them built, rather than using their own imagination.
My youngest son, who is 18, has his complete lifetime's worth of bricks from special models (he's a real Lego fiend). We've just done a tidy and consolidation, and we have many thousands of bricks, filling all the drawers of a desk, along with storage tubs of the more common bricks, and glass coffee jars for the more specialist bricks. I don't reckon he could make any of the models up now, but he has vowed to find all the bits for the X-Wing kit he had! We may have to go to the Lego site and order a piece or two (yes, they sell single bricks from almost everything they've done in the last 20 years, but they tend to be expensive). They will even print on bricks (particularly body parts) from your own design if you are prepared to pay for it!
And I'm about to take ownership of the remains of my Lego set from the 60's from my father.
@Paul S. Gazo
I don't think you've followed the story.
I totally agree that Microsoft have the right to scan their employee's work provided mail account.
But that does not appear to be what they did. They scanned one or more of their customer's mailboxes, and used that to identify which employee was the culprit, and then provided that information to the police. So it appears Microsoft provided the private mail of one of their customers to the law enforcement agency without a warrant. Now, it's not clear whether the mail provided to the police was the mail from the customer's inbox, from the outbound mail transmission log, or the employee's outbox. You would have to look at the headers on the mail the police were given to be sure. If it was from the outbox or the transmission log, then that is within Microsoft's internal domain. If it comes from the customer's inbox, then it is not, even if it is hosted on a Microsoft mail server.
I agree that you would be stupid to expect that mail travelling through any part of the Internet is particularly safe from prying eyes unless you encrypt it, but you would not expect the mail host to use your (as a customer's) mailbox as evidence against either you or someone else without the correct legal authorization.
Reading between the lines, the article suggests that Microsoft may have scanned many of their customers' mailboxes in order to identify who had received the mail. Without a warrant, that may be illegal, but difficult to prove, because a mail service provider must have the right to read their mail servers' contents, at least for backup purposes. How different is that to grepping (I know, it's Microsoft, but grep means more than just saying find) a phrase from the mailboxes? Not really any different at all. It's not like anybody is reading and comprehending the mail.
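That kind of bulk phrase scan can be pictured with a toy sketch. Everything here is hypothetical, the names and data are mine, and nothing reflects how any real mail host's systems actually work.

```python
# Hypothetical sketch of a bulk mailbox scan: find which accounts hold a
# message containing a known phrase, without anyone reading the mail itself.
def find_recipients(mailboxes, phrase):
    """Return the account names whose stored mail contains the phrase.

    mailboxes: dict mapping account name -> list of message bodies.
    """
    return sorted(
        account
        for account, messages in mailboxes.items()
        if any(phrase in body for body in messages)
    )
```

The point of the sketch is that, mechanically, this is no different from what a backup or indexing job does; the legal question is about purpose, not technique.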
So nothing that's happened is definitely illegal, but some of it is definitely questionable.
Re: Does not add up! @Neil Barnes
I really like your 'apropos of nothing', but there could be exceptions due to in vitro fertilization (and, dare I say it, rape).
Re: Head to head
Any post like this under an AC banner will be either treated as a troll, or ignored.
Why can't you post under a recognisable pseudonym? Your comments will be much better regarded!
I think that the format of the media is also a factor.
Unless you have a computer, tablet or phone with an HDMI port (or you have some expensive Smart TV or STB), you are unlikely to watch digitally delivered media on the large screen in the living room. You will watch it on another device, because it's easier. I also find this is the case for personally ripped media.
That is until you get a device that sits in the living room, receives digital media, and can play it on the big screen. When that happens, you move back to the TV (as long as you have control of it).
I can plug my phone and my tablet into the living room TV, and it is more useful than you might think, extending what you use the TV for. I also use Sky On Demand, and a number of internet capable devices like BluRay players and consoles in various places around the house, attached to different TVs. If I can use the TV, I will.
It will be interesting to see whether Roku and similar devices catch on before people start replacing their current TVs.
Re: What was 2.0 really known for?
It was probably more like RT-11, which was a precursor to both RSTS/E and RSX-11, or whatever the OS was called for the PDP-6.
Maybe they had Johnny 5 help them!
"Hello, Lucy, I'm home!"
Re: Prison time is required @AC
That's just silly. I hope it was meant as a tongue-in-cheek remark.
If you fine them (personally) mega-millions (even if the law allows this), they will just declare themselves (personally) bankrupt. It also makes the fines largely meaningless if they are in prison for 20 years: by the time they get out, they will be discharged bankrupts, with the fines written off. Grab their assets, OK (bankruptcy laws already allow this), but they probably won't actually have very much to grab. I don't think Michael Robertson was another Kim Schmitz.
By the time they come out of prison, they will not be worth anything much at all anyway, so there is no point in trying to recover any money from them, especially as their skill set will be so dated that they will probably not be able to set up any high revenue service.
And you think that the prison sentence should be more than that for assault or homicide? I would put the sentence for anything that resulted in a person being physically harmed as much higher than just a financial loss.
Don't believe the music industry's assessment of what they have lost. In reality, it's a fraction of what they claim, but the US court system allows them to assess the loss as a certain amount per track, and then add punitive damages. In reality, a streamed-play of a track will not equate to a lost sale, which is what they claim. Most freeloaders will not go and buy a track if they can't stream/download it, they will just move on to something that they can.
I've just received a text from EE (well, actually my phone shows it as an 'Unknown Sender') that apologises, and says you may need to reboot the phone to get it working.
It's good to know that anybody who has a phone that is not working will receive this text so they know to reboot.
Oh! Wait a minute......