Re: Domestic? @Soruk
OK, point taken. But I was assuming that you were looking for workable solutions.
"idiots don't understand..."
But if they are in a position of power (as the CSPs are w.r.t. data over their own infrastructure), what they don't understand, they can block, using the precautionary principle.
And even if the data is fetched via a GET, it can still be DPI'd, and again, the precautionary principle applies if they don't understand it.
The only thing you can do is have some infrastructure that is not run by a CSP (I've never heard Communication Service Provider used as a term before, but whatever...) that runs over a UK border to a friendly neighbor, like a satellite link, direct wire, microwave link, or even a focused WiFi antenna.
But that could be made illegal as well.
I seriously suspect that being in or out of the EU makes not a jot of difference to these pie-in-the-sky policies.
Ummm. Who provides the telephone line for the dial-up service?
One of the CSPs mentioned in the article. All the CSP needs to do is to put some traffic analysis on the line. If it looks encrypted, or even just unintelligible (like if you've created a new modulation technique), it drops the call, or just puts some phase modifying filters to corrupt the modulation.
The result is that there is no data flow. With no data flow, there is no encryption.
They don't have to control encryption as such.
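The "looks encrypted" test above is, at its crudest, an entropy check: ciphertext and compressed data are statistically close to random. Purely as a toy illustration of the thought experiment (not real DPI code, and obviously far simpler than anything a CSP would deploy), a flow classifier could flag traffic whose byte-level Shannon entropy approaches the 8 bits-per-byte maximum:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# English text is highly non-random; random bytes look like ciphertext.
plaintext = b"the quick brown fox jumps over the lazy dog " * 50
random_blob = os.urandom(2048)

print(shannon_entropy(plaintext))    # well under 5 bits/byte
print(shannon_entropy(random_blob))  # close to the 8 bits/byte maximum
```

Which, of course, is also why such a filter would quench compressed downloads and anything else it couldn't read, precautionary principle and all.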
Before I go on, this is just a thought experiment, OK. I'm not actually suggesting the following.
It would be perfectly possible for ISPs to block everything by default and whitelist allowed services, and then use DPI to see whether the allowed services were being subverted to tunnel encrypted traffic. That would mean as soon as you put traffic that was not allowed down your link, it would be quenched.
They would also have to make sure that non-IP data circuits (dark fibre etc.) services out of the country were also banned. That would just leave bi-directional satellite services and point-to-point microwave/wireless across national boundaries (like the Northern Ireland border with Eire) to worry about.
Mind you, the Internet in the UK would then bear no resemblance to what it looks like at the moment, and it would look more restrictive than China.
Unfortunately, there is something in the Home Office that seems to make seemingly ordinary cabinet ministers and MPs adopt completely stupid ideas once they become Home Secretary. And we now have an ex-Home Secretary as PM, and a new one with the same ideas.
We're doomed, I say!
If you thought that only poor people bought Trash-80s, you obviously did not look at the prices. The TRS-80 Model 1 was seriously expensive in the UK (it even needed its own special monitor), though it was a quite well engineered machine.
But it fell into the Commodore PET and Apple ][ generation, and should have been replaced, or at least reduced in price, when the likes of the VIC-20, Commodore 64 and Spectrum came along. Instead, Tandy/RadioShack introduced the TRS-80 Color (sic) Computer, again expensive but also incompatible with the Model 1 and III.
Interestingly, the Dragon32 was moderately compatible with the TRS-80 Color (sic) Computer, but I doubt that this Dragon64 port of the Hobbit would run on a TRS-80 CoCo (too little memory).
This was an interesting machine, although it never received sufficient market penetration in Europe to give it critical mass for games writers (many of whom were based in the UK) to port to.
It was quite popular in the US, despite its high price, but the differences between the US and UK specs (mainly the different screen size) meant that the US games could not be used. The dollar-to-pound exchange rate used for US computers sold in the UK at the time made it unaffordable in the UK.
The other problem was that although its processor was 16 bit, the machine was terribly slow, though that could have been the TI-Basic implementation (although I seem to remember reading about the memory implementation being a major factor in the slowness). This made it one of the slowest ever machines in the Personal Computer World Basic benchmark (which was dominated by the excellent Basic implementation of the BEEB for several years).
The problem with the Oric was that the graphics format of the display (in-scan, or horizontal, line colour attributes, IIRC) was almost as eccentric as the Spectrum's per-character cell colour attributes.
This made it difficult to port games to the Oric, as you had to completely re-write the way that the graphics were coded.
You really needed a fully bit-mapped display with multiple bits per pixel, and that took memory, as BBC micro owners had to contend with. BEEBs should have been shipped with 6502 second processors. That made them really fun to use (64K of memory plus full graphics, and even faster than a normal BBC). But they decided to go with the shadow memory of the B+ and B+128 instead.
...was a bit of a joke. It did not have the graphics, and ran in Mode 7 (Teletext mode).
Mind you, I accept that this would have been difficult, given that even if you had used mode 6, which used 10K of the 32K available on a BBC model B, there would have been insufficient memory to store the game data in memory.
I had a moan to WH Smiths, where I bought my original cassette copy (I don't think it was sold on disk at the time), and they pointed out the small note inside the sealed box that said something like "Because of the memory limitations of the BBC micro, some features of the game are not available". Yeah, right. All of the pictures!
They would not get away with that in this day and age, but apparently it was acceptable then.
It is still possible to claim travel and accommodation expenses even if you run a Ltd company, or even work through an umbrella, so long as your business contract falls outside of IR35. The wording of the legislation that came into effect this April is convoluted, but clear.
There are, however, a number of accountancy practices (including some of those that will manage the finances of your PSC for you) who seem to want to play it very safe, and recommend stopping claiming expenses now. Whether they are being prudent or overly risk-averse is debatable.
George Osborne was quite clear that he wanted this practice stopped completely for umbrellas and PSCs, by reworking/replacing what is still known as IR35. Hopefully, once Philip Hammond gets his feet under the table, we may get a fairer policy. We'll have to wait and see.
The automation part of the Wikipedia article is there to suggest that in order to be able to do rapid development and deployment (which is really an agile concept, not necessarily a DevOps one), it is necessary to be able to do rapid and consistent regression and functional testing and deployment with minimal effort.
Unfortunately, automated regression and acceptance testing is good at finding the problems you've seen before. It's not so good at finding new problems. That requires time and rigor in the testing processes.
So, by reducing the testing effort to enable rapid deployment of new code, you're actually exposing yourself to unexpected problems closer to the live environment. To my mind, this is the single biggest issue with agile development, and by extension DevOps. IMHO, large organizations that have a critical reliance on their IT systems will remain with their traditional testing regimes, which will make DevOps difficult to integrate into their working practices. It's a Risk thing.
It's interesting that in several organizations I've worked at over the 35+ years I've been working in IT, Operations team members have been present during all phases of projects, and on the distribution and approval lists of the change processes, so communication isn't a new thing. It just seems to have dropped out of favor a bit in recent decades as IT has become more silo'd.
... was seriously overpriced even when compared to its contemporary, the BBC micro (which was also expensive).
There were niches where the 480Z was a more appropriate machine than a BEEB, but IMHO, if you didn't need CP/M compatibility, the BEEB was a more versatile and accessible machine for schools.
The 380Z was from a different time, several years before most schools had the budget to buy computers and before cheaper machines were available (and it could be built and upgraded piecemeal). They were well built, however, and survived for years, especially as they were often locked away from general use, or used as file and print servers for 480Zs.
Bringing this in with 18.10 means that there is one more LTS release (18.04) for Ubuntu on 32 bit Intel hardware, and as the article points out, this means that there will be security updates well into the 2020s (Ubuntu LTS has five years before the repositories stop being updated, and years more before they are retired).
Even though I use older kit for all of my systems, I seriously doubt that even I will still have non-x86-64 Intel kit doing serious work by then. As it is now, my daily system, a Thinkpad, is a Core 2 Duo, as is my desktop mule that I use for things too large for my laptop (and I have some Core 2 Quads sitting in a drawer waiting to be deployed).
I have a quick-and-dirty Atom based netbook running 32 bit Ubuntu, and my mostly retired Linux firewall is still 32 bit, but both of these are close to the end of their life. My wife's laptop is not 64 bit capable, but it probably won't last until 32 bit support is dropped.
My wife wanted to stay with WinXP. I told her she couldn't, and built a Win7 machine for her, which she hates (she's such a techno-luddite, she wouldn't learn the XP->7 UI change). If she needed to use a PC, she reluctantly asked to borrow my Thinkpad (Ubuntu LTS), and asked me to start "Google" for her (Google is the Internet, as far as she is concerned).
When I replaced my Thinkpad (with another one, of course), she asked whether she could have my old one. As a final piece of maintenance work on that system, I put in an SSD and one of the XP skins on Gnome.
She is now happily using this Linux laptop daily for genealogy research. Whilst she knows it's not Windows XP, it works and looks pretty much as she expects. She even uses LibreOffice on occasion, and does not appear to miss MS software at all.
I check it on occasion (I now borrow it if I just need to look up something quickly), and install any updates, but she admits that even she could manage this if I didn't.
The question we need answered is whether you have some esoteric or maybe cutting-edge graphics card, or are maybe trying to use the proprietary binary graphics driver from AMD or Nvidia on an older graphics card.
For new high end cards from both Nvidia and AMD, the proprietary Linux drivers often lag the availability of the cards by some months, and the open drivers may not support the newer hardware until some bright spark works out how the API has changed.
There are also some obscure cards that there may not be drivers for in the Linux repositories, but this is rare.
What is more annoying is that the proprietary drivers are dropping support for older cards. I was caught out when I upgraded an LTS release on a system with an Nvidia FX7800 onboard that had the proprietary Nvidia drivers loaded. After upgrading, I was suddenly down to un-accelerated 800x600 256 colour (i.e. basic VESA) rather than the 32 bit colour 1280x1024 that I was expecting. This sounds similar to your situation. I've had similar problems with older AMD/ATI cards as well.
The new release of the proprietary Nvidia binary had silently dropped support for the older chipset, leading to the lowest-common-denominator driver being used. Unfortunately, removing the binary driver, which is required to get the open source drivers configured correctly, is normally done using dpkg from the command line. It is also possible from Synaptic (which is no longer installed by default), but is rather more difficult from the Ubuntu Software Centre (which seems to decide that removing software is something that users should be dissuaded from doing).
Unless you actually desperately need them, I would nowadays always suggest that you use the open drivers, and if you do use the proprietary drivers, switch back to the open drivers before doing a dist-upgrade.
Of course, this is not Linux's fault (if Linux can actually have fault attributed to it). It actually shows up a fundamental support issue with the companies that produce PC hardware without a full commitment to Linux. This should even extend to the obsolete chipsets IMHO, because Linux is very often deployed on old kit. Companies should either fork their proprietary drivers and leave the old ones in the repositories so you can keep using the old drivers without having to hold them back (and don't get me started on this, it has huge problems), or open-source the drivers, or even just the full API for the cards they deem obsolete to allow the community to support the cards without having to reverse-engineer the chipsets.
I was not saying that there is not consumer legislation or small claims courts outside California, but that article itself says that that case would be unlikely to succeed outside of California.
I don't understand your analogy of Jane Fonda. Whether you had watched it or not is irrelevant. I appreciate that in the case of a tape, you retain physical control of the tape, whereas the MS product you never own a physical copy, but the point I was trying to make is that technologies become obsolete. The difference I will admit here is that MS are able to declare the technology obsolete, but suppliers are not legally bound to provide alternatives.
Maybe the providing servers run on Windows Server 2003 with one of the withdrawn windows application deployment frameworks, and porting it to a more recent version is not feasible/cost effective, rendering it obsolete.
Crossed purposes. That case relies on the particularly consumer friendly court system in California. It's pretty much not applicable anywhere else in the world.
And I'm not sure that the small claims courts elsewhere would be prepared to rule on this issue, as the perceived loss versus the use the customer has already had from the product is debatable (how many people are prepared to try to claim back the cost of their VHS fitness tapes because you can no longer buy a tape player?). They may well require it to be handled by a higher court.
It will be covered in the small print of the EULA, or at least if it's not, it will be soon (according to the same EULA, it will be your responsibility to check online for changes in the conditions).
Of course, that cannot trump local consumer legislation, however MS or any other company choose to fence their responsibility, but how many people are prepared to take on a company like MS in the courts!
Supersonic in a dive is known as transonic.
The Vodafone Group PLC holding company probably does not employ many people, so income tax and NI is not a huge issue. Corporation tax is another matter, but I'm sure a company like the Vodafone Group employs advanced financial engineering to minimize its corporation tax.
I actually think you'll find that the Vodafone Group PLC, the holding company which owns Vodafone UK along with all the other national Vodafone operations, is in Paddington.
You are right that Vodafone UK, the UK operating company, has its registered office in Newbury.
It's sometimes awkward to work things out when companies set themselves up for international operations. There have to be separate tax entities set up in each tax jurisdiction (at least at the moment, until further EU integration to form a superstate creates a single tax region for the whole of the EU).
Rare would be welcome.
On my way to work yesterday, I was on the A34 southbound overtaking the commercial vehicles, in a stream of three BMWs immediately in front of me, and at least two behind!
BTW, I'm in something that could, just, be described as a British car, although it is a little elderly.
No, I'm thinking of the X-9 Ghost in Macross Plus, or the F/A-37 Talon in Stealth.
Arthur - "What's that smoke"
Ford - "It's just the Golgafrinchams burning the trees"
Too upset (reaches for Lohan mug for consolation, while wishing I'd got the glass for a more appropriate beverage).
What do you run the VMs on? Intel mainframes, of course.
Back when UNIX was effectively a Bell Labs internal project, with educational institutions given source code access for the cost of the media, a lot of this UNIX software you talk about was actually written by people working in the educational institutions. As such, they almost certainly did not own all of, or even very much of, the work they did (most institutions take part or all of the rights to inventions by their employed staff, with research sponsors taking some of what's left).
Back in those days, it was much simpler to write something for your local use and make it available for free or media costs to other educational institutions, rather than trying to monetize it. As such, it was often provided with little or no license other than something like "free to use for educational institutions". This effectively meant that by sharing it, you had already lost control.
RMS himself understood this. In order to be free of these restrictions so that he could assert his right to make software freely available, he resigned from MIT shortly after starting the GNU project.
For applications that are compiled and run on Linux, the fact that most of GCC and many libraries are actually published under the Lesser GPL (LGPL) means that it is possible to ship code that is compiled and uses these libraries under any license you wish.
The problem here is that ZFS requires code that runs in kernel-space, and thus uses more of Linux than a user-space program does. There have been discussions about whether kernel modules use enough of the interface to the kernel (specifically the kernel symbol table) to mean that GPL licensing restrictions apply.
In my view, there needs to be clarification of the state of kernel modules. IMHO, I feel that there should be some exemption, like the LGPL derivative work exemptions for statically linking LGPL libraries into binaries so that correctly written kernel modules can be added without violating the GPL. In this respect, I think that the stance RMS is taking, for all his good deeds and words, is akin to cutting off his nose to spite his face.
I understand that he has a glorious vision, but pragmatically, it will never be possible to have the whole world's software infrastructure running under GPL.
You know, it used to be that the correct use of permissions, separate UIDs and GIDs for applications, and the standard UNIX permission model were deemed sufficient to protect from most application programs, without a sandbox.
The whole idea of sandboxes came about on other OSs, which needed the OS to be protected from applications.
I appreciate that modern sandbox implementations allow resources to be fenced as well as access, and also that chroot used to be commonly used to give more protection, but still the assumption that a sandbox is required on a Linux system suggests that there is something wrong in the implementation or configuration of the systems (and that is what the article suggests).
The Linux sound system was not made any easier when Pulse Audio came along (thanks Lennart, you should have been shot for this long before you were allowed to meddle with init).
The Linux audio environment is overly complicated by the need to be backward compatible with all of the previous sound systems. Because there are so many applications still in use that are no longer developed, it's necessary to maintain ALSA and OSS compatibility (thankfully, the Enlightenment Sound Daemon is mostly dead and can be ignored), and layering on new meta-systems like PA and Jack doesn't make life easier.
In some respects, I would have been happy sticking with ALSA and avoiding all the pain PA brought.
That argument stands up well until the UI that everybody grew up with is changed almost beyond recognition.
It then becomes just as easy to retrain to a sane UI on another platform as it does to retrain them to the new Windows one.
MS appear to have taken one step back from the brink, but the changes are still pretty radical.
What does that make a real physical calculator?
Some locations I work at don't allow phones or laptops to be brought in. They allow calculators, so I have one just for those locations.
(P.S. I also have a slide rule, but that is just a curio, not because I actually use it in anger!)
The IBM press release is quite clear. "..access through a first-of-a-kind quantum computing platform delivered via the IBM Cloud".
So, the quantum computer itself is not part of a cloud; the means of getting access to it and creating jobs is. The access will be via a SoftLayer application running within the "IBM Cloud". Note the qualification. It's not "The Cloud", it's the "IBM Cloud". So it is whatever IBM defines as the "IBM Cloud", and given the wide scope of most people's definition of what cloud computing is all about (PaaS, IaaS, SaaS etc.), they've neither lied, nor have they misunderstood cloud computing.
The release also talks about a 5-qubit system, and then about being able to use individual qubits. This potentially means that up to 5 jobs may be running at the same time, and if the cloud access platform allows jobs to be queued up in some form of batch system (I don't know, but I would set it up that way myself), then many, many people could be using the access platform at the same time. I very much doubt you get a command prompt directly on the quantum computer itself (I'd love to see the source code of the OS if you could!)
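To be clear, I have no idea how IBM actually schedules jobs; but the batch-system idea I'm speculating about above could be as simple as a FIFO queue sitting in front of the hardware. A toy sketch (all class and method names are hypothetical, and the "run" step is faked):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class QuantumJob:
    user: str
    circuit: str          # some serialized circuit description
    qubits_needed: int = 5

class BatchFrontEnd:
    """Toy FIFO scheduler: many users submit, jobs run one at a time."""
    def __init__(self):
        self.queue = deque()
        self.results = {}

    def submit(self, job: QuantumJob) -> int:
        self.queue.append(job)
        return len(self.queue)       # caller's position in the queue

    def run_next(self):
        if not self.queue:
            return None
        job = self.queue.popleft()
        # In reality this would dispatch to the hardware; here we fake it.
        self.results[job.user] = f"ran {job.circuit} on {job.qubits_needed} qubits"
        return job

fe = BatchFrontEnd()
fe.submit(QuantumJob("alice", "H(0); CNOT(0,1)"))
fe.submit(QuantumJob("bob", "X(0)"))
fe.run_next()
print(fe.results)
```

With something like this in front, the number of users is limited only by how long they're prepared to wait for their jobs, not by the 5 qubits themselves.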
Whether it's misleading is a very subjective matter, and will be based entirely on whatever definition of cloud computing the reader wishes to believe.
Me, I subscribe to Cloud as being "someone else's computer", as stated by an AC at the beginning of the comment thread, so this fits that definition.
The UK ZX printer caused a lot of radio interference when it was printing. There is no way it could have been marketed in the US because of the FCC rules on interference.
I think there must have been more differences.
The ZX Printer, using metallised paper, plugged directly into the ZX expansion port via the board-edge connector. When used with a ZX-81, it had a pass-through port for the RAM pack.
If this printer works in the same way, I am interested in what he was driving it with, because the ZX-81 and the original ZX Spectrum were the only machines capable of driving it. Anything else would require a box of tricks to emulate the ZX-Expansion bus.
I suspect that this is an RS-232 printer, which would have been plugged into a ZX Interface-1, and thus could be used with any other computer with a serial port and the correct cable (IIRC, the serial port on the Interface-1 used a non-standard [not that there were standards back then] pin layout, but it was documented, so within the ability of anyone with a soldering iron to make a cable).
If he is actually using a Spectrum (sorry, Timex-Sinclair 2068), then I suspect that would be much more newsworthy.
Ah, but more frequent single failures in a RAID set are an annoyance, not something that puts your data at risk (as long as you replace the failed disks).
Multiple concurrent failures risk your data!
I will opt every time for a scenario where I have to replace single drives more frequently, as opposed to one with less frequent work, but increased risk of data loss.
... the read-writes that go on under the covers, performed by most RAID controllers to prevent bitrot. It could very well be that there is further 'amplification' (and, it should be noted, this will also happen as long as the RAIDset is powered, even if it is not being actively written to).
This probably makes it even more important to not buy all of the disks in a RAIDset at the same time or from the same batch of disks.
The S.M.A.R.T. data maintained by the drive actually does contain counters of all sorts, which I believe includes the total amount of writes, so that could be used to try to enforce this type of limit.
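For what it's worth, on drives that report it, that counter shows up as an attribute such as Total_LBAs_Written in `smartctl -A` output. A hedged sketch of pulling it out; the sample text below is fabricated, though it follows the usual smartctl attribute-table layout:

```python
from typing import Optional

def total_lbas_written(smartctl_output: str) -> Optional[int]:
    """Extract the raw value of Total_LBAs_Written from `smartctl -A` text.

    Returns None if the drive does not report the attribute.
    """
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows have 10 columns; the raw value is the last one.
        if len(fields) >= 10 and fields[1] == "Total_LBAs_Written":
            return int(fields[9])
    return None

# Fabricated `smartctl -A` excerpt for illustration:
sample = """
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       12345
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       52504968918
"""
print(total_lbas_written(sample))
```

Multiply the raw LBA count by the logical sector size and you have total bytes written, which is exactly what you'd need to police a write-endurance limit.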
But along with the conclusion about a 'bad year', they also offered an explanation: the number of celebrities that were recognized increased over the decades from the 1950s onward because of the influence of television (and before that, radio, film, newspapers and theater would all have had their effect on boosting the number).
Those celebrities, who would have been in their 20s and 30s when telly was new, are now in their 70s and 80s. And in the decades after the 60s, celebrities tended to do things that damaged their long-term health ("I hope I die, before I get old"), so are probably candidates for an earlier death.
So it's really not a huge surprise. I predict that the number will increase year-on-year for the next 25 years or so, and then plateau, and then people will lose interest as the Internet-age celebrities reach an age when they start dying. Either that, or we'll get Logan's Run-type euthanasia, or people will transfer their consciousness into robots.
I've got some 8" floppies formatted to V7 UNIX UFS standard from 1978 and 1979. We had to use them as overflow storage on a PDP11/34 when I was at university, because there was too little space on the RK07 drive packs!
That's ef'ing genius! I love it.
Hasn't he had his US passport invalidated, or did Russia issue him one of theirs?
If he doesn't have a valid passport, he can't really travel anywhere (except maybe the US).
Edit: Oops, looks like I should have read all of the comments before saying this! I'm an idiot.
I think you're objectifying him!
Complimenting him on his looks, that's clearly sexual harassment!
Quick, call the reverse-feminist brigade.
(what do you mean he's not complaining)
Your comment contains an oxymoron. A "globally unique identifier" cannot clash, by definition.
Adding the geo-datacentre makes it hierarchical, and actually means that it becomes difficult to address an object if it moves to another datacentre.
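This, incidentally, is why flat random identifiers of the RFC 4122 sort exist: the ID carries no location information at all, so the object can move between datacentres without being renamed, and no central arbitration is needed. A quick illustration using Python's standard uuid module (the location table is just a stand-in for whatever lookup service the store would really use):

```python
import uuid

# A version-4 UUID contains 122 random bits: collisions are possible in
# theory but vanishingly unlikely in practice, with no arbitration needed.
a = uuid.uuid4()
b = uuid.uuid4()
print(a, b)

# The identifier encodes nothing about where the object lives, so a
# lookup table (or DHT, or whatever) is updated when the object moves:
location = {a: "datacentre-london"}
location[a] = "datacentre-frankfurt"   # object moved; ID unchanged
```

Contrast that with an identifier that embeds the geo-datacentre: move the object and either the ID lies or it has to change, which is the problem described above.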
In case you had not realized, there are many ways to get files from filesystems that do not require mounting (if you know a file handle, some past implementations of NFS allowed you to access a file without mounting the filesystem, but that was a bug!). You're just applying current thinking to make an artificial distinction to try and preserve the definition of an object file store.
Despite your completely valid points, I still maintain that an object filestore is just a filesystem by another name.
My use of the POSIX example was just to illustrate the use of inodes, and that things can be familiar and different at the same time. I was not saying that all filesystems need to be POSIX compliant, and the use of things like SSHfs, which is in essence stateless but runs on top of existing filesystems indicates that the APIs you suggest can (and probably are in most instances) just a layer on top of existing filesystems.
Yes, but all you're doing is storing an index, in the same way that the permuted index for old-style UNIX man pages from 40 years ago allows you to identify pages that mentioned particular key words.
And if you break it down, in a UNIX-like system, the objects are actually tracked by inode which links blocks to objects (files), and the file tree structure is just a way of indexing the inodes.
It could be perfectly possible, if a little unwieldy, to have an index of inodes other than a hierarchy of directory indexes, but you would have to do something about permissions, as although the inode itself includes permissions that can be checked, UNIX also requires a permissions check on the path to a file, not just the file itself.
In fact, I understand that a number of POSIX compliant filesystem implementations do allow this type of access. GPFS (sorry, IBM Spectrum Storage, or whatever it's called this week) for example, has a policy engine that allows files to be accessed outside of the traditional file tree.
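The inode-as-object-ID point above is visible from userspace: a file's "real" identifier is its (device, inode) pair, and each path is just a directory-index entry pointing at it. A small demonstration, standard library only:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "object.dat")
    with open(path, "wb") as f:
        f.write(b"some object data")

    # The stat structure exposes the underlying object identifier.
    st = os.stat(path)
    print("inode:", st.st_ino, "device:", st.st_dev)

    # A hard link is simply a second directory entry indexing the same
    # inode: two "paths", one object.
    link = os.path.join(d, "alias.dat")
    os.link(path, link)
    assert os.stat(link).st_ino == st.st_ino
```

Which is really all I'm claiming: the directory tree is one index over the inodes, and nothing stops you building a different one on top of the same objects.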
I know that responding to my own comment is a bit... well, poor form but -
How global is global? If it's really global, what is the arbitration system to make sure that there are no collisions with other systems and organizations? And are objects immutable, so that you have to version them as part of their globally unique identifier? I cannot really believe that there are people who believe that a non-hierarchical unique identifier is really possible at any scale.
Is there any structure at all imposed upon the identifier and format of the metadata? If there is a structure, then it's just another type of file system with a different indexing system. Tree based filesystems are not the only type that have been used, they've just become almost standard because they mostly fit the requirements of most users.
I know that, in theory, if you can segregate the object from the path to access the actual storage of the object, you become storage agnostic, such that objects can be moved to different stores and still be found, but under the covers, there will still be something that resembles a filesystem.
This whole concept still sounds a bit like buzz words, even though CAFS have been around for more than 30 years.
From Wikipedia (I know, but it's a useful first description).
"Each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier"
Hey, I've got an object storage system, and didn't know it! The "globally unique identifier" starts with "/home/peter/Media..." or some such, and each object has some metadata that can be seen using examination tools like "ls -l", "istat" and "file"
Wow. Whoda thuk it!
This is a security issue, as it allows spammers to identify real email addresses in an organization. If it doesn't bounce, it's a real address.
I know of many large organizations that just black-hole them for exactly this reason.
... commodity hardware, attached via Infiniband with some software-defined storage solution is not particularly difficult to build nowadays. It's like Lego, and putting Linux/Lustre/Rocks/Slurm & LSF/Open-MPI on top is very formulaic.
All you need is the money. Of course, whether it actually does anything useful depends on the detailed design, and the skill of the people using it.