Re: KITT is screwed, then.
KITT was always interfering with other cars' ignition and locking systems (it was one of its/his normal tricks), and I'm pretty sure he was hacked more than once.
But of course that was fiction.
Unfortunately, many small-business merchant services do use the Internet as a communication path (small shops don't want the cost of a separate communication infrastructure, and dial-up is becoming history), either via *DSL lines or mobile, and this means that the central servers for the merchant systems must also be connected to the Internet.
One hopes that they establish secure VPNs for the actual transmission of the transaction details, and that the central servers are properly secured, but I'm afraid that with the advent of payment services run via mobile phones, as PayPal and others are doing, it could be the security of the mobile phone and attached card devices that becomes the attack target.
It needs to be funded separately, from sources not directly controlled by the Government.
This is so it can maintain some sort of independence from the Government, especially when it comes to news coverage, and not be accused of being a mouthpiece for whichever party is in power.
So-called TV detector vans really did exist and used to be technically feasible in the days before digital TV, but they were largely a psychological instrument of FUD. As a previous commenter said, many of them were probably non-functional and just for show, with a deliberately obvious 'antenna' on the roof to make them visible.
My Mother-in-Law claimed to have seen one in the last couple of weeks, but I'm not sure whether she could really differentiate one from, say, a satellite installer, or another van with a transport tube on the roof.
Whilst I think you have the wording correct, the original intention of this type of clause was to allow caravan owners to watch TV under their home licence (it would have been very difficult to buy a licence for a caravan, which has no fixed address). It also used to say that you should not use a portable device and the TV in the licensed address at the same time.
Very few portable TVs had internal batteries until the advent of Sir Clive's Micro TV and the subsequent arrival of LCD TVs in the '80s and '90s. They either relied on the battery of the towing car, or had a car-type battery in the caravan.
Nowadays, with technology moving as fast as it is, it's almost impossible to come up with some sensible definition of a device capable of receiving broadcast TV. Tying iPlayer to the license is desirable from the BBC's perspective, but makes a mockery of the fact that the license was supposed to cover the operation of receiving equipment, not access to the BBC's content.
I don't know the answer, and I don't want the BBC's independence from commercial pressure or government interference to change, but something needs to be done. Moving to a pure subscription model with encryption appears to be the best and fairest model IMHO, but it would require an increase in cost, and a similar upheaval to that when digital TV came in!
Ah, but SSH works at a user level, so you can use it to do tricks through systems that you don't have admin access to (like most organizations with silo'd platform support.)
Whether this is a good or bad thing depends on whether you're a sysadmin in a heterogeneous environment that you don't completely control, or whether you're in IT security!
Octal use in UNIX comes from its DEC roots, not anything IBM did.
The DEC PDP-6, PDP-7, PDP-8 and PDP-10 (aka the DECsystem-10) variously had 12-, 18- and 36-bit word lengths, which fitted octal (3 bits per octal digit) very well, especially as they used 6-bit characters.
When the PDP-11, a 16-bit system, came along, DEC programmers had octal so ingrained in their mindset that they stuck with it rather than switching to hexadecimal, which works better for 8/16-bit words. The habit stuck with the original UNIX developers at Bell Labs.
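The arithmetic behind that preference is easy to demonstrate. A quick sketch (Python here, purely for illustration):

```python
# An octal digit carries 3 bits, a hex digit 4 bits.
# DEC's 12-, 18- and 36-bit word lengths all divide evenly by 3,
# so a word is always a whole number of octal digits:
for bits in (12, 18, 36):
    print(bits, bits % 3 == 0)        # True for all three

# A full 36-bit word is exactly 12 octal digits:
print(oct((1 << 36) - 1))             # 0o777777777777

# The PDP-11's 16-bit word divides evenly by 4, not by 3,
# which is why hexadecimal is the natural fit there:
print(16 % 3, 16 % 4)                 # 1 0
print(hex((1 << 16) - 1))             # 0xffff
```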
You will also remember that Unics(sic) started on the PDP-7.
"idiots don't understand..."
But if they are in a position of power (as the CSPs are w.r.t. data over their own infrastructure), what they don't understand, they can block, using the precautionary principle.
And even if the data is fetched via a GET, it can still be DPI'd, and again, the precautionary principle applies if they don't understand it.
The only thing you can do is have some infrastructure that is not run by a CSP (I've never heard Communication Service Provider used as a term before, but whatever...) that runs over a UK border to a friendly neighbor, like a satellite link, direct wire, microwave link, or even a focused WiFi antenna.
But that could be made illegal as well.
Ummm. Who provides the telephone line for the dial-up service?
One of the CSPs mentioned in the article. All the CSP needs to do is put some traffic analysis on the line. If the traffic looks encrypted, or even just unintelligible (say, because you've created a new modulation technique), it drops the call, or inserts phase-modifying filters to corrupt the modulation.
The result is that there is no data flow. With no data flow, there is no encryption.
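How might a CSP decide that a stream "looks encrypted"? One common heuristic (this is my illustration, not a claim about what any CSP actually deploys) is byte entropy: ciphertext is indistinguishable from random noise, while plaintext protocols are far from it. A minimal sketch:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: random/encrypted data approaches the maximum of 8.0."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plain = b"the quick brown fox jumps over the lazy dog " * 50
noise = os.urandom(4096)              # stands in for ciphertext

print(shannon_entropy(plain) < 5.0)   # True: English text is far from random
print(shannon_entropy(noise) > 7.5)   # True: looks like encrypted data
```

A filter like this has false positives, of course (compressed files also look random), which is exactly why the precautionary principle is so dangerous here.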
They don't have to control encryption as such.
Before I go on, this is just a thought experiment, OK. I'm not actually suggesting the following.
It would be perfectly possible for ISPs to block everything by default and whitelist allowed services, and then use DPI to see whether the allowed services were being subverted to tunnel encrypted traffic. That would mean as soon as you put traffic that was not allowed down your link, it would be quenched.
They would also have to make sure that non-IP data circuits (dark fibre etc.) services out of the country were also banned. That would just leave bi-directional satellite services and point-to-point microwave/wireless across national boundaries (like the Northern Ireland border with Eire) to worry about.
Mind you, the Internet in the UK would then bear no resemblance to what it looks like at the moment, and it would be more restrictive than China's.
Unfortunately, there is something in the Home Office that seems to make seemingly ordinary cabinet ministers and MPs adopt completely stupid ideas once they become Home Secretary. And we now have an ex-Home Secretary as PM, and a new one with the same ideas.
We're doomed, I say!
If you thought that only poor people bought Trash-80s, you obviously did not look at the prices. The TRS-80 Model 1 was seriously expensive in the UK (it even needed its own special monitor), though it was a quite well-engineered machine.
But it fell into the Commodore PET and Apple ][ generation, and should have been replaced, or at least reduced in price, when the likes of the VIC-20, Commodore 64 and Spectrum came along. Instead, Tandy RadioShack introduced the TRS-80 Color (sic) Computer, again expensive but also incompatible with the Model 1 and III.
Interestingly, the Dragon32 was moderately compatible with the TRS-80 Color (sic) Computer, but I doubt that this Dragon64 port of the Hobbit would run on a TRS-80 CoCo (too little memory).
This was an interesting machine, although it never received sufficient market penetration in Europe to give it critical mass for games writers (many of whom were based in the UK) to port to.
It was quite popular in the US, despite its high price, but the differences between the US and UK specs (mainly the different screen size) meant that the US games could not be used. The dollar-to-pound exchange rate used for US computers sold at the time made it unaffordable in the UK.
The other problem was that although its processor was 16-bit, the machine was terribly slow, though that may have been the TI-Basic implementation (I seem to remember reading that the memory implementation was also a major factor in the slowness). This made it one of the slowest machines ever in the Personal Computer World Basic benchmark (which was dominated for several years by the excellent Basic implementation of the BEEB).
The problem with the Oric was that the graphics format of the display (in-scan, or horizontal, line colour attributes, IIRC) was almost as eccentric as the Spectrum's per-character cell colour attributes.
This made it difficult to port games to the Oric, as you had to completely re-write the way that the graphics were coded.
You really needed a fully bit-mapped display with multiple bits per pixel, and that took memory, as BBC Micro owners had to contend with. BEEBs should have been shipped with 6502 second processors. That made them really fun to use (64K of memory plus full graphics, and even faster than a normal BBC). But they decided to go with the shadow memory of the B+ and B+128 instead.
...was a bit of a joke. It did not have the graphics, and ran in Mode 7 (Teletext mode).
Mind you, I accept that this would have been difficult, given that even if you had used mode 6, which used 10K of the 32K available on a BBC model B, there would have been insufficient memory to store the game data in memory.
I had a moan at WH Smith, where I bought my original cassette copy (I don't think it was sold on disk at the time), and they pointed out the small note inside the sealed box that said something like "Because of the memory limitations of the BBC micro, some features of the game are not available". Yeah, right. All of the pictures!
They would not get away with that in this day and age, but apparently it was acceptable then.
It is still possible to claim travel and accommodation expenses even if you run a Ltd company, or even work through an umbrella, so long as your business contract falls outside IR35. The wording of the legislation that came into effect this April is convoluted, but clear.
There are, however, a number of accountancy practices (including some of those that will manage the finances of your PSC for you) who seem to want to play it very safe, and recommend stopping claiming expenses now. Whether they are being overly cautious or justifiably risk-averse is debatable.
George Osborne was quite clear that he wanted this practice stopped completely for umbrellas and PSCs, by reworking/replacing what is still known as IR35. Hopefully, once Philip Hammond gets his feet under the table, we may get a fairer policy. We'll have to wait and see.
The automation part of the Wikipedia article is there to suggest that in order to be able to do rapid development and deployment (which is really an agile concept, not necessarily a DevOps one), it is necessary to be able to do rapid and consistent regression and functional testing and deployment with minimal effort.
Unfortunately, automated regression and acceptance testing is good at finding the problems you've seen before. It's not so good at finding new problems. That requires time and rigor in the testing processes.
So, by reducing the testing effort to enable rapid deployment of new code, you're actually exposing yourself to unexpected problems closer to the live environment. To my mind, this is the single biggest issue with agile development, and by extension DevOps. IMHO, large organizations that have a critical reliance on their IT systems will remain with their traditional testing regimes, which will make DevOps difficult to integrate into their working practices. It's a Risk thing.
It's interesting that in several organizations I've worked at over the 35+ years I've been working in IT, Operations team members have been present during all phases of projects, and on the distribution and approval lists of the change processes, so communication isn't a new thing. It just seems to have dropped out of favor a bit in recent decades as IT has become more silo'd.
... was seriously overpriced even when compared to its contemporary, the BBC Micro (which was also expensive).
There were niches where the 480Z was a more appropriate machine than a BEEB, but IMHO, if you didn't need CP/M compatibility, the BEEB was a more versatile and accessible machine for schools.
The 380Z was from a different time, several years before most schools had budget to buy computers (and could be built and upgraded piecemeal) and before cheaper machines were available. They were well built, however, and survived for years, especially as they were often locked away from general use, or used as file and print servers for 480Zs.
Bringing this in in 18.10 means that there is one more LTS release (18.04) for Ubuntu on 32 bit Intel hardware, and as the article points out, this means that there will be security updates well into the 2020s (Ubuntu LTS has four years before the repositories stop being updated, and years more before they are retired).
Even though I use older kit for all of my systems, I seriously doubt that even I will have non x86-64 Intel kit doing serious work. As it is now, my daily system, a Thinkpad, is a Core 2 duo, as is my desktop mule that I use for things too large for my laptop (and I have some Core 2 quads sitting in a drawer waiting to be deployed).
I have a quick-and-dirty Atom based netbook running 32 bit Ubuntu, and my mostly retired Linux firewall is still 32 bit, but both of these are close to the end of their life. My wife's laptop is not 64 bit capable, but it probably won't last until 32 bit support is dropped.
My wife wanted to stay with WinXP. I told her she couldn't, and built a Win7 machine for her, which she hates (she's such a techno-luddite, she wouldn't learn the XP->7 UI change). If she needed to use a PC, she reluctantly asked to borrow my Thinkpad (Ubuntu LTS), and asked me to start "Google" for her (Google is the Internet, as far as she is concerned).
When I replaced my Thinkpad (with another one, of course), she asked whether she could have my old one. As a final piece of maintenance work on that system, I put in an SSD and one of the XP skins on Gnome.
She is now happily using this Linux laptop daily for genealogy research. Whilst she knows it's not Windows XP, it works and looks pretty much as she expects. She even uses LibreOffice on occasion, and does not appear to miss MS software at all.
I check it on occasion (I now borrow it if I just need to look up something quickly), and install any updates, but she admits that even she could manage this if I didn't.
The question is whether you have some esoteric or maybe cutting-edge graphics card, or are maybe trying to use the proprietary binary graphics driver from AMD or Nvidia on an older graphics card.
For new high end cards from both Nvidia and AMD, the proprietary Linux drivers often lag the availability of the cards by some months, and the open drivers may not support the newer hardware until some bright spark works out how the API has changed.
There are also some obscure cards that there may not be drivers for in the Linux repositories, but this is rare.
What is more annoying is that the proprietary drivers are dropping support for older cards. I was caught out when I upgraded an LTS release on a system with an Nvidia fx7800 on board that had the proprietary Nvidia drivers loaded. After upgrading, I was suddenly down to un-accelerated 800x600 256-colour (i.e. basic VESA) rather than the 32-bit colour 1280x1024 that I was expecting. This sounds similar to your situation. I've had similar problems with older AMD/ATI cards as well.
The new release of the proprietary Nvidia binary had silently dropped support for the older chipset, leading to the lowest-common-denominator driver being used. Unfortunately, removing the binary driver, which is required to get the open-source drivers configured correctly, is normally done using dpkg from the command line. It is also possible from Synaptic (which is no longer installed by default), but rather more difficult from the Ubuntu Software Centre (which seems to have decided that removing software is something users should be dissuaded from doing).
Unless you actually desperately need them, I would nowadays always suggest that you use the open drivers, and if you do use the proprietary drivers, switch back to the open drivers before doing a dist-upgrade.
Of course, this is not Linux's fault (if Linux can actually have fault attributed to it). It actually shows up a fundamental support issue with the companies that produce PC hardware without a full commitment to Linux. This should even extend to the obsolete chipsets IMHO, because Linux is very often deployed on old kit. Companies should either fork their proprietary drivers and leave the old ones in the repositories so you can keep using the old drivers without having to hold them back (and don't get me started on this, it has huge problems), or open-source the drivers, or even just the full API for the cards they deem obsolete to allow the community to support the cards without having to reverse-engineer the chipsets.
I was not saying that there is not consumer legislation or small claims courts outside California, but that article itself says that that case would be unlikely to succeed outside of California.
I don't understand your analogy of Jane Fonda. Whether you had watched it or not is irrelevant. I appreciate that in the case of a tape you retain physical control of it, whereas with the MS product you never own a physical copy, but the point I was trying to make is that technologies become obsolete. The difference I will admit here is that MS are able to declare the technology obsolete, but suppliers are not legally bound to provide alternatives.
Maybe the providing servers run on Windows Server 2003 with one of the withdrawn windows application deployment frameworks, and porting it to a more recent version is not feasible/cost effective, rendering it obsolete.
Crossed purposes. That case relies on the particularly consumer friendly court system in California. It's pretty much not applicable anywhere else in the world.
And I'm not sure that the small claims courts elsewhere would be prepared to rule on this issue, as the perceived loss versus the use the customer has already had from the product is debatable (how many people are prepared to try to claim back the cost of their VHS fitness tapes because you can no longer buy a tape player?). They may well require it to be handled by a higher court.
It will be covered in the small print of the EULA, or at least if it's not, it will be soon (according to the same EULA, it will be your responsibility to check online for changes in the conditions).
Of course, that cannot trump local consumer legislation, however MS or any other company choose to fence their responsibility, but how many people are prepared to take on a company like MS in the courts!
I actually think you'll find that the Vodafone Group PLC, the holding company which owns Vodafone UK along with all the other national Vodafone operations, is in Paddington.
You are right that Vodafone UK, the UK operating company, has its registered office in Newbury.
It's sometimes awkward to work things out when companies set themselves up for international operations. There have to be separate tax entities set up in each tax jurisdiction (at least at the moment, until further EU integration to form a superstate creates a single tax region for the whole of the EU).
Rare would be welcome.
On my way to work yesterday, I was on the A34 southbound overtaking the commercial vehicles, in a stream of three BMWs immediately in front of me, and at least two behind!
BTW, I'm in something that could, just, be described as a British car, although it is a little elderly.
Back when UNIX was effectively a Bell Labs internal project, with educational institutions given source code access for the cost of the media, a lot of this UNIX software you talk about was actually written by people working in the educational institutions. As such, they almost certainly did not own all of, or even very much of, the work they did (most institutions take part or all of the rights to inventions by their employed staff, with research sponsors taking some of what's left).
Back in those days, it was much simpler to write something for your local use and make it available for free or media costs to other educational institutions, rather than trying to monetize it. As such it was often provided with little or no licence other than something like "free to use for educational institutions". This effectively meant that by sharing it, you had already lost control.
RMS himself understood this. In order to be free of these restrictions so that he could assert his right to make software freely available, he resigned from MIT shortly after starting the GNU project.
For applications that are compiled and run on Linux, the fact that most of GCC and many libraries are actually published under the Lesser GPL (LGPL) means that it is possible to ship code that is compiled and uses these libraries under any license you wish.
The problem here is that ZFS requires code that runs in kernel space, and thus uses more of Linux than a user-space program does. There have been discussions about whether kernel modules use enough of the interface to the kernel (specifically the kernel symbol table) to mean that GPL licensing restrictions apply.
In my view, there needs to be clarification of the status of kernel modules. I feel there should be some exemption, like the LGPL derivative-work exemption for statically linking LGPL libraries into binaries, so that correctly written kernel modules can be added without violating the GPL. In this respect, I think the stance RMS is taking, for all his good deeds and words, is akin to cutting off his nose to spite his face.
I understand that he has a glorious vision, but pragmatically, it will never be possible to have the whole world's software infrastructure running under GPL.
You know, it used to be that the correct use of permissions, separate UIDs and GIDs for applications, and the standard UNIX permission model was deemed sufficient to protect from most application programs, without a sandbox.
The whole idea of sandboxes came about on other OSs, which needed the OS to be protected from applications.
I appreciate that modern sandbox implementations allow resources to be fenced as well as access, and also that chroot used to be commonly used to give more protection, but still the assumption that a sandbox is required on a Linux system suggests that there is something wrong in the implementation or configuration of the systems (and that is what the article suggests).
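For what it's worth, the traditional model is simple enough to show in a few lines (a sketch using Python's stdlib on a POSIX system):

```python
import os
import stat
import tempfile

# The classic UNIX model: a file has one owner UID, one GID, and a
# 9-bit rwx permission mask -- no sandbox in sight.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)                       # rw for the owner, nothing for anyone else

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                            # 0o600
print(bool(mode & stat.S_IRGRP))            # False: the group cannot read it
print(bool(mode & stat.S_IROTH))            # False: nor can anyone else
os.remove(path)
```

Combined with a dedicated UID per daemon, that one mask kept applications apart for decades; sandboxes add resource fencing on top rather than replacing it.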
The Linux sound system was not made any easier when Pulse Audio came along (thanks Lennart, you should have been shot for this long before you were allowed to meddle with init).
The Linux audio environment is overly complicated by the need to be backward compatible with all of the previous sound systems. Because there are so many applications still in use that are no longer developed, it's necessary to maintain ALSA and OSS compatibility (thankfully, the Enlightenment Sound Daemon is mostly dead and can be ignored), and layering new meta-systems like PA and Jack on top doesn't make life easier.
In some respects, I would have been happy sticking with ALSA and avoiding all the pain PA brought.
That argument stands up well until the UI that everybody grew up with is changed almost beyond recognition.
It then becomes just as easy to retrain to a sane UI on another platform as it does to retrain them to the new Windows one.
MS appear to have taken one step back from the brink, but the changes are still pretty radical.
What does that make a real physical calculator?
Some locations I work at don't allow phones or laptops to be brought in. They allow calculators, so I have one just for those locations.
(P.S. I also have a slide rule, but that is just a curio, not because I actually use it in anger!)
The IBM press release is quite clear. "..access through a first-of-a-kind quantum computing platform delivered via the IBM Cloud".
So, the quantum computer itself is not part of a cloud; the means of getting access to it and creating jobs is. The access will be via a SoftLayer application running within the "IBM Cloud". Note the qualification. It's not "The Cloud", it's the "IBM Cloud". So it is whatever IBM defines as the "IBM Cloud", and given the wide scope of most people's definition of what cloud computing is all about (PaaS, IaaS, SaaS etc.), they've neither lied, nor have they misunderstood cloud computing.
The release also talks about a 5-qubit system, and then about being able to use individual qubits. This potentially means that up to 5 jobs may be running at the same time, and if the cloud access platform allows jobs to be queued in some form of batch system (I don't know, but I would set it up that way myself), then many, many people could be using the access platform at the same time. I very much doubt you get a command prompt directly on the quantum computer itself (I'd love to see the source code of the OS if you did!)
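If it were run as a batch system (to be clear, this is my guess at a design, not anything IBM has published, and every name below is made up), the front end could be as simple as a work queue feeding five qubit-slots:

```python
import queue
import threading

# Hypothetical sketch: many users submit jobs, one worker per
# qubit-slot drains them. None of this reflects IBM's actual platform.
jobs = queue.Queue()
results = []
lock = threading.Lock()

def worker(slot: int):
    while True:
        job = jobs.get()
        if job is None:                # sentinel: shut this slot down
            jobs.task_done()
            return
        with lock:
            results.append((slot, job))
        jobs.task_done()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for j in range(20):                    # twenty queued "circuits"
    jobs.put(f"circuit-{j}")
for _ in threads:
    jobs.put(None)                     # one sentinel per slot
jobs.join()
print(len(results))                    # 20: every job ran on some slot
```

Any number of clients can feed the queue while only five jobs are ever in flight, which is all the "many people sharing 5 qubits" claim requires.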
Whether it's misleading is a very subjective matter, and will be based entirely on whatever definition of cloud computing the reader wishes to believe.
Me, I subscribe to Cloud as being "someone else's computer", as stated by an AC at the beginning of the comment thread, so this fits that definition.
I think there must have been more differences.
The ZX Printer, using metal-coated paper, plugged directly into the ZX expansion port via the board-edge connector. When used with a ZX-81, it had a pass-through port for the RAM pack.
If this printer works in the same way, I am interested in what he was driving it with, because the ZX-81 and the original ZX Spectrum were the only machines capable of driving it. Anything else would require a box of tricks to emulate the ZX-Expansion bus.
I suspect that this is an RS-232 printer, which would have been plugged into a ZX Interface-1, and thus could be used with any other computer with a serial port and the correct cable (IIRC, the serial port on the Interface-1 used a non-standard [not that there were standards back then] pin layout, but it was documented, so within the ability of anyone with a soldering iron to make a cable).
If he is actually using a Spectrum (sorry, Timex-Sinclair 2068), then I suspect that would be much more newsworthy.
Ah, but more frequent single failures in a RAID set are an annoyance, not something that puts your data at risk (as long as you replace the failed disks).
Multiple concurrent failures risk your data!
I will opt every time for a scenario where I have to replace single drives more frequently, as opposed to one with less frequent work, but increased risk of data loss.
... the read-writes that go on under the covers, performed by most RAID controllers to prevent bitrot. It could very well be that there is further 'amplification' (and, it should be noted, this will also happen for as long as the RAIDset is powered, even if it is not being actively written to).
This probably makes it even more important to not buy all of the disks in a RAIDset at the same time or from the same batch of disks.