Re: Is it legal? @arrbee
2924 posts • joined 15 Jun 2007
I'm not going to quibble. I appreciate all of your points, and I sympathise with the people who are caught in this trap.
I was just objecting to the use of the term "tax".
I would actually prefer a more realistic balance between living costs and wages, such that things like housing benefit and other subsidised housing (yes, I'm including council houses and housing associations) were only needed by a much smaller number of genuinely needy households, rather than by those who simply can't afford to live where they do, whatever the reason.
For what it's worth, and for reasons beyond most of these people's control, they are forced to rely on the state and its devolved institutions for support far more than is healthy for the nation's finances. I wonder if enforcing the living wage and reducing the benefit paid, and then carefully changing the tax and/or NI on companies by an appropriate amount to shift the money away from benefits and onto an income-and-tax basis, would be a reasonable first step? Possibly Tim Worstall could crunch the numbers and comment?
I appreciate what you are saying about buy-to-let, but the right-to-buy, which is what caused the public housing to be sold off, was intended to benefit the original purchaser. Some of them will have bought and then sold, making a significant profit for the people taking advantage of the right, but when they sold, it would have been at the market rate. The b-t-l landlords would have paid market rates, and the only benefit they had was that the houses were available at all. But the houses transferred to private ownership when the original purchasers bought them, not when they were sold on.
What has helped the b-t-l purchasers most is the mortgage guarantee and incentive packages that were introduced to try to support the first time buyer market, and thus the whole of the housing ladder. These were not sufficiently guarded to prevent b-t-l landlords from using the schemes. Other than that, it's the relentless rise of house prices that allows a landlord to borrow against their existing portfolio to fund purchases, and then rely on the price rises to provide them a capital gain so they can either borrow more, or sell in the future and make a profit far higher than commercial interest rates.
There's lots wrong with the economy, much more than the loss of public housing.
UK "Bedroom Tax" is not a tax. The media and opposition parties deliberately misrepresent it to increase the emotive impact of the issue.
It is actually a rebate on the housing benefit (a welfare payment) given to someone who lives in a house with an unused bedroom. (Rebate is used a little strangely here, because you don't normally expect it to be used to benefit state institutions).
If you do not get housing benefit, you are not affected at all by this.
Cue downvotes, but I don't think that the state should pay welfare benefits to people so that they can live in houses larger than they need, although I do think there need to be significant exceptions for people with an intermittent requirement for an extra room, for example if they need to house significant amounts of medical equipment, an occasional live-in carer, or children and members of the armed forces returning home for holidays and leave.
If it were a true tax, everybody would have to pay it, even people not receiving benefit and/or people owning their own house outright.
On the story, I think Chicago is ill-advised to introduce a tax that is going to be difficult to enforce.
I still run a lot of stuff over X both at home and at work (obviously through SSH X forwarding).
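For anyone who hasn't tried it, a minimal sketch of the sort of SSH X forwarding I mean — the hostname and the client application are placeholders:

```shell
# -X enables X11 forwarding; -C compresses the protocol stream,
# which helps noticeably on slower links.
ssh -X -C user@remote.example.com

# Then any X client started on the remote host displays on the
# local X server, e.g.:
xterm &
```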
My primary go-to system is a Thinkpad T43, 2.0GHz Dothan Mobile Pentium with 2GB memory running Ubuntu Trusty Tahr and Gnome-flashback. It runs very well as an X server connected to other systems, both more and less powerful.
What kills X is the appalling way that some applications, particularly Java ones, are implemented. Too many client applications render the screen locally, doing things like all of the font handling locally, and then sending the rendered screen to the server.
Now I know that this is the only way that a client application can guarantee that certain fonts are available, and rendered as they expect, but it's seriously ugly in use, and it breaks the ethos of X11, where very efficient network primitive operations are sent across the network rather than the screen bitmaps.
Certain things, like video, are clearly not suited to X, but properly written X applications can be exceptionally snappy.
I think back over 20 years to running IBM X-Station 130s in a live operation (actually the IBM UK AIX Support Centre) from RS/6000 320H servers, about 10 X-Stations per server, over 16Mb/s Token Ring, and it was not X that slowed things down, it was the processing power and memory on the servers running the clients (isn't the X computing model confusing sometimes!). At the time, this was a very realistic proposition, and I'm sure that the increase in processing power and network speed could make thin-clients technically feasible again, but there is no cost benefit any more.
With the rising popularity of delivering apps. through HTML, I can see future thin clients being <£100 android tablets with keyboards (and maybe mice), possibly built into desks rather than sitting on them!
A stick like this could be useful in a scenario like this, but once you take into account the cost of whatever you are displaying it on, and the keyboard and mouse, the tablet option I described above looks much more attractive. The real benefit of these systems would be in a household that does not want to have a desktop PC, but may occasionally want more than can be done on a tablet.
I have a first generation Acer Aspire One (1.6GHz Atom N270, 1GB RAM and 8GB flash) which sounds like it's similar spec, and I have Trusty Tahr on it, and I still use it for basic web browsing, and playing some media.
I use gnome-flashback, partly because I prefer the interface, and partly because Unity is painful with slow graphics and only 600 vertical pixels. The other drawback is the SSD is seriously slow, possibly because it does not support TRIM.
The biggest problem is keeping enough of the filesystem free during updates. The update tool leaves a trail of downloaded deb files after they've been installed, and never cleans up old kernels.
If I did not make serious effort to clean up after every update, it would have run out of space ages ago. I hope that the version installed on this stick has been tweaked to do some of this automatically.
fwiw, ubuntu-tweak and computer-janitor are seriously useful for keeping this cruft down.
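For what it's worth, the stock apt tooling can do some of this cleanup too — a sketch of what I end up doing by hand (note that --purge also removes leftover config files):

```shell
# Delete cached .deb files that can no longer be downloaded
sudo apt-get autoclean

# Or empty the package cache completely
sudo apt-get clean

# Remove automatically-installed packages that nothing depends on
# any more; on recent releases this includes old kernels
sudo apt-get autoremove --purge
```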
Remember that Pluto is in its own orbit, and moving quite fast (4.67 km/s), so 'late' as in crossing Pluto's orbit after Pluto has passed by. 20 seconds would have increased the closest distance, but probably not by much compared to the 7,750 miles distance.
But the answer is in the quoted article from NASA. "...[JHAPL] says without the adjustment, New Horizons would have arrived 20 seconds late and 114 miles (184 kilometers) off-target from the spot where it will measure the properties of Pluto’s atmosphere. Those measurements depend on radio signals being sent from Earth to New Horizons at precise times as the spacecraft flies through the shadows of Pluto and Pluto’s largest moon, Charon."
So yes, late.
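As a back-of-envelope check on what 'late' means here (my arithmetic, not NASA's): at that orbital speed, Pluto itself covers a fair distance in those 20 seconds.

```shell
# Distance Pluto covers along its orbit in 20 seconds at 4.67 km/s
awk 'BEGIN { printf "%.1f km\n", 4.67 * 20 }'
# prints "93.4 km"
```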
I suspect that it's not a case of them thinking they're going to lose money. Sam*eil (or Ch*ung maybe) will almost certainly make them more money than most corporate investments.
It's more a case that they think that Samsung on its own will make more money, and will be easier to influence, than the enlarged combined company, in which the individual investors will have a diluted shareholding.
I'm sure that I read somewhere that in part of the contract to supply the DoD with weapon, control or maintenance systems, there was a clause requiring a 10 year withdrawal of support notification.
This means that the supplier has to warn the DoD of the date that the kit would not be maintainable 10 years before the support was withdrawn.
That makes the 3 year notice rather abrupt, don't you think?
It's interesting that you can see some of what you say in the deregulation of the bus network run by the Tyne and Wear Passenger Transport Executive (PTE) around Newcastle and the immediate area (and probably in the other PTEs around the country).
The PTE ran a fully integrated transport system that included the bus services and the Metro rail system.
When it was set up, it was arranged around interchange hubs, and your journey would often be bus to a hub, Metro between hubs and bus to your destination. This was paid for as a single ticket via 'zone' pricing, so you could buy your ticket on the bus by asking for a Ticket to the zone your destination was in (that you could find easily on a map), and that would cover all steps. Public transport across the Tyne was mostly restricted to the Metro, and all of the zones were well served by a comprehensive bus network.
The result was that almost everybody used it. It may have been a bit slower than travelling by car, but if all you needed to do was get yourself across the region, it was a no-brainer. You just used it, like you use the underground in London.
Roll forward to 1986. The bus regulation the PTE ran was removed, and the PTE themselves had to divest the bus operations (although they continued to run the Metro). Suddenly multiple operators started competing for the lucrative routes and ignoring the less profitable ones. Buses started crossing the Tyne again. Road traffic across the bridges slowed to a crawl. People did not know how to price a journey, and had to buy several tickets from different operators. Use of the Metro started to decline, because it was no longer the common trunk to link the hubs together.
Eventually, all of the local bus companies ended up being bought by the national bus operators, which essentially stifled the competition, resulting in effective monopolies.
They are finally trying to re-introduce a common fare system AFAIK. I no longer live there, so I'm a bit out of the picture.
I've done the disk clone thingy with dd several times. I'm not concerned by it, although you can get a bit screwed by the UUID in /etc/fstab of some partitions under some circumstances.
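A sketch of the clone-and-check I mean — the device names are placeholders, and dd will cheerfully overwrite the wrong disk if you get them backwards:

```shell
# Check which disk is which before doing anything destructive
lsblk

# Clone the whole old disk onto the new one (both unmounted;
# /dev/sdX and /dev/sdY are placeholders)
sudo dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync status=progress

# The clone keeps the original filesystem UUIDs, so /etc/fstab
# normally still works; if you reformat any partition afterwards,
# list the new UUIDs and update fstab on the clone to match
sudo blkid
```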
The Windows side is more tricky, because moving the image to a new system will trip the Windows Genuine disAdvantage 'it's no longer the same system' check in later Windows versions, and you have to change the license key to match the new system (believe me, I've done it several times). But I only keep a real Windows partition going for those very rare circumstances when I can't do what I need to under Linux or in VirtualBox. It gets booted about twice a year!
The Windows drivers are not really an issue. It'll always come up with VGA graphics, and as long as the USB 1.1 drivers are installed or the optical media works, you can install the correct drivers (my Thinkpad history: 365X -> 365XD -> 380D -> T20 -> T23 -> T30 (multiple mobos) -> T43, and I had an N33sx and an L40sx before the Thinkpad brand). During this time, I've gone through many disks, although for the last 10 years or so, I've only changed the disk as I've needed more space, not when I've changed machine.
I think you're very lucky. The T30 was a bit of a bogey machine, the worst of any of the Thinkpads that I've ever had. Don't get me wrong, I kept my T30 running for as long as I could, but...
The T30 was the first Thinkpad that was completely made in the Far East AFAIK, and they got the mechanical design a bit wrong. They are notorious for the RAM sockets breaking their solder joints; finding a T30 with both memory slots still working without some paper wedged in to put pressure on the slots is a very, very rare thing. I think that the designers recognised this, because there were no T31 or later T3x systems, and the T40 was launched not that long afterwards. I think the motherboard was put under some strain, because several of the ones I've had have had different types of foam pads to act as strain relief, but they never really worked.
I know Thinkpads are repairable. I had 4 different motherboards in my T30, and eventually resorted to re-soldering the RAM sockets myself, but I don't have a re-soldering station, and using a normal dry soldering iron to melt the solder already there eventually burns the surface mount pads off the motherboard.
I kept it running until I could no longer find any mobos on eBay, and the ones I had could no longer be re-soldered. I eventually decided to replace it with a T43 (this machine) when I could find one with a Dothan processor, and the cost dropped to lower than a T30 mobo, even one not guaranteed to work. But the hard disk (swapped out of the T30 to keep the 'machine' the same, even though it's different hardware) is flagging SMART errors, and large IDE 2.5" disks (100GB+) are also getting rare. Core 2 Duo T60s (with SATA hard disks) are beginning to look cheap on eBay at the moment, so I may switch again, but this swap will require copying between disks, not just a disk swap.
The T30 also had a definitely silly bit of design. If you tried to remove the disk with the lid shut, you were guaranteed to break the top left corner of the bezel. It is such a common problem that if you see a T30, it's almost certainly broken there.
If there was a modern Thinkpad, in approximately the same form-factor as the T20-T60 ranges, available at a reasonable price, I may just skip to a new one rather than a used one. But, unfortunately, I think Lenovo will take the interest as a sign that people would pay a high price, and they will introduce them at ultra-book prices. If they do this, they've not really looked at what people want.
T20 running as a firewall. 700 MHz Pentium 3, 256MB RAM and a PC-Card second Ethernet card.
Too slow even for a firewall really, but it's been on 24x7 for more years than I can remember.
I bought it second hand in about 2003.
Typed on a T43, my main workhorse, running Ubuntu LTS 14.04. Still fast enough for most day-to-day purposes.
If you were using ISO standard Pascal (ISO 7185), it may have made for safer code, but in order to achieve that safety, you had to put up with a language that was so limiting that doing something like writing an OS would have been a virtual impossibility.
I mean, really. ISO 7185 Pascal does not have anything remotely like a C pointer. It also does not have any way of addressing data objects that have not been declared. It's also pretty difficult to handle variable length records in data files, because of the inflexible nature of the I/O system.
So what happened is that you got things like Turbo Pascal and Delphi, which were not Pascal, and introduced enough of the methods needed to write systems, which also added some of the same vulnerabilities that C had.
You should have used something like Ada as a counter to C, not Pascal. Although Ada was a strongly typed language, its very reason for being was to write correctly coded systems for such things as military applications, so it had the necessary constructs to interface with hardware.
Unless you're prepared to have massive inefficiencies in your code (like surrounding all data structures with hardware write protected regions), it's always going to be a matter of trust.
If you use a language that does strong boundary and type checking for all data objects in software, you're trusting that the compiler and/or run-time is correct and does not contain any flaws. You're also always going to find that your code runs slower with these checks.
I'm not going to suggest that there are not sound reasons to stop using C, but using a language with better protection does not guarantee total safety.
C is still excellent for what it was written for, generating code that closely fits with the underlying computer architecture. But it's not a perfect language, by any means.
When C was first developed, it was necessary to have a language that would map very closely to the system ISA (and it did map to the PDP11 instruction set very nicely), because even minor inefficiencies in code size may have pushed the kernel beyond the 56KB address space on a non-separate I&D PDP11 (I used to run a PDP11/34A, and had different kernels to drive all of the terminals without a tape drive, or to have only some of the terminals working with the tape driver compiled in). IIRC, Sun 3 680x0 systems had to have a kernel less than 1MB.
That time is fortunately past, but that does not mean, in these days of a word-processor needing hundreds of megabytes of memory just to load, that producing efficient code is no longer something to strive for.
The counter to this is that although you (and I, I will admit) may not have the skill to fix problems like this, we do have the ability to aid someone who does, with a formal or informal contribution of either money or equipment, and it does not even have to be the developer with the Open Source software model.
I suppose that you could give Microsoft and Adobe money and ask them to do the same, but I suspect that it would disappear into the general coffers, and not significantly affect the quality of the code.
That's interesting, but I don't actually think that table is relevant to my post.
Firstly, all of the life expectancy figures in that table are actuarial, meaning that they are guesses based on historical figures rather than accurate predictions. And my point is that the assumptions that those numbers are built upon are changing because of really dangerous professions either disappearing, or becoming much safer. The only column that is relevant is the "number of lives".
Secondly, as I was commenting on differences due to work done by men and women, quoting a figure for children aged one is largely irrelevant, unless the US has found ways of infants working.
These figures for boys and girls below working age show something different, although I don't have a clue what it is. It can't even be misadventure, because (again) infants probably don't really behave very differently.
Thirdly, within the timespan of the table (1891-2010), there are two very major and several significant minor events that skew the figures in the US. These are the first and second World Wars, together with the Korean police action, the Vietnam war, the first and second Gulf Wars and the following actions, the US war in Afghanistan, and the actions in the Balkans. All of these would significantly increase the proportion of male vs. female mortality, as wars are generally still fought by men, and even those who return may have had life-altering injuries that persist in affecting the figures long after the events.
It's also interesting that the figures include populations living outside of the US, which probably include people living and working in much less safe environments than the US mainland. I would also contend that Europe probably has more job related health and safety regulation than the US.
We will not be able to see the full figures for the period I was talking about for many years to come.
I certainly wouldn't dispute that the figures show a shorter life expectancy for men, but my point is that recently, and on into the future, the differences due to the type of work people do (my post) will become less as time goes by.
That's called society. Without it, there would almost certainly not be the jobs that you do that pay your salary.
It's one of the things that differentiates homo sapiens from most other mammals. Society as a whole provides what is required for society's benefit (there are other examples, such as wolf packs which have non-breeding members of the pack that contribute to the pack through hunting and nursemaid roles).
You, as a net financial contributor are paying for the next generation. Parents, as net care contributors, are 'paying' for the next generation with care. Some of the next generation in the future will 'pay' (with their care) for older people, whether they've had children or not. Society as a whole benefits from all contributions, and many I've not detailed. It's been this way, in one form or another since man started living in groups, although now it's expanding to a scale where it no longer looks like society any more.
I will admit that in the modern selfish world, where people just say 'what about me' all the time, that this could break down, but hey, nothing is forever.
I think the argument about shorter life expectancy for men, and them not reaching pensionable age is a severely outdated one.
It may have been true up until 50 or so years ago, but health and safety legislation and the changing demographic of work in developed economies have seriously undermined this line of thought.
It's true, there are still mostly-manual jobs out there. Construction work is an obvious one, but so is anything that is mostly physical in nature. In these, anything that would seriously affect the health of the workers is now severely regulated in progressive economies, and professions like mining, steel work, and transport (I'm thinking of dock workers here) have either almost completely disappeared or been altered beyond recognition by mechanisation.
One of the other outcomes of this is that the historical actuarial rules for pensions have been broken. Most older pension schemes assumed that people (esp. men) might either not reach pensionable age, or need a pension for only a few years before expiring; they now face paying 20+ years. This also affects the balance of the state pension (too many pensioners vs. the number of workers), as that is a non-funded pension scheme (workers today are paying national insurance to fund the state pensions of existing pensioners, not paying into a fund for their own retirement). This is one of the reasons why governments actually want population increase, through means including immigration, to provide a mid-term extension of the pensions system.
Many years ago, Girobank once processed a cheque for £1000.00 (one thousand pounds) paid into one of my accounts as £10.00 (ten pounds), even though the cheque was correctly filled in, both words and numbers.
It took about two weeks for them to fix it after I spotted it, because they had to retrieve the original cheque from the document archive. In the meantime, £990 of my money was in limbo, having been taken from the source account but not appearing in the destination account.
They did refund all of the failed transaction charges, and fortunately, the mortgage company accepted that this was not my fault, and did not post a black-mark. Another fortunate thing was that this was the only direct debit from that account.
What part of "about one in five households receive tax credits" that I said earlier don't you understand?
These are mostly normal people, often with both adults in work, challenged by circumstance. Some may be deliberately playing the system, but most aren't. They are just relying on the system to provide what they're told they are entitled to, and using this income to plan their expenditure.
I don't know what you expect these people to do if their circumstances change for the worst. Once, they may well have been able to easily support three children. If they've lost income as a result of losing their jobs and having to move to lower paid ones, or accepting pay cuts in order to keep their jobs, the family breaking up or any of a number of different things, their financial situation could have got worse since they had the children.
Are you an advocate of turning the kids out of the home if you struggle to provide for them? The Victorian foundling homes and orphanages? The work house? Or maybe euthanize them? Come on, let us know.
We don't know this woman's situation, so don't just brand her a chav because you're a smug bastard who has never been in that sort of situation. She may be in work and still be entitled to these tax credits.
Oh, and by the way. In order to achieve a stable population, what with non-reproducing members, it is necessary for some families to have more than two kids. Three is not only quite normal, but absolutely essential to compensate for people who die having never had children. IIRC, the average number of kids per couple needed to keep the population stable is regarded as somewhere between 2.3 and 2.4.
If everybody was forced to enter heterosexual relationships and have two kids, the population would still fall as a result of natural mortality and infertility. IIRC, the last two or three UK censuses show that, discounting people who come into the UK from abroad (who are often young and have children while they are here), the UK population would be falling. And this has serious implications for government finances in the future.
I was thinking about why I was getting downvotes, and it occurred to me that there may be people who don't know what tax credits are in the UK.
They are basically part of the welfare system which is supposed to return tax to workers in low paid jobs, depending on their circumstances. This is supposed to encourage people to keep working in these low paid jobs, rather than giving up and moving to what used to be called unemployment benefit, now jobseekers allowance and income protection support.
The idea was that it would reduce the amount of money paid in benefit, but instead what it does is trap people in low paid jobs that they can't afford to give up, but which offer them no chance of improving.
In many cases, what this does, if treated at face value, is that low paid workers pay negative tax, meaning that not only do they not pay tax, but they get money back.
Something like 5,000,000 families in the UK receive some tax credits, according to government figures, and all households with children which do not have a higher rate tax payer will also receive child benefit.
It is part of the complex tax system that we have in the UK, as envisioned by that <irony>financial wizard</irony> Gordon Brown. I would prefer to see a system where people did not need this level of support (because it's regressive and unfair), and because what it really shows is that either wages are too low, or cost of living is too high in the UK for a significant part of the population.
So, as I said, tax credits are not a windfall, they are part of the regular income in about 1 in 5 of all households in the UK (based on the previous figure and the total number of households in the UK). This is why it's important. Families rely on its timely arrival for all sorts of things including food and fuel. That is why this woman is upset.
Her tax credit payment is part of her regular income. As such, she, along with many others, will rely on it to plan their spending. If she's on part-time working or a zero hours contract, it may be one of the few bits of regular income she gets. Why should she not rely on being able to draw on it once it's been paid?
Don't think that for these people, a tax credit is a bonus or a windfall. It's (again) regular, weekly or monthly income.
What the hell do you think she's supposed to use it for? A holiday!
Go talk to some real people rather than guessing at their circumstances.
Oh, and if you want to be regarded as distinct from the myriad of other AC posters, why not set up a meaningless handle and post under that? El Reg will still know who you are whether you post AC or not, but nobody else will. That way we can differentiate you from all the ACs who post similar-sounding comments.
Maybe she can afford them, but relies too heavily on the services of other people to be able to get at her assets. This is looking like a personal vendetta, AC. Do you know her personally?
Lots of people don't have credit cards because they can't get them. Mainstream lenders nowadays are very risk averse, and will not give credit cards to people they think may not be able to afford them (this is what they are being told to do by the financial regulators).
Why do you think there are/were so many pay-day lenders about?
Get off your high horse and look at the real world!
It may not be illiteracy. It may be circumstance, often forced upon them. For example, if you rent a property now, it is very likely it will have an electricity pre-pay meter. The utility companies don't like rental properties having credit agreements (which is what a monthly or quarterly bill effectively is).
Have you never been in a situation where you've been waiting for a credit into your bank account before doing something? Well, that's what these people are doing, but they're doing it for food and fuel.
Just because you have the means to keep a financial buffer does not mean that everybody does.
If my primary bank account became unavailable to me for whatever reason for more than a few days at certain times of the month, I could be embarrassed by mortgage and other regular payments bouncing. Hungry, maybe not, but definitely unhappy enough to complain, especially when Twitter makes it so easy!
I think many people on this forum are complacent in their relative financial comfort. We've frequently commented to each other that the readership here are pretty unrepresentative of the population in general. This really is another aspect of this.
Whilst I cannot fault your logic, experience tells me that there are a lot of people who rely on a single bank account, through which all of their salary/tax credits/child benefit is paid.
This type of person is also unwilling (so as to avoid the temptation of spending money they don't have) or unable to get a credit card. Also, many operators of pre-payment charging systems for electricity meters will not accept credit cards. Drawing money out of a credit card at an ATM results in punitive credit charges. And many people do not keep significant amounts of cash lying around.
I have been caught in a situation where I could not get money from a bank because of a cock-up. There was a day where there was a backbone telecom failure in my home area, which resulted in all of the ATMs in the area becoming non-functional, and no shops could do any electronic transactions. I fortunately had folding money in my wallet, but I know of many people who were unable to do anything that day unless they had cash.
So contingencies are good, but there may come a time when even that becomes useless. Then, the only safe regime is cash!
Personally, I don't believe that a Wiki can ever be an alternative to a properly structured document management system.
The problem with Wikis is that they're almost impossible to impose any structure on. Whilst they are a good place to drop hints, tips and the various miscellanea of wisdom, using one as the primary document repository leads to you never being able to find anything without using the search function. And search is fine until you reach a certain critical point where wading through the myriad of hits for common terms, especially if more than one team is using the wiki, becomes a burden in its own right.
I suppose it is possible to impose a structure by convention, but such systems are easy to get wrong.
I prefer a system where the storage method imposes structure on the documents, at least for the formal documentation. So, you define a documentation template that includes all of the sections you're ever likely to need: requirements, design decisions, implementation details, implementation plans, operational procedures, support procedures, disaster recovery, etc.
The idea is to try to cover all the bases, so define more sections than you're ever likely to need, and just miss out sections that are not needed for certain projects.
If you store this in some hierarchical storage structure (I actually like just using a tree-structured file system) with change control at every level, finding particular documents becomes easy. And writing the documentation becomes easier too, as you can populate a new branch for a project from a pre-defined template, and all of the boring dross of getting the look-and-feel correct is done for you. You can immediately tell if a particular section of the documentation has been written or not.
And with regular section numbering and naming conventions, you can 'slice' the tree horizontally to get the first cut of a set of operational procedures for many systems to generate a new document for, say, your operators quickly.
I first saw this done for a UNIX system in the late 1980s, using the directory structure to organise the files, shell scripts to populate and customise a new branch from a template, files containing the sections written in troff/memorandum macros/pic/tbl/eqn/grap in each directory, and SCCS as the change control in each directory.
IMHO, it worked so well that I've adopted something similar whenever I've needed to write something in an organisation without its own documentation standards.
I know that there are a whole slew of tools that will allow you to do something similar (even with MS Word documents in all their binary obtuseness), but I think this fits in with the UNIX way of doing things. And for search, well, you've got find and grep, and everything is text! And you can always create a permuted index with ptx if you're using troff.
At the time that the system was designed, PowerPC was regarded as an in-vogue processor. Apple were using it, as were Nintendo in the GameCube, and the Cell processor (which is where the PowerPC core used in the Xbox 360 came from) was exciting a lot of people, including Sony with the announcement of what was going into the PlayStation 3.
It also had good price/performance and performance/power levels compared to Intel processors of the same generation. Remember, Microsoft did not have a stellar success with the original Xbox, with its single-core Celeron-derived processor. And Microsoft even had some expertise in PowerPC, because they had a PowerPC port of Windows NT.
IBM were also much more willing to produce a custom chip for Microsoft at an acceptable price, and the Cell PowerPC core was a very suitable processor for use as a core on a multi-core chip due to the work on the Cell architecture (Intel's Pentium-D was launched in 2005, the same year that the Xbox 360 was announced - previously it was only the expensive Xeons that were multi-core).
It's only in hindsight (and a quite significant change of direction by Intel) that PowerPC looks like it may have not been the best choice, but hindsight is a wonderful thing. If only we could predict what it shows us after the fact.
I still think that POWER and PowerPC processors have a future, although maybe not in gaming consoles.
Those government departments are just penny-pinching. My guess is that they are mostly using AIX (rather than IBM i/iOS/OS400), and there is great compatibility with older systems, so probably there is a good chance that they could install modern IBM POWER systems, and expect the applications to either run unchanged or for newer versions of the applications to be available.
If the code would work on AIX 5.3, then they could also use an AIX 5.3 WPAR on a recent system to run it. There are plenty of options out there, and modern kit is so much more powerful that they are unlikely to have performance issues. If they are using hardware that needs conventional PCI (as opposed to PCIe) or MCA (it'd have to be really old kit for this!) then it would be a problem, but you would probably struggle to find an Intel box with PCI nowadays, and MicroChannel would be completely out of the question for a server with an Intel processor, unless you wanted a 486 system!
They probably just don't want to install any new kit, probably because they no longer have the skill in house to change anything, and their outsourcing partners have quoted the GDP of a small country to do it for them.
I've not studied this much more than the articles in the press, but communications will be a problem for Philae. It's on a lump of who-knows-what (that's what it's there to find out), with very little gravity, spinning or maybe tumbling through space in a chaotic way (as the comet warms up, volatile chemicals will almost certainly boil off, causing changes in attitude), with very limited power available for a strong transmission or steerable communication dishes.
There's no way that it can communicate with Earth directly. It will be talking through Rosetta, which itself is in orbit around the comet (I'm not sure how this is possible without a significant gravity field). It will communicate with Rosetta when it can, and Rosetta will talk to Earth when it can.
As the saying goes, it's complicated.
Your definition of "Disaster Recovery" does not match what I've traditionally worked with, although I'm prepared to admit that things have moved on a bit.
Depending on your allowed down time, a disaster recovery solution may include a rental company delivering pre-arranged configurations of machines to a known site. It does not mean that you have to have those servers already installed.
If you have a nominated site, with the correct communication and power already laid on, and you have an agreement with a rental company that can deliver predefined configurations of kit within an agreed timescale, and this (and the rebuild) fits into the allowed window, this can also count as Disaster Recovery (in fact, this used to be what Disaster Recovery was all about).
It all depends on how long you can be down for. Nowadays, many businesses really require full-blown geographically separated high availability or multi-site concurrent solutions, as even a few hours downtime can seriously hurt some businesses.
You've also not included the term Fault Tolerance, although you've alluded to it with the redundant power supply example; redundancy and fault tolerance are not exactly the same. Maybe it's just normal that people expect fault tolerance by default now, but I'm sure the term is still used in documentation.
Your dream line-up can only be a dream.
Jack Bruce, Rick Wright, John Bonham (OK, officially replaced), Jon Lord, Cozy Powell, Peter Banks (OK, he left Yes a long time ago), Dio, Kelly Groucutt (and Mike Edwards, although they both left ELO), Eric Carr, all gone.
And these are just the ones I found from the bands you mention!
I was supposing that the concession operators' balances could be accessed in a timely manner. If that is not the case, then they and their customers are effectively giving the organisers a free, high-risk credit facility for no return. I would not accept the risk.
But surely, that should come under some form of regulation, because the festival could be seen as a bank without a banking license?
Many retail tags are 'burned out' at the till (never wondered why they are waved over a plastic pad on the counter that warns you not to put bank cards and mobile phones on it?).
This is supposed to permanently deactivate the tags by destroying them (tags are inductively powered, and these pads provide so much power that it blows a fuse in the tag).
Doesn't always work, though.
Completely removing the tag would be better, but it's been suggested that many retailers would like to hide the tag so that you don't even know where it is in order to remove it.
Technology like the RFID cards can be used for many purposes, both good and bad.
From the organisers perspective, using the RFID tag as a payment method reduces the amount of cash on site, so it will probably reduce some of the petty theft that happens at these events. This will benefit the customers, the people who run the concessions (they don't need to maintain cash floats), the organisers (who don't need to have cash handling facilities for the concession operators), and the Police who will probably see a reduced number of theft reports, especially if the RFID has a second factor (PIN?) to authorise payment.
It also makes sure that people only have access to what they have paid for, making it more likely that people at the festival actually pay the right price to see what they want.
Tracking people who move around is something completely different. While it will happen as a side effect of the entitlement checking, it's no different to having a barcode on a ticket, which is frequently used at attractions.
The only time it may be intrusive is if they have silent, unmarked RFID scanners scattered throughout the event, not just at the gates.
I think if I was there (which is not going to happen, partly because of these concerns), I would probably want to take a foil-lined pouch or tin to keep the tag in (depending on whether it is a wrist-band or a dog-tag, both terms are used in the article), and only take it out when it is necessary.
I don't really agree with facial recognition, but as that can be applied both in real time and in retrospect to captured CCTV video, there's not really much point in objecting to that, because it will happen anyway.
It's not that I'm paranoid (well, not that much), rather that I object to the concept of there being the ability to track me.
When I learnt it, it was FORTRAN. It's only since that trendy upstart Fortran 90 came along, with its free-form input, long variable names and pointers (amongst other corruptions) that the capitalisation changed.
I've not written much in the last 25 years, so it's still FORTRAN to me.
But please note, a lot of the Unified Model is still written in FORTRAN 77 and earlier, so the point is moot!
Blind benchmarks would work if you took identical code and ran it on two separate machines.
Unfortunately, when comparing different languages, the way that the problem is coded is partly conditioned by style, fashion (you don't think fashion is present? Wait until you've been around a while; accepted standards for writing code change over the years), and personal preference. This makes direct comparisons more difficult, because different people will code the same problem in the same language differently, and some differences can have quite an effect on performance.
Beautifully written code is not always the fastest!
It's not so easy to determine what is right and wrong when the forecast gives probabilities, not yes/no answers.
What you are talking about is the forecasters attempt to turn a complicated forecast into something that numpties like you have some chance of understanding, all in the space of three minutes or less. It's always going to be wrong for someone, because the weather for a whole region over a space of hours will never be the same across that whole region.
What you are complaining about is the generalisations that you hear on the radio or TV not being detailed enough for where you are. Look at the more detailed local forecasts on the BBC or Met Office web sites or apps, and you will find it is quite a bit closer to what actually happens.
There's another thing though. When you have ensemble runs (you run the forecast with slightly different parameters, and the reason why there are up to 3000 per day) it is quite likely that at least one of the ensemble will actually be right!
I had this discussion with a couple of FORTRAN programmers some time back, and actually wrote some test code to see what they were talking about.
The problem with C-derived languages is that everything is done by reference (essentially pointers), and this adds an indirection, to follow the pointer, that needs to be resolved for almost all structured data references, particularly arrays (very commonly used for this type of work).
On the other hand, FORTRAN works much more directly with the addresses of data in memory (once it has the base address of an array, for example, it can do direct arithmetic on the index rather than having to resolve the pointer again). This means that it is much easier for the data prefetch mechanism to spot consecutive addresses so as to fill the cache. This is especially true because the FORTRAN standards dictate the way that certain structures have to be laid out, which has actually conditioned processor design in the past, and enables the FORTRAN programmer to make some intelligent decisions about which rank of a multi-dimensional array to traverse to maximise the effect of the cache.
I can't remember the figures exactly, but I found that FORTRAN code actually ran faster than its similarly written C equivalent. You had to write some very unusual C to narrow the gap. There was not a lot in it, but when you are trying to get as much as you can from a system, every clock cycle counts.
This is comparing FORTRAN and C. If you want to include the OO overheads of C++ to the equation, then things get even worse! And the discrepancies are not always fixed by optimising compilers.
The problem is twofold.
Firstly, the Unified Model needs to have quite large sets of data per cell. Currently, the systems are sized with ~2GB per core, and each core is at any one time calculating one cell on the grid. This is to do with the way that the information is arranged, and although current GPUs can address large(ish) amounts of memory, they cannot manage to provide enough memory per core for a "few thousand compute units on a single card". Until the GPUs have the same level of access to main memory that the CPUs and DMA communication devices have, this will always be a block.
Secondly, all of the time steps are lock-stepped together, and at the end of each time step, results from each cell are transferred to all of the surrounding cells in three dimensions (called the 'halo'). As I understand it, the halo is being expanded so it is not just the immediately neighbouring cells, but the next 'shell' out as well. This makes weather modelling more of a communication problem than a computational one, and one of the deciding factors in the choice of architecture was not how much compute power there was, but how much bandwidth the interconnect had.
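For anyone unfamiliar with the halo idea, here is a deliberately toy one-dimensional sketch (my own illustration; the real model is three-dimensional and the copies go over an interconnect, e.g. via MPI, which is exactly why bandwidth dominates):

```c
/* Toy 1-D halo exchange. Each "rank" owns LOCAL cells plus HALO
   ghost cells on either side, refreshed from its neighbours before
   every time step. LOCAL and HALO are illustrative; HALO = 2
   mirrors the widened two-shell halo mentioned above. */
#include <assert.h>
#include <string.h>

#define LOCAL 8
#define HALO  2

typedef struct {
    double cell[HALO + LOCAL + HALO]; /* [ghost | owned | ghost] */
} Domain;

/* Copy the outermost owned cells of each neighbour into our ghosts.
   In the real thing this memcpy is a network transfer per time step,
   for every face of every subdomain, all lock-stepped together. */
void exchange_halo(Domain *d, const Domain *left, const Domain *right) {
    /* left neighbour's rightmost HALO owned cells -> our left ghosts */
    memcpy(d->cell, left->cell + LOCAL, HALO * sizeof(double));
    /* right neighbour's leftmost HALO owned cells -> our right ghosts */
    memcpy(d->cell + HALO + LOCAL, right->cell + HALO, HALO * sizeof(double));
}
```

The point of the sketch is that the volume of halo traffic grows with the surface area of each subdomain and with the halo width, while the useful compute grows with the volume, so past a certain scale the interconnect, not the FLOPS, is the limit.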
To do this work on a system using GPUs for some of the computational work would require significantly more memory than can conveniently be addressed in the current GPU models, and because there are different GPU-to-main-memory models around with each generation of hybrid machine, getting the data into and out of the GPUs is not generic, and currently has to be written specifically for each model. There are also no standardised tools to assist.
Personally, I feel that the current GPU hybrid machines are a dead-end for HPC, as were the DSP assisted systems 30 years ago (nothing is new any more), but what we will see is more and more different types of instruction units added to each core, making what we see as GPUs today just another type of instruction unit inside the core (think Altivec crossed with Intel MIC if you like).
I can't comment on this attack, but if you have a processor with a different instruction set, then many of the stack smashing and buffer overrun vulnerabilities disappear, at least until the malware becomes clever enough to identify the processor architecture before dropping machine code into the target system.
The issue here is that we are fast approaching a monoculture, with x86-64 processors becoming ubiquitous, so there is only one processor target. Granted that different OSs give some protection, but however you do it, if you can get some valid machine code injected and executing on a system, then many things are possible.
Obviously, x86-64 machine code is not valid on, say, a system with an ARM or POWER or Z processor, so this type of attack becomes invalid in the short term. But this only remains the case until another processor type is sufficiently widely deployed to make it worth attacking, whereupon you have the existing problems, just with some additional wrinkles.
The film works on railway carriages because they are mostly metal, and the windows are the only place the signal can get in or out.
The same is not true for schools. Brick, cinder block, low density concrete blocks, curtain wall on steel skeletons, terrapin huts (sorry, showing my age there) are all porous to mobile phone signals. You'd have to line the whole room with the film.
Maybe there should be a tongue-in-cheek icon as well as a joke icon.
I meant this in a very light-hearted way, and it was actually addressed at other people than you. You had already demonstrated with your comments about another network that you were far from the average person who just plugs in a router and leaves it with its default settings.
If I had actually addressed it at you, I would have done it in the same way as I have here, by actually referencing your handle.
I meant no offence.
Biting the hand that feeds IT © 1998–2019