* Posts by Sproggit

37 posts • joined 8 Jul 2012

Watch: SpaceX Dragon capsule breathes fire during crucial hover test


It Makes Sense Though...

If you think about the challenge of deceleration from a purely "basic physics" perspective (i.e. conservation of momentum), then the heavier the capsule, the more energy it will take to decelerate it using a given quantity of fuel.


So by ejecting the heatshield before the decelerative burn, you are basically going to get a greater velocity change [through the deceleration] for the same energy [fuel] cost.
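The effect can be sketched with the Tsiolkovsky rocket equation. The masses and exhaust velocity below are purely illustrative guesses, not real Dragon figures:

```python
import math

def delta_v(exhaust_velocity, wet_mass, dry_mass):
    """Tsiolkovsky rocket equation: delta-v gained by burning (wet - dry) kg of propellant."""
    return exhaust_velocity * math.log(wet_mass / dry_mass)

# Illustrative numbers only (not real Dragon figures):
ve = 3000.0        # effective exhaust velocity, m/s
capsule = 4000.0   # capsule dry mass, kg
shield = 500.0     # heatshield mass, kg
fuel = 400.0       # propellant available for the burn, kg

dv_with_shield = delta_v(ve, capsule + shield + fuel, capsule + shield)
dv_without_shield = delta_v(ve, capsule + fuel, capsule)

print(f"with shield:    {dv_with_shield:.1f} m/s")
print(f"without shield: {dv_without_shield:.1f} m/s")
```

The lighter stack always gets the bigger velocity change from the same propellant load, which is the whole point of dropping the shield first.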


Linus Torvalds lashes devs who 'screw all the rules and processes' and send him 'crap'



I regret that I have only one up-vote to offer you.

But you're *so* right...

SYSTEMD = Screwed Your Machine 'Til Everything's Maliciously Destroyed.


2009 IBM: Teleworking will save the WORLD! 2017 IBM: Get back to the office or else


It's Like The Tide...

... sometimes it is rising, other times it is falling - it remains motionless for very little time indeed.

There are all sorts of reasons that a company might move towards or away from home working. Of all of them, however, the most likely is simply that a new executive has come along and, wanting to make an impression on their seniors, has elected to instigate a new working practice...

I've watched a single IT Organisation, over the course of 20 years, adopt:-

1. A Product-Centric View

2. A Customer-Centric View

3. A Process-Centric View

4. Global Management and Reporting

5. Local Management and Reporting

6. Matrix Reporting

Without exception, each change aligned with new senior management. Each change took on average two years to implement. Each change was announced less than 6 months after "Mission Accomplished" was declared on the previous process [an act which accompanied the departure of the sponsoring executive].

In most cases the declared 'victory' was an outright lie.

People who have not worked for a company the size of IBM might fail to grasp how employees become institutionalised, or how fiefdoms develop. Little "Cottage Industries" pop up all over the place when a service previously provided centrally is eliminated for "cost savings", only for an executive to admonish the supervisors who "just got on with it" and promptly stand up a new central function, usually populated by upstart wannabes who want the promotion more than they have the ability to do the job right...

This is office politics, plain and simple. The declarations and claims being made - that black is the new white, up is the new down and in is the new out - are just a load of fluff.

Nothing to see here, el Reg. These are not the improvements you are looking for... Move along, move along...


Ex-FBI man spills on why hackers are winning the security game



If it is possible to ignore the atrocious grammatical errors in this piece (el Reg, if you allow your journalists to post articles without being proof-read, then shame on you), then the underlying premise of this article misses the point...

There are several reasons why institutions continue to be vulnerable to hackers, but these are typically:

1. The institution does not understand the actual threats they face. Try and explain to them that the Severity of a Threat = Probability of Occurrence x Impact of an Event and you will get a puzzled expression in response...

2. Business Executives undermine security best practices. Have a developer ask a business sponsor, "Would you like me to implement this new feature you have asked for, or would you like me to fix these 5 vulnerabilities?" and too many times the answer will be to request the new feature...

3. Institutions pander to the egos of "key developers" and/or have a cape-and-boots mentality when it comes to fixing issues. Go round any large IT shop and look at the diversity of technologies and architecture in use and you will quickly realize that this stems not from mergers but from arrogant developers or architects insisting on using the latest whizz-bang technology on their project. This is "resume architecture" - if it will look good on your resume, put it in the architecture requirements of your next project. Allowing this to happen will generate a heterogeneous technology infrastructure that allows bugs to lurk unseen for years. Then, when things go wrong, the same organization rewards the person who dons the cape and boots and swoops in to the rescue (when others were floundering) when they should be punishing the same person for failing to document a project properly in the first place...

You get the point...
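The severity formula from point 1 is trivial to apply. Here's a toy risk register, with made-up probabilities and impact scores, just to show how it ranks threats:

```python
# A toy risk register using Severity = Probability x Impact.
# All probabilities and impact scores are invented, illustrative values.
threats = {
    "unpatched web server exploited": (0.30, 9),   # (annual probability, impact 1-10)
    "laptop lost on a train":         (0.10, 4),
    "datacentre meteor strike":       (0.0001, 10),
}

# Rank threats by severity, highest first.
for name, (probability, impact) in sorted(
        threats.items(), key=lambda t: t[1][0] * t[1][1], reverse=True):
    print(f"{name}: severity {probability * impact:.4f}")
```

Even this crude arithmetic makes the point: the boring, probable threat outranks the dramatic, improbable one.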

Over the last 20 years I have spent much of my career being parachuted in to Organisations immediately after a major incident - and 9 times out of 10 the root cause is idiocy...

1. Ignorance of Threats

2. Executives putting personal agendas ahead of common sense

3. Failing to patch known issues in a timely manner

4. Allowing the unqualified to purchase security toys, I mean tools

5. Needlessly complex infrastructure

6. Lack of Testing

7. You get the picture...

None of these issues are new. We've all seen them. Yet they keep biting Organisations where it hurts - and what's worse, unscrupulous snake oil merchants.... sorry, IT Security Product Vendors, will continue to peddle their snake oil, sorry, security tools, to ignorant middle management.


Ubuntu Linux daddy Mark Shuttleworth: Carrots for Unity 8?



It was the combination of the enforced switch to Unity and then Shuttleworth's decision to start selling your local search terms to Amazon that drove me away from what had always been a solid distro. 12.04 was my last Ubuntu system and whilst it was good, I don't miss it.

IMHO the single biggest mistake that Shuttleworth made was forcing Unity on people before it was ready. I am sure that his reasons were benign - i.e. getting more people to test the code - but it was not ready for GA release and should never have been allowed to be the default desktop. Especially as the ability to add an extra desktop environment and then select which one to use at log-in time is so trivially easy...

However, what disturbs me is the way that Canonical seem to have unilaterally disregarded all other work in this space, thrown all their toys out of the pram, and made no attempt to collaborate with other projects. As a previous poster has pointed out, it's not like there has been a shortage to choose from. It's clear that the motivator was to get a single interface across multiple platforms, come hell or high water...

My only question would be: what were the numbers of people either 1) asking for this; or 2) likely to actually use it? I suspect very few - and that this was, in effect, a vanity project sponsored largely by Mark's undeniably deep pockets. Well, I won't knock that - he's doing what he wants with his cash...

Can't help wonder, though... what could Canonical have turned to if they had listened to users? For a start I'd like to see someone offer us a viable alternative to systemd... Tried that with Mint 18.0 on my system - endless issues with hardware, resulting in a wipe and re-install of the bulletproof Mint 17.3...


China gives America its underwater drone back – with a warning


Re: Lying so-and-so's - Historical Precedents

In 1982, the United Nations Convention on the Law of the Sea established the "12-Mile Limit" as the maximum extent of territorial waters. See here:-


Whilst this ostensibly looks like a round of posturing between China and her neighbours - perhaps an attempt to set up some form of enforceable "exclusion zone" around the Chinese mainland - this is almost certainly not the case. It is much more likely that China knows or suspects that area of ocean floor contains some form of mineral wealth which China believes can be extracted, either now or in the short-to-medium term.

In considering this it is worth comparing the strategies adopted by both China and the US when it comes to securing access to resources [energy, minerals, food, etc] around the world. Over the last few years China has moved to cement numerous deals across Africa and South America; building partnerships, making purchases - even manipulating markets [such as dumping cheap steel across Europe to kill off the European steel industry] - where it believes it appropriate to do so.

The United States has taken a different approach; it has adopted a two-headed strategy that comprises control of financial markets and the introduction of wide-reaching trade deals. The trade deals, in particular, are deceptively powerful - for example the way that they impose things like acceptance of software patents, or give [US] companies the right to sue other governments if those governments pass laws that would harm the profits of US corporations.

What is unusual about the events unfolding in the South China Sea is that no proxies are involved; these are moves undertaken directly by the super-powers. Whilst Saudi Arabia and Iran have been fighting proxy wars in the Middle East for decades, it looks as though the US and China have moved on to something a little more direct.

We live in interesting times, in the original sense of the phrase.


If only our British 4G were as good as, um, Albania's... UK.gov's telco tech report


Maybe Incentives Would Work?

For as long as I can remember, we [largely the peoples of most western, democratic countries] have been told that "market forces" and "commercial competition" are the best, most effective and efficient means of driving up standards and driving down costs.

Yet here is another example of a spectacular failure in this regard. There could be many causes: an inept regulator, collusion, corruption, a cartel of major providers, etc. Frankly, it doesn't matter which it is.

What matters is that the controls, safeguards and processes intended to protect consumers are clearly not working. That's bad enough, but even worse, nobody in authority is doing anything about it. Where is Ofcom through all this?

When the 5G spectrum comes up for auction, we can anticipate that all existing providers will bid for a slice of the service, because they want to grab the profits on offer from the big metropolitan hubs such as London, Manchester, Birmingham, Liverpool, Southampton and so on. But they won't be interested in outlying regions with fewer clients. This is what the Regulator is supposed to be dealing with - protecting those of us who live in rural areas so that the big corporations don't cherry-pick the profits and leave everyone else behind. The same pattern plays out with buses, supermarkets, wired broadband and mobile phones.

So what we need - what we have every right to expect - is for OfCom to refuse to grant 5G licenses to any provider that doesn't meet basic minimum criteria for coverage and quality for the existing services.

Oh, wait. That's too sensible. Sorry, forgot.


Citizens Advice slams 'unfair' broadband compensation scheme


If Only OfCom Did What They Were Paid For...

Sorry for taking us a little bit off topic, but the issue here isn't just compensation when parts of the nation's communications infrastructure fall down in a dysfunctional tangle [although that's frustrating enough].

Another issue concerns the misleading way that line speeds are advertised and handled. We've all seen the advertisements offering speeds of "up to" 45Mb/s or "up to" 75Mb/s and so on, only to end up with actual line speeds that 1) are nothing like as good; or 2) prove to be unreliable...

So how about OfCom introduce a new and simple rule that says: "If a telco wants to charge a unit price for a 45Mb/s service, but can only offer a slower speed, then the maximum price they are allowed to charge has to be pinned to the (percentage) ratio of the actual speed to the claimed speed."

Or to put it in simple terms: If a telco offered you a 45Mb/s service for say £20/month, but then only managed to provide half the speed [22.5Mb/s] then the maximum they can charge you for the service is £10/month.
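The proposed rule is just a linear pro-rata cap. A minimal sketch (the function name is hypothetical, prices in GBP):

```python
def max_monthly_price(advertised_mbps, actual_mbps, advertised_price):
    """Cap the monthly price at the ratio of delivered speed to advertised speed."""
    ratio = min(actual_mbps / advertised_mbps, 1.0)  # never charge above the list price
    return advertised_price * ratio

# The worked example from above: 45 Mb/s advertised at GBP 20/month,
# but only half that speed [22.5 Mb/s] actually delivered.
print(max_monthly_price(45, 22.5, 20.00))  # -> 10.0
```

The `min(..., 1.0)` clamp just means a line that over-delivers doesn't cost extra; the cap only ever works in the customer's favour.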

One of the reasons that this would be a good idea would be that it would give the telco companies the necessary incentive to deal with degrading and/or poor quality lines, because the slower the modem speed at the client, the lower the income for the telco...

Ultimately, the reason most of us get such shoddy service from the telco companies is because the regulator lets them get away with it.


Virgin Media costs balloon by MEEELLIONS in wake of Brexit


Not Entirely Fair...

Yes, the value of Sterling dropped in the wake of the referendum result in June.


1. The U.K. has not even triggered Article 50, let alone left the EU, so half your statement is wrong. If you had said, "due to sentiment" you would be closer to the truth.

2. Most of the post-referendum movement in Sterling (if not all of it) has come from currency speculation. On Black Wednesday, when Norman Lamont pulled the UK out of the Exchange Rate Mechanism, American investor George Soros made over $1 billion by shorting the Pound. In June, many people, including Soros, tried to repeat the trick. The drop in Sterling has been caused by speculators, plain and simple.

I have ZERO interest in the Brexit decision either way, but I do think it is important that we're clear on these points...


A quarter of banks' data breaches are down to lost phones and laptops


Conflated Issues

You're mixing up your stories in an attempt to make this sound more relevant than it is...

For example, you quote the 2014 JPMorgan breach, which was the result of a hack of servers and nothing whatsoever to do with phones or laptops.

The same will likely be true of most, if not all, US companies. This is due to a Californian "Data Breach Reporting Law" that requires any institution that loses non-encrypted client data to notify those clients - a requirement that has since spread across the US. As a result of that one piece of legislation, most US banks [read: all of them] instigated processes to ensure that EVERY company-owned mobile asset [laptop, Blackberry, etc] met minimum encryption requirements, so that the loss of an item explicitly *didn't* trigger data breach reporting requirements.

That being the case, how could your article be based on any solid facts - given that the law explicitly excuses organisations from the need to go public with that data?


Banking system SWIFT was anything but on security, ex-boss claims



This is some aspirational PR fluff from a former SWIFT employee who left their role as CEO 9 years ago. A lot happens in 9 years...

It used to be that connections to SWIFT were only granted to major banks, via dedicated leased lines and wrapped in security. Now, SWIFT themselves will give direct access to large companies, using a VPN solution and Microsoft Windows based software...

SWIFT were greedily eyeing the income that banks were making from handling high-value international trade payments between large companies and figured they wanted some of that for themselves, so they tried to cut the banks out of the loop and go direct.

Oh, and the CEO thinks that there were security issues connecting "smaller banks" ???

Yeah, right!!!!!!!!

Why do I get the impression that there's a dirty little story waiting in the wings to come out, and this is a pre-emptive PR strike?


McCain: Come to my encryption hearing. Tim Cook: No, I'm good. McCain: I hate you, I hate you, I hate you


Re: Theatre. Nothing more.

McCain is stupid, no doubt about that. But even he is not so stupid as to risk offending one of the leading CEOs of Silicon Valley. Had McCain decided to "bully" Tim Cook into showing up, that would have been enough to unite all the Tech CEOs against the Republicans.

That's a lot of lobbying money and campaign contributions that would have walked out the door, right there.

The GOP would never have let McCain be so stupid.


Watch as SpaceX's latest Falcon rocket burns then crashes


Scale of Expectation

The thing that I can't help remembering is that we have barely recovered from "Shit! That dude just put a satellite into space and then *landed the first stage in one piece*!!!" As if that wasn't enough, then we got, "Incredible! Now he's landed on a floating, ocean platform!"

The thing about SpaceX is that unlike, say, Blue Origin, they are willing to fail publicly, fail early and fail often. I remember when they lost a first stage on an earlier flight and their telemetry showed that they had actually run out of hydraulic fluid (because the rocket's systems don't recycle the stuff). Musk fixed that with a bigger hydraulic fluid tank for future flights... Key thing being, they learn every time...

As SpaceX become more successful, so we seem to be ramping up our expectations of perfection for each launch. Isn't that a little unfair? SpaceX have delivered more innovation in the last couple of years than the likes of the US, Russian, and other space programs managed in the last two *decades*... What really impresses is their rate of learning.

For sure they will be much more conservative when it comes to manned flight, but the simple fact is that NASA has made literally hundreds of launches yet it didn't even *occur* to them to try what SpaceX have done and made work... You have to wonder what Musk could achieve if given the same budget that NASA spent on the Shuttle orbiters...


Microsoft planning blockchain-as-a-service for Azure apps


Re: Just wondering

The falsification conditions come up as a result of the way that blockchain works. Basically, as the name suggests, this is a public ledger with the use of mutual digital signatures to agree transactions. The problem comes in the way that the blocks themselves are managed. I am going to simplify, largely because although I read through it in detail, that was a while ago and I am not confident of my ability to relate the key points accurately.

When you have large numbers of disparate parties all attempting to append a transaction to the same block in a chain, you are going to get scenarios where there are atomic failures at the transaction level. For example, suppose there is one free slot in a block and two or more parties try and use it. Only one will succeed and the others will fail. The entire blockchain is designed to handle this using a mechanism analogous to the two-phase commit of XA-compliant systems... However, it does this, at least in part, by allowing splitting of blockchains and downstream conflict resolution of scenarios such as the one I described above.

But what happens if you and I have a transaction thrown out and then, when it's time to re-submit, I decline? This is [as I understood it from the read-through] an extremely narrow scenario that is virtually impossible to predict in advance. When all the potential conflicts and block splits are resolved and confirmed into the chain, then a transaction is deemed "guaranteed". However, there exists a narrow window of time between when the transaction is written to the block and when it is fully committed, in which another user/transaction source could challenge/trump that commit and force a do-over.

I think that's the "almost impervious to falsification" bit, if I understand it correctly...

For what it's worth, people are predicting that we need to change the underlying architecture of Bitcoin because the collision scenario I describe will only get more prevalent as more people use it. There appears to be something of a tussle underway in the BC community on this very point...
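This is all a simplification, but the hash-linking that makes a fully-confirmed transaction so hard to falsify can be sketched in a few lines (a toy structure, nothing like Bitcoin's real block format):

```python
import hashlib

def block_hash(prev_hash, transactions):
    """Hash a block's contents together with the previous block's hash."""
    payload = prev_hash + "|" + ",".join(transactions)
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny chain of three blocks.
genesis = block_hash("0" * 64, ["coinbase"])
b1 = block_hash(genesis, ["alice->bob:5"])
b2 = block_hash(b1, ["bob->carol:2"])

# Tampering with an already-buried transaction changes that block's hash,
# which breaks the link to every later block - this is why a transaction
# is only treated as "guaranteed" once enough blocks sit on top of it.
tampered_b1 = block_hash(genesis, ["alice->bob:500"])
print(tampered_b1 == b1)                                 # False
print(block_hash(tampered_b1, ["bob->carol:2"]) == b2)   # False: b2's link breaks too
```

The narrow falsification window discussed above lives entirely in the gap before a transaction has enough blocks stacked on top of it.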


Get outta here, officer, you don't need a warrant to track people by their phones – appeals court


What About Privilege?

Does anyone happen to know if the ruling is a blanket statement or narrow in effect? Specifically, if the ruling were applied as it is written in this article, then the en banc review just found that client-attorney privilege is not protected from a warrantless search by a duly authorised law enforcement officer.

This is one of those relatively infrequent events where a ruling that could be used to set case law can, on the face of things, look to upset other, established, legal principles.

Given that appeals rulings are considered powerful arguments in case law - i.e. they are illustrations that one legal team can use to influence a court if they think the ruling favours their point of view - it is critical that the scope of the ruling is appropriately defined. Even if a complete overturn of the ruling is unlikely, don't be surprised to see another appeal here. If the ruling is written as loosely as the article hints, well, that would be bad.


Apple bans benign iOS spyware detection, security info app


Quite Simple Really

Without this application running on your iOS device, it would be possible for Apple to run whatever they want on your handset without your knowledge.

We've seen a number of unsubstantiated reports over the years that odd things happen to existing Apple handsets when new models are released. I'm not saying that there is any truth to the stories, but they are absolutely out there. Now *if* there were any truth to those rumours, the application as written would be able to provide concrete evidence of a handset's performance being "tweaked", because it would show which processes were taking up CPU cycles and/or memory...


SpaceX adds Mars haulage to its price list



Thinking about the successful First Stage landings that we've seen from SpaceX so far (3), it's obvious that Elon Musk is pushing his domestic program to learn enough to be able to safely land the Red Dragon on Mars.

Now, you *could* do all sorts with that, including carry payloads for the likes of NASA and other agencies, such as rovers and the like. At the moment I do not see much in the way of commercial demand for landing payloads there, but would love to be proven wrong.

Where this gets interesting, however, will be the work done in preparation for a human landing. We have already seen how SpaceX uses "spare capacity" to push their design envelope and experiment with new technology. You can bet that if anyone wants to send a partial payload to Mars, then SpaceX will take the order and pack every gram of spare capacity with their own experimental gear. Even if the rockets carry nothing more than spare materials or tools then it will be worthwhile.

The true genius of SpaceX and Musk has been the way they have got their current customers to pay for their R&D in such an efficient way. Don't expect that to stop any time soon.


Experian Audience Engine knows almost as much about you as Google


Is This Legal Under The DPA?

Maybe it has changed, but I thought that when the UK Data Protection Act was passed in 1998, one of the provisions introduced was something that basically said, "Data gathered and used for one purpose cannot then be re-used for a different purpose without the permission of the data subject..."

In other words, Experian might be able to harvest your data to determine if you are credit worthy, but that does not give them the right to sell that data for marketing purposes without your explicit permission...

I may be wrong - can anyone here clarify please?


Stop resetting your passwords, says UK govt's spy network


Re: Too Many bad Movies

With specific reference to your comments regarding defense against brute force attacks... Maximum attempt limits are a great way to allow an attacker to perform a denial of service attack against your legitimate users. And to those who are reading this and thinking that they would simply include an ever-increasing retry delay to thwart automation of this attack: remember that likely 90% of existing authentication platforms out there simply don't have that functionality... So good luck with adopting that as protection for, ooh, say, your platform administration accounts...
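For what it's worth, the "ever-increasing retry delay" idea looks something like this - a sketch only, with hypothetical class and parameter names, keyed per source rather than per account so an attacker can't lock out the legitimate user by design:

```python
import time

class LoginThrottle:
    """Exponential backoff per source instead of a hard account lockout."""

    def __init__(self, base_delay=1.0, cap=300.0):
        self.base_delay = base_delay
        self.cap = cap
        self.failures = {}   # source -> (consecutive failures, time of last attempt)

    def allowed(self, source, now=None):
        """May this source attempt a login right now?"""
        now = time.monotonic() if now is None else now
        count, last = self.failures.get(source, (0, 0.0))
        delay = min(self.base_delay * (2 ** count), self.cap) if count else 0.0
        return now - last >= delay

    def record(self, source, success, now=None):
        """Record the outcome of a login attempt."""
        now = time.monotonic() if now is None else now
        if success:
            self.failures.pop(source, None)
        else:
            count, _ = self.failures.get(source, (0, 0.0))
            self.failures[source] = (count + 1, now)

throttle = LoginThrottle()
throttle.record("10.0.0.1", success=False, now=0.0)
print(throttle.allowed("10.0.0.1", now=1.0))   # one failure -> 2 s delay, so False
print(throttle.allowed("10.0.0.1", now=3.0))   # delay has elapsed -> True
```

The point stands, though: if the platform you're stuck with doesn't expose hooks for this, you can't bolt it on.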


SpaceX's Musk: We'll reuse today's Falcon 9 rocket within 2 months


Capture Mechanisms

I guess the hardest part about the application of a capture mechanism is going to be the need for safety. {I don't know, but I am assuming that "Of Course I Still Love You" has no human crew at the point of capture.}

That being the case, perhaps Musk can adopt some technology from his "other" company. Tesla have already demonstrated the "self-attaching charger snake" - i.e. that robotic unit for your garage so that you can get out of your Model S at your front door, only for the car to park itself and attach the charger without help...

Now, with a few examples of that, strengthened and suitably scaled up in size, there has to be a way that Tesla could mount a selection of those around the edge of the landing platform in such a way that they can be swung in to place.

In the descent phase, the 1st stage rocket pops out some small flaps at the upper edge [where it separates from the second stage] that act as drags to keep the motor oriented and descending "rockets pointing down". The same mechanism that is used to pop open those flaps might be able to pop out and reveal some anchor points for robotic, self-attaching anchors.

And the cool part would be that all the complex and heavy tech for that could be mounted on the drone recovery ship, thus not adding to the gross take-off-weight of the Falcon...

Important not to lose sight of the achievement, I guess. In 30 years NASA came up with capsule splashdowns and a dead-stick orbiter return. In 30 *launches* SpaceX are dropping a rocket motor the size of the Statue of Liberty onto a ship in the middle of a force 8-9 gale in the ocean... Impressive.


FBI: Er, no, we won't reveal how we unmask and torpedo Tor pedos


This Is Where It Gets Interesting...

That's an interesting point you raise. Are law enforcement officers permitted to commit other felonies whilst in pursuit of a larger crime?

I am sure there are a stack of different applicable scenarios, case law and situations, but at one end you might argue entrapment. The FBI will counter with the idea that they simply "sat and watched" whilst visitors to the site broke the law - however, if they had to break the law themselves to gain access to the site server, then does that make their evidence inadmissible in court?

It's interesting to note the patterns here, though. Law enforcement is rightly very concerned about crime in the digital domain [we should be grateful for that] but at the same time the test cases they bring to Court [i.e. the San Bernardino phone case against Apple] are clearly being carefully selected not for their severity but for their applicability as "test cases". The really interesting thing is that "public opinion" seems to matter as much as the remedy that they seek...

As the saying goes: "The road to hell is paved with good intentions"


Mud sticks: Microsoft, Windows 10 and reputational damage


"It's the Economics, Stupid... "

No disrespect, Andrew, but the issues with W10 go far, far, deeper than interface design. As I see it, there are three major problems:-

1. Users Have Become the Product

With W10, Microsoft elected to get on the "free product" bandwagon and turn their user base into their product, "selling" their community to advertisers and market research analysts. Selling your users down the river is not a good way to make a popular product.

2. Leasing To Extremes

Rather than charge users a one-off fee for the purchase of their OS, Microsoft aim to make money through a range of subscriptions, such as charging people to disable advertising in the previously "free" desktop games. Using this model, I expect people to end up paying far, far more for their OS than they would for a one-off purchase, because the payments could be levied for years. For example, the "annual fee" model for disabling advertisements in the "free" games is such that in 2 years Microsoft will recoup more revenue than they would from charging a hardware vendor the [typical] $15-$25 license fee for the OS. So users will end up paying *more* in cash terms for a "free" product.

3. Windows is no Longer an OS

Anyone who has taken a basic CS course would quote you the function of an Operating System as "an abstraction from the hardware, task scheduling, memory management and resource allocation" and precious little else. Microsoft seem intent on fooling around with the "OS" in attempts to compete with, for example, GNU/Linux, but the net result is actually *less* effective than it would be if they just thought of it as an OS...

I'm sure that the UI is relevant to the perception of less technically astute users, but the bottom line is that MS have lost the plot with W10. I can only hope that the world wakes up to this sooner rather than later, and that we can have a Windows 11, which would be Windows 7 with a tidy up and the latest DirectX please...


GCHQ: Crypto's great, we're your mate, don't be like that and hate


What's Good For The Goose

<sarcasm>I am encouraged to see the Head of GCHQ proposes that because *some* criminals use encryption to attempt to conceal their intentions [despite the amount of publicity these actions are gaining, despite the fact that Osama Bin Laden was sufficiently careful to not even have a phone line in his compound and despite the fact that there is more than enough evidence to show that the meta-data alone - i.e. the list of who sends messages to whom - is sufficient to identify suspects] ... that we should therefore simply give up our privacy and permit the state to eavesdrop. This distinction ["Because some bad people ... then we must..."] can be usefully applied elsewhere, and I await with bated breath the following proclamations from both sides of the Atlantic:-

1. Because some guns are used to kill people, *all* privately held firearms will immediately be declared illegal and must be destroyed.

2. Because some motor vehicles are used by joy-riders and speeders in ways that result in the deaths of innocent by-standers, *all* motor vehicles will immediately be declared illegal and crushed.

3. Because some Members of Parliament have been caught fiddling their expenses, all second homes will be banned, to be replaced by the conversion of spare loft space in Whitehall buildings into hotel-style rooms that can be booked in advance, with meals served at Westminster...

What's that you say? My additional examples simply won't work? Too extreme? Driven by hysteria and hyperbole? Exactly my point... </sarcasm>


Why Tim Cook is wrong: A privacy advocate's view


Specious At Best, Wrong At Worst

When Trevor writes, "Apple is wrong is in saying that the FBI is asking for a backdoor. It isn't. ", he is misrepresenting the facts as I've seen them reported.

My understanding is: the Apple iPhone in question has been locked using the integral PIN locking mechanism. This has 4 digits and therefore 10,000 combinations [0000-9999]. It also has a mechanism such that if someone enters the wrong value 10 times, the phone will wipe its data. What the FBI are asking for, however, is a mechanism to obviate the "10 strikes" rule built in to the iPhone.

So let's go back.

Trevor wrote, "Apple is wrong is in saying that the FBI is asking for a backdoor. It isn't. ". Well, the FBI are asking Apple to alter the software on the phone to explicitly allow a brute-force attack. If we are to apply debating-society levels of pedantry [I exaggerate only slightly] then that's not far off the truth. But what the FBI are asking for, then, is for Apple to change the iPhone software such that *anyone* could keep working through the 10,000 combinations until they got lucky. Let's say that the FBI find a dexterous employee able to check PINs at the rate of, say, 1 every 5 seconds. That's 12 per minute, or 720 per hour.

In other words, the FBI is asking Apple to create a mechanism that would allow *anyone* [with reasonable dexterity and no concerns about RSI] to crack open an iPhone in approximately 14 hours.
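The arithmetic behind that roughly-14-hour figure:

```python
# Back-of-envelope arithmetic for the scenario above.
combinations = 10_000        # 4-digit PIN: 0000-9999
seconds_per_attempt = 5      # one manual entry every 5 seconds

attempts_per_hour = 3600 // seconds_per_attempt          # 720
worst_case_hours = combinations / attempts_per_hour      # ~13.9
print(f"{attempts_per_hour} attempts/hour, worst case {worst_case_hours:.1f} hours")
```

And that's the worst case; on average the right PIN turns up halfway through, in about 7 hours.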

Now let's compare that 14-hour crack with what we'd expect a typical supercomputer to do with current levels of US Government encryption standards. [ i.e. check and see what the NIST recommendations are].

Sorry, Trevor, but in all reasonable interpretations, the FBI are asking Apple to *massively degrade* the security of iPhones. Now if you take a look at documents like this one:-


from Apple themselves, you quickly see that the company has invested a great deal of time, money and effort into designing and delivering what they believe to be secure products. So just imagine the lawsuits that would emerge if Apple were to do an about-face and degrade iPhone security in this way. It would happen quicker than you can say "Class Action".

This aspect of your article is misleading, specious and inappropriate. Correction and retraction, please...


Half of UK financial institutions vulnerable to well-known crypto flaws


And The Banks Don't Care

My bank is one of those UK financial institutions that use vulnerable cryptography. I have now contacted them on three separate occasions, going back almost a year, to warn them about the vulnerabilities. I have written emails and I have telephoned their customer services line and asked to be put through to technical support. All to no avail.

The best response I had came from a senior support supervisor, who took up my call after it was escalated from a first line specialist. After listening to me repeat my concern, their response was [and I'm paraphrasing since this was a while ago], "Look, Sir, we're very grateful that you've called and of course I'll pass the message along, but we employ top security professionals here. I know you're the customer and the customer is always supposed to be right, but what could you possibly know about cryptography - it's a very complex subject..."

To which my response was something along the lines of,

"Well, other than working in the field of IT Security for 20 years, other than being employed to set security policy for my employer and apart from holding a US Patent in cryptography, clearly I don't know enough to be able to call you and alert you to what I believe to be legitimate concerns in such a way that allows me to be taken seriously..."

They still weren't interested. If there was any practical way that I could function in our society without a bank account, I wouldn't have one...


Microsoft in 2016: Is there any point asking SatNad what's coming?


2016: Microsoft FootGun Release 1.1 [new, improved]

Just rebooted one of my Win7 machines this morning, only to see another nag screen for Windows 10. Since W10's release I have been *paranoid* about avoiding anything update related, going so far as to cross-check every single "update" MS have shipped. So I'd like to know how MS have just "pushed" this latest bit of nagware onto my systems despite my going out of my way to stop them.

If there is *Anything* that will prompt me to reformat these systems and go 100% to Mint Linux, this is it.

These are *MY* computers and they get to run the way *I* want them to. By all means give me a choice to upgrade, but if I decline, then respect my decision. A failure to do that, the arrogant attitude of "Microsoft Knows Best", will be the end of MS products on all these systems.


LHC records biggest bang ever with 1 Peta-electron-volt jolt


Re: Hot density rocks

From what I've read of the LHC configuration [but please bear in mind, it's 30 years since I studied physics to A-Level], there are a couple of answers to your question...

Firstly, the "ring" that is used to accelerate the particles [well, OK, in fact it is actually 2 rings, one on top of the other, and with the contained particles traveling in opposite directions] is super-cooled in order to allow the magnets that contain the particles to get benefits from super-conductivity [i.e. to maximise field strength]. This means that when the particles are "just circling" they are not, in fact, colliding with anything at all, just zipping along through a vacuum...

Secondly, when the particles are allowed to collide - and the computers that govern the ring control whether or not this happens - the collisions can only occur at the location of the various experiments [ATLAS, CMS] that are positioned around the ring itself.

Now for the tricky part...

If you take two identical cars and set each to travel at 30mph until they meet in a head-on collision, the result of the impact is that both cars should [all things being equal] stop dead on the spot of the collision, because the momentum each car brings to the collision is exactly cancelled out by the other vehicle. The kinetic energy is released as heat, [infra-red] light, and sound [and the gasps of insurance under-writers]. Point being that the two cars stop, or, if they do continue to move, it is at a fraction of their original speed and with a fraction of their original energy...
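A minimal sketch of that cancellation, assuming an illustrative 1500 kg per car [30 mph is roughly 13.4 m/s]:

```python
# Two identical cars meeting head-on at equal speed: the net momentum
# is zero, so the wreckage ends at rest and the kinetic energy of both
# cars is dissipated as heat, light and sound.
m = 1500.0   # kg, assumed mass of each car
v = 13.4     # m/s, roughly 30 mph

p_total = m * v + m * (-v)          # momenta cancel exactly
ke_released = 2 * (0.5 * m * v**2)  # joules dissipated in the crash

print(p_total)      # 0.0
print(ke_released)  # ~269,000 J
```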

So now let's go back to ATLAS and CMS... Inside the huge chambers that house these experimental sensors, the environment is set up so that the high energy particles travel through the matter of the sensor, leaving a wake of interactions [with the sensor] as they go. Each interaction essentially robs a bit more energy from the particle, until it decays naturally [which is the way these particles behave, given their latent instability].

OK, we got as far as slow-moving particles... Because they are moving much more slowly now, and because their lifespan can be measured in millionths, billionths or even trillionths of a second, they literally don't have time to travel beyond the physical confines of the detector before they disappear through decay.
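A quick bound on that, assuming an illustrative picosecond lifetime and ignoring relativistic time dilation [which stretches lifetimes for fast-moving particles, but the conclusion survives]:

```python
C = 3.0e8          # m/s, speed of light
lifetime = 1e-12   # s, illustrative picosecond-scale lifetime

# Even travelling at light speed, the particle cannot outrun its decay:
max_distance = C * lifetime
print(max_distance)  # ~0.0003 m, about a third of a millimetre
```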

So that's why the LHC doesn't melt itself...

I'll say it again: I'm not a physicist and I've only got a basic understanding from reading reports and articles from science journals and tech news sites like this one. Your mileage may vary [YMMV].


World's most complex cash register malware plunders millions in US


Escape Route?

We're seeing more of this sort of thing every day, week, month. One thing remains curiously absent from developments, though, which is any form of consequences for the vendor. [Aside, perhaps, from the reputational damage - but memories seem short]. If companies were

1. Taking all reasonable steps to protect their data

2. Not grabbing data that they should not take and do not need

3. Keeping all their technology patched and secure

4. Deploying cyber controls adequate to the risks

then we would likely be seeing less of this. If these retail outlets were vehicle manufacturers shipping cars and trucks with defective brakes, you would expect to see government getting involved and prosecutions for corporate negligence in the works. So why don't we see lawmakers offering to step in and protect the little people from cyber security negligence?


AMD sued: Number of Bulldozer cores in its chips is a lie, allegedly


Re: Reread the Article (@bri)

The original post from td97402 basically called out the fact that some of the AMD chip design involved sharing of some components between pairs of cores, seeming [to my mind] to imply that, because the cores shared a branch prediction engine and both the fetch and decode stages of the instructions, the chip didn't really have properly independent cores.

I'll repeat for the record that I'm not a chip architect and that what follows may be factually incorrect...

However, what I wanted to say was that it is entirely possible that the sharing of the fetch and decode units across multiple cores is entirely reasonable. For example [here comes the fiction] suppose that, on average, each instruction takes 4 clock ticks to execute. Suppose that the fetch and decode units can each retrieve and decode an instruction in one clock tick and then switch between different threads in a second clock tick.

If this theoretical model were in any way reflective of the actual CPU, then AMD might have been able to determine that one fetch and one decode unit [with adequate state switching] would be sufficient to "service" two processor cores.
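That fictional model can be sketched as a ratio argument [all the numbers here are invented, as above - this says nothing about any real AMD part]:

```python
# Toy model: each instruction takes 4 ticks to execute, while a shared
# front-end can fetch+decode an instruction in 1 tick and switch
# threads in 1 more. One front-end therefore delivers instructions at
# least as fast as two cores can consume them.
EXEC_TICKS = 4    # assumed execution latency per instruction
FETCH_TICKS = 2   # assumed fetch/decode tick plus a thread-switch tick

cores = 2
demand_per_tick = cores / EXEC_TICKS   # instructions the cores need per tick
supply_per_tick = 1 / FETCH_TICKS      # instructions one front-end delivers

# The shared unit keeps both cores saturated when supply meets demand:
print(supply_per_tick >= demand_per_tick)  # True
```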

Terrible analogy: I drive a car with a relatively simple 4-cylinder 2-litre engine. The car has one fuel pump. That pump is essential to the engine, since without it those 4 cylinders simply won't get the fuel/air mixture needed for combustion. But the engine only needs one pump, since that pump is plenty capable of supporting all 4 cylinders. In a similar way [again, I have no way of knowing if this is true] the "effort ratio" between what the fetch/decode units do and what the processor does could *easily* be such that these two components can be very effectively shared.

I really didn't want to pick an argument with the original post, just to point out that there could be all sorts of design reasons [and, in modern CPUs, there are all sorts of examples] of sharing or time-slicing components across the broader system design.

To my way of thinking, in order to show that sharing single fetch and decode units between 2 CPU cores is deliberately misleading, someone would have to first show that having one fetch and one decode unit per core could actually produce more throughput. Without this, the plaintiff's case is based on conjecture and lacks a basis in fact. Now that presents a massive problem for the plaintiff, since the only way that they could demonstrate this would be to have AMD build such a chip. Which I can't see AMD being inclined to do...


Re: Reread the Article

I'm not a chip architect, but doesn't the presence or absence of duplicate numbers of those components depend entirely upon things like chipset timing?

Specifically, is it not possible to have a pre-fetch unit that is running at [in practical terms] double the clock speed of the cores? Or put another way, is it safe to assume that the throughput of the pre-fetch unit is tightly tied to that of the processors?

Let's put this another way...

If you fire up any modern task manager on an Intel Core i7 powered machine, you will see that the "core count" is precisely double what Intel claim for the chip, thanks to Hyperthreading. But what all but the nerdy are unaware of is the fact that the Intel chips will typically "sleep" one or more of these "cores" in order to manage the temperature of the chip... So [being argumentative] we could argue that Intel can't claim the number of cores they do if the chip isn't designed to use them all simultaneously?

I'm not trying to pick an argument with you, I'm just trying to offer a view that says that modern chip design has become so hideously complex that this entire [and seemingly frivolous] case seems to be built entirely on semantics.

The irony here is that anyone truly concerned with this "nth level" of performance from their CPU is not actually going to count or measure things like this, but will instead review results from industry-accepted measurement and benchmarking tools like (I think) SiSoft SANDRA. [I might be a bit out of date with that example!] So for someone to come along at this point with an argument like this is not far short of spotting an ambiguity in the documentation for a 10-year-old car and thinking they can sue. Caveat emptor!


Is Windows 10 slurping too much data? No, says Microsoft. Nuh-uh. Nope


We've All Lost The Plot

If you look up the "traditional" definition of an Operating System, it is designed to provide just a few basic services:-

1. Task Switching

2. Resource (i.e. Memory) Management

3. Hardware Abstraction

4. Bound to be something I forgot

If you go back to the DoJ antitrust trials and Microsoft fighting to argue that Internet Explorer - a *web browser* - was an inseparable part of the OS, we find the early signs of Microsoft shoring up their continual upgrade cycle by folding ever more functionality into the OS, forcing user upgrades and keeping the cash-cow churning out income.

When we get into discussions such as this one [and it's fascinating, with lots of terrific and thoughtful insight] it's interesting to see how far the "question" has been moved by Microsoft [and others] and their marketing. Maybe the GNU/Linux model of shipping a platform that can pull in any of tens of thousands of apps in an instant even helped push MS in this direction. But the point here is that Microsoft are playing fast and loose with the definition of an OS as an excuse to push functionality that nobody wants. Once before, the DoJ took them to task for it - but it is sad, though perhaps inevitable, that nobody is even discussing that here, let alone seeing a chance of it happening.

The argument we're seeing from all the major platform vendors is, "As long as there is another platform we can argue you have a choice. As long as we can make that argument we are free to do what we want."

Sadly the simple fact is that W10 is shipping in enough volumes and people are using it in enough numbers for MS to be able to continue undiminished. If there had been a mass movement in the industry to boycott the platform, we might have seen a partial retreat on this. But they astutely decided that a retreat from something new, shiny and free was unlikely, especially given that less than 1% of the users will be aware, much less care, how badly they are being abused.

I have nothing but empathy for everyone posting and sharing their frustration here in this comments thread, but I suspect the truth is that the lack of change/concern/remorse from MS since W10 launch means we can only expect this to become more restrictive and intrusive with time.

It boils down to this [and I apologise in advance for being so blunt about it]:-

Either use the product and shut up, or don't [and shut up]. Those are your only options. Don't bother to argue, complain, plead, beg, cajole, implore, impress, entreat or otherwise attempt to influence Microsoft to change W10. They are not listening. Get over it.


So what the BLINKING BONKERS has gone wrong in the eurozone?


On German Economics Between the Wars


You wrote, "It should be said here that they don't think that inflation produces Nazis, no. Because the great German inflation was actually in 1921 to 24 and that didn't bring Adolf Hitler to power at all."

I'm not an economist, but I do remember being told [by a history teacher, which of course does not make it true] that one of the primary factors in the German economic turmoil of the early 1920s was the somewhat unreasonable reparations burden placed on Germany by the Allies at the end of the First World War. I'm not sufficiently familiar with the economic trends of the time, but I believe that whilst the reparations might have looked like a firm and somewhat punitive response in fair economic conditions, they became untenable as the pan-European situation worsened into the early 20s.

Further, whilst I would be very happy for someone to provide more details and correct me if I am wrong, I also understand that one of the reasons for the present Greek financial crisis could easily be described as Germany's own fault. Specifically, at the time that Greece joined the Euro as one of the "first tranche" of countries to do so, they had in fact failed to meet the criteria for economic convergence as set down by the Bundesbank [um, sorry, ECB]. Because Germany [um, sorry, the Bundesbank... no, wait, the ECB] wanted the Euro to be successful, they turned a blind eye to the massive hole in Greek tax receipts. [The same was true, to a greater or lesser extent, of Portugal, Italy and Spain]. As a result, the Eurozone accepted into its midst not one but four economies that each carried significant fiscal risk. As if that wasn't enough, what then followed [to varying degrees] were economies [and Britain under Gordon Brown's Chancellorship was the same] that spent far more than they earned, and borrowed to cover the shortfall, relishing the availability of cheap money thanks to falling interest rates.

Except, of course, the credit had to run out at some point; the issue for the Eurozone was that the crash happened at the same time for many countries around the world, leaving them no room to work around the problem.

But the "root causes" of the Greek situation can be summarised with two simple observations:-

1. At the time that Greece was admitted to the Eurozone it failed to meet the convergence criteria and therefore was nothing more than a problem stored up for the future;

2. At the time that Greece was admitted, the oversight of the convergence process was handled by German banking regulators [since of course the ECB is basically the Bundesbank with a new logo over the door, and run entirely along German banking lines].

This last point is relevant given that Germany runs her economy in a way that is in harmony with ECB policy; Greece does not. Unless or until Greece is willing to run her national economy like Germany, this friction/tension/trouble will remain in the Eurozone. It is a philosophical challenge as much as an economic one.

Oh, one more thought...

If the Eurozone and the ECB don't get their act together and sort this out pronto, then other EU economies will start to fall perilously close to the trouble that Greece is in now. And just wait for what *could* happen if France gets into difficulty. Not because their economy is weak: it isn't. But because such a *huge* percentage of France's economy is reliant entirely upon the state, whilst Germany has a smaller state machine and more activity in the private sector. Trouble in France could potentially see the Euro fall apart completely...

p.s. Not an economist, or a historian, don't have a clue what I'm talking about. ;)


That shot you heard? SSLv3 is now DEAD


If only there was a way to shame companies into upgrading their security promptly.

Bank of Scotland online, for example, is still using TLS 1.0 [not exactly the same as SSLv3, but not far enough removed to be considered significantly more secure] for all its banking activity.
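Refusing the legacy protocol versions isn't even hard - with Python's standard `ssl` module it's a one-liner, as this minimal sketch shows:

```python
import ssl

# A client context that refuses to negotiate anything below TLS 1.2,
# so a connection to an SSLv3- or TLS1.0-only server simply fails.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)  # TLSv1_2
```

The same `minimum_version` attribute works for server-side contexts, so there is no excuse on either end of the connection.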

What are they thinking?

If you try and email their support line, you get an auto-reply which begins, "Thank you for alerting us to the suspicious e-mail you have received."


How can one of the big national banks (part of Lloyds Group these days) have the temerity to operate like this?


SpaceX touts latest gear: new module, rocket demo


Program Costs

I think Kharkov makes some interesting points about the cost of the U.S. space program, but in large part I suspect they miss the truth of it. American space exploration was launched by Kennedy with two aims in mind. One was as a means of demonstrating the 'superiority' of Capitalism on the world stage - the chief rival being the U.S.S.R. at the time. The other was as a means of boosting the US economy. In economic terms, the 1950s had largely been about the US shifting back from a war economy (and recovering from the costs). Kennedy's plan was simple - pour vast amounts of public funds into the space program at the top (via NASA) and then ensure that it trickled out into the broader economy by allowing NASA to award contracts. This wasn't quite a "cost no object" approach, but remember that NASA had a sponsor (Kennedy) who was very keen to demonstrate American technical prowess with a moon landing. So the start of the space program was done on a "just make it happen" budget.

The second aspect to this is to think about contemporary capability at the time: materials science was pathetically primitive in comparison with what we've learned since. More, one of the major challenges in getting the moon program started was the development of an avionics computer capable of adjusting the vectored thrust of the launch vehicle 50 times a second. With today's technology, our reaction to that would be "Pfft!" (too easy) - back then they had to develop that capability from scratch.

Since the 1950s mankind has learned that a space program isn't merely "science fiction" but that it has a broad range of commercial benefits, including better communications, SatNav, (satellite dish) entertainment and even R&D. Operating in space has become a legitimate commercial goal.

When you combine these two major factors (the pressure to be commercially viable and the *massive* advances in the relevant sciences) it stands to reason that we should be able to start today and develop a program that is massively cheaper than NASA's original 1950s offering.

However, despite all of the above, I do think that what we're witnessing today is special - but for different reasons. If we compare not just the design but the entire mindset behind Rutan's SpaceShip One, for instance, the articulated tail section is not just engineering genius, it's a very elegant solution to the deceleration/re-entry problem... If we look at Musk's idea to allow the "spent" first stage of a rocket to retain just enough fuel to safely land itself... Then it's becoming clear that we've moved beyond the occasionally wasteful "just enough to work" mentality of the "public sector" space program of the 1950s, and are now looking at a commercially viable future.


Haswell micro: Intel’s Next Unit of Computing desktop PC



It looks as though Intel are leaving the best of their kit for others (Apple) to debut. When you look at the best of what's been announced (i.e. Haswell with Iris Pro 5200 graphics), the only vendor selling actual kit is Apple, with the MacBook (inc. Air/Pro) ranges that include Retina Display.

Hopefully 2014 will see availability spread to other manufacturers. When it does, this particular offering will be consigned to the parts bin. Personally, I'd wait for something like a 16GB, i5, 5200-based Intense, from fit-pc.com. That would be entirely passively cooled (completely silent) and have two digital monitor output ports - likely HDMI and DisplayPort. The 5200 GPU has the bandwidth to comfortably support a pair of 1920x1200 monitors (or a single 2560x1600) and we'll likely also see something with decent optical output.

This gets us close to an ideal configuration for, say a web developer, with one screen for a browser and one for an IDE, with decent sound output and no noisy fans. I can live in hope, I suppose...


Tesla's Elon Musk v The New York Times, Round 2


The Balance Of Evidence

This second round of claim and counter-claim brings some useful additional data to the table. Fortunately, we can ignore most of it, as stated here, as a bit of "he said, she said". What we can't do, however, is ignore the facts.

Our NYT journalist, in their article, made very clear mention of the fact that they used cruise control in order to preserve battery life. If you look at the analysis data provided by Tesla, it is impossible to spot any period of the test drive during which cruise control was active - the vehicle speed is just a series of irregular spikes, even when on a sustained run. Point 1 to Tesla...

Where our NYT journo did get specific about aspects of the journey, he was very clear and precise in reporting different vehicle speeds, taken from the dashboard of a car on which the speed is very, very easy to read. The data from the trace - and this is clearly visible - reports very different speeds. Point 2 to Tesla...

Our journalist is also very specific about the timing of charges during the journey. Once again, the Tesla trace data reports this very differently. It is important to note that in his latest response our journo replies that he was doing what the people from Tesla told him to do... or that he stopped charging when the range indicator said the vehicle had enough charge... Thinking about that, it almost makes sense on the surface. However, if you gave me a car capable of 40mpg for a test drive, and I put 2 gallons of fuel in the tank for a 50 mile drive, I think you'd agree that in reasonable conditions it would get me there. But if I drove around at 6000rpm in 1st gear, chances are it would not. So the response that "I stopped charging when the gauge said I should get there..." is a little specious if the unfinished remainder of the sentence is, "... and then drove like a plonker to ensure I wouldn't." Point 3 to Tesla.

Final thought. Whilst I'd concede that Musk's rebuttal is a bit heavy on the righteous indignation, it is very clearly supported by graphically presented, factual data, captured from the actual vehicle performing the actual test drive. I notice with keen interest that whilst the journalist is very heavy on responses, at no point does he respond with: "Your data is wrong." That speaks volumes.

[ Oh, FWIW, I'd consider myself a complete petrolhead and have zero interest in EVs... but in this case it looks like a journalist trying to make up a salacious story and being caught in the act... ]


Review: Samsung Chromebox


And The Target Was?

Lots of interesting comments already made and I agree with the assessments of the review and comments that question the point of this product. Maybe we all got it wrong by considering it to be aimed at consumers? Suppose it was aimed at Apple instead - a device intended to compete with the latest incarnation of the Mac Mini, but one which was critically compromised by the choice of OS, limited thanks to Samsung's partnership with Google over Android?

As others have posted, there is no shortage of viable (and much better) alternatives out there.

Quite a few people mention the Acer Revo PC. I had one of these, but just replaced mine with the Shuttle X35. The advantages of the latter are:

Faster Processor

Completely silent fanless design

More powerful on-board graphics extends display capability to 1920x1200 pixels

Add an SSD and you've got a seriously quick, fully featured machine capable of stand-alone operation.

I partnered mine with Ubuntu 12.04 (works fine with Mint, too) and I get accelerated graphics courtesy of an Intel driver, snappy response and ultra-low power consumption.

Yes, I must concede that it costs more than the Samsung - especially if you opt for a decent SSD - but it's so worth it.

This Samsung mess has just got to be prompted by the desire to poke Apple in the eye, as opposed to real consumer demand...



Biting the hand that feeds IT © 1998–2017