You may want to revisit your section on software development. What you describe is an idealized, very rigid development process, as might be designed by an auditor. It bears little to no relation to how things work in the real world.
Let me point out a few of your biggest errors:
"It is absurd to suggest that the development team would then create software outside the boundaries of those specifications."
No, it's not. The general consensus is that developers shine when given opportunities to push boundaries. In shops that develop software for public consumption, that (sometimes) takes second place to producing a quality product, but for internal development, companies pushing to be on the cutting edge will give their developers a much longer leash.
"Any data which could not be explained by those technical specifications would raise alarms and be investigated. That is the whole point of testing software before it is deployed - to ensure that it is doing what it was designed to do and that it is stable."
The first does not follow from the second. The point of testing software is to ensure that it does what it was designed to do, and that it is stable. But testing very rarely goes as far as proving that the software does NOTHING BUT what it was designed to do, which is the gist of your first sentence.
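To make that distinction concrete, here is a minimal, entirely hypothetical sketch (the function and field names are invented, not anyone's actual code). The function does what it was designed to do, so a typical unit test passes -- but the test never asks what ELSE the function did:

```python
# Hypothetical example: a function whose designed behaviour is to extract
# the network name from a Wi-Fi frame, but which also quietly keeps a copy
# of the entire frame.

captured = []  # unintended side channel: everything the function ever saw

def extract_ssid(frame: dict) -> str:
    """Designed behaviour: return the network name (SSID) from a frame."""
    captured.append(frame)  # undesigned behaviour: retain the whole frame
    return frame["ssid"]

def test_extract_ssid():
    frame = {"ssid": "CoffeeShop", "payload": b"private traffic"}
    # The test checks only the specified behaviour, so it passes.
    # It never asserts that nothing else happened.
    assert extract_ssid(frame) == "CoffeeShop"

test_extract_ssid()
print("test passed; frames quietly retained:", len(captured))
```

Proving the absence of behaviour like `captured` would require auditing every code path, which routine functional testing simply does not do.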
"But in the interests of objectivity, even if we accept that this code was not noticed during the testing stage (which really is stretching the realms of possibility), once a project has been deployed testing continues on live data. This is important because once a project is deployed in the real world it often behaves differently to how it behaves in a lab environment. Resource efficiency needs to be checked, external factors need to be controlled or at least mitigated and data has to be accurate. This means that even if all the above stages failed to notice the data being generated by this code, once in a live environment it would be impossible to miss."
This is the one which proved to me that you don't know the real world of software development AT ALL. Deployed projects do have testing, but that is usually reduced to minimal amounts to avoid causing performance problems. Resource efficiency monitoring and error logging would be about it. Given the likely relative sizes of the different types of data being collected, most compression effort and troubleshooting of space usage would likely be focused on the photographic component.
Your entire piece also entirely ignores one standard development practice which goes a long way to explain how code ends up in projects without the managers ever knowing it's there. And it's this practice that Google themselves claim caused this issue: the use of external libraries. Google's story is that the Wi-Fi library they used in the StreetView project was developed in their labs as an experimental project, and was included by the StreetView development team because it did what they needed, and they were either unaware or unconcerned that it collected more data than they needed. I'm not saying that I buy this story, but the fact that you don't even mention it puts a huge question mark on your understanding of this issue.
Finally, there is the issue of patents:
"Then on June 3rd 2010 as a result of ongoing class action suits in the US it emerged that Google had filed a patent application for similar technology in 2008, this reinforced our opinion that this could not have been rogue code. In order for a patent application to be filed, it seemed obvious to us that Google's legal department would have had to review the technology and submit the application. This also would suggest that the project had been funded which in itself would require the attention of managers, designers, developers and testers."
Software development companies try to patent EVERYTHING THEY DO -- even experimental stuff that they have no intention of actually using. They do it because they know every other software development company is trying to patent everything THEY do, and patent portfolios are used both offensively and defensively in this business. So the fact that Google applied for a patent means only that they developed the software, not that they ever intended to use it.
Your determination that Google did this deliberately is based on some very flawed (some might say naive) views of software development. There also seems to be some indication of bias -- you appear to be avoiding any points that would weaken your case that it was deliberate.
I don't agree with what Google did, and I don't know whether they did it deliberately or not. But I know that you have not done the analysis necessary to determine whether or not they did it deliberately.