"If the burden of argument in the US is the same as English law than it would be balance of probabilities". That applies to issues of fact, but the meaning of that clause of the GPLv2 is an issue of law. So the court will determine that matter of law, and if Perens is correct in his assessment of the license then he has a defence of truth for the claim of defamation.
Single-flow speed of nBase-xx4 links (was: SFPs + Fiber = cost more than switch?)
"So, that means that the highest possible speed for a single connection is 100Gb?"
No, you get 400Gbps. The nBase-xx4 interfaces run four "lanes" of ethernet symbols. The symbols are round-robined between the four lanes. An ethernet symbol is 64 bits logical, 66 bits on the wire (to allow for clock recovery).
If you are thinking that this means the media carrying the four lanes needs to have exactly the same latency then you would be correct. This is conveniently enforced using fibre assemblies and connectors with multiple fibres.
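A toy sketch of that round-robin distribution (my own illustration, not text from any spec -- the symbol names are placeholders, and only the four-lane striping itself is what's described above):

```python
# Each entry stands for one 66-bit block on the wire
# (64 data bits plus a 2-bit sync header for clock recovery).
NUM_LANES = 4

def stripe(symbols, lanes=NUM_LANES):
    """Distribute a symbol stream across lanes round-robin."""
    out = [[] for _ in range(lanes)]
    for i, sym in enumerate(symbols):
        out[i % lanes].append(sym)
    return out

symbols = [f"blk{i}" for i in range(8)]
lanes = stripe(symbols)
# lane 0 carries blk0 and blk4, lane 1 carries blk1 and blk5, and so on --
# which is why all four lanes must arrive with matched latency.
```

The receiver can only reassemble the original stream if the lanes haven't drifted relative to each other, hence the matched-length fibre assemblies.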
Warrant for access to a safe?
You can't get a warrant to access a safe from a safe manufacturer. There is no backdoor. They'll just tell you to buy a drill and brute-force it.
You can place a warrant against the safe's end-user. But that's exactly what the feds are trying to avoid here. Because this isn't about access to gain evidence, it's about access to do surveillance. That's why the Five Eyes forum was seen as appropriate by the Australian government, and downright Orwellian to the rest of us.
Re: I don't get it...
"Intel are claimed to be using protected IP in their product, but Apple are being taken to court?"
Yep. You are thinking along the right track. If you buy a chip from I, and they've used Q's invention without a patent license, then I is the only party from which Q can gain satisfaction. You, as the purchaser of I's physical product, have no liability (which isn't as great as it sounds, as the settlement between I and Q might well remove from the market the product you purchased, thus lowering its usefulness).
But to this we add the ITC. They can prevent import of a product into the USA based upon a claim of patent infringement. Now toss in some sharp business practice by Q: they ask you for a patent license. Now you can respond "no", upon which Q says "it would be a shame if we made an allegation of patent infringement to the ITC". Now you could choose to fight this out, and win. But a win is not useful if you have been forbidden from selling your widgets for the years the court system can take. So you pay Q.
Moreover Apple are complaining that Qualcomm aren't just seeking a patent license based on the price of the radio chip (bugger all) but based on the price of the iPhone. That is, the patent license fee covers the inventions of others too. That's cuteness by Apple -- you can base a patent license fee on the phase of the moon -- but all the same it is an appealing argument.
So what's the cost to people running internet routers? We've taken a handful of route table entries, and auctioning them off by /24 increases the number of route table entries a hundred-fold. I think we should probably put a stop to this behaviour before it becomes endemic, and filter out the more-specifics of auctioned addresses.
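Rough arithmetic for the deaggregation problem (my own illustration -- the function name and the /16 example are mine, not from the auction):

```python
def more_specifics(prefix_len, target_len=24):
    """Number of /target_len routes covered by one /prefix_len aggregate."""
    return 2 ** (target_len - prefix_len)

# A single /16 aggregate, if announced as /24 more-specifics,
# becomes 256 route table entries instead of 1.
extra = more_specifics(16)
```

Even a /17 sliced up this way is a 128-fold increase, which is the order of magnitude behind the "hundred-fold" figure above, and every such entry is carried by every default-free router on the planet.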
SDN future is driven from cloud providers, not supplier strategies
"The problem is that large customers rely almost exclusively on Cisco and VMware, and they aren't interested in the open-source switches and open-source hypervisors with open-source management software that's needed to make hybrid SDN actually workable today."
This paragraph summarises my issue with the article. It writes as if the enterprise vendors are the major source of influence over SDN, whereas SDN is being driven by the cloud vendors, all of whom build their own switches and all of whom run their own software on those switches. It's likely that the future of SDN in the enterprise will be a byproduct of the main game at those cloud vendors, rather than anything in the strategic plans of VMware or Cisco.
In my view it's very likely that one of the cloud vendor SDN technologies will become so widely known that enterprises persisting with traditional enterprise networking and VxLAN will find themselves in an expensive niche.
I'd be a little bit cautious about ascribing an outage to the last thing to fail in a chain of failures. Especially in a report written by one of the players. It soft-soaps AEMO running its own weather models, and thus missing the warnings from BoM. The result was that the SA grid hadn't been prepared for a major weather event. There are also a number of forward-looking statements in the report about future grid design, but the question of why AEMO management failed to address these design issues prior to the SA outage isn't discussed.
There's plenty of blame for all involved. Even for SA residents and their installation of air conditioning rather than purchasing efficient homes in the first place. Demand management is one area which the SA government hasn't sought change, despite it being one of the cheapest ways to lower electricity prices.
Let's see what other countries do
I suppose the test will be what the UK and France do, as they have access to substantially the same facts.
Banning large batteries from the cabin isn't the worst idea. It's basically a decision that they'd rather deal with explosions of 150g to 1000g of explosive in the hold than in the cabin. The list of airports seems to be approximately those where a substitution of battery for explosive could be expected and which also have flights to the USA.
I also wonder if the agencies are concerned about an explosive laptop being used as a tool in a larger scheme, such as breaching the flight deck door.
Weather in South Australia
Folks, it hardly matters what the energy mix was. Let's have a thought experiment where we return to operation the coal-burning power stations at Port Augusta and Leigh Creek. The six tornadoes would have still cut the large powerlines between Adelaide and those generators.
The essential failure was the lack of awareness of South Australian weather at NEMCO. That led to poor decisions, such as not bringing online all the gas generation actually located in Adelaide. We even had this misunderstanding from the Deputy Prime Minister, who said that this wasn't a severe weather event on par with a cyclone, which is to misunderstand the destruction a tornado can cause, albeit in a smaller area than a cyclone.
The shutdown of wind power due to electrical distribution system instability was very unfortunate. But again, that software behaviour was squarely NEMCO's job to know. And they didn't. At least being software this issue is cheap to fix. Not that there was enough wind power for the state in any case, since those tornado-affected distribution lines were carrying power from many of those windmills too.
The discussion about nuclear reactors is even more laughable. Less than a year ago South Australia had a Royal Commission into the nuclear fuel cycle -- including nuclear power -- which reported that all forms of nuclear power are uneconomic for this state.
What is really interesting is the very different read of this issue within South Australia -- people who actually experienced the edge of the weather event -- and elsewhere.
Re: Reminds me
I have a plane pilot's headset with bluetooth and it's excellent. Keeps the noise out and you can use the phone whilst in the datacentre. They come up on the auction sites every now and again at a reasonable price. Recommended.
"Effectiveness" is code
Note that the spokesperson is saying that the future review is into the "effectiveness" of the section. In Australian Public Service policy language "effectiveness" is a very different thing from "efficiency". "Effectiveness" is how well the mechanism works _without regard_ to other factors, such as expense or the robustness of the Australian Internet.
This would signal a substantial policy change from the current s115a, which requires the judge to weigh up the competing interests when approving a proposed injunction to block access to the "online location". That is, the legislators desired website blocking to be "efficient" rather than merely "effective". Therefore "effectiveness" should not be the primary criterion for evaluation of the legislation.
It would have been useful for Simon to have questioned the spokesperson on their choice of words. If the response was written then the expectation is that words hold their usual meaning.
Re: I must be way out of step..
I think what is lacking is compelling *systems*.
Drones aren't an interesting thing. A set of drones which can find a lost child on a crowded Bondi Beach is interesting.
Similarly wearables aren't interesting. But a wearable which manages your diet and exercise is interesting. At the moment they only pump out raw numbers, and if you want to track diet and exercise there's still a lot of "getting things to talk with things" to do the analysis. Let alone putting that analysis into immediately useful terms: can I have this bit of cake I just waved under the wristband's camera?
The basic problem is that whilst hardware is cheap, systems are expensive. The iPhone wasn't only a touch screen, battery, CPU and radio. It was the "app store" system which made that bit of glass interesting; just as iTunes Music Store made the iPod a better MP3 player than the better hardware from Creative.
CES simply threw a lot of hardware out there. Worse still, it will throw out different hardware next year. So systems builders who rely upon products released at CES will never get beyond the "make it run on the platform" stage before having to start over. At best CES is a demo of technical capability which allows systems builders to assess potential hardware partners.
My prediction: WPIT
The acronym WPIT will become known outside Canberra. The Welfare Payments Infrastructure Transformation is essentially the replacement of the Model 204 database and applications code originally established by the Department of Social Security in 1983. The code has survived name changes (to Human Services/Centrelink), umpteen ministers, and 35 years of budgets and mini-budgets of changes (all of which had to be live by a particular date, a date usually set for political or accounting reasons rather than as the result of an implementation plan -- so we're talking about a lot of programming to deadlines, with no nice-to-haves which might ease future maintenance or migration).
The cost of rewriting this code to run on a replacement system is said by the government to be $1b to $1.5b. $1.5b seems optimistic: even on simple SLOC-based measures the 30m lines of code will cost roughly $2b. It's hard to see how it could be lower, as a lot of the measures for reducing cost aren't available for this task (eg, incremental feature delivery). All this technical discussion hides that Australia doesn't have many people with management experience of this scale of project and management is where the real risk hides (the seeming over-optimism about future project costs is a worrying sign).
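To sanity-check those figures (my arithmetic, not the government's costing model -- the dollar-per-line numbers fall straight out of the estimates quoted above):

```python
# 30 million lines of code, per the figure in the post.
SLOC = 30_000_000

def cost_per_line(total_cost):
    """Implied cost per line of code for a given total project cost."""
    return total_cost / SLOC

low  = cost_per_line(1.0e9)   # government's low estimate: ~$33/line
high = cost_per_line(2.0e9)   # the SLOC-based figure: ~$67/line
```

Industry rules of thumb for rewriting complex, safety-of-payment legacy code generally sit toward the high end of that range or beyond it, which is why $1b looks optimistic.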
This is high stakes IT: the scale; the risk to clients; the macroeconomic risk. Stuff this up and there's no saving your government and your country could enter recession.
The Minister appears competent, which is a good start. But of course if he's too good then he won't be content to stay at DHS for the decade this job will take.
Not sure this works in Assange's favour
I don't think this is a win for Assange. He still can't leave the embassy, as the UK will arrest him for his failure to appear, at which point the USA might well lob in an extradition request. A request which will then be top of the queue, assuming that Sweden withdraws their request for arrest.
As for things being different with President Trump, let's see. Because Trump owes the FBI a lot, and the US law/intelligence agencies desperately want Assange. If only to make an example of, as they are doing with Manning. I'm not sure Trump views Assange as anything more than a convenient dropbox for the work of Putin, and if Wikileaks didn't do the job then someone else would have been found.
I get the feeling that this is much more about solving Equador's problem than Assange's problem.
VW Dieselgate engineer sings like a canary: Entire design team was in on it – not just a few bad apples, allegedly
Realistic tests are a recent development
The problem with faulting the 'government' tests is that you assume that the test is possible outside a lab. Remember how VW got busted: a lab had finally made its emissions test gear small enough to fit inside a car, so emissions could now be tested in the field.
Before the car-portable test what is the government to do? To not regulate at all, because no realistic test was possible? Or to regulate a lab test and then ensure some real-world effect by preventing car makers from optimising specifically for the test?
Update -- Comodo to abandon trademark registration
This thread <https://forums.comodo.com/general-discussion-off-topic-anything-and-everything/shame-on-you-comodo-t115958.3.html> contains the most hilarious statement ever by a CEO, see comment #3. A staffer later posts that Comodo will file to abandon the trademark registration:
"@robinalden Reply #28 on: Yesterday at 03:41:45 PM:
"Comodo has filed for express abandonment of the trademark applications at this time instead of waiting and allowing them to lapse.
"Following collaboration between Let's Encrypt and Comodo, the trademark issue is now resolved and behind us and we'd like to thank the Let's Encrypt team for helping to bring it to a to a resolution."
I think it very much depends on the sector as to what BYOD means.
For universities it means that students bring their own laptop and expect it to work with minimal fuss: connect to wireless, print, plug in somewhere to recharge. There's no attraction at all in a device without a screen -- the huge use of mobiles by students suggests that the screen is actually the important part of the computer.
For schools I wonder if you could take your idea one step further. The kids don't carry their computers around at all, but only the computer's storage (say, a Micro SD card). That storage is the boot device for a VM at both home and school. Add some simple software maintenance and I think this has some value and is worthwhile poking around with. The biggest problem would be Windows.
Business doesn't know what to do about BYOD, and they keep watering down the concept in the hope that it becomes something else. Unfortunately in doing so they lose the benefits of the BYOD approach, and loop back to the start of the process without making any headway. Increased BYOD by contractors and the lack of "enterprise mobile" means they'll have to grasp the nettle eventually, even if only to offer "outside the firewall" Internet with certificate-mediated access (VPN or PKI) back into selected resources.
@James51 and originality
The NAPLAN test is the worst sort of high-stakes testing. Writing an essay outside of the standard criteria will -- even with humans marking -- get you poor results, as it won't fit within the marking rubrics. These rubrics -- 'marking criteria' would be the less jargony phrase -- are designed to allow no scope for creativity. As a trivial example of creativity: if you gave the answer as a poem, that would garner no additional marks and would threaten the marks allowed for grammar and spelling.
The NAPLAN system is gamed by schools, with weeks of "teaching to the test" being commonplace. Although the government denies it, the NAPLAN preparation constrains the time available for actual teaching of material. In particular the Year 9 NAPLAN falls exactly when algebra is being taught and at a recent corridor chat at a teaching conference there was consensus that there was a fall in student ability in basic symbolic manipulation because NAPLAN has vacuumed time away from that foundation skill.
The government denies the tests are high stakes. But in reality they gate admission to all advanced programmes. Even oversubscribed trades programmes often select by NAPLAN ranking -- why wouldn't you drag up your school's average given the opportunity?
Perhaps more attention to dimensions and weight?
Looking around uni is always interesting, as students put down their own cash for laptops and expect to use them seriously rather than for games. The typical notebook by far is the MacBook Air, followed by the Dell XPS 13. With that in mind I'd suggest that this review doesn't give enough attention to dimensions, weight, and battery life. Just on dimensions alone it is difficult to recommend a lot of the laptops in this review, as they're not going to fit well into a school bag.
If you want to see what bargain manufacturers could be doing for school users then look at the Toshiba Chromebook 2. Small, light, good screen, quiet. It's well underpowered for Windows, and its lack of sockets limits its upgradability (and thus lifetime), but you'd hope that manufacturers would take hints from the form factor.
Re: More, please
@gerdesj: "Has anyone actually seen harassment... A non-issue?"
This would appear to answer your question: http://geekfeminism.wikia.com/wiki/Timeline_of_incidents
Those home router DNS servers are unlikely to have run BIND as their DNS forwarder.
Return to Home not much safer
The Return to Home function doesn't solve the problem. There's a 25% chance that the path home will be across the firebombing circuit. Realistically Return to Ground is the only safe alternative. But there's a strong scofflaw element among drone operators, and given the likely loss of the aircraft such a safety mechanism is likely to be disabled.
What's needed here is a social change. As one small example, no stories in the online media with INCREDIBLE DRONE FOOTAGE OF FIRE from non-official sources.
BTW, there's a huge lack of understanding of aviation in the drone forums discussing this issue. Such as postings claiming that rotor blades aren't under any stress hitting a drone, or that drone shrapnel can be sucked through turbine blades without threatening the aircraft. There's no appreciation at all for the pilot workload of firefighting operations, something apparent to even beginning pilots.
Re: Bad marketing El Reg
"The 2GB is needed to run Firefox"
You'll excuse my doubts, given that I'm posting this from Iceweasel (aka Firefox) on a Raspberry Pi 2 (1GB RAM).
BYOD works in some organisations, not that you'd know it from this author
Reading this article you wouldn't think that universities happily have thousands of BYOD devices on their networks every working day. It would have been better if the article, rather than condemning BYOD outright, looked at how they do it and the risks and benefits to the organisation.
Rather too upbeat
What an odd article. No large computing platform uses Oracle hardware or software: Google, Facebook, Amazon, and so on. They don't even subcontract Oracle's engineering expertise to construct their own internal-use products.
What's left is really the crumbs, with an "enterprise" label whacked on. And those crumbs are under threat from the products developed or maintained by the large computing platforms: from Linux to OpenStack to Software Defined Networking. Worse still, despite the increased costs over Google, Facebook, et al, the performance and uptime of enterprise applications is typically worse than that of the cloud applications.
Maybe the Liberal Party supporters of metadata collection could publish their phone call metadata for the past few days of political instability. I can think of nothing more illustrative of the privacy dangers of metadata collection than that request.
So Microsoft joins the fray
This is just Microsoft's (late but good) attempt at owning cloud authentication. Every company is trying to do that at the moment: Facebook, Google, LinkedIn, ... It is part of the reason that authentication on the web is such a mess. Microsoft has some advantage in already being at the heart of a lot of enterprise authentication, and is trying to use that as a lever.
I use LibreOffice on a Mac. It is good. It will even open Visio drawings, which is nice.
But my daughter also uses LibreOffice and trying to round trip documents -- author them on LibreOffice/Mac, edit them at school on Word/Windows, edit them again at home on LibreOffice/Mac -- fails too often to make for happy users. There's only so many times you're willing to go and fix formatting details.
So I'd strongly recommend LibreOffice/Mac if you are able to share the document as an unrevisable PDF. If you need to edit the thing then either use LibreOffice or Word at both ends.
Not Amazon but ASIO
If such a proposal does get up then it won't be put to tender. The "agencies" will make sure they are legislated to provide the service (because that is "more secure") and will charge well over the odds for it. And then in a few decades' time we'll find out that they've been sharing it with all and sundry and using it well beyond its legislated purpose.
Telecommunications is a substitute for travel, we urgently need substitutes for travel
Money for government services has to be raised somehow. Complaining merely because it is on something we use a lot of is no better than the special pleading of other groups on whom taxes fall.
The questions of taxes are if they are fair, efficient to collect, don't distort the economy in unwelcome ways, and don't conflict with broader policy priorities. At the moment a tax on Internet traffic would be progressive (the rich paying more than the poor) and efficient to collect (easily measured, identifiable parties to request payment from).
You could argue that the effect on economic activity isn't going to be great. The "tax" of overpriced mobile telecommunications hasn't stopped people using mobile phones. Demand for telecommunications seems very inelastic to price.
I am opposed to this because of the conflict with the policy priorities of government. Increasing the price of telecommunications inhibits greater use of telecommunications; telecommunications at high speeds are a substitute for travel; and reducing the use of internal combustion engines is a national priority to avoid climate change catastrophe.
Almost all governments seeking increased funding could, for the next 10 to 20 years, do that through a carbon tax. That would kill two birds: advantage government policy in an important area, and raise revenue.
Interaction of low power LED and better PV solar efficiency.
To me the real effect of LEDs using less power is that designs using PV solar cells and a small battery become the obvious solution to powering the LED, rather than paying the cost of cabling to the mains.
So LEDs might well use more electricity. But not add to CO2 creation.
The other benefit of FLAC is that you don't have the noise floor come up as you edit and re-mix the file, as you do with the continual decoding and re-encoding when mixing using MP3s. For that reason FLAC is popular with musicians.
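As a rough illustration of that noise-floor effect (an idealised model of my own, not a codec measurement): if each lossy encode adds roughly equal, uncorrelated quantisation noise, the noise power after k generations is about k times that of one generation, so the signal-to-noise ratio falls by about 10*log10(k) dB. A lossless format like FLAC adds nothing per generation.

```python
import math

def snr_loss_db(generations):
    """SNR degradation (dB) after repeated lossy encodes, assuming each
    pass contributes equal, uncorrelated noise power."""
    return 10 * math.log10(generations)

loss_after_4 = snr_loss_db(4)   # roughly 6 dB worse than a single encode
```

Four generations of mix-and-re-encode costing ~6 dB of noise floor is very audible in quiet passages, which is the musicians' complaint.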
Tapping backhaul providers would do the same job more simply
Cables are cut by fishing boats all the time. They are pulled up onboard a ship and repaired all the time. So although repair and re-splicing is fiddly, it is also an everyday fiddly task, not some impossibility as your article comes close to suggesting. If the 10 kV were actually enough to damage a fishing trawler we'd be very happy -- sadly it readily dissipates in seawater. That gives the NSA a simple technique to cut a live cable: clamp chains to the cable 100m apart, chop the cable in the middle, pull up the chains.
The point Briscoe makes is that SCCN would know about this. But you can readily imagine some misdirection, such as cutting the cable once again a few kilometres away to give a despatched repair ship something to fix.
The question is -- is this likely? And it's not really, because of the backhaul problem. You've applied your splice, you've got a copy of all the data, now how do you get that data back to land? The only choice is to hire wavelengths or complete fibre on the same cable under some pretext (such as connecting Pine Gap back to the USA). That's not really possible to do mid-span without a high chance of stuff up (such as the wavelength used gaining power mid-span, or a FEC incompatibility).
The NSA's desire is much more simply met by tapping the backhaul fibre heading away from the landing site : there's no water, no voltage, no close monitoring, no forward error correction. Just simple dark fibre in a conduit.
The NSA could require a Room 641A type arrangement to tap each cable as it is patched from the undersea cable headend to the customer. But Briscoe is saying that isn't the case (although he explicitly did not call out the Australian landing sites in his denial). Briscoe might well be truthful -- you can only imagine that having had Room 641A revealed by a junior technician that the NSA would look to less apparent ways to do the task.
I don't think it's likely that the NSA are using CALEA or other interception requests for transmission networks. Those legislative mechanisms don't suit transmission networks at all.
BTW, carriers don't encrypt link traffic. It was thought that there was no need. It's fair to say that the various leaks from the NSA are changing that view. However encryption of high speed, high latency, high natural error links isn't as simple as you might hope. That means it's expensive and thus the engineering desire has to overcome the beancounting hardheads.
"Expensive" is in the engineering sense
Oninoshiko, it's all very well to put in a dig at Cisco, but you miss the meaning of "expensive". In this space "expensive" is really talking about wireless transmission time. The more you transmit, the shorter the battery life or, in the case of an aircraft, the more money you hand over to satellite owners.
The article does a fine job of describing the current situation: the applications and architecture are obvious, but unless there is one interoperable specification it's not going to find its way into consumer devices.
Bear in mind the real work has yet to begin. For example, there are no profiles for things such as machines reporting their parts inventory and status. Let alone for determining to whom that inventory should be reported: to the manufacturer, or to your chosen washing machine repairer? It's in everyone's interest for the repairer to appear with the likely-to-be-faulty part on their first visit; that's a feature any purchaser would see the benefit in. And yet the industry is doing its best to make sure this never happens. Let alone more advanced applications.
Uber, lighting $100 notes by the box
Adelaide is a small Australian city of a million people -- it's pretty much on the other side of the planet from everything. Yet Uber are burning cash here like there is no tomorrow: huge billboards near the airport, relentless Facebook ads trying to recruit drivers, direct appeals to taxi drivers. Yet Uber isn't available to answer simple questions: does driving for Uber breach the Passenger Transport Act; do I need insurance above the typical motor vehicle insurance.
Anyway, if Uber are burning so much cash here in Adelaide they must be shovelling it into the incinerator in larger cities. It all seems very "crash through or crash", and very much aimed at IPO returns to venture capital rather than any sane way to build the business.
@IGnatius T Foobar -- ARM
Sure, Intel could have used ARM. But it would have been stupid to do so.
Firstly there are no ARM designs for 14nm. So it would have to license and then develop a design. So it is already costing Intel more than using its own x86 architecture, for which it need not pay licensing fees. Intel *would* have to license: using the historically-licensed design isn't going to cut it as Intel would want the ARMv8 64b design. It does not make financial sense to develop a 32b design with its 2GB memory restriction -- that simply won't have the sales lifetime for the investment required.
Secondly, the market would expect those ARM designs to retail for less than what they would get for the equivalent x86 design.
Thirdly, ARM sales have less lock-in than with x86. If a fab overcommits capacity then a customer can run off some cheap ARM and threaten your business plan. That doesn't happen with x86.
Fourthly, a lot of the work making ARM work on 14nm benefits later arrivals to the 14nm process who then can license that work from ARM Ltd rather than pay development costs. Why would Intel, the first with a 14nm process, ease the way for its rivals?
The statement that "Linux runs fine on ARM" is irrelevant. Linux runs fine on x86 too.
Generation-long problem, but what are the side effects?
Phil argues it's going to be one of those generation-long problems, similar to access to strong crypto. That doesn't mean that there aren't knock-on issues beyond that generation. In that way Phil is too sanguine.
Take crypto. When I wrote a Pine patch to provide PGP-encrypted mail there was a notice issued preventing the export of that beyond Australia. So we had a generation of mail clients without strong crypto (Pine was the "market leader" in Internet e-mail clients at that time, so competitors would have sought feature parity). Importantly, without strong crypto there can not be sophisticated crypto key management.
That lack of sophisticated key management -- that is, knowing who you communicate with and how well you know who they are -- pretty directly allowed the rise of spam. There have been attempts since at "email reputation management" to mark particular users as spammers or compromised, but the lack of widespread key management for e-mail means that those attempts have never got much further than the network layer -- marking particular IP addresses as suspect.
The cost of the side-effects has been immense. We can't even mark a Nigerian scammer as untrusted. It's not at all clear that the two decades of additional ability to tap email have resulted in less threat to the people's welfare.
Not too bad
This isn't too bad for a well designed network.
(1) OSPF shouldn't be seen or accepted on the leaf subnets used by computers. (2) It requires the defeat of OSPF authentication (easy or hard, depending solely upon the randomness of the key).
A surprising element is that Cisco's OSPF will accept unicast OSPF from anyone, not just predefined unicast neighbours. That's something to add to the router protection access control lists.
On a poorly designed network this is a bit of a disaster, since the only recovery is to reboot the router (which isn't really an issue: since it has just blackholed all IPv4 traffic the router was no longer doing much worthwhile anyway). By far the quickest work-around for those networks is to deploy OSPF MD5 authentication.
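For those unfamiliar with it, that MD5 work-around looks roughly like this in IOS-style configuration (a hedged sketch -- the interface name, process number, area and key are placeholders; check your platform's documentation):

```
! Enable MD5 authentication on the OSPF-speaking interface
interface GigabitEthernet0/0
 ip ospf authentication message-digest
 ip ospf message-digest-key 1 md5 S3cr3tK3y
!
! Require message-digest authentication for the whole area
router ospf 1
 area 0 authentication message-digest
```

All routers in the area need the same key before the change, or you'll create your own outage while deploying the fix.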
Nice work picking up on the importance of single laser versus multiple lasers and wave-division multiplexing.
In the future you could also look into the uncorrected error rate, the distance (or optical loss), and whether it used an ITU-specified cable (ie, something which may be in the ground versus something lab-built). That would help with the apples-v-oranges nature of these sorts of comparisons (and I'm not at all suggesting that the variation is deliberate, merely that it reflects the small number of labs working in this field, all of which make different reasonable choices in an environment where there's no pressure for interoperation).
Tbps not a useful measure
The lifetime of routers is set by the port density of their fastest interface. Quoting that, rather than aggregate inter-port Tbps, is a more useful measure of the awesomeness (or otherwise) of a router. Also useful is the maximum packets-per-second for small packets: this matters particularly where CPU and network processor designs are used, as that limit is usually reached well before the bps limit.
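A worked example of why small-packet pps is the hard number (standard Ethernet framing figures, my arithmetic): at 10 Gbit/s, each minimum-size 64-byte frame also occupies 20 bytes of wire overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap).

```python
LINE_RATE_BPS = 10_000_000_000   # 10 Gbit/s
FRAME_BYTES = 64                 # minimum Ethernet frame
OVERHEAD_BYTES = 20              # preamble + SFD + inter-frame gap

def max_pps(line_rate_bps=LINE_RATE_BPS):
    """Maximum packets per second at line rate for minimum-size frames."""
    bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8
    return line_rate_bps / bits_per_frame

rate = max_pps()   # ~14.88 million packets per second
```

A software or network-processor forwarding path that can't sustain ~14.88 Mpps per 10G port will hit its packet-rate ceiling long before it fills the pipe with small packets.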
What is cruft, what is security, and can the LibreSSL programmers tell the difference?
@bazza This issue had been fixed in the original OpenSSL code. I think it is reasonable for people to look at this bug and be concerned that, in "decrufting" the code, features may be removed which are actually essential and not cruft at all.
Detail for four USB ports controller?
The Model B uses a LAN9512 for ethernet and USB. That chip has a fast ethernet controller and two downstream USB ports, all presented as a USB hub hanging off an upstream USB port (which connects to the single USB port on the BCM2835 SoC).
So what controller are they now using to get the four USB ports? Or is there a distinct hub chip now?
NBNCo was founded in 2009. It is now 2014 and they still can't tell you what their product is.
The Liberal Party have -- by design -- turned the NBN project into a massive IT failure.
Why 25/50 when 40Gbps exists?
To answer some questions above, 40Gbps is implemented as four 10Gbps channels.
The cost of four lasers within a QSFP is obviously four times the cost of one laser. But worse, where each laser runs over its own fibre (as must be the case for multimode fibre) the MPO/MTP connectors are expensive, fragile and almost impossible to clean and test in the field. So 40Gbps ethernet carries a high operational cost.
Using 25Gbps channels rather than 10Gbps channels halves the amount of cabling whilst remaining economic. Note that this is being promoted as a top-of-rack technology, so losing 40Gbps-over-single-mode's ability to be optically multiplexed by ITU-compliant 10Gbps passive WDM systems isn't a worry.
Drupal's performance is fine. View Varnish, memcached, APC as part of the product.
To my mind there are two issues. Firstly, authentication and authorisation. Because if you stuff that up, then hey, you've turned the defacement of one site into a denial-of-service against government.
Secondly, the software-as-a-service outsourcing. That contract will need to be carefully written, because flash loads on government websites have to be met: 404-ing a heap of affected people trying to reach an information page about a natural disaster isn't acceptable. Furthermore, the site might need to refuse some users whilst allowing others -- as happens with the CSIRO maps of bushfire activity: the same content is presented to the public and to bushfire controllers, but access by bushfire controllers supersedes best effort.
Similarly, the physical location of the service matters. There's no use having all of the Australian government web sites on servers in Singapore or the west coast of the USA, because when the going gets tough that data won't be readily accessible. The web sites need huge links to peering points on the Australian mainland.
These copyright-hungry journals are slowly harming themselves. My institution down-weights copyright-hungry journals, simply because their terms prevent the institution (and the nation) re-using the very materials it paid to develop. Rather than have my first-class paper weighted as if printed in a second-tier journal, I simply choose a top-tier journal without hungry copyright conditions.
Bit of a surprise to see a society publishing a copyright-hungry journal. You've really got to consider if that advances the society's goals.
Re: So just to clarify,,,
The certification is for the phone, not for the standalone software or hardware. It is just paperwork: multiple presentations of the one set of tests conducted by the phone manufacturer. If you import a device then you ask the manufacturer for their test pack and munge it into the format expected by the local regulator.
Measuring the wrong thing
The analysis confuses watts (instantaneous power draw) with Ah (the charge drawn to complete the task). The second is the more interesting number when the CPUs have different processing rates. I could readily believe that the Atom draws more watts, but also that it finishes the task sooner and returns to a quiescent state sooner than the ARM. So the question is: does the Atom's behaviour use more amp-hours than the ARM's on various workloads? Or, more concretely: which will exhaust the battery sooner?
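The race-to-idle arithmetic is worth spelling out. A sketch with entirely made-up placeholder numbers (not benchmark data): the chip that draws more milliamps while active can still drain less charge over a fixed window if it finishes the work sooner.

```python
# Charge drawn over a fixed window: run the task flat out, then idle.
# All currents and durations below are illustrative placeholders.
def charge_mAh(active_mA, idle_mA, task_seconds, window_seconds):
    idle_seconds = window_seconds - task_seconds
    return (active_mA * task_seconds + idle_mA * idle_seconds) / 3600

# Hypothetical chips: "atom" draws more while active but finishes
# the task in half the time; both idle at the same current.
atom = charge_mAh(active_mA=800, idle_mA=20, task_seconds=10, window_seconds=60)
arm  = charge_mAh(active_mA=500, idle_mA=20, task_seconds=20, window_seconds=60)
print(f"atom: {atom:.2f} mAh, arm: {arm:.2f} mAh")  # atom: 2.50, arm: 3.00
```

With these invented numbers the higher-wattage chip wins on battery life, which is exactly why amp-hours per workload, not watts, is the figure to measure.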
Common platforms, and where you might use this chip
Christian, ARM Ltd are aware of the need for a common platform for operating systems software, and have proposed "Server Base System Architecture" (SBSA) as a standard systems architecture in this space.
Where an AArch64 server chip fits is an interesting question, especially in the sort of quantities needed to make money on a chip. I rather see this as AMD putting a toe in the water, and I imagine that's how their initial customers will also be approaching the chip.
Where AMD with ARM could competitively take on Intel is in offering a system-on-chip for servers: that would have to be funded by a Google or Amazon, but they might see sufficient mainboard simplification to make that worthwhile.
Also, there's a considerable market for 64b ARM in network middleboxes and appliances. These are constrained by heat, so Intel has always been problematic. But ARM hasn't been an option due to the lack of 64b parts (ie, your middlebox can't have more than 2GB of RAM, which is pushing it if you need a full routing table with multiple routing instances).