167 posts • joined 3 Jul 2009
Interaction of low power LED and better PV solar efficiency.
To me the real effect of LEDs using less power is that a design using PV solar cells and a small battery becomes the obvious way to power the LED, rather than paying the cost of cabling back to the mains.
So LEDs might well use more electricity. But not add to CO2 creation.
The other benefit of FLAC is that you don't have the noise floor come up as you edit and re-mix the file, as you do with the continual decoding and re-encoding when mixing using MP3s. For that reason FLAC is popular with musicians.
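The generation-loss effect can be sketched with a toy model (not real codecs -- the quantiser below just stands in for a lossy encoder): each "generation" is a remix pass that adjusts gain and then re-encodes, discarding detail the way an MP3 re-encode does, whereas a lossless round trip is bit-identical.

```python
def quantise(samples, levels=256):
    """Crude stand-in for lossy encoding: snap samples to a coarse grid."""
    return [round(s * levels) / levels for s in samples]

def remix_generation(samples, gain=0.95):
    """One edit/re-mix pass: adjust gain, then re-encode lossily."""
    return quantise([s * gain for s in samples])

signal = [i / 1000 for i in range(1000)]      # pristine master

lossless = list(signal)                       # FLAC-style: decode == master

lossy, ideal = list(signal), list(signal)
for _ in range(5):                            # five decode/mix/re-encode cycles
    lossy = remix_generation(lossy)
    ideal = [s * 0.95 for s in ideal]         # what a lossless chain would yield

error = max(abs(a - b) for a, b in zip(lossy, ideal))
print(f"noise floor after 5 lossy generations: {error:.6f}")
```

The lossless chain stays identical to the master at every step; the lossy chain drifts further from the ideal mix with each generation.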
Tapping backhaul providers would do the same job more simply
Cables are cut by fishing boats all the time; they are pulled up on board a ship and repaired all the time. So although repair and re-splicing is fiddly, it is also an everyday fiddly task, not some impossibility as your article comes close to suggesting. If the 10 kV were actually enough to damage a fishing trawler we'd be very happy -- sadly it readily dissipates in seawater. That gives the NSA a simple technique to cut a live cable: clamp chains to the cable 100m apart, chop the cable in the middle, pull up the chains.
The point Briscoe makes is that SCCN would know about this. But you can readily imagine some misdirection, such as cutting the cable once again a few km away to give a despatched repair ship something to fix.
The question is -- is this likely? And it's not, really, because of the backhaul problem. You've applied your splice, you've got a copy of all the data; now how do you get that data back to land? The only choice is to hire wavelengths or a complete fibre on the same cable under some pretext (such as connecting Pine Gap back to the USA). That's not really possible to do mid-span without a high chance of a stuff-up (such as the wavelength used gaining power mid-span, or a FEC incompatibility).
The NSA's desire is much more simply met by tapping the backhaul fibre heading away from the landing site: there's no water, no voltage, no close monitoring, no forward error correction. Just simple dark fibre in a conduit.
The NSA could require a Room 641A-type arrangement to tap each cable as it is patched from the undersea cable headend to the customer. But Briscoe is saying that isn't the case (although he explicitly did not call out the Australian landing sites in his denial). Briscoe might well be truthful -- you can only imagine that, having had Room 641A revealed by a junior technician, the NSA would look to less apparent ways to do the task.
I don't think it's likely that the NSA are using CALEA or other interception requests for transmission networks. Those legislative mechanisms don't suit transmission networks at all.
BTW, carriers don't encrypt link traffic. It was thought that there was no need. It's fair to say that the various leaks from the NSA are changing that view. However, encryption of high-speed, high-latency, high-natural-error links isn't as simple as you might hope. That means it's expensive, and thus the engineering desire has to overcome the beancounting hardheads.
"Expensive" is in the engineering sense
Oninoshiko, it's all very well to put in a dig at Cisco, but you miss the meaning of "expensive". In this space "expensive" is really talking about wireless transmission time. The more you transmit, the shorter the battery life or, in the case of an aircraft, the more money you hand over to satellite owners.
The article does a fine job of describing the current situation, where the applications and architecture are obvious but where, unless there is one interoperable specification, it's not going to find its way into consumer devices.
Bear in mind the real work has yet to begin. For example, there are no profiles for things such as machines reporting their parts inventory and status. Let alone to determine to whom that inventory should be reported: to the manufacturer, or to your chosen washing machine repairer? It's in everyone's interest for the repairer to appear with the likely-to-be-faulty part on their first visit; that's a feature any purchaser would see the benefit in. And yet the industry is doing its best to make sure this never happens. Let alone more advanced applications.
Uber, lighting $100 notes by the box
Adelaide is a small Australian city of a million people -- it's pretty much on the other side of the planet from everything. Yet Uber are burning cash here like there is no tomorrow: huge billboards near the airport, relentless Facebook ads trying to recruit drivers, direct appeals to taxi drivers. Yet Uber isn't available to answer simple questions: does driving for Uber breach the Passenger Transport Act? Do I need insurance above the typical motor vehicle insurance?
Anyway, if Uber are burning so much cash here in Adelaide they must be shovelling it into the incinerator in larger cities. It all seems very "crash through or crash", and very much aimed at IPO returns to venture capital rather than any sane way to build the business.
@IGnatius T Foobar -- ARM
Sure, Intel could have used ARM. But it would have been stupid to do so.
Firstly, there are no ARM designs for 14nm. So Intel would have to license and then develop a design, which already costs more than using its own x86 architecture, for which it need not pay licensing fees. Intel *would* have to license: using the historically-licensed design isn't going to cut it, as Intel would want the ARMv8 64b design. It does not make financial sense to develop a 32b design with its 2GB memory restriction -- that simply won't have the sales lifetime for the investment required.
Secondly, the market would expect those ARM designs to retail for less than what they would get for the equivalent x86 design.
Thirdly, ARM sales have less lock-in than with x86. If a fab overcommits capacity then a customer can run off some cheap ARM and threaten your business plan. That doesn't happen with x86.
Fourthly, a lot of the work making ARM work on 14nm benefits later arrivals to the 14nm process who then can license that work from ARM Ltd rather than pay development costs. Why would Intel, the first with a 14nm process, ease the way for its rivals?
The statement that "Linux runs fine on ARM" is irrelevant. Linux runs fine on x86 too.
Generation-long problem, but what are the side effects?
Phil argues it's going to be one of those generation-long problems, similar to access to strong crypto. That doesn't mean there aren't knock-on issues beyond that generation. In that way Phil is too sanguine.
Take crypto. When I wrote a Pine patch to provide PGP-encrypted mail there was a notice issued preventing the export of that beyond Australia. So we had a generation of mail clients without strong crypto (Pine was the "market leader" in Internet e-mail clients at that time, so competitors would have sought feature parity). Importantly, without strong crypto there can not be sophisticated crypto key management.
That lack of sophisticated key management -- that is, knowing who you communicate with and how well you know who they are -- pretty directly allowed the rise of spam. There have been attempts since at "email reputation management" to mark particular users as spammers or compromised, but the lack of widespread key management for e-mail means that those attempts have never got much further than the network layer -- marking particular IP addresses as suspect.
The cost of the side-effects has been immense. We can't even mark a Nigerian scammer as untrusted. It's not at all clear that the two decades of additional ability to tap email has resulted in less threat to the people's welfare.
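That network-layer reputation mechanism is the IP-based DNS blocklist (DNSBL, per RFC 5782): the client reverses the IPv4 octets and prepends them to the blocklist zone, and an answer means "listed". A minimal sketch of building the query name (the zone below is illustrative, not a real blocklist):

```python
def dnsbl_query_name(ip: str, zone: str = "dnsbl.example.org") -> str:
    """Build the DNSBL lookup name for an IPv4 address."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    # Octets are reversed so the blocklist zone can delegate like in-addr.arpa.
    return ".".join(reversed(octets)) + "." + zone

# By convention 127.0.0.2 is listed by every DNSBL, for testing.
print(dnsbl_query_name("127.0.0.2"))   # 2.0.0.127.dnsbl.example.org
```

Note how the whole scheme keys on an IP address, not on any identity of the sender -- which is exactly the limitation the comment describes.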
Not too bad
This isn't too bad for a well designed network.
(1) OSPF shouldn't be seen or accepted on the leaf subnets used by computers. (2) It requires the defeat of OSPF authentication (easy or hard, depending solely upon the randomness of the key).
A surprising element is that Cisco's OSPF will accept unicast OSPF from anyone, not just predefined unicast neighbours. That's something to add to the router protection access control lists.
On a poorly designed network this is a bit of a disaster, since the only recovery is to reboot the router (which isn't really an issue: since it has just blackholed all IPv4 traffic the router was no longer doing much worthwhile anyway). By far the quickest work-around for those networks is to deploy OSPF MD5 authentication.
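The mechanism behind that work-around can be sketched from RFC 2328 appendix D: OSPF cryptographic authentication appends the 16-byte secret to the packet and takes an MD5 over the result, which is why the defence is only as strong as the randomness of the key. This is a hedged simplification -- real OSPF also carries a key ID and a cryptographic sequence number, omitted here, and the function names are my own.

```python
import hashlib

def ospf_md5_digest(packet: bytes, key: bytes) -> bytes:
    """MD5 over the packet with the secret appended (RFC 2328-style)."""
    padded_key = key.ljust(16, b"\x00")[:16]   # keys are padded/truncated to 16 bytes
    return hashlib.md5(packet + padded_key).digest()

def verify(packet: bytes, digest: bytes, key: bytes) -> bool:
    """Receiver repeats the computation and compares digests."""
    return ospf_md5_digest(packet, key) == digest

packet = b"ospf-hello-placeholder"   # stands in for a real OSPF header + body
digest = ospf_md5_digest(packet, b"s3cret")
print(verify(packet, digest, b"s3cret"), verify(packet, digest, b"wrong"))
```

An attacker who can guess the key can forge any LSA; one who can't is stopped at step (2) above.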
Nice work picking up on the importance of single laser versus multiple lasers and wave-division multiplexing.
In the future you could also look into the uncorrected error rate, the distance (or optical loss), and whether it used an ITU-specified cable (ie, something which may be in the ground versus something lab-built). That would help with the apples v oranges nature of these sorts of comparisons (and I'm not at all suggesting that the variation is deliberate, merely that it reflects the small number of labs working in this field, all of which make different reasonable choices in an environment where there's no pressure for interoperation).
Tbps not a useful measure
The lifetime of routers is set by the port density of their fastest interface. Quoting that, rather than aggregate inter-port Tbps, is a more useful measure of the awesomeness (or otherwise) of a router. Also useful is the maximum packets-per-second rate for small packets: this matters particularly where CPU and network processor designs are used, as that limit is usually reached well before the bps limit.
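The small-packet arithmetic shows why pps is the binding limit: per-packet overhead dominates at minimum frame size. For 64-byte Ethernet frames plus the 20 bytes of preamble and inter-frame gap, a 10 Gb/s port must forward nearly 15 million packets every second.

```python
LINE_RATE_BPS = 10_000_000_000   # a 10 Gb/s port
FRAME = 64                       # minimum Ethernet frame, bytes
OVERHEAD = 20                    # preamble (8) + inter-frame gap (12), bytes

# Each minimum-size packet occupies (64 + 20) * 8 = 672 bit-times on the wire.
pps = LINE_RATE_BPS // ((FRAME + OVERHEAD) * 8)
print(f"{pps:,} packets/second")   # 14,880,952 packets/second
```

A lookup engine that can't sustain that rate tops out well below the headline bps figure.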
What is cruft, what is security, and can the LibreSSL programmers tell the difference?
@bazza This issue had been fixed in the original OpenSSL code. I think it is reasonable for people to look at this bug and be concerned that in "decrufting" the code people may be removing features which are actually essential and not cruft at all.
Detail for four USB ports controller?
The Model B uses a LAN9516 for Ethernet and USB. That chip has a fast Ethernet controller and two downstream USB ports, all presented as being on a USB hub connected to an upstream USB port (which the Model B connects to the single USB port on the BCM2835 SoC).
So what controller are they now using to get the four USB ports? Or is there a distinct hub chip now?
NBNCo was founded in 2009. It is now 2014 and they still can't tell you what their product is.
The Liberal Party have -- by design -- turned the NBN project into a massive IT failure.
Why 25/50 when 40Gbps exists?
To answer some questions above, 40Gbps is implemented as four 10Gbps channels.
The cost of four lasers within a QSFP is obviously four times the cost of one laser. Worse, where each laser is run over its own fibre (as must be the case for multimode fibre), the MPO/MTP connectors are expensive, fragile and almost impossible to clean and test in the field. Using 40Gbps Ethernet has a high operational cost.
Using 25Gbps channels rather than 10Gbps channels halves the amount of cabling whilst remaining economic. Note that this is being promoted as a top of switch technology, so losing the benefit of 40Gbps over single mode of being able to be optically multiplexed by ITU-compliant 10Gbps passive WDM systems isn't a worry.
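The lane arithmetic behind this, sketched under the assumption of the usual multi-lane optics: four 10G lanes give 40GbE, the same four lanes at 25G give 100GbE, and a two-lane 25G link delivers more than 40GbE with half the fibres.

```python
def aggregate_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Aggregate port speed for parallel-lane optics."""
    return lanes * lane_rate_gbps

assert aggregate_gbps(4, 10) == 40    # 40GbE: four 10G lanes, eight fibres
assert aggregate_gbps(4, 25) == 100   # 100GbE: same four lanes at 25G
assert aggregate_gbps(2, 25) == 50    # 50GbE: half the cabling of 40GbE
print("lane arithmetic checks out")
```

Hence the appeal of the faster lane for top-of-rack use: more bits per fibre, fewer MPO/MTP connectors to clean.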
Drupal's performance is fine. View Varnish, memcached, APC as part of the product.
To my mind there are two issues. Firstly, authentication and authorisation: if you stuff that up, then hey, you've turned defacement of one site into a denial-of-service against government.
Secondly, the software-as-a-service outsourcing. That contract will need to be carefully written, because flash loads on government websites have to be met. 404-ing a heap of affected people trying to access an information page about a natural disaster isn't acceptable. Furthermore, the site might need to refuse some users whilst allowing others -- as happens with the CSIRO maps of bushfire activity: the same content is presented to the public and to bushfire controllers, but access by bushfire controllers supersedes best effort.
Similarly, the physical location of the service matters. There's no use having all of the Australian government web sites on servers in Singapore or the west coast of the USA, because when the going gets tough that data won't be readily accessible. The web sites need to have huge links to peering points on the Australian mainland.
These copyright-hungry journals are slowly harming themselves. My institution down-weights copyright-hungry journals, simply because their terms prevent the institution (and the nation) re-using the very materials it paid to develop. Rather than have my first-class paper be weighted as if printed in a second-tier journal, I simply choose a top-tier journal without hungry copyright conditions.
Bit of a surprise to see a society publishing a copyright-hungry journal. You've really got to consider if that advances the society's goals.
Re: So just to clarify,,,
The certification is for the phone, not for the standalone software or hardware. It is just paperwork: multiple presentations of the one set of tests conducted by the phone manufacturer. If you import a device then you ask the manufacturer for their test pack and munge it into the format expected by the local regulator.
Measuring the wrong thing
The analysis confuses watts (instantaneous power draw) with amp-hours (the charge consumed to complete the task). The second is the more interesting number when the CPUs have different processing rates. I could readily believe that the Atom draws more watts, but also that it finishes the task sooner and can return to a quiescent state sooner than the ARM. So the question is: does Atom's behaviour use more amp-hours than ARM's on various workloads? Or more concretely: which will exhaust the battery sooner?
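The race-to-idle arithmetic can be sketched with illustrative (not measured) numbers: a chip that draws more watts while active can still take less out of the battery if it finishes sooner and drops to idle.

```python
def charge_mah(active_w, active_s, idle_w, idle_s, volts=3.7):
    """Battery charge used over a window: an active burst, then idle.

    Energy in joules, divided by voltage to get coulombs (A*s),
    then divided by 3.6 to convert to mAh.
    """
    energy_j = active_w * active_s + idle_w * idle_s
    return energy_j / volts / 3.6

WINDOW = 10   # seconds considered

# Hypothetical "Atom-like" chip: 4 W but done in 2 s, then idles at 0.1 W.
atom = charge_mah(active_w=4.0, active_s=2, idle_w=0.1, idle_s=WINDOW - 2)
# Hypothetical "ARM-like" chip: 1.5 W but busy for 8 s of the window.
arm = charge_mah(active_w=1.5, active_s=8, idle_w=0.1, idle_s=WINDOW - 8)

print(f"Atom-like: {atom:.2f} mAh, ARM-like: {arm:.2f} mAh")
```

With these made-up figures the higher-wattage chip empties the battery more slowly -- which is exactly why the instantaneous-watts comparison measures the wrong thing.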
Common platforms, and where you might use this chip
Christian, ARM Ltd are aware of the need for a common platform for operating-system software, and have proposed the "Server Base System Architecture" (SBSA) as a standard systems architecture in this space.
Where an AArch64 server chip fits is an interesting question, especially in the sort of quantity needed to make money on a chip. I rather see this as AMD putting their toe into the water, and I imagine that's how their initial customers will also be approaching the chip.
Where AMD with ARM could competitively take on Intel is in offering a system-on-chip for servers: that would have to be funded by a Google or Amazon, but they might see sufficient mainboard simplification to make that worthwhile.
Also, there's a considerable market for 64b ARM in network middleboxes and appliances. These are constrained by heat, so Intel has always been problematic. But ARM hasn't been an option due to the lack of 64b parts (ie, your middlebox can't have more than 2GB of RAM, which is pushing it if you need a full routing table with multiple routing instances).
If you use the popular MacPorts to make your Mac more like another operating system then you might want to update the MacPorts-managed software.
What's wrong with this picture, and the general response to Heartbleed? Our servers are running software which may leak your private data. But *we will keep them running*.
In the contest between security and revenue, revenue wins.
I am paying for OpenSSL, via my Red Hat subscription
Firstly, there are middlemen here -- Red Hat Inc, Novell and Google. I pay Red Hat for my Linux and Google for my phone software; in return they should be paying people to do whatever it takes to produce the software they are selling to me. If OpenSSL aren't getting a cheque from Red Hat/SuSE/Google then I have some questions...
Secondly, there's the complexity of SSL/TLS itself. Whilst your article contacts the author (and kudos for that) I would be just as interested in an interview with the IETF chair of the area which published the specification in the first place. The small gain from allowing data in a response (to probe for MTU failures on non-TCP protocols) doesn't appear to me to justify the risk from a change to a security function. It's the chair's role to make that call.
Thirdly, there's C. We desperately need a new systems programming language. We've written enough applications programming languages to know what works and what doesn't (Java, Python, Lua, etc) but those languages simply aren't deployable in the systems programming space.
Finally, there seems to be a whole culture around security bugs which is simply broken. It's pretty much the task of the NSA to lead the response to this, and yet they seem to be the body most assumed to have known of the bug's existence but not to have told anyone. Not to mention that every bug is seen as an opportunity to sell stuff: create a consultancy, win a bug bounty, scare customers into buying products, scam the unwary, and so on.
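The flaw discussed above -- a heartbeat response echoing back more than was sent -- reduces to trusting the length field in the request rather than the actual payload size. A toy reconstruction of that shape of bug (not OpenSSL's actual code; the "memory" here just simulates the payload landing next to other heap data):

```python
def heartbeat(payload: bytes, claimed_len: int, check_length: bool) -> bytes:
    """Echo back `claimed_len` bytes starting at the payload."""
    if check_length and claimed_len > len(payload):
        return b""   # the fix: silently discard malformed records
    # Simulate the payload sitting next to other data in memory.
    memory = payload + b"SECRET-PRIVATE-KEY"
    return memory[:claimed_len]

print(heartbeat(b"ping", claimed_len=16, check_length=False))  # leaks adjacent bytes
print(heartbeat(b"ping", claimed_len=16, check_length=True))   # b''
```

The one-line bounds check is the entire difference between a benign echo and a remote memory leak.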
Primacy of software
Could have had a little more about the primacy of software: IBM had a huge range of compilers, and having an assembly language common across a wide range was a huge winner (as obvious as that seems today in an age of a handful of processor instruction sets). Furthermore, IBM had a strong focus on binary compatibility, and the lack of that in some competitors' ranges made shipping software for those machines much more expensive than for IBM.
IBM also sustained that commitment to development. Which meant that until the minicomputer age they were really the only possibility if you wanted newer features (such as CICS for screen-based transaction processing or VSAM or DB2 for databases, or VMs for a cheaper test versus production environment). Other manufacturers would develop against their forthcoming models, not their shipped models, and so IBM would be the company "shipping now" with the feature you desired.
IBM were also very focussed on business. They knew how to market (eg, the myth of 'idle' versus 'ready' lights on tape drives, whitepapers to explain technology to managers). They knew how to charge (eg, essentially a lease, which matched companies' revenue). They knew how to do politics (eg, lobbying the Australian PM after they lost a government sale). They knew how to do support (with their customer engineers basically being a little bit of IBM embedded at the customer). Their strategic planning is still world class.
I would be cautious about lauding the $0.5B taken to develop the OS/360 software as progress. As a counterpoint consider Burroughs, who delivered better capability with fewer lines of code, since they wrote in Algol rather than assembler. Both companies got one thing right: huge libraries of code which made life much easier for applications programmers. DEC's VMS learnt that lesson well. It wasn't until MS-DOS that we were suddenly dropped back into an inferior programming environment (but you'll cope with a lot for sheer responsiveness, and it didn't take too long until you could buy in what you needed).
What killed the mainframe was its sheer optimisation for batch and transaction processing and the massive cost if you used it any other way. Consider that TCP/IP used about 3% of the system's resources, or $30k pa of mainframe time. That would pay for a new Unix machine every year to host your website on.
There's missing context here.
Sievers' treatment of systemd bug reports is poor, usually closing them or pointing the finger elsewhere. For example the journal logging bug, which flooded messages to the syslogger, locking up systems upon reboot; or the shutdown bug, where shutting down a system whilst it was already shutting down prevented future logins after the reboot.
In both of those cases people were left with non-functioning systems, and repeated bug reports were closed until it was undeniable that the fault was with systemd. This behaviour so delayed bug finding that users were left with unusable Fedora installations for weeks.
With the kernel issue he's simply struck a community which, informed by those previous issues, put its foot down promptly and firmly.
Arrrgh, you're right, Eric. EndNote was the nightmare I lived through.
EndNote isn't popular with students for its note-taking; it is popular because it lays out bibliographies perfectly. That's a concern for students, as bibliographic referencing is their main defence against the claims of plagiarism made by automated essay checkers. The days when you wouldn't reference material well known to a practitioner in the field are well over (Illich's "Deschooling Society" had one reference). Lesser academics focus on the presentation aspects of bibliographies rather than their content, so EndNote is highly valued by students for its ability to churn out a variety of formats correct down to minor details of quotes/italics, comma/full stop.
The major competitor is Zotero. It's a fine product and well worth a look. It works quite differently -- being an extension to a web browser -- but the workflow of simply whacking the Z button every time you read something interesting works well and makes EndNote seem rather clunky.
Re: Support for the Locale!!!
The locale is still Australia, as locale isn't the same as the language you are writing in (think currency, date formats, etc).
The characters in Aṉangu and Yolŋu (including the Pitjantjatjara dialect) scripts aren't that rare and will be covered by most large Unicode fonts, including those in recent Windows.
In Mac and Linux you use the system keyboard configuration to alter the Compose key to produce Aṉangu and Yolŋu script. iPhone and Android need a keyboard definition. Windows XP was more complicated, and AuSIL and others have software. Windows 7 isn't too bad and you can use the system keyboard configuration to add a Compose key. There is a common set of composing keystrokes, so please don't make up your own.
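The point that these characters aren't rare can be demonstrated: they are ordinary Unicode codepoints, which is why any large Unicode font covers them, and a Compose-key sequence simply builds them from a base letter plus a combining mark.

```python
import unicodedata

n_line_below = "\u1e49"   # ṉ, used in Pitjantjatjara
eng = "\u014b"            # ŋ, used in Yolŋu Matha

print(unicodedata.name(n_line_below))  # LATIN SMALL LETTER N WITH LINE BELOW
print(unicodedata.name(eng))           # LATIN SMALL LETTER ENG

# Compose-style input: 'n' followed by COMBINING MACRON BELOW
# normalises (NFC) to the single precomposed character ṉ.
composed = unicodedata.normalize("NFC", "n\u0331")
print(composed == n_line_below)        # True
```

Any system that handles Unicode normalisation correctly will treat the composed and precomposed forms as the same letter.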
An alternative might be to visit Aspitech in Adelaide on your way through (http://www.aspitech.com.au/) and grab some of their refurbished PC goodness. They are shipping with Win7 and Office at the moment. The people there have strong social aims and many people in the "community sector" find them a godsend.
All this shows is that the analysis company is behind the times. Let me count the ways.
1) Tablets. Where are they?
2) Phones. Where are they?
3) Laptops are the choice of people who need to create content and people who need the cheapest computer possible (Chromebook). That explains the rise of MacOS as a proportion, as pure content consumers have moved to tablets. Also the market quantities are falling, and this feeds back through the percentages: simply put, MacOS users are wealthier, and so more able to afford both a tablet and a laptop.
4) Desktops and laptops no longer serve the same audience. Gamers want desktops. So you really need to pull out retail desktops as a distinct figure.
5) Sales figures undercount Linux on desktop and laptop. Web usage figures ignore the main revenue from Linux, which is from server and embedded use.
6) The retail and business motivations for purchasing computers have never been so different. Lumping them together doesn't give insight.
Insight is the point of collecting data. But this has been presented so that it gives no insight. In fact it is misleading: you'd think that Microsoft, with a 90% share of laptop+desktop computing, was doing fine. Presenting it as percentages is also poor: you get no idea of the huge shrinkage in sales.
I think the trick with DIY is to know when to step back. A good example is IP addressing: DHCP is here and has worked for a decade, yet look at the number of IT shops which still run a DIY IP address allocation service. Sigh.
Another thing is not to get caught up reinventing. Not reinventing should be the main reason for choosing a language. LOC costs pretty much the same in every language, you want the maximum amount of work done by the language so that you write the minimum LOC.
Finally the nature of DIY has changed. These days it isn't the IT Dept going its own way. It's the IT Dept participating in a worldwide open source project which sets the future standard. This means that DIY isn't a dead end, but just the future arriving sooner.
ISPs are moving away from NetFlow for accounting, as the current number of flows makes it too onerous, and router manufacturers are finally coming to the party with better customer accounting mechanisms.
So the notion that collecting IP flow metadata can occur at low cost is now wrong for the largest Australian ISPs.
The Sony Xperia Z1 Compact is close to the ideal corporate phone, although sadly at corporate pricing. If you are looking for an Android equivalent to the iPhone 4S then this would be it.
Computing is needed to understand the modern world
Folks, computing is the new mathematics. Just as the development of symbolic manipulation opened completely new fields of science, computing is now starting to do the same. You simply can't understand the modern world built by science and engineering without understanding computing.
Embedded computers are changing the hard elements of engineering too. You can't build a car without a computer. In a few years you'll have computers changing the gears on your bike.
This is my objection to the curriculum. As you can read above the curriculum is very focused on Information Technology. There's very little computing and very little about computing devices. It's all about how to use computing to do administrative tasks.
In short you'll end up with a classroom where everyone has a smartphone but no one can tell you it's anything other than magic.
Enterprise computing is like competing to be the new mainframe
Why the fuss about enterprise server applications? They might matter to VMware, but they are a diminishing proportion of computing. ARM AArch64 might be able to run enterprise applications, but why would an ARM chip manufacturer go up against Intel with a high-power, high-throughput chip when it could make more certain money with a low-power design?
What AArch64 will do is totally win the "appliance" space, as those little 1RU boxes which do useful things will have lower power draw (and thus fewer heat issues, and thus be cheaper to design and own, and be more reliable). Those appliances pretty much all run Linux, or will.
AArch64 also has a decent run at a peculiar sort of desktop -- the space which used to be filled by the "IBM mainframe terminal". Low power -- with its reliability and a small size on the desk -- makes ARM more attractive than x86.
I doubt ARM has much hope in the cloud, as its performance per watt still trails x86 at maximum throughput. Remember that cloud servers are provisioned to be either at maximum throughput or off. If ARM is used it will be because cloud providers specify their own CPU, and obviously AArch64 is available for that, whereas x86 is not.
There's plenty of opportunity to make money with ARM servers without going for the hardest market first. The only attraction of enterprise is the large profits available due to their poor management of computing. But that very same poor management makes them averse to change.
Not in the business of updates
I've read here a few times that HTC isn't in the business of updates. I don't get that -- why aren't they in the business of updates? If they charged $20 they'd turn updates from a cost centre into a revenue centre, and they'd be getting money without all the trouble of a new hardware design. And it would encourage customers to buy an HTC phone rather than a carrier phone. I just don't get why HTC persists with the current economics of Android updates when they could change the system for a better result for themselves and their customers.
Re: Desk Clutter
Tom7, this is so small you'd mount it on the back of the monitor.
But the enterprise has gone...
There's no "going back to the enterprise" as a safe ground for Blackberry.
The reason is simple: the iPad. That device's sheer ubiquity has forced IT managers to do the previously unthinkable: accept unmanaged devices onto their networks. Sure, some networks still won't do that -- sometimes for valid security reasons -- but those networks accept a fall in productivity. If you're business IT -- arguing that computing improves productivity -- then not allowing personally-owned IT is a path to irrelevance.
There's also finance. Everyone has a smartphone. Why on earth would an enterprise want to issue its staff with one? You might do it as some sort of non-monetary salary. But the idea that key staff get a work mobile is one that beancounters are no longer keen on.
The result is that enterprises aren't as keen on Blackberry's special sauce as they were during the era of the standardised desktop operating environment.
No they can't. You have a contract which lists the items you paid consideration for, and that will list firmware updates.
However, you would be surprised how few IT shops actually reflect all the contributing factors in the lifetime cost of ownership in their contracts.
I am sure HP will happily provide already-contracted firmware updates where it is required to, and happily collect an annual fee from the other 99%.
Re: Nice apologist article, Simon !
"...the indonesians consistantly send there boys over to us to collect classified information from us."
It's a bit naive to think that Indonesian spies are primarily interested in the activities of the Australian government. They are much, much more interested in the activities of Indonesian nationals in Australia. In short, you don't find them trying to tap the "secure blackberrys" of Australian politicians but intimidating people raising funds for West Papua and ensuring that Indonesian students studying at Australian universities know that their government is watching them.
I think that part of the anger of the Indonesian establishment towards Australia's spying activities is that this focus of Indonesian spying activities away from Australia's government has been shown up by the depth of Australia's penetration of Indonesia's government. Not once -- as during the East Timor crisis -- but now twice.
It also helps that Australian police forces have taken foreign government intimidation much more seriously in the past decade, a positive side-effect of the War on Terror.
Re: Radiation Monitoring
Required here in Australia. The sensors are typically mounted on the input hopper and on the forklifts. Where I live in Port Adelaide there was an incident recently where a forklift sensor alarmed, and the quick-thinking and selfless operator drove the forklift well away from the factory's buildings before running.
2600 participants isn't a huge MOOC; it's about the usual completion number for a typical course.
If this stadium looks like a vulva, then the average stadium looks like an anus.
Assange is living in Western Australia?
Surely the other political parties will contest his enrolment in the WA electoral roll. It's not like he will have spent even a night at his claimed domicile.
Simon, In the Apple II/BBC Micro era Australia used to have one of the best computing teaching resources in the Parks Computer Centre in western Adelaide. Unfortunately this was disbanded, but most of the staff are still around. There are also some outstanding computing educators. I would have thought that building upon their experience would be the approach to take, but I can't see that this has been done.
It would be well worth your time to track down a few of the old Parks staff and interview them about what works and what doesn't.
ID is pointless
Let's simply ink the fingers of people as they vote. No ID required. It's compulsory voting, so assuming that any adult presenting themselves with an uninked finger and matching a name on the electoral roll is valid is pretty good. In any case, inking fingers is a lot better than presenting a fakable ID.
Not that the problem is large: the AEC estimated that maybe 800 people voted twice.
Host key generation is more of a risk
The real risk is the generation of SSL host keys so early in the system's first boot that there is no source of entropy other than the hardware RNG. Best of all, these weak keys are permanent.
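A toy model of that first-boot problem (real host keys are RSA/ECDSA; `getrandbits()` stands in here): if key generation runs before the entropy pool is properly seeded, the "random" key is drawn from a tiny set of possible states, so independent machines mint identical keys.

```python
import random

LOW_ENTROPY_STATES = 16   # pretend only 16 distinct pool states exist at first boot

def first_boot_hostkey(seed: int) -> int:
    """Deterministic 2048-bit 'key' given the (weak) seed."""
    rng = random.Random(seed)
    return rng.getrandbits(2048)

# 100 machines boot for the first time, each seeding from the weak pool.
keys = [first_boot_hostkey(boot % LOW_ENTROPY_STATES) for boot in range(100)]
print(f"100 boots produced only {len(set(keys))} distinct host keys")
```

With so few reachable keys, an attacker can enumerate them offline -- and because the keys are generated once and kept, the weakness is permanent.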
Midnight Oil hardly the only source
Des Ball wrote a very fine book on Pine Gap in 1988. It had huge press coverage at the time. The book was pretty much a summary for the public, so most of its facts were already known, not least through the Democrats tabling leaked papers in the Senate.
The Labor Party was pretty much down on Pine Gap after the CIA's interference in Australian domestic politics in 1974/5. One of the surprises of the Hawke government was that it didn't close the base when it came to power in 1983. Rather, it negotiated a treaty spelling out exactly the management and function of the facility. Needless to say, this upset the Oilz.
So yeah, not news.
Re: Eduroam, and similar
I'd also add that universities differ from business because: (1) Unis are in IT for the long haul. They're not put off by a half-decade-long project with international agreements and interoperability like Eduroam. (2) Academics are used to listening to and criticising proposals. So you get a good hearing, and then you get a bucket-load of encouraging criticism. Part of the reason for the quality of uni networks is the free review from people whose consulting rates are thousands per day. (3) Business simply doesn't operate at the same scale nor require the same availability. I've had businesses employing a few 10,000s of people tell me they run a "big" network, whereas 10,000 users would be a quiet day for a uni network.
"the University core networks" -- no. The learning and research facilities are the core network. It's the administration networks which are non-core. That's the essential mindset difference between university and business computing.
The same is true of applications. You break some Oracle thingie used by administration, that's bad news. You break e-mail across the university, you're fired.
At universities BYOD is simply fact. It's not a "strategy" open to debate. Even non-IT staff will have a laptop, a tablet and a phone and will expect equivalent access to resources from all of them. The university may or may not own all of those devices. Students definitely don't want the uni to provide their IT -- although if the uni can arrange a hefty discount on a MacBook Air they'd be grateful.
The idea that you can limit access to administrative systems to a subset of platforms isn't a goer either. Just the other day I checked a student's recorded test mark from my phone (connected via Eduroam), whilst the student and I were discussing their progress. Business would call this "responsive customer service" and the more you tighten down the access to the admin systems the less responsive the staff can be.
Ubuntu, the Maralinga of Canonical's nuclear testing
So because Canonical has ambitions in the mobile phone market, they are going to once again use Ubuntu as the testing ground for their technology. Didn't we have enough of this when they re-did the user interface so it worked better on tablets? And on netbooks before that?
Here's a thought. You've already got millions of users who want a nice desktop and laptop operating system. How about keeping them happy?