461 posts • joined 26 Jun 2009
Vendor lock-in coming back to bite
He also seems to evade the point about his customers suffering from being shackled to Microsoft: "many customers' tech estates - as UK.gov is finding out - are "intertwined" with Microsoft" - this isn't a hidden cost of Google Apps, it's a hidden cost of having implemented Microsoft products in the past!
Interesting, though, that it seems to be only larger customers who struggle to cast off those expensive shackles - and not a good omen for MS.
I had a tough few weeks earlier this year with a broadband fault - it turned out there were two faulty router ports on BT's backbone, but getting to that point required SIX engineer visits to my home and a long conference call between managers from two rival BT divisions to argue over who was going to have to ask the third division to fix the problem for them.
A lot of this was because of automated fault-mishandling flowcharts: BT Wholesale automatically pass all faults over to Openreach to send an engineer out to work on the line - without investigating whether the problem is actually line-related rather than backhaul. Eventually, it got escalated to actual human beings (BT Wholesale's "High Level Escalations"), who knew enough to pass the fault over to BT TSOps (formerly BT Operate), who found and fixed the first router fault, then continued investigating until they found the second fault and passed that to the Ops team at Adastral Park.
The whole saga was a fine example of this article's subject, though: BT Wholesale tried to close the fault ticket without doing any investigation a total of NINE times (*all* new faults are auto-closed twice within minutes of reporting, on the off-chance the fault isn't a real one), then force-closed it once (so it had to be re-opened as a 'new' fault). Even then, there was a lot of luck involved (maintenance meant my traffic was re-routed to another backbone segment for a few hours, which made the problem disappear during the diversion) - which gave enough information for the ISP to get BT investigating the right bits of network at last.
"Why would Uncle NSA be so pissed about Euro data not crossing over to any place they can tap it?"
It seems "any place they can tap it" actually includes quite a lot of the EU anyway, between their UK base with GCHQ and various more covert efforts on mainland Europe.
Of course, *you* can put *your* mail server anywhere you like - but when you're communicating with a typical person with a Hotmail/Gmail/Yahoo address, *those* servers are in the US anyway (a quick traceroute from the UK shows Hotmail mail going through routers in NYC and on to somewhere in California). Good luck getting the public to give up all their email addresses.
I have my own domain, so control everything about the inbound email routing. Where did I put it? New York, because I like the email service an Australian company - Fastmail, previously owned by Opera in Norway - offer. Yes, I could have bought hosting in Paris or Berlin, so a different set of acronyms got to snoop on it all, but I wouldn't see that as an improvement; given the choice, I'd worry less about the NSA than about their French or German counterparts anyway.
A truly moronic lawsuit to file - if the US government had ordered Baidu to filter out, say, articles about Snowden, that would indeed be wrong. The First Amendment restricts the government - not private or indeed foreign entities. If I wanted to build my own search engine which only indexes, say, websites hosted on Linux, or websites which contain the word "meh" in the title, I am free to do so - just as Baidu are free to publish an index of websites which are PRC-approved.
Why did this ever get as far as court?! Google, Bing and co have been filtering and tuning their search results for years now to cull spammers, give better results, comply with DMCA takedowns etc.
Re: What about removal protection.
I'm very pleased they're finally going palindromic - "about bloody time too" IMO - but some sort of locking mechanism would be very welcome. Maybe a small hole a padlock could go through? (Our open-access areas are all tied up with nasty kludges involving small padlocks around cables in hopes of stopping keyboards and mice going walkies; smaller plugs would make that impossible!)
Re: Enhance life?
In some ways, yes, things are more complex. My TV is more complex to operate than the TV I had 20 years ago - but that's because this one has a hundred times as many channels and DVR functionality built in. "Watch the news" involves a few extra button-presses - I have to switch on and dial in channel 501, or scroll through a menu - but recording the programme I'm watching right now involves one button and zero tapes, compared to all the fiddling involved in using a VCR - and pausing the programme I'm watching to answer the phone wasn't even possible in those days.
Yes, my mobile phone is far more complex than the only one I'd used 20 years ago - but that one had a shoulder strap (it weighed 4.2kg!) and could only make or receive phone calls, nothing more - not even SMS. My smartphone takes an extra button press or two to make a call - but other buttons access email, the web, music, camera - all things that old handset could never attempt.
I doubt we'll see a "robot" in the article's sense any time soon - more because it's not a good way to solve the problem than anything else. I don't need a six foot pseudo-person to operate my current vacuum cleaner: a tiny self-propelled vacuum like a Roomba will do that much more effectively.
For now at least, we have a very long way to go in improving individual pieces of equipment before we need a full-on human replacement. Self-driving cars, a content-aware fridge (probably RFID-based), a smarter washing machine (maybe RFID again, to identify the clothes inside and appropriate cycle) ... once the Roomba can pick up dirty clothes and get them washed, while the fridge can tell me the milk's off and ask if I'd like the car to go and get more, do we need robot arms and legs involved?
"Install Windows. Install anti-virus. Install WiFi driver. Go online. Update anti-virus. Run Windows update from WSUS offline DVD. Install apps. Validate Windows and Office."
Back when I was doing this all too frequently, I found doing the updates and other applications first, then AV and online updates after that, worked much faster - otherwise, the AV software burns lots of CPU cycles doing real-time scanning of all the updates as they install. (It was XP I was rolling out in those days - fortunately, most of the larger Windows updates could be 'slipstreamed' into the installation directory beforehand, which also helped.)
The 'no photographing houses' thing does seem downright bizarre, as does the idea of needing a permit to go there!
Re: Best common sense tip?
It's not all that easy - but easy enough that BT is rolling this change out on AXE10 exchanges this week, with the older exchanges to be updated later - cutting the delay from 2-3 minutes to 10 seconds.
Re: Punishment should fit the crime.
Community service de-lousing charity PCs for the public would be a nice touch, yes .. maybe put him to work in one of those charities that recycles old hardware?
It's a depressingly light sentence, particularly for a persistent repeat offender - and have there been any consequences for the scammers in India he was working with? I have a nasty feeling the fine just dented his profits a bit, and the rest of the "sentence" will just give him a reason to avoid getting caught for a short time.
This is the electronic equivalent of someone saying "excuse me, you've left your front door key in the lock, so anyone could break in and steal your stuff" - and getting a rant about snooping on his private front door for their trouble.
Meanwhile, in the alternate universe, mirror-Raj Bala is angry at Amazon *not* spotting his stupid newbie mistake, leaving him with a six figure AWS bill and a long time with the police explaining why his AWS account was being used to host malware/child porn/phishing sites...
This is something I've hated about UK broadband in the last few years, with lots of "ISPs" competing to offer "broadband" for 50p a year, free with a tin of beans etc - hacking investment to the bone or beyond, and now comparison shopping sites listing ISPs by price so you can make sure you're getting the bare minimum.
We do see this in low-end web hosting ("unlimited" space and bandwidth! Only 5p/year! May collapse if your site is actually visited...) but I think it's unlikely in this market - hard to oversubscribe RAM or storage, and any shortage of network bandwidth would quickly be well-known.
Re: BT has some very smart and helpful employees...
Absolutely true ... I had a fault which, after SIX engineer visits to test my line, was finally escalated to someone with a clue, who found and fixed two router faults on the backhaul path. (PCHIP errors - a packet processor fault - on an Alcatel router, and a 10GbE port throwing errors on the MPLS core network.) Needless to say, four of the six engineers just shrugged and tried to close the ticket since they don't test end-to-end connectivity to the ISP, one confirmed my diagnosis but couldn't help, and one tried moving me to another FTTC cabinet port to rule that out.
I'm just glad I have A&A for an ISP: I can't imagine many having the tenacity or knowledge to keep chasing BT for almost two months over a core network fault BT refuse to look for!
Still applies: "Infinity" is just VDSL2 instead of ADSL2+. If there's a problem with the wire (like being cut, loose, wet etc) it'll screw up both voice and broadband - and a voice fault is much easier to get BT to fix.
"Mainstream" support for Office 2003 ended back in 2009 - and "extended" support for it ends early next month. I wonder how many installations will never be patched, particularly if a fix for this issue doesn't arrive before next month's cut-off? 2007 is out of "mainstream" support too, and I'm sure it's far from extinct out there - and probably far from currently patched...
What took so long?
For me, the only surprise here is that it hadn't already happened - particularly for something as simple as a battery. Having seen cars coming off largely-automated production lines in the 90s, the idea that even simple parts of mobile phones would still be hand-assembled two decades later seems bizarre.
As for economic impact: labour is a significant part of costs, and barely a penny of wages paid in China will get spent in Western economies anyway - so roll on robot!
One small step...
It's a step in the right direction, certainly - but they still have no DNSSEC (or indeed DNScurve!) on google.com, which still bugs me. (I have both on my personal domain; I use Fastmail for mail services, which does do STARTTLS with valid certificates as well, redirects HTTP to HTTPS and supports several one-time password setups - but no DNSSEC on their domain either.)
So, not bulletproof - indeed, a BT subsidiary (accidentally?) hijacked traffic to some of Google's DNS servers last week: http://www.itnews.com.au/News/375278,google-dns-servers-suffer-brief-traffic-hijack.aspx - but still, better than nothing ... not to mention, better than Hotmail's SMTP servers offer!
Re: I don't want a tv
That sounds rather like what we know as "monitors", except that you probably don't want quite that high a resolution - 1080p would probably be fine for most - plus a few DVI/HDMI inputs, and built-in sound, which monitors tend to lack (unless you have a separate audio system).
I replaced my 720p non-LED-lit LCD with a 1080p LED-lit LCD in late 2012 - putting the old 720p upstairs in the guest room, since it still worked fine apart from a couple of "stuck" pixels.
Why might I upgrade, assuming it doesn't break down in some way? A bigger screen maybe, or higher resolution. WiFi? Forget it: the screen needs power and at least one DVI/HDMI cable going in anyway, why would adding an Ethernet lead be a problem? I much prefer the simplicity and robustness of plugging it straight into the switch behind the TV (the same one the STB, wireless access point, games console and other Net-enabled devices already plug into) rather than relying on wireless and having to update passwords (you DO change your wifi password regularly, right ... then have to feed the new one into every wireless device using it?)
Attack on transit providers?
Judging by the Netflix deal we saw recently: essentially, Netflix cut Level3 out of the equation and started paying Comcast directly for the bit of transit that went to Comcast. As long as Comcast aren't charging more than Level3 were, Netflix aren't out of pocket, Comcast come out ahead - and it's Level3 getting screwed in multiple ways (losing both the cheap bulk local traffic bound for Comcast they could dump at nearby peering points, and the revenue it brought).
AT&T are just greedy, wanting to get paid more for delivering the service they already sell - just as if Royal Mail suddenly announced they wanted to charge me for bringing mail to my door, because it's hard work. (In fact, AT&T already get to charge both ends in many cases, since peering isn't free except between the biggest operators - and even among those, there are occasional fights...)
Re: PAF 18 years out-of-date
The mail delivery purpose occasionally trips those other users up, too; I recall a few years ago when I had to update my insurance policy: my car was being kept in a car park - on a road which had no postcode. The first insurance call-centre drone couldn't understand this: 'it must have a postcode! Everywhere has a postcode! Phone Royal Mail and ask them what the postcode is, they'll tell you!' No: they confirmed the road in question had no postcode, since it had no mail delivery points on it (just the car park in question, which of course does not receive mail).
Fortunately, the call-centre supervisor was a bit more clued-up and understood this - maybe had the same issue before - and had a sensible workaround (using the nearest postcode that did actually exist).
Of course, none of this would be a problem if they weren't abusing a mail delivery system for mapping purposes...
IBM and NSA secrets
Given IBM's history, including the design of DES - where it emerged, decades later, that the S-box values had been carefully chosen for resistance to differential cryptanalysis, which IBM and the NSA were keeping a closely guarded secret at the time - it's not exactly far-fetched to think IBM might be doing things covertly now as well. Indeed, to assert IBM hasn't done secret things with NSA would be a flat-out lie (they've worked together on classified projects for decades); the only question is if and how much this impacts IBM's other customers. (For that matter, Google employs people with high security clearance, like many high-tech US companies - and of course what they do is secret, so they can't actually tell us whether it infringes our privacy or not...)
David Snowden did actually work for IBM, though I suspect the article's supposed to be referring to the more famous Edward J Snowden currently living in Russia.
The big question is how Netflix's fee to Comcast for this arrangement compares with the Level3 fees they were paying to get that transit indirectly - a question none of the parties is likely to answer publicly, of course.
I'd imagine a lot of Netflix customers are on Comcast - so if this means Netflix can replace a couple of 1 Gbps Level3 transit pipes with 1 Gbps private peering links straight into Comcast for about the same price or less, it's actually quite sensible. That's a big if, of course.
Settlement-free peering would be better - but since we can be certain Level3 wasn't providing Netflix with settlement-free transit, this may not be such a bad development after all. It also reduces the issue about peak traffic economics: if Comcast would balk at an extra 10 Gbps of peering with Level 3 which would sit idle 18 hours a day, but Netflix can pay a little bit to get a 10 Gbps port on Comcast's core, Netflix customers get better results this way.
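The economics sketched above can be put into a toy calculation - all the per-Mbps prices here are invented for illustration, since none of Netflix, Level3 or Comcast publish their real rates:

```python
# Hypothetical committed-rate pricing comparison: transit via Level3 vs
# paid peering direct with Comcast. All figures are made up.

def monthly_cost(price_per_mbps: float, committed_mbps: int) -> float:
    """Cost of a simple committed-rate transit/peering deal."""
    return price_per_mbps * committed_mbps

# Say Netflix pushes 20 Gbps towards Comcast subscribers.
comcast_bound_mbps = 20_000

via_level3 = monthly_cost(1.50, comcast_bound_mbps)      # $1.50/Mbps transit
direct_peering = monthly_cost(1.20, comcast_bound_mbps)  # $1.20/Mbps peering

print(f"Transit via Level3:  ${via_level3:,.0f}/month")
print(f"Direct paid peering: ${direct_peering:,.0f}/month")
print(f"Difference:          ${via_level3 - direct_peering:,.0f}/month")
```

If the direct rate really is lower, Netflix comes out ahead and Level3 loses the revenue - which is exactly the dynamic described above.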
Where the phone-spammers belong...
I've always thought those people belonged in jail, but I'd rather hoped being behind bars would *prevent* them making more illegal anonymous nuisance calls, not pay them to make more!
(Illegal because I'm TPS registered, anonymous so they don't get caught and fined for violating it. Sadly, they seem quite quick to hang up if you try to identify the spamming outfit for enforcement purposes...)
FTTN/FTTC is just ADSL with more frequencies and fancier coding - squeezing a bit more bandwidth out of the same wire. No need to replace copper that works with ADSL, just connect a different filter/splitter at the first socket and replace the modem.
Now, if you have a lousy line, yes, it'll still be lousy - and if it's bad enough to need replacing anyway, or you're building a new development and putting in fresh wire anyway, FTTH/FTTP makes perfect sense.
Here in the UK, we're going FTTC (fibre to the cabinet, FTTN in NBN-speak) almost everywhere, with FTTP limited to some new sites and an expensive option for those who really, really want it enough to pay through the nose for it. (Personally, I'd love to make the jump to actual fibre - but when I get 80/20 down plain old copper with FTTC, it's too hard to justify the expense for now.)
Faults are still a pain though: I had *SIX* engineer visits recently before BT admitted there was actually a core network fault - and then auto-closed the trouble ticket. They then tried to arrange a seventh visit before agreeing to re-open the ticket and keep it open until the problem was actually fixed. Thank goodness for having a tenacious enough ISP to fight BT for as long as it takes to get the issue fixed properly!
Re: Quad core may not be 4xthe same core
There's already a lower-powered ARM core in the iPhone 5S - the M7 'motion co-processor' chip uses its own ARM Cortex-M3 core to offload movement-related processing, rather than burning more power using the big 64-bit ARM cores on the A7 for the same sums.
Samsung have been using ARM's own big.LITTLE setup, pairing Cortex A15 and A7 cores together so you can move workload between the two to adjust your power-performance tradeoff; I don't recall any indication of Apple going down that route yet, but it wouldn't surprise me to see that soon.
Re: Oh Dear
Yes, Australia could indeed change their tax laws as they wish, give or take whatever international tax/trade treaties might apply - so, why don't they? Maybe because all the other taxes (GST or whatever it's called there, income tax from Apple's staff, property taxes on the shops, import duties on hardware) mean the government's already getting its pound of flesh anyway, so doesn't need to squeeze any harder?
If I buy a £600 iThing here in the UK, £100 of that already goes straight to the government in VAT. Then the shop I buy it in pays thousands a month in business rates, and more in payroll taxes ('employer's NI' here) for employing staff to sell stuff.
I do lean towards the idea of taxing turnover instead of "profit" (because, as Apple's accountants demonstrate, the amount of "profit" you make and pay taxes on is pretty much whatever number you want it to be, while turnover is pretty much fixed) - but then, that's pretty much what GST/VAT delivers already, so why bother duplicating it?
Re: "legacy systems effectively impose a debt on an organisation"
That's fine as long as nothing at all changes - and you don't get bitten by Y2K-type bugs. If that 35 year old system's connected to a network, though, you need to consider security - which is exactly why so many places are panicking about SCADA security now, because a lot of those systems aren't as isolated as people assumed decades ago - or they were isolated once, but need to integrate with other systems now.
Can you actually get parts for that 35 year old computer now? The guy who set up your code 35 years ago probably isn't going to be working much longer, even if you can still get hold of him. I knew someone a while ago working on moving a factory control system from a big old Vax with a hundred serial ports (each connected to a different bit of the production line) to a Linux machine with Ethernet-serial converters. (They did actually retain the code, but updated for the new platform.) Of course, he's retired now - and will that code still run properly a decade from now on future Linux systems? Sooner or later, it'll need another update.
Re: May I suggest some additional tests?
My one a few years ago was memorable. "This business has submitted a tariff change request, known as a 'cease and reprovide' in BT-speak since you're moving a line from one bill to another. Do you (a) change the tariff, leaving everything working, or (b) cut off the line, then insist on the customer getting a 'new' line installed a week or two later?"
It's B, of course, and extra credit for screwing up the address on the account and sending the Openreach guy to the wrong building to install the "new" line instead of just reconnecting the old one.
It's nice to see apprenticeships, though - particularly if they're real ones, not just cheap basic labour - and if BT can improve their service later with 1,000 properly trained technical people, that's good too.
Bugs in sensors
A while ago, I remember a local hospital having problems with tiny critters (harvest flies?) crawling into the fire alarm sensors, shorting something out and triggering a fire alarm each time. Of course, you can't really turn the fire alarm off, but there's not a lot you can do to keep them out of the sensors either. So, people get used to false alarms, then you start worrying about the real thing happening...
Custom v COTS
I've seen good and bad aspects of both. Yes, sometimes we find ourselves wrestling with expensive proprietary crud to make it do things that would be easy, or mocking an in-house kludge - but then we get "big projects", micro-managed to specifications written by people with no clue. They'll want a few basic functions ... but specify it must all be done in Visual Basic and get hosted on some rusting Windows cluster. Never mind that complying with those demands triples the cost! (Public sector, needless to say; they dumped that in-house deployment, after discovering that the hosted version of the same web app running on a budget server in another country was much faster as well as cheaper.)
Back at my previous place, when we switched email platforms, someone demanded that we include the facility to tell when an outgoing message was read. That eliminated most of the sane options and left us enduring proprietary crudware for years, throwing more and more money at bigger and faster SANs and servers to make the crud chug at a semi-adequate speed - ten times the resources for a quarter of the performance.
Right now, I'm limited to a 50 MB home directory at work, on in-house servers - plus 25 GB for email, on commercial outsourced ones, for a fraction of the cost. Can anyone really justify the extra cost for less service? I'd like to think nobody would try: better to focus on services where in-house staff could actually deliver a *better* service than a cheap off-the-shelf one.
"the identity of any non-Gmail users can only be found out if someone goes through all the non-Gmail users whose addresses are on file in its systems and then sifts through the responses"
That's a really difficult job. If only some big search engine company had some sort of toolset for processing large log volumes ... maybe they could call it Google Sawzall, like Google did when they created it over a decade ago?
I'm not too convinced of the case's merits, but the idea Google of all people can't process their own logfiles to find the names in question?!
Re: Cost analysis not feasable
I'm not sure the comparisons hold up - developing a new technology is different from a large-scale rollout of existing technology to the public. So yes, you *invent* the laser and the radio techniques behind WiFi - but then you figure out beneficial applications before mass-producing lasers or WiFi access points.
Having said that, upgrading Australia's Internet infrastructure has obvious benefits, they just haven't been fully quantified yet - and perhaps can't be at this stage. At best, I expect the study could establish a lower bound on the benefits; as long as that shows break-even, going ahead should be a no-brainer unless someone can find a better option.
That's one obvious flaw - not so much for 80x25 terminals, but Braille-readers and similar: how exactly do blind people draw on a map they can't see? Passwords are fine, they can touch-type, but a map?
Even an 'obvious' location won't be as obvious as a password of 'password' though: ok, the Eiffel Tower is a major landmark - but so's the Empire State Building, Niagara Falls, the Leaning Tower of Pisa ... if I were to tell you I'd picked an obvious major landmark as mine, which would it be?
My brother might pick the church he got married in - or the hotel the reception was in, or the one the honeymoon was in. Or maybe the first school we both attended, or the house we lived in at the time (both in the countryside, so easy to pick out on a map) - ALL 'obvious' places to someone who knows him well, so which would it be? Those span two continents and only one is within a few miles of his home, and I could think of another dozen equally "obvious" places he might pick instead.
Compare that with the usual: mother's maiden name, place of birth, siblings and offspring? All listed on Facebook, for a lot of people! Much easier to find. Add a hint like "bad NY hotel" and you have a specific building I could find in 10 seconds on Google Maps - and it's not in New York, either, that would be too obvious.
3.78 is just the average speed of actual streaming to Google Fiber users - obviously, Netflix won't have any content encoded to stream at 50 Mbps (full BluRay rate!) or anywhere close - probably some new HD/ultra-HD stuff at up to 10, plus a load of SD at nearer 1.
10 Gbps is just a pointless bragging exercise though: can you actually download *anything* at that rate, even from Google's own servers let alone anything the far side of a peering point? Can your computer handle that bitrate, given that most only have gigabit Ethernet cards?
Dialup to 512k cable modem was a huge step for me. That to 20 Mbps ADSL was an improvement; 100 Mbps would be a bit better still - but 10G? I'd never fill 1G anyway, even with 4K streams (BluRay peaks somewhere under 50 Mbps; even several 4K streams at once won't be 200 times that bitrate!)
Now, if I could just get a decent 100 Mbps connection - with fast upload, static IP, no "no servers/torrents/blah" rule, I'd be happy. Even faster would be nice, for the right price, but not a big deal.
Does the software market really need "resellers" to such a significant extent now? For hardware, yes, you have distribution, spares, warranty service ... none of that applies to software, where just buying the download rights and activation keys directly from Microsoft (or other supplier) seems better all round, IMO. Exactly what my own small company has done for Windows, Office, Sage, Google Apps, Dropbox and the AV package, in fact: are we really losing out on anything at all by going direct instead of paying for a "reseller" to get in between us?
Re: Kill it now
You want to keep TW rather than Comcast, because you've lost service 30 times in 14 months requiring 7 engineer visits and 4 modem replacements?! That's scary: are Comcast really even worse than that?
Here in the UK, I've had BT engineers visit six times in 3 weeks (all for a single fault - probably a backhaul fault, but their fault-non-finding approach doesn't allow testing for that, so they keep re-testing my phone line instead) and I'm about ready to jump ship to the cable company instead. They do throttle your bandwidth (you lose 10% or so if you pull over 5 GB during peak time in a day). The only snag is P2P throttling (which I think also caught my SSH uploads when I was with them before).
Even as a die-hard geek with a 78/20 Mbps connection (and online backup) I have never come close to 1 TB in a month, though! Maybe if I start using my Netflix subscription much more I could get close - then I'd need to change ISP anyway.
I'd rather see drivers committed back to the public source tree - and a guarantee we can run "stock" firmware if and when we want. (Which, I suppose, is why I picked a Nexus 4 last time I bought an Android phone, and a Nexus 7 for a tablet.) I'd hate to be reliant on the manufacturer for a software update: imagine if you needed Dell or Acer's permission before updating Ubuntu or indeed Windows on a laptop they sold you?
Re: Won't someone think of the 3 million?
If they're doing it to cut debt load, they'll be selling off the bit of Comcast that serves those 3m - so they wouldn't just switch off service to a bunch of customers. Even if they did switch it off (probably not allowed by the local regulator), someone else would want to take over the existing wires and run a service: the expensive bit, all the digging to put in the wires, is already done and paid for.
Comcast don't seem that bad by comparison - the UK only has a single cable company now (still named Virgin, but now owned by Liberty) - with no plans for IPv6 and no static IP options. At 120/11.7 Mbps they're theoretically the fastest around, though they harshly traffic-shape anything they consider P2P and are coy about the details.
The "almost read only" nature of shingled drives sounds like a good fit for something like NetApp's WAFL - originally designed because they used RAID4, which struggles with writes unless you write a whole stripe at a time. Not at all hard to imagine them offering a shelf of shingled drives as an archival tier, in the same way they like to mix the fast pricey SAS/FC drives with slower, bigger, cheaper SATA ones already. I'm sure lots of data on a typical array doesn't change from one month to the next, but still needs to be accessible in the milliseconds of disk not the minutes of tape - so, shuffle it off in big batches onto a bunch of shingled drives, keeping new or frequently-changed data on regular platters and/or flash.
I doubt a disruption would bring us to the point where you couldn't get hold of a replacement HAMR or helium drive for one that had failed - easy enough to keep some spares in stock if you're that worried - nor should replacing HAMR with helium or vice versa be a big issue for your array. (Would it? Obviously there's a huge difference between shingled and conventional drives, but do HAMR and helium drives differ much in performance?)
It does seem bizarre that there's very little roaming or similar arrangement - 3 used T-mobile and Orange to plug some regional gaps, but that seems to be about it.
On remote islands etc, if it doesn't quite make sense for any one company to put in their own cell, why not split the cost: one installs a cell, the others split the cost and get roaming access in exchange? In effect I suppose this comes close: 3+EE or Voda+O2 will now be splitting the cost of a site, so it will be easier for both pairings to cover more marginal areas.
One or two niche SIM vendors do cross-network roaming, I know - Manx Telecom perhaps? That way, even out of range of one you can still use another; a bit pricey, but for people spending time in remote areas and really needing communications (rural GPs etc?) it's worthwhile: up here in Scotland I've noticed you can often get one network but not another, so roaming would still work fine but no single network would.
'marko' isn't that unusual a username: I've seen plenty of setups using firstname+lastinitial like that, including Microsoft (hence 'billg').
Or, given the final score, maybe it was the password for remote-control of the Broncos' secret weapon, and a Seahawks fan put it to good use?
Consumer v creator
Actually, the consumer/creator dichotomy Trevor points out is quite relevant - contrary to the wishes of big media, ISPs are NOT just there for passively downloading web pages: there's peer-to-peer, content uploads, hosting our own servers.
Neutrality is particularly relevant now, with the battle over Netflix traffic - Comcast refusing to upgrade their peering with Level3 to handle the level of traffic they get, unless they get paid extra for it. Why, it's almost as if Comcast were an ISP who also had a commercial interest in some sort of rival video distribution service...
There's a different problem ahead though, with ISP port blocking, "carrier grade" NAT and other obstructions forcing end-users down to the lowest common denominator. Will ISPs be allowed to hand out RFC1918 IP addresses so users can't even receive incoming connections?
The '250-1000' will come from a simple comparison, I imagine: "it's got Gigabit Ethernet, most people have 1-4 Mbps of upload". Which has a bit of logic behind it, I suppose: you'll be able to write to the device at LAN speeds, then leave it to upload over the Net much more slowly later on. Marketing BS, of course, and I doubt it can really max out GbE for more than a few seconds at a time - then again, ISPs and hosting providers can't handle saturated pipes 24x7 either.
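As a quick sanity check on where that range comes from (the 1-4 Mbps upload figures are my assumption of typical ADSL speeds, not anything from the vendor):

```python
# Ratio of Gigabit Ethernet throughput to assumed consumer upload speeds.
# All speeds in Mbps; 1 and 4 Mbps are illustrative ADSL upload rates.
lan_mbps = 1000
for upload_mbps in (4, 1):
    ratio = lan_mbps // upload_mbps
    print(f"{upload_mbps} Mbps upload -> GbE is {ratio}x faster")
```

which gives exactly the 250-1000x spread in the marketing blurb.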
A local storage cache like that would actually be quite handy - maybe an S3 wrapper. I remember looking for one a while ago and not finding it, though.
Trying to dodge the rules?
It seemed pretty blatant that Google were using it as a floating building to get around all the planning laws that apply if they put it on land - and unfortunately for them, I think a little Vulture said previously there's a specific law against exactly that: the wet bit's there for boats, not for buildings that aren't allowed on the land next to it.
As I recall, on land Google would have had to file architect's plans and much more detail about what the building is for - which seemed quite reasonable to me (is it a shop? factory? office? datacentre?) but didn't suit Google's "total privacy for us, none for you" agenda. They do seem to be taking that secrecy to ridiculous lengths, though: why not just admit it's their answer to the Apple Store? (Or a test bed for Glass, or whatever.)
Promising ... in theory
It's a nice theory, but I'm sure the government will manage to screw it up somehow in order to keep the cash flowing into the pockets of the failure-factories as it has for years - chop that billion pound fiasco into a dozen £99m pieces, so it just ends up costing a bit more for the same lousy result.
The root problem of civil servants wanting to buy a fire-breathing monster truck when a Transit van would do the job fine will take a lot more than just a spending cap, though - more like a wholesale cultural change. Having seen the process up close, I've watched people with no understanding write the specifications and sign the cheques: demanding that a hosted service be written in a particular language, or (on one project I worked on) demanding that the web application's appearance conform to both of two conflicting templates, since they were re-organising at the time and had no idea which one actually applied! At least they eventually moved off IE 6, though - that alone was adding a hefty chunk to the development costs on one project, between the extra workarounds and the testing involved.
Even if they lose power for an average of over three and a half days per year - which would be a pretty lousy power supply - that's 1% of their capacity lost. Could you really provide backup power - UPS+generator or flywheel+generator - to a computer for less than 1% of its purchase price? I doubt it.
For non-urgent batch jobs like most HPC, you get better overall throughput by shutting down for the power outage and putting your UPS budget into buying some more compute nodes to be faster the other 99% of the time.
Of course for any real time service like a bank or ISP, stamping out that last 1% of downtime each year is worth spending a lot of extra money on: being offline 1% of the time will cost you a lot more than 1% of your revenue. For batch work like this, though, just work slightly faster the other 99% of the time and you come out ahead overall.
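A back-of-the-envelope sketch of that trade-off, with entirely made-up prices (node cost, UPS budget and outage fraction are all illustrative assumptions):

```python
# Batch HPC: spend a fixed budget on backup power, or on extra nodes?
# Every figure here is an illustrative assumption, not a real quote.
nodes = 100
node_cost = 5000          # per compute node
ups_budget = 50000        # UPS + generator for the whole cluster
downtime_fraction = 0.01  # ~3.65 days/year of power cuts

# Option A: buy backup power, run all year at base capacity.
throughput_a = nodes * 1.0

# Option B: spend the same money on extra nodes, accept the downtime.
extra_nodes = ups_budget // node_cost
throughput_b = (nodes + extra_nodes) * (1 - downtime_fraction)

print(f"With UPS:   {throughput_a:.1f} node-years of work")
print(f"More nodes: {throughput_b:.1f} node-years of work")
```

On those (invented) numbers the extra nodes win, roughly 108.9 node-years against 100 - which is the point: for batch work, lost hours are cheaper than lost hardware budget.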
Even if HAMR and SMR won't play nicely together, surely helium could still be used to cram SMR platters more closely together, giving the benefits of both: 7 platters instead of 4 in the same space, so somewhere around a 10 TB drive with the combination of current technologies?
RAID rebuild times are getting insane, though: a drive twice the size takes twice as long to read or write in full, so you can get big arrays now which would take multiple days to rebuild. During which time, of course, you're at risk of a second drive failing - survivable with double-parity, but then that drive needs a rebuild of its own. Suddenly, that triple-parity stuff doesn't seem so paranoid after all...
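The arithmetic behind those multi-day rebuilds is simple: the best case is one full sequential write of the replacement drive, and capacity has grown far faster than write speed (the sizes and speeds below are rough assumptions):

```python
# Minimum RAID rebuild time = drive capacity / sequential write speed.
# Capacities and speeds are rough illustrative figures, not measurements.
def rebuild_hours(capacity_tb: float, write_mb_s: float) -> float:
    # 1 TB = 1,000,000 MB (decimal units, as drive vendors use)
    return capacity_tb * 1_000_000 / write_mb_s / 3600

for tb, mbs in [(1, 100), (4, 150), (8, 180)]:
    print(f"{tb} TB at {mbs} MB/s: at least {rebuild_hours(tb, mbs):.1f} hours")
```

And that's the theoretical floor: a live array rebuilding in the background while still serving I/O can easily take several times longer, which is how you end up measuring rebuilds in days.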
I like the idea of a common connector - though micro-USB doesn't seem ideal. Can it handle 2A or more, for bigger devices to charge at a sane speed? I hope the next version of the plug doesn't care which way up it's inserted, and delivers decent current when needed.
Perhaps if the adapter option remains, we'll be OK: the future phone with a superconducting 10A nano-USB socket can still be charged much more slowly via micro-USB to stay compliant until the legislation is updated.
I would guess the shorter lifespan comes from cycling the spindle speed up and down more often: it's virtually always when a drive is power-cycled that it dies, rather than while it's sitting there spinning at a constant speed for days. Just like a laptop, slowing or stopping the drive will save power, but shorten the drive's life.
Lower power/heat would be appealing for a big array: a smaller/cheaper power supply per drive shelf, less cooling cost per rack, a smaller/cheaper UPS/generator etc. It's not just shaving a few % off the electricity bill for the drive itself. I get the impression a lot of big installations have a lot of "cold" data where even a 7200rpm SATA drive is excessive, but the latency of tape would still be a deal-breaker.
I wonder if they could do a bigger version, like the old Quantum Bigfoot 5.25" HDDs? Pack 10 or 20 TB in a single unit, with slower access times - we've moved the other way, to 3.5" and then 2.5" drives to get faster and faster access times, but in some cases a slower bulk HDD is just what the doctor ordered.
"the social network needs chips with screaming fast CPUs to deliver dynamic web pages and perform similar tasks"
No, not really. What they need is lots and lots of CPU capacity - but not individual speed-at-all-costs cores.
On the desktop/workstation, most tasks can really only use a few cores, so you need those cores to be as fast as possible. Intel are very good at that: the i7 and latest Xeons are packed full of really clever tricks to shave every microsecond off the execution time of individual instruction streams, so you can get the next frame of your FPS drawn in time, or that complex protein molecule calculated and rendered that tiny bit faster. Having more cores gives you little benefit, if any, for all the effort put in lately to make tasks multithreaded wherever possible. To double the speed you can end up quadrupling the cost - but if you need the speed, that's where you go.
Facebook, on the other hand, have millions of requests coming in each second. More and cheaper cores are a big win for them: half the performance for a quarter of the price means they can get twice as much throughput for the same money.
Intel see this too, of course: it's why they've been adapting their Atom core for server use, as well as portable. It'll be a tough fight, though: their biggest asset, Windows compatibility through x86, just doesn't apply to these big Linux server farms: a LAMP stack should be just as happy on ARM as on x86.
I wonder if/when Intel might get a big Atom sale to the likes of AWS or Azure? Or, conversely, when Microsoft might get round to offering ARM builds of Windows Server...
Unlimited with lots of limitations
I'm really growing to hate "unlimited" offerings for exactly this reason. Barring illegal use, what does it matter to my ISP whether that gigabyte of data I just sent was going to visitors to my home-hosted website or me uploading a batch of photos to Flickr? Why do they pretend "server" use somehow matters?
Of course, it's all a dodge to try to clamp down on (some) heavy users. I'm very glad to be with an ISP now which does have traffic charges, but no arbitrary BS prohibitions: I have a static IP, no filtering. If I want to move my personal website onto my home connection, or be able to VPN to my home LAN, or do my own SMTP and DNS service, they're quite happy - I'll just have to pay a bit extra if it uses more bandwidth, because that's the bit that actually costs money.
Even Google can't offer a truly unlimited-usage gigabit pipe to everyone: it just doesn't work. Instead of pretending it matters which end sends the TCP SYN packet and which replies with the SYN|ACK, why not accept this and charge, say, $1 per 10 GB? Heavy users cost more, and pay more accordingly - and priced at a level that actually reflects costs, it's fair and cheap enough not to deter anyone sane.
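A metered tariff like that is trivial to express - this sketch just uses the $1 per 10 GB figure suggested above, which is my invented rate, not any real ISP's:

```python
# Flat per-gigabyte billing: no distinction between 'client' and
# 'server' traffic, just pay for what you move. Rate is illustrative.
def monthly_bill(gb_used: float, rate_per_10gb: float = 1.00) -> float:
    return gb_used / 10 * rate_per_10gb

for gb in (20, 100, 1000):
    print(f"{gb} GB -> ${monthly_bill(gb):.2f}")
```

Light users pay pennies, heavy users pay in proportion - and nobody has to argue about which direction the SYN packets went.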