437 posts • joined 26 Jun 2009
Re: Oh Dear
Yes, Australia could indeed change their tax laws as they wish, give or take whatever international tax/trade treaties might apply - so, why don't they? Maybe because all the other taxes (GST or whatever it's called there, income tax from Apple's staff, property taxes on the shops, import duties on hardware) mean the government's already getting its pound of flesh anyway, so doesn't need to squeeze any harder?
If I buy a £600 iThing here in the UK, £100 of that already goes straight to the government in VAT. Then the shop I buy it in pays thousands a month in business rates, and more in payroll taxes ('employer's NI' here) for employing staff to sell stuff.
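To sanity-check that £100 figure - a minimal sketch, assuming the UK's standard 20% VAT rate, since VAT is charged on the ex-VAT price:

```python
def vat_portion(gross: float, rate: float = 0.20) -> float:
    """VAT included in a gross (VAT-inclusive) price: gross - gross/(1+rate)."""
    return gross - gross / (1 + rate)

# A £600 retail price at 20% VAT is £500 ex-VAT plus £100 of VAT.
print(round(vat_portion(600), 2))  # 100.0
```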
I do lean towards the idea of taxing turnover instead of "profit" (because, as Apple's accountants demonstrate, the "profit" you make and pay taxes on is pretty much whatever number you want it to be, while turnover is fixed) - but then, that's essentially what GST/VAT delivers already, so why bother duplicating it?
Re: "legacy systems effectively impose a debt on an organisation"
That's fine as long as nothing at all changes - and you don't get bitten by Y2K-type bugs. If that 35 year old system's connected to a network, though, you need to consider security - which is exactly why so many places are panicking about SCADA security now, because a lot of those systems aren't as isolated as people assumed decades ago - or they were isolated once, but need to integrate with other systems now.
Can you actually get parts for that 35 year old computer now? The guy who set up your code 35 years ago probably isn't going to be working much longer, even if you can still get hold of him. I knew someone a while ago working on moving a factory control system from a big old Vax with a hundred serial ports (each connected to a different bit of the production line) to a Linux machine with Ethernet-serial converters. (They did actually retain the code, but updated for the new platform.) Of course, he's retired now - and will that code still run properly a decade from now on future Linux systems? Sooner or later, it'll need another update.
Re: May I suggest some additional tests?
My one a few years ago was memorable. "This business has submitted a tariff change request, known as a 'cease and reprovide' in BT-speak since you're moving a line from one bill to another. Do you (a) change the tariff, leaving everything working, or (b) cut off the line, then insist on the customer getting a 'new' line installed a week or two later?"
It's B, of course, and extra credit for screwing up the address on the account and sending the Openreach guy to the wrong building to install the "new" line instead of just reconnecting the old one.
It's nice to see apprenticeships, though - particularly if they're real ones, not just cheap basic labour - and if BT can improve their service later with 1,000 properly trained technical people, that's good too.
Bugs in sensors
A while ago, I remember a local hospital having problems with tiny critters (harvest flies?) crawling into the fire alarm sensors, shorting something out and triggering a fire alarm each time. Of course, you can't really turn the fire alarm off, but there's not a lot you can do to keep them out of the sensors either. So, people get used to false alarms, then you start worrying about the real thing happening...
Custom v COTS
I've seen good and bad aspects of both. Yes, sometimes we find ourselves wrestling with expensive proprietary crud to make it do things that would be easy, or mocking an in-house kludge - but then we get "big projects", micro-managed to specifications written by people with no clue. They'll want a few basic functions ... but specify it must all be done in Visual Basic and get hosted on some rusting Windows cluster. Never mind that complying with those demands triples the cost! (Public sector, needless to say; they dumped that in-house deployment, after discovering that the hosted version of the same web app running on a budget server in another country was much faster as well as cheaper.)
Back at my previous place, when we switched email platforms, someone demanded that we include the facility to tell when an outgoing message was read. That eliminated most of the sane options and left us enduring proprietary crudware for years, throwing more and more money at bigger and faster SANs and servers to make the crud chug at a semi-adequate speed - ten times the resources for a quarter of the performance.
Right now, I'm limited to a 50 MB home directory at work, on in-house servers - plus 25 GB for email, on commercial outsourced ones, for a fraction of the cost. Can anyone really justify the extra cost for less service? I'd like to think nobody would try: better to focus on services where in-house staff could actually deliver a *better* service than a cheap off-the-shelf one.
"the identity of any non-Gmail users can only be found out if someone goes through all the non-Gmail users whose addresses are on file in its systems and then sifts through the responses"
That's a really difficult job. If only some big search engine company had some sort of toolset for processing large log volumes ... maybe they could call it Google Sawzall, like Google did when they created it over a decade ago?
I'm not too convinced of the case's merits, but the idea that Google, of all people, can't process their own logfiles to find the names in question?!
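For flavour, here's the sort of trivial filter being described - scan mail logs for addresses outside gmail.com. The log format and domains here are invented for illustration; nothing about Google's actual systems is implied:

```python
import re

# Crude address matcher - good enough for a sketch, not a full RFC 5322 parser.
ADDR = re.compile(r'[\w.+-]+@[\w.-]+')

def non_gmail_recipients(lines):
    """Collect every address in the log lines that isn't on gmail.com."""
    found = set()
    for line in lines:
        for addr in ADDR.findall(line):
            if not addr.lower().endswith('@gmail.com'):
                found.add(addr.lower())
    return found

log = [
    "to=alice@gmail.com status=sent",
    "to=bob@example.org status=sent",
    "to=carol@Example.ORG status=deferred",
]
print(sorted(non_gmail_recipients(log)))
# ['bob@example.org', 'carol@example.org']
```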
Re: Cost analysis not feasible
I'm not sure the comparisons hold up - developing a new technology is different from a large-scale rollout of existing technology to the public. So yes, you *invent* the laser and the radio techniques behind WiFi - but then you figure out beneficial applications before mass-producing lasers or WiFi access points.
Having said that, upgrading Australia's Internet infrastructure has obvious benefits, they just haven't been fully quantified yet - and perhaps can't be at this stage. At best, I expect the study could establish a lower bound on the benefits; as long as that shows break-even, going ahead should be a no-brainer unless someone can find a better option.
That's one obvious flaw - not so much for 80x25 terminals, but Braille-readers and similar: how exactly do blind people draw on a map they can't see? Passwords are fine, they can touch-type, but a map?
Even an 'obvious' location won't be as obvious as a password of 'password' though: ok, the Eiffel Tower is a major landmark - but so's the Empire State Building, Niagara Falls, the Leaning Tower of Pisa ... if I were to tell you I'd picked an obvious major landmark as mine, which would it be?
My brother might pick the church he got married in - or the hotel the reception was in, or the one the honeymoon was in. Or maybe the first school we both attended, or the house we lived in at the time (both in the countryside, so easy to pick out on a map) - ALL 'obvious' places to someone who knows him well, so which would it be? Those span two continents and only one is within a few miles of his home, and I could think of another dozen equally "obvious" places he might pick instead.
Compare that with the usual: mother's maiden name, place of birth, siblings and offspring? All listed on Facebook, for a lot of people! Much easier to find. Add a hint like "bad NY hotel" and you have a specific building I could find in 10 seconds on Google Maps - and it's not in New York, either, that would be too obvious.
3.78 is just the average speed of actual streaming to Google Fiber users - obviously, Netflix won't have any content encoded to stream at 50 Mbps (full BluRay rate!) or anywhere close - probably some new HD/ultra-HD stuff at up to 10, plus a load of SD at nearer 1.
10 Gbps is just a pointless bragging exercise though: can you actually download *anything* at that rate, even from Google's own servers let alone anything the far side of a peering point? Can your computer handle that bitrate, given that most only have gigabit Ethernet cards?
Dialup to 512k cable modem was a huge step for me. That to 20 Mbps ADSL was an improvement; 100 Mbps would be a bit better still - but 10G? I'd never fill 1G anyway, even with 4K streams (BluRay peaks somewhere under 50 Mbps; even several 4K streams at once won't be 200 times that bitrate!)
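A quick back-of-the-envelope to back that up, with assumed bitrates only (roughly 50 Mbps for a BluRay peak):

```python
def max_streams(link_mbps: float, stream_mbps: float) -> int:
    """How many parallel streams of a given bitrate fit in a link."""
    return int(link_mbps // stream_mbps)

print(max_streams(1000, 50))   # 20 BluRay-peak streams fill a 1 Gbps line
print(max_streams(10000, 50))  # 200 of them to fill 10 Gbps
```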
Now, if I could just get a decent 100 Mbps connection - with fast upload, static IP, no "no servers/torrents/blah" rule, I'd be happy. Even faster would be nice, for the right price, but not a big deal.
Does the software market really need "resellers" to such a significant extent now? For hardware, yes, you have distribution, spares, warranty service ... none of that applies to software, where just buying the download rights and activation keys directly from Microsoft (or other supplier) seems better all round, IMO. Exactly what my own small company has done for Windows, Office, Sage, Google Apps, Dropbox and the AV package, in fact: are we really losing out on anything at all by going direct instead of paying for a "reseller" to get in between us?
Re: Kill it now
You want to keep TW rather than Comcast, because you've lost service 30 times in 14 months requiring 7 engineer visits and 4 modem replacements?! That's scary: are Comcast really even worse than that?
Here in the UK, I've had BT engineers visit six times in 3 weeks (all for a single fault - probably a backhaul fault, but their fault-non-finding approach doesn't allow testing for that, so they keep re-testing my phone line instead) and I'm about ready to jump ship to the cable company instead. They do throttle your bandwidth (you lose 10% or so if you pull over 5 GB during peak time in a day). The only other snag is P2P throttling (which I think also caught my SSH uploads when I was with them before).
Even as a die-hard geek with a 78/20 Mbps connection (and online backup) I have never come close to 1 TB in a month, though! Maybe if I start using my Netflix subscription much more I could get close - then I'd need to change ISP anyway.
I'd rather see drivers committed back to the public source tree - and a guarantee we can run "stock" firmware if and when we want. (Which, I suppose, is why I picked a Nexus 4 last time I bought an Android phone, and a Nexus 7 for a tablet.) I'd hate to be reliant on the manufacturer for a software update: imagine if you needed Dell or Acer's permission before updating Ubuntu or indeed Windows on a laptop they sold you?
Re: Won't someone think of the 3 million?
If they're doing it to cut debt load, they'll be selling off the bit of Comcast that serves those 3m - so they wouldn't just switch off service to a bunch of customers. Even if they did switch it off (probably not allowed by the local regulator), someone else would want to take over the existing wires and run a service: the expensive bit, all the digging to put in the wires, is already done and paid for.
Comcast don't seem that bad by comparison - the UK only has a single cable company now (still named Virgin, but now owned by Liberty) - with no plans for IPv6, no static IP options - though at 120/11.7 Mbps, they're theoretically the fastest around, apart from harshly traffic shaping anything they consider P2P traffic and being coy about the details.
The "almost read only" nature of shingled drives sounds like a good fit for something like NetApp's WAFL - originally designed because they used RAID4, which struggles with writes unless you write a whole stripe at a time. Not at all hard to imagine them offering a shelf of shingled drives as an archival tier, in the same way they like to mix the fast pricey SAS/FC drives with slower, bigger, cheaper SATA ones already. I'm sure lots of data on a typical array doesn't change from one month to the next, but still needs to be accessible in the milliseconds of disk not the minutes of tape - so, shuffle it off in big batches onto a bunch of shingled drives, keeping new or frequently-changed data on regular platters and/or flash.
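As a sketch of that tiering policy - emphatically not NetApp's actual algorithm, and the one-month threshold is my invention - the per-file decision could be as simple as:

```python
import time

# Data untouched for a month gets batched off to the shingled archive
# tier; anything newer stays on fast disk/flash.
MONTH = 30 * 24 * 3600  # seconds

def pick_tier(last_modified: float, now: float, threshold: float = MONTH) -> str:
    """Choose a storage tier based on how long since the data last changed."""
    return "shingled" if now - last_modified > threshold else "fast"

now = time.time()
print(pick_tier(now - 90 * 24 * 3600, now))  # data 90 days old -> shingled
print(pick_tier(now - 3600, now))            # modified an hour ago -> fast
```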
I doubt a disruption would bring us to the point you couldn't get hold of a replacement HAMR or helium drive for one that had failed - easy enough to keep some spares in stock if you're that worried - and replacing HAMR with helium or vice versa would be a big issue for your array. (Would it, though? Obviously there's a huge difference between shingled and conventional drives - but do HAMR and helium drives differ much in performance?)
It does seem bizarre that there's very little roaming or similar arrangement - 3 used T-Mobile and Orange to plug some regional gaps, but that seems to be about it.
On remote islands etc, if it doesn't quite make sense for any one company to put in their own cell, why not split the cost: one installs a cell, the others split the cost and get roaming access in exchange? In effect I suppose this comes close: 3+EE or Voda+O2 will now be splitting the cost of a site, so it will be easier for both pairings to cover more marginal areas.
One or two niche SIM vendors do cross-network roaming, I know - Manx Telecom perhaps? That way, even out of range of one you can still use another; a bit pricey, but for people spending time in remote areas and really needing communications (rural GPs etc?) it's worthwhile: up here in Scotland I've noticed you can often get one network but not another, so roaming would still work fine but no single network would.
'marko' isn't that unusual a username: I've seen plenty of setups using firstname+lastinitial like that, including Microsoft (hence 'billg').
Or, given the final score, maybe it was the password for remote-control of the Broncos' secret weapon, and a Seahawks fan put it to good use?
Consumer v creator
Actually, the consumer/creator dichotomy Trevor points out is quite relevant - contrary to the wishes of big media, ISPs are NOT just there for passively downloading web pages: there's peer-to-peer, content uploads, hosting our own servers.
Neutrality is particularly relevant now, with the battle over Netflix traffic - Comcast refusing to upgrade their peering with Level3 to handle the level of traffic they get, unless they get paid extra for it. Why, it's almost as if Comcast were an ISP who also had a commercial interest in some sort of rival video distribution service...
There's a different problem ahead though, with ISP port blocking, "carrier grade" NAT and other obstructions forcing end-users down to the lowest common denominator. Will ISPs be allowed to hand out RFC1918 IP addresses so users can't even receive incoming connections?
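For reference, the RFC1918 private ranges are 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 - a quick check using Python's standard ipaddress module:

```python
import ipaddress

# A customer stuck behind carrier-grade NAT will typically see one of the
# RFC1918 ranges (or the separate 100.64.0.0/10 CGN range) instead of a
# public address - and can't receive incoming connections.
RFC1918 = (
    ipaddress.ip_network('10.0.0.0/8'),
    ipaddress.ip_network('172.16.0.0/12'),
    ipaddress.ip_network('192.168.0.0/16'),
)

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_rfc1918('192.168.1.10'))  # True
print(is_rfc1918('8.8.8.8'))       # False
```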
The '250-1000' will come from a simple comparison: "it's got Gigabit Ethernet, most people have 1-4 Mbps of upload", I imagine. Which has a bit of logic behind it, I suppose: you'll be able to write to the device at LAN speeds, then leave it to upload over the Net much more slowly later on. Marketing BS, of course, and I doubt it can really max out GbE for more than a few seconds at a time - then again, ISPs and hosting providers can't handle saturated pipes 24x7 either.
A local storage cache like that would actually be quite handy - maybe an S3 wrapper. I remember looking for one a while ago and not finding it, though.
Trying to dodge the rules?
It seemed pretty blatant that Google were using it as a floating building to get around all the planning laws that apply if they put it on land - and unfortunately for them, I think a little Vulture said previously there's a specific law against exactly that: the wet bit's there for boats, not for buildings that aren't allowed on the land next to it.
As I recall, on land Google would have had to file architect's plans and much more detail about what the building is for - which seemed quite reasonable to me (is it a shop? factory? office? datacentre?) but didn't suit Google's "total privacy for us, none for you" agenda. They do seem to be taking that secrecy to ridiculous lengths, though: why not just admit it's their answer to the Apple Store? (Or a test bed for Glass, or whatever.)
Promising ... in theory
It's a nice theory, but I'm sure the government will manage to screw it up somehow in order to keep the cash flowing into the pockets of the failure-factories as it has for years - chop that billion pound fiasco into a dozen £99m pieces, so it just ends up costing a bit more for the same lousy result.
The root problem of civil servants wanting to buy a fire-breathing monster truck when a Transit van would do the job fine will take a lot more than just a spending cap, though - more like a wholesale cultural change. Having seen the process up close, I know people with no understanding get to write specifications and sign the cheques: demanding that a hosted service be written in a particular language (or, on one project I worked on, demanding that the web application's appearance conform to both of two conflicting templates, since they were re-organising at the time and had no idea which one actually applied!) ... at least they eventually moved off IE 6, though. That alone was adding a hefty chunk to the development costs on one project, between the extra workarounds and the testing involved.
Even if they lose power for an average of over 3 days per year, which would be a pretty lousy power supply, that's 1% of their capacity lost. Could you really provide backup power - UPS+generator or flywheel+generator - to a computer for less than 1% of its purchase price? I doubt it.
For non-urgent batch jobs like most HPC, you get better overall throughput by shutting down for the power outage and putting your UPS budget into buying some more compute nodes to be faster the other 99% of the time.
Of course for any real time service like a bank or ISP, stamping out that last 1% of downtime each year is worth spending a lot of extra money on: being offline 1% of the time will cost you a lot more than 1% of your revenue. For batch work like this, though, just work slightly faster the other 99% of the time and you come out ahead overall.
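The arithmetic behind that, with invented round-number costs (a UPS at 5% of cluster price, rescuing the 1% of runtime lost to outages):

```python
cluster_cost = 1_000_000  # hypothetical cluster price
ups_cost = 50_000         # hypothetical UPS+generator price (5% of cluster)
availability_gain = 0.01  # the ~3.65 days/year a UPS would rescue

# Option A: buy the UPS -> same node count, running 100% of the time.
throughput_with_ups = 1.0
# Option B: skip the UPS, spend that money on ~5% more nodes instead,
# accepting the 1% of the year spent dark.
throughput_more_nodes = (1 + ups_cost / cluster_cost) * (1 - availability_gain)

print(round(throughput_more_nodes, 4))  # 1.0395 -> ~4% more work done overall
```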
Even if HAMR and SMR won't play nicely together, surely helium could still be used to cram SMR platters more closely together, giving the benefits of both: 7 platters instead of 4 in the same space, so somewhere around a 10 TB drive with the combination of current technologies?
RAID rebuild times are getting insane, though: because a drive twice the size still takes twice as long to read/write fully, you can get big arrays now which would take multiple days to rebuild fully. During which time, of course, you're at risk of a second drive failing during that rebuild cycle - and requiring a rebuild of its own, if you're using double-parity. Suddenly, that triple-parity stuff doesn't seem so paranoid after all...
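The rebuild-time arithmetic is simple enough to sketch - capacity divided by sustained write speed. Real array rebuilds usually run well below a drive's raw sequential rate, so these are best-case numbers:

```python
def rebuild_hours(capacity_tb: float, write_mb_s: float) -> float:
    """Best-case time to fully rewrite a drive: capacity / sequential speed."""
    return capacity_tb * 1_000_000 / write_mb_s / 3600

print(round(rebuild_hours(1, 150), 1))   # ~1.9 h for an old 1 TB drive
print(round(rebuild_hours(8, 180), 1))   # ~12.3 h for an 8 TB drive
```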
I like the idea of a common connector - though micro-USB doesn't seem ideal. Can it handle 2A or more, for bigger devices to charge at a sane speed? I hope the next version of the plug doesn't care which way up it's inserted, and delivers decent current when needed.
Perhaps if the adapter option remains, we'll be OK: the future phone with a superconducting 10 A nano-USB socket can still be charged much more slowly from micro-USB, to keep compliant until the legislation is updated.
I would guess the shorter lifespan comes from cycling the spindle speed up and down more often: it's virtually always when a drive is power-cycled that it dies, rather than while it's sitting there spinning at a constant speed for days. Just like a laptop, slowing or stopping the drive will save power, but shorten the drive's life.
Lower power/heat would be appealing for a big array: a smaller/cheaper power supply per drive shelf, less cooling cost per rack, a smaller/cheaper UPS/generator etc. It's not just shaving a few % off the electricity bill for the drive itself. I get the impression a lot of big installations have a lot of "cold" data where even a 7200rpm SATA drive is excessive, but the latency of tape would still be a deal-breaker.
I wonder if they could do a bigger version, like the old Quantum Bigfoot 5.25" HDDs? Pack 10 or 20 TB in a single unit, with slower access times - we've moved the other way, to 3.5" and then 2.5" drives, for faster and faster access times, but in some cases a slower bulk HDD is just what the doctor ordered.
"the social network needs chips with screaming fast CPUs to deliver dynamic web pages and perform similar tasks"
No, not really. What they need is lots and lots of CPU capacity - but not individual speed-at-all-costs cores.
On the desktop/workstation, you can really only use a few cores for most things, so you need those cores to be as fast as possible. Intel are very good at that: the i7 and latest Xeons are packed full of really clever tricks to shave every microsecond off the execution time of individual instruction streams, so you can get the next frame of your FPS drawn in time, or get that complex protein molecule calculated and drawn that tiny bit faster. Having more cores doesn't give you much benefit, if any, despite all the recent effort to make tasks multithreaded wherever possible. To double the speed, you can end up quadrupling the cost - and if you need the speed, that's where you go.
Facebook, on the other hand, have millions of requests coming in each second. More and cheaper cores are a big win for them: half the performance for a quarter of the price means they can get twice as much throughput for the same money.
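That "half the performance for a quarter of the price" claim works out like this (illustrative numbers, not real benchmarks):

```python
def throughput_per_dollar(perf: float, price: float) -> float:
    """For embarrassingly parallel request serving, value = perf / price."""
    return perf / price

fast_core = throughput_per_dollar(perf=1.0, price=1.0)    # big, fast core
cheap_core = throughput_per_dollar(perf=0.5, price=0.25)  # small, cheap core

print(cheap_core / fast_core)  # 2.0 - twice the requests served per dollar
```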
Intel see this too, of course: it's why they've been adapting their Atom core for server use, as well as portable. It'll be a tough fight, though: their biggest asset, Windows compatibility through x86, just doesn't apply to these big Linux server farms: a LAMP stack should be just as happy on ARM as on x86.
I wonder if/when Intel might get a big Atom sale to the likes of AWS or Azure? Or, conversely, when Microsoft might get round to offering ARM builds of Windows Server...
Unlimited with lots of limitations
I'm really growing to hate "unlimited" offerings for exactly this reason. Barring illegal use, what does it matter to my ISP whether that gigabyte of data I just sent was going to visitors to my home-hosted website or me uploading a batch of photos to Flickr? Why do they pretend "server" use somehow matters?
Of course, it's all a dodge to try to clamp down on (some) heavy users. I'm very glad to be with an ISP now which does have traffic charges, but no arbitrary BS prohibitions: I have a static IP, no filtering. If I want to move my personal website onto my home connection, or be able to VPN to my home LAN, or do my own SMTP and DNS service, they're quite happy - I'll just have to pay a bit extra if it uses more bandwidth, because that's the bit that actually costs money.
Even Google can't offer a truly unlimited-usage gigabit pipe to everyone: it just doesn't work. Instead of trying to pretend it matters which end sends the TCP SYN packet and which is replying with the SYN|ACK, why not accept this and charge, say, $1 per 10 GB? Heavy users cost more, and pay more accordingly - and priced at a level that actually reflects costs, it's fair and cheap enough not to deter anyone sane.
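A tariff like that is about as simple as billing gets - hypothetically:

```python
def monthly_bill(gb_used: float, dollars_per_10gb: float = 1.0) -> float:
    """Flat per-byte tariff: no caring who sent the SYN, just volume."""
    return gb_used / 10 * dollars_per_10gb

print(monthly_bill(50))    # 5.0   - light user pays $5
print(monthly_bill(1000))  # 100.0 - a 1 TB/month heavy user pays $100
```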
Not in Chinese hands, he left it in .. China?!
Leaving information in Hong Kong doesn't seem like a very effective way of keeping it out of Chinese hands to me. Now, maybe those journalists did quickly get it out of Hong Kong again, to somewhere safe - but his confidence in that seems a little optimistic to me.
Of course, the NSA thought they had technical precautions in place to stop Snowden getting most of the information he's been leaking, up until it was too late. How can Snowden be so sure his own precautions are better than theirs - or that he's the first to breach the NSA's own? People have been speculating a lot about NSA-inserted backdoors in other products: what about PRC-inserted backdoors in the NSA itself? Were the holes Snowden exploited to get his haul of information all accidental or his own, or was he following in the footsteps of others?
It just seems a bit far-fetched to me that if one lone idealist could do this just as a matter of principle, the Chinese, Russian and other intelligence services wouldn't have gone the same route too. If they did, would the NSA know it (they didn't spot Snowden in time!) and would they admit it if they did?
Re: Enjoy testing.
That may be an unexpected benefit of the "fake capacitor" fiasco of a few years back - remember all those bulging/bursting electrolytics which used a (badly) copied formula? I've seen a few motherboards lately (like Gigabyte's "Ultra Durable" range) bragging about using solid polymer capacitors, rather than traditional electrolytic ones with liquid electrolyte vulnerable to the problem you mention.
Besides, when you're buying on Facebook's scale ("We're thinking of filling our new datacentre with your motherboards, will you help us get them working in liquid coolant?") the suppliers have a rather bigger incentive to help than if you or I asked the same question about a home system: I'm sure they'll know, or investigate, what it takes to do this properly. Running a few test systems for years wouldn't bother them, and the component manufacturers probably have a fair idea already: immersion and liquid cooling's not just for PCs, after all.
He might have a bit of a point, given Google being caught illegally snooping on people's electronic communications recently - they do seem to have been getting off far too lightly with that, as the extent of their antics slowly becomes known.
It's disappointing to me they seem to have focussed more on the reporters illegally buying information, rather than the police illegally selling it to them: both crimes, but IMO the latter is the worse. Selling drugs is much more serious than buying them, the police breaking the law is much more serious than 'ordinary' people doing so.
None of that's a defence, of course: if you break the law and point out someone else committed a bigger crime, the correct answer is to prosecute both. Jail the bent journalists, their police sources, and whoever at Google thought it was OK to go poking around everybody's WiFi network and harvesting all the data.
It's funny to see IBM having had the *lower* bid, by a substantial margin: one word that has never been associated with IBM is "cheap". For Amazon to be 50% more expensive yet better overall, there must have been something quite strange going on: were IBM only doing half the job, or had Amazon got some wonderful extra to offer that was worth a huge extra fee?
I could imagine the positions reversed: Amazon beat IBM's price by a third, IBM get upset and challenge. For Amazon to win with a much higher price, though? Maybe IBM's last bribe^Wcampaign contribution check bounced, or went to the wrong side - or maybe Amazon had something really clever on their side. Or did IBM's bid carry big vendor lock-in risks? Getting shackled to a proprietary hardware+software platform might be worse than the initial 33% saving, after all.
Amazon office offering
So far, it's Google Apps v Office365, and Azure v Google Cloud v AWS: Amazon haven't shown an interest in the office side, just cloud services.
It's not hard to imagine them offering some sort of e-mail service, though - they've already added static web hosting, DNS and e-mail sending facilities; adding inbound (MX) handling, spam filtering and POP3/IMAP wouldn't be a huge departure. Trying to offer their own office suite, though? That would be a huge leap for them, and a strange direction to move in, too.
"European readers will be looking slightly smug at this point, after the EU's digital tsar 'Steelie' Neelie Kroes forced telcos to cut roaming charges to a pittance"
Misplaced smugness in this case: Kroes is making a fuss about *reducing* the roaming charges *within the EU* - T-Mobile eliminates them for a much larger number of countries. Rather like greeting news of an international airport with "yeah? We've already got a BUS STOP, so there!"
I like the idea of scrapping roaming surcharges, but T-Mobile are going a lot further here than Kroes has even contemplated attempting so far. I hope they're starting a trend here!
Rejected for acting under duress?
Given the NSA's status, I doubt it was a case of the NSA asking nicely for their help in snooping and all the companies rolled out the red carpet for them - more a case of "we're going to connect these boxes up, and you're not allowed to peek inside or monitor them or we'll make you disappear in secret".
Right now we know these companies are fighting in court over it, despite ongoing gagging orders: is the EFF really attacking the right people here?
So ... like cookies, but Microsoft can see and control them, while users can't necessarily delete or even view them, since they live on MS servers? I'll be rushing to implement that ...
For individual sites/domains you can already achieve this for free, with session state stored on your own servers. Presumably, this is all about sites sharing data across domains, outside the user's control - no thanks. I see the appeal there for advertisers, of course, but for users?
Google's ISP ambitions
Given Google's plans to build gigabit-per-user fibre services, how can they of all people now be struggling with congested WiFi? Presumably, a backhaul issue - did they cheap out and try to use wireless mesh, instead of running fibre or copper to each AP?
Re: $100,000 worth of equipment...
By some tragic coincidence, the arc just happened to pass through the box temporarily storing all the team's government-issue crappy mobile handsets, which due to budget constraints will only be replaced if they break...
(Having had a user "accidentally" slam his laptop shut with the plug sitting on the keyboard, prongs upwards, the week after his office-mate got a shiny new Apple thing, it takes a lot to surprise my inner cynic...)
"Definitely. If it's good enough for a home network NAS, it's good enough for multinational industries."
Seems to work pretty well for Google, Backblaze, Netflix... All the really big public screwups seem to revolve around 'enterprisey' storage going bad, not commodity clusters.
Having worked in one university and studied at another, the one using clustered commodity hardware gave far, far better results than the big-budget SAN managed, even though the SAN cost several times as much. (OK, not colossal installations: a few petabytes of storage, a few gigabits of traffic - enough to run most big businesses, though.) If an architecture is good enough to run Google's business but not yours, you're either doing something very special, or you're doing it wrongly - probably the latter.
Cisco need QNX, which is the obvious target for them, being the basis of the top-end IOS (not that one!) platform. Maybe the odd wireless patent or something relevant to their VPN offerings, too, but that's probably all.
SAP - maybe the BES document handling? They're the odd one out in this list to me.
Google ... mobile patents to use against MS/Apple in protecting Android, of course, and probably BBM: a good way to one-up Apple's iMessage and open things up, if they want to go that route.
Apparently, BB owns 130 encryption patents they bought from Certicom 4 years ago, which presumably Cisco, Google and SAP would all have an interest in. The handsets, though? Hard to see anyone buying that up now: it's coming fourth in what is barely a three-horse race now, Android/iOS/WinPhone.
Re: Maybe I'm just too old a fart and remember things that should've been forgotten...
How is a Windows server any better suited to that than a Unix one? Cheaper hardware, thanks to the higher volume, but a less reliable OS with restrictive licensing? (We were a mixed Solaris/NetWare shop in those days; NW was pretty good at the file/print handling, Solaris did everything else very nicely. Windows really didn't have anything to offer on either side.)
With hindsight, I really wish the Linux/BSD push had come that little bit earlier - a much more sensible migration path from proprietary Unixes. Still, it's doing pretty well these days...
Too little, too small
It's been irritating me for a few years now that laptop monitors suddenly leapt backwards. After two decades of progress, from crummy VGA (and lower, sometimes mono) up to 1920x1200 17" or larger becoming a standard option at the top - now, Apple drop 17" entirely, the other manufacturers downgrade to 1920x1080 and brag about this being "full HD" as if that somehow makes it OK to be a step down.
Retina does sound nice, but I want more screen area on my laptop dammit! Start by giving the 2" or 120 pixels back, then try making some progress again...
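To put rough numbers on that downgrade (my own arithmetic, not figures from any manufacturer):

```python
# Pixel-count comparison between the older 1920x1200 laptop panels
# and the "Full HD" 1920x1080 panels that replaced them.

old_w, old_h = 1920, 1200
new_w, new_h = 1920, 1080

old_pixels = old_w * old_h   # 2,304,000
new_pixels = new_w * new_h   # 2,073,600

lost = old_pixels - new_pixels
fraction_lost = lost / old_pixels

print(lost)                 # 230400 pixels gone
print(f"{fraction_lost:.0%}")  # 10% of the screen area
```

A tenth of the working area vanished, and it got marketed as an upgrade.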
Which current level?
If they're all to be the same, is that the half-amp (2.5W) of regular USB, the 1A (5W) of the higher-power option, or the roughly 2A (10W) some tablets draw?
I like the idea of a standard connector, and micro-USB is adequate for that (though like other posters here, I much prefer mini) - but demanding the charger itself be the same seems too restrictive. Why not just take the sensible route of stipulating the charger must have a regular USB port? That way, everything's fine: just need a USB to micro-USB cable, which everyone will have anyway for syncing etc, and it actually makes the charger more useful too (can use it for non-phone purposes too: tablets, charging those external 'emergency' batteries, etc).
Now, if they were to push for a standard, say 48V at 1/2/5 amps with a standard plug, for laptops, I'd be very happy. (Lower voltage means more current, and high-end laptops can be drawing insanely high current which is why the plugs are getting thick and prone to failure. Manufacturers probably love this, given the prices they charge for current power supplies...)
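The current-vs-voltage point is just I = P / V; a quick sketch (the laptop wattage and the 19.5V brick are my assumptions for illustration, not any spec):

```python
# Back-of-envelope current draw at various supply voltages: I = P / V.

def current_amps(power_watts: float, voltage: float) -> float:
    """Current needed to deliver a given power at a given voltage."""
    return power_watts / voltage

# The USB charging tiers mentioned above, all at 5V:
print(current_amps(2.5, 5))     # 0.5 A (standard USB)
print(current_amps(5, 5))       # 1.0 A (higher-power option)
print(current_amps(10, 5))      # 2.0 A (tablet chargers)

# A hypothetical 150 W laptop:
print(current_amps(150, 19.5))  # ~7.7 A through a typical 19.5 V brick
print(current_amps(150, 48))    # ~3.1 A at the suggested 48 V standard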
Heck of a broad problem scope
So, not a problem with, say, "Intel's 2012 mid-range AHCI controller" - but "AHCI". As in, "pretty much all SATA controllers from the last decade" - and as the first two comments here point out, since everybody else has figured out how to work with AHCI perfectly well by now, it does sound as if VMware have done something very dim. Apart from anything else, a problem with all AHCI controllers seems to rule out specific implementation issues on the controller side.
Still, just as well it's been caught in beta: they'd be in a whole world of pain if their SAN got caught eating production data in a full release.
Trust our security precautions, they say...
They claim it's safe to trust them with all our data, because their security systems stop people snooping on things they shouldn't. So ... where was that security when Snowden was downloading what is starting to resemble their entire stash of secrets? If it's not good enough to stop him grabbing every classified document under the sun, how on earth are we supposed to believe it's good enough to stop all his thousands of colleagues looking at my email or phone records?
There is something rather absurd about the boss saying "trust our security precautions to keep you safe from our staff", when we only know those precautions are needed because their security was so comprehensively breached by one of those same people!
"Limited availability phase"
Sounds like it's still a beta to me - which would explain why they haven't got the documentation polished up properly. If a client has paid to become a beta tester, though, I'd expect a much more pro-active response to their problems - apart from anything else, because that's the whole point of the beta phase, to find these problems before you release it fully!
Re: 'Whiff of octogenarian media lord sends 1 in 5 running'
I use Be for the office line - no changes yet, apart from a few messages saying "you have now been assimilated, but nothing is changing yet besides the name on the invoices". It certainly doesn't help that they've done such a lousy job of communicating with customers about what they have planned - assuming, that is, that something IS planned, beyond "let's buy this ISP, it's got lots of customers we could have".
I would have said the recent network congestion problems could be a much more potent reason to switch - after all, we generally picked Be for the performance in the first place - and still no sign of IPv6 either. Moreover, there was a disruptive network upgrade being rolled out; I had a few emails from Be amounting to "at some point in the next year your line will stop working, at which point you will have to upgrade your router firmware and do a funny dance to get it working again". Being the sole broadband service in that office, carrying two VoIP trunks as well, that was almost reason enough in itself to switch ISP (at least that way, I'd know and have some control over the switchover date!)
It's a shame, in multiple ways: Be were a fine 'power user' performance ISP, and somehow I doubt Sky will do a good job preserving that. Still, it could be worse: I was a Nildram customer once, and watched in horror as they got Borged by TalkTalk and their Chinese censorship machine, of all people.
Do as they say, not as they do?
Google.com? No DNSSEC there. Google.co.uk? Same. Likewise OpenDNS.com.
It's depressing: when even the DNSSEC *advocates* aren't actually enabling it themselves, who will? (FWIW, my personal domain has DNSCurve and DNSSEC, as well as IPv6 - it's truly disappointing that Google don't!)
Anonymous nuisance calls
One thing I'd like to see changed is a bar on non-personal use of 'number withheld'. EU rules give individuals the right to make anonymous calls free of charge - but does that apply, and should it apply, to businesses too? I don't think so. That, and make anonymous call rejection a no-cost option to be offered to all customers at installation time, along with being ex-directory etc.
The £90k fine is a welcome first step, but nothing like enough: disconnection from the phone network - no more phonespam - would be better. The US approach of charging hundreds/thousands of dollars per illegal call documented would be good too.
Maybe grab their outgoing call records (from the telco; should go back at least a few months for billing purposes anyway), check against the TPS (Telephone Preference Service) register; for every hit, require them to provide proof of the 'pre-existing business relationship' or specific consent required for that call to be legal, or pay £1000 for the illegal call.
Anything less, they'll just shrug it off as a cost of doing business - hell, they probably pay more than £90k for their electricity each year!
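The scheme above is simple enough to sketch; all the names and figures here are my own illustration, not any real Ofcom/ICO mechanism:

```python
# Rough sketch of the suggested per-call penalty: £1000 for each call
# to a TPS-registered number where the caller cannot show consent or a
# pre-existing business relationship.

PENALTY_PER_CALL = 1000  # pounds per unlawful call

def total_fine(tps_hits: int, calls_with_consent: int) -> int:
    """Fine owed, given how many TPS-listed numbers were called and
    how many of those calls the company could actually justify."""
    unlawful = max(tps_hits - calls_with_consent, 0)
    return unlawful * PENALTY_PER_CALL

# e.g. 500 TPS hits found in the call records, consent proven for 20:
print(total_fine(500, 20))  # 480000 -- rather more than the £90k fine
```

At that rate, even a modest phonespam operation racks up a bill nobody can write off as overheads.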
The idea of needing to pay for "title and excerpt" of a web page sounds alarming to me - all too close to the shake-down news companies have been trying in Germany with Google, demanding that Google pay for the privilege of including their pages in Google's search results.
Maybe Meltwater's "excerpts" were too big, but it still worries me in that context. If they were copying whole articles, fine, that's a clear-cut violation, but ...
"whoops, we lost £21m down the sofa" ... I can imagine getting 10k, maybe even 100k adrift on a big company (depreciation calculations, things like that) - but even on an NHS/government scale, £21m is a big accounting hole to explain.
I got quite worked up enough when the last quarter was about £300 out (couple of misplaced expenses forms)!
Nice to be quoted in a Reg article there*! From their later updates, it seems the original problem was excess route entries overflowing the TCAM (fast lookup memory in big routers), which makes them fall back to a much, much slower routing approach - which can't keep up with the level of traffic you see on an ISP backbone.
That was problem #1: the lab stress-test leaked out and flooded the live network, bringing it to its knees. (IPv6 itself was unaffected - it's routed separately anyway - but since the line is authenticated over IPv4, as soon as you disconnect, your authentication packets get lost, stopping the PPP link coming back up.) Like their autopsy says, filtering out the routes then re-booting the routers cleared the problem - then they discovered a Cisco line card was still misbehaving.
The wonky line card seems to have been problem #2: it seems to be working again now, but they're keeping most traffic away from that card until Cisco can figure out why it misbehaved in the first place.
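To see why a TCAM overflow is so painful, here's a toy model (entirely my own illustration, nothing to do with Entanet's or Cisco's actual implementation): routes that fit in the fast table get constant-time lookups, while overflow entries fall back to a slow software scan.

```python
# Toy router: a fixed-size "TCAM" dict for fast lookups, with overflow
# routes pushed onto a slow linear-scan software path.

class ToyRouter:
    def __init__(self, tcam_slots: int):
        self.tcam_slots = tcam_slots
        self.fast = {}   # "TCAM": hardware-speed lookup table
        self.slow = []   # software fallback path

    def install_route(self, prefix: str, next_hop: str) -> None:
        if len(self.fast) < self.tcam_slots:
            self.fast[prefix] = next_hop          # fits in the TCAM
        else:
            self.slow.append((prefix, next_hop))  # overflow: slow path

    def lookup(self, prefix: str):
        if prefix in self.fast:                   # O(1)
            return self.fast[prefix], "fast"
        for p, hop in self.slow:                  # O(n) linear scan
            if p == prefix:
                return hop, "slow"
        return None, "miss"

r = ToyRouter(tcam_slots=2)
r.install_route("10.0.0.0/8", "hop-a")
r.install_route("10.1.0.0/16", "hop-b")
r.install_route("10.2.0.0/16", "hop-c")  # TCAM full: goes to slow path

print(r.lookup("10.0.0.0/8"))   # ('hop-a', 'fast')
print(r.lookup("10.2.0.0/16"))  # ('hop-c', 'slow')
```

Once a meaningful share of backbone traffic hits that slow path, the box simply can't keep up - which matches the symptoms they described.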
* Apparently the router problem took out their VoIP lines and access to their own status system: the downside of eating your own dogfood as an ISP is that when your own services are down, you can't use them to communicate about the problem. Normally I've found Entanet to be very helpful and responsive; in this case, the live traffic graphs on noc.enta.net were enough to tell me it wasn't just my line that was dead. How many other ISPs give you that kind of detail?