1577 posts • joined 12 Jun 2009
This sounds like it may be compiling these apps to NaCl? Sounds cool, but I thought NaCl was at least portable between Chrome running on different platforms. Apparently not?
Agreed, supporting ESR releases seems sensible. You want to use a newer version? It'll probably work but I wouldn't want to have to retest everything every 6 weeks either.
"None of us can do much about the need for Windows to reboot."
Yeah, unfortunately. The big reason for this: POSIX (the UNIX standards) permits deleting an *in-use* file and replacing it with a new one (the old file's space is not freed up until the last program closes the old file). So, *generally*, Linux only requires restarting to boot into a new kernel. (And I saw a patch that allowed going straight from one running kernel to another without a reboot; how, I have no idea, I would think any changed data structures would break it...) On Windows, these files must be updated while the system is shutting down or booting up.
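A quick way to see that POSIX behavior in action, a minimal sketch that assumes a POSIX system such as Linux (on Windows the unlink would simply fail while the file is open):

```python
import os
import tempfile

# Create a file, keep it open, then delete it while it is still open.
f = tempfile.NamedTemporaryFile(mode="w+", delete=False)
f.write("old library contents")
f.flush()

os.unlink(f.name)                  # the directory entry is gone immediately...
assert not os.path.exists(f.name)

f.seek(0)
data = f.read()                    # ...but the open handle still reads fine
assert data == "old library contents"
f.close()                          # the disk space is only reclaimed here
```

This is exactly why a package manager can overwrite a shared library that running programs are using: the old copy lingers invisibly until the last user closes it.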
I must get in my 2 cents... doing the regular updates in any given Ubuntu install, I have not had problems with updates introducing bugs. Following Debian's roots, the updates to a LTS (long term support) version are very conservative and mainly fix bugs and security holes.
Just depends on how it's operated...
I think Hargrove hit the nail on the head. It's not that IT is incapable of grasping what cloud means. It's that some companies want to keep their information in-house, or are required to. Or they already have well-managed infrastructure that doesn't make cloud make sense. Or they might have data in a mainframe (which will scale out as needed). Or they've analyzed costs and cloud doesn't make sense... cloud saves the most when there are unpredictable bursty loads, since you can rent some extra machines to cover the burst; if the load is predictable, once some economies of scale are hit it should cost less to run your own infrastructure than to outsource it to the cloud, since the cloud provider wants to break even plus make some profit.
I think the issue here...
I think the issue here may be EU wanting to continue strong data protection laws, whilst UK is probably following the US line of "Well, let's 'balance' this against the need for whatever" which means watering down and removing people's rights. If you have UK and only UK (out of EU members) arguing to water this stuff down, then eventually the rest will just quit listening to them.
Re: Licence fee payer travelling overseas
"Well you're 'forced' to pay rent and services on the home you have in the UK too aren't you? You should demand the right to not pay for any local rent or water and demand a free hire car whilst you're away too."
Nope, he could go ahead and have services shut off. He also has the right to sublet his home, or store his stuff there, or whatever; you know, make use of it. Unlike the license fee where he's apparently being forced to pay it but expected to get nothing out of it. Good on him for getting his money's worth!
If one were to say "Oh, those people heavily using telephones, they might be using them to illegally stream music; phone companies should be required to report all heavy phone users to us," they would be laughed out of the room. This is exactly as absurd.
What they said --^
What they said --^. CenturyLink's DSL also drops (for 30 seconds to 5 minutes) at least 4 or 5 times a week. It's not the copper, if you get service from a competitor (who has their own DSLAM across the street from the phone company central office here), it may go down once every 6 months.
As for Rackspace... I've heard about their zeal to buy them up. I'm not sure what to think, though, because CL has come out and said competing on price is a race to the bottom (which it probably is), so they aren't going to even try. Their current price is like 4-8x the price of their competitors, and they have no plans to lower prices. How will they compete then? Well... who knows. Their one big asset is their fiber-optic network, and they have made no mention of, say, providing below-average-cost transit to attract bandwidth-hungry customers. They seem to think they will just be able to market (way above normal cost) services to their existing DSL customers, despite the poor reliability of the DSL. Will they keep Rackspace together or rip them apart? I don't know.
I am not sure a DMCA notice actually obligates a site to prevent re-posting of the same information. The MPAA etc. have been pushing to obligate sites to install automated filtering systems, but as far as I know sites are not required to.
I think John Robson got it right -- underage pics got this subreddit banned. The confusion arises I think from all the *other* reasons an ordinary website would have already closed a problematic forum, or section, or (in this case) subreddit.
"ISPs and internet backbone providers argue large websites, such as Netflix, that make heavy use of the underlying network should pay towards the deployment and improvement of the web's infrastructure."
Which they do. They pay standard rates at the internet exchange points they connect to based on the capacity of the connection they get, i.e. if they generate more traffic they buy more capacity and so are paying more.
Fuck you to ISPs like AT&T and Verizon in particular... Firstly, for trying to make out that Netflix etc. are somehow freeloading when they pay a price based on capacity (Mbps or Gbps) of their connections just like everyone else -- in other words they ARE paying for their use by having to buy a high-Gbps capacity connection. And secondly, for trying to "double dip" by wanting Netflix (etc.) to pay a second time when they are already paying the internet exchange point. These ISPs have been laughing all the way to the bank for years, reaping high profit margins from their very high prices while neglecting to upgrade their infrastructure enough to handle year-over-year increasing traffic (well, not Verizon as much in their fiber-optic markets, but cable and DSL providers both big and small have run into this). This is simply not Netflix's problem; this is these ISPs choosing to put off capital expenditures in order to further increase profit margins, then expecting unrelated parties to provide them cash to spend on the expenditures they've put off.
"If it's a 2 in 1 you can argue it's enough of a laptop to command a laptop's price, with the convenience of turning into a tablet. That was the Metro dream after all. We'll see if consumers take the bait."
2-in-1s have been on the market all along, and potential customers have not taken the bait.
Why? Every one I've seen has *not* resembled a "laptop with the convenience of turning into a tablet." They appear to be an EXTREMELY expensive tablet (and actually expensive as a plain laptop for that matter), with a nasty rubber keyboard, and saddled with power-hungry Intel chips (this chip will help with that part) and Windows 8, which makes it not so great as a tablet *or* a PC.
Until some vendor starts shipping ARM notebooks, this Core M may be an acceptable stopgap. But, not if they only stick it in tablets and "2-in-1s". I think tablets are a non-starter, ARMs are even lower power. And 2-in-1s... oh, boy, *1* USB port! I want a real keyboard and not to spend money on a touch screen I'll never use, thanks. Also, not if they saddle it with Windows 8 -- blank please! I've taken a hard line, I will not pay Microsoft a single penny for software I'll never use; my recent solution has been to buy only used hardware.
" Further, what cell phone company puts up towers for NON-paying customers regardless of their location? And why would the location of the tower have anything to do with the presence or absence of encryption?"
I'm thinking perhaps microcells or some DAS (distributed antenna system) type installations? They tend to be added by an end-user who wants to fill in a coverage hole, and (since it's meant to cost like $100, much less than a cellular base station) it may not follow the usual standards and practices of the given cell company.
As for issues like phone cos failing to keep notifications of encryption being disabled etc.... I just don't get it. Why do these companies feel in any way they need to "help law enforcement"? Law enforcement is not their customer, and law enforcement can go ahead and help themselves.
Not to minimize this..
I heartily agree with this article BTW.
Not to minimize the inappropriateness of this type of behavior, but I do know a person or two who, once they are drunk, if nobody sets them off on a different topic, will talk about who they've banged, want to bang, exes and who they are banging; if some TV show or movie's on, they'll try to turn every line into a double entendre, with "Oh, what I'd like to do with her..." for 2/3rds of the actresses that come on the screen. When it's that over the top, it's pretty embarrassing among men too. I'm sure if they went to conferences, after a few drinks they'd be a real horse's ass to any women at the conference while they're at it.
Well, when I looked at the specs for Sanbolic, for example, it could use EMC and a few other storage arrays; it supports "cloud storage" (no, I would not use this either...), and it supports flash and whatever disks you throw into the systems. They describe this setup as "virtual RAID"; whether it's disk-level like RAID, or file or block level, or uses its own distributed file system, I don't know. It does look like these setups all push using a pool of local disks for storage.
I have noticed more modern servers no longer have the space to stick a good 5 or 6 disks into it, but as far as I know storage chassis are still on the market, so you can hook plenty of disks up to each server if you want. Of course if your usage is extremely storage-heavy (compared to number of servers) you really won't want to do this. It's definitely workload dependent.
Yeah, personally, I would just make it large enough to hold a battery and two layers of LCD backlight.
"The manual control requirement is perfectly reasonable, but the insurance requirement is kind of ridiculous. Human-driven cars only need $35,000 worth of liability insurance."
For now, I think it's due to the fear of a faulty design just locking up and plowing through... well... more than $35,000 worth of stuff.
These designs are experimental, after all. What are the failure modes? What should be done in case of a catastrophic fault (for example, if the computer locks up or crashes)? I seriously doubt it'd cause $5 million in damage, or even close to it. But the "big fear" is a faulty design where it just locks at whatever steering angle and accelerator position it was at; I do think they'll take safety precautions that make this unlikely. But does the car just suddenly come to a dead stop? That can be dangerous too if it's in the middle of the road or going around a curve. Does a secondary system try to pull it over to the shoulder? These are things that'll have to be worked out.
I expect that requiring $5 million of coverage from a commercial provider may be a way to get an insurance co to look at these vehicles as they would before insuring any new make/model of car, and see if there are ways the insurance co suggests to make the car safer that the engineers didn't think of.
I'm not the biggest fan of UEFI, but it'll make sure the system can boot. Using ACPI is a good move, it allows the vendor to still stick these serial ports etc. on however they want but let the OS know (via the ACPI tables and code) how to use these devices.
Could it be the codec?
Is it possible Vodafone uses a different codec? For instance, AT&T sounded **HIDEOUS** for years here in the US in most markets. They were using AMR-FR (full rate) in just a few markets, and AMR-HR (half rate) *all the time* in most markets. T-Mo at that point was usually running AMR-FR (full rate), with half rate used only if the site was busy, and then only if your signal strength was good enough (i.e. closer to the site -- since half rate has poorer error correction, calls would go back up to full rate further from the site). AT&T apparently got over this more recently. T-Mobile now advertises "HD" calls, running I think a 14kbps codec (which is a little higher than the usual AMR-FR).
I won't compare with Sprint and Verizon, the CDMA codecs are quite different, and there is no defined "half" and "full" rate on this setup. It's still possible to tank call quality by setting average call bit rate too low though.
Not seeing the problem here.
I'm not seeing the problem here at all.
When you get right down to it, Cloud = (usually large scale) server farm + hype. Any "cloud" improvements should amount to being able to move VMs among machines better & more easily, use pools of storage better & more easily, and deploy VMs more easily, all things that are good to have even if you don't consider your physical servers you are running VMs on to be a "cloud".
vGPU itself is admittedly pretty useless for servers, but also doesn't take away from everything else being developed. But, any improvements in latency that may be being done with an eye towards desktop virtualization will still reduce the latency on server VMs as well.
This is S.O.P. (Standard Operating Procedure) for Microsoft. When they release a failed version of Windows, they claim the next version is coming out *real* soon now, just a few more months now... for however many months or years it takes them to get the next version ready. Regarding Windows 9 being ready (or a test version being ready), I'll believe it when I see it (being reviewed online or whatever) and not a day earlier.
The thing that always amazed me about this technique... and indicates Microsoft's continued anti-competitive abuse of monopoly position*, is that when a normal company ("company A") says "something much better is coming out soon!", it usually *decreases* sales, as people hold out for that better model, or buy from someone else whose product seems a bit ahead of company A's.
*Not 100% monopoly, but for anti-trust purposes a "monopoly" is generally defined as >90% of the market, and abusing this position to maintain and extend this monopoly, for example by making agreements force-bundling their software with almost every PC sold and not honoring the license clause saying the software can be returned for a refund.
So what about refunds? Google Market *used* to have some dubious apps (and I'm sure Google Play at least has a few still.) But, they have a "no questions asked" 15 minute refund. That's not long, but it's long enough so if you paid $8.99 for an app that says "itunes? Here's the download link for Windows", you can give it a quick 1-star rating and return it. Sounds like Microsoft is not even doing this!
Re: Another Ballmer stuffup
I don't know what CloudVolumes, Xenapp, or Windows have to do with Apple. But, yeah, OSX runs on the Mach microkernel, so Apple went to a microkernel effective 2001.
Just to be clear, I don't think Steam games are missing due to Linux drivers -- even the ATI ones are pretty good. (And the Intel ones have vastly improved the last year or two). I could be wrong, but I think a few of the games on SteamOS are Linux native, and quite a few more are still "for Windows" but are running under (I would assume customized) wine. I would fully expect Valve will improve wine compatibility, and as they do more and more of these other games will then be "SteamOS compatible."
Luckily for everyone involved, a main weak point in stock wine is that they are unwilling for legal reasons to put in workarounds for rights restriction systems (DRM) on games (one system for example tries to load a kernel module -- which of course doesn't work since wine doesn't have an NT kernel to load modules into; a few other DRM systems decide things look a bit fishy, conclude they are running under a debugger, and abort the game.) Steam games usually have disabled any DRM systems the non-Steam (CD or downloadable) versions of the game may have.
For those who have not seen games or apps run under wine... compared to installing a game in Windows, it installs so much faster in wine you'll think the installer has malfunctioned (the "write out a bunch of small files" workload is MUCH faster in wine on Linux than in Windows). Running typical apps (those that work), those that are CPU-bound run the same in Windows or Linux; those that make intensive Windows calls tend to run a bit faster with lower CPU usage compared to Windows. Games of course are not a typical app. I would estimate (a few years ago) that the overhead of running a Direct3D game was approximately 10-20% (due to Direct3D->OpenGL conversion overhead and possibly slower drivers at the time); running games that support it in OpenGL mode avoided this overhead (since the OpenGL calls can essentially be passed right on).
Yeah, Ubuntu goes psychedelic too
"I don't know what he did here. I have the same BRIX platform running SteamOS and I have no graphics issues."
I don't either, but I've seen it -- the Ubuntu boot logo (the word "Ubuntu" with some dots underneath that change color as it boots) shows up all psychedelic on boot on probably 1/4th of the systems I've installed it on (JUST the boot logo; once it boots into X it looks fine, and text consoles also look fine). It doesn't seem at all consistent: I'm sure I've seen two computers with the same video chip (something old like an Intel 945) where it did it on one and not the other, and it's not consistent based on whether you have an ATI, NVidia, or Intel chip either. I haven't noticed it with Ubuntu 12.04.5 (or other 12.04.x versions with the newer "Trusty" kernel and X installed).
So, I don't think he did anything, I think Steam has the same buggy boot graphics code that Ubuntu does (both inheriting the Debian code probably) and he's got one of the 1/4th of installs that goes funky on boot.
Normally, I love slamming Apple for gratuitously and unnecessarily using non-standard plugs for no good reason. And I still do. For instance, running thunderbolt on the device end is still dumb.
But, if the goal on the other end of the cable is a reversible USB cable? It makes much more sense to do what Apple did (make a reversible USB cable) than to go with USB type C (apparently reversible, but not compatible with any existing USB connectors.)
"I wonder if they know where the sources are for NT3.1"
This is one thing I've seen on some embedded Linux devices; they'll have a pretty old kernel (not NT 3.1 old but pretty old), but an arbitrarily modern userland on there.
I could see this transition as genuinely not being successful. If you have large quantities of windows-specific apps (that don't run cleanly under wine), then running some kind of "Linux + lots of Windows desktop use via Citrix" doesn't really help much.
They would be selling off 600MHz spectrum. Sprint has a small amount of sub-1GHz spectrum, and T-Mobile has none. (Hopefully) AT&T and VZW would not pursue this when they already have 700MHz spectrum.
For historical perspective... we've got VHF channels 2-6 (VHF low) and 7-13 (VHF high). These aren't used much with digital TV (ATSC can't deal well with the kind of burst noise from motors etc. that VHF gets and UHF pretty much doesn't); unfortunately 3(!) local channels use VHF for me (1 is close, and the other 2 just don't come in at all). UHF was originally 14-83. When 850MHz cellular went online this knocked it back to 14-69. Now it is 14-51 (and channel 51 is also being cleared as an additional guard band). There couldn't possibly be a need for that many channels here in the midwest, but in areas around NYC for instance (with Baltimore, Washington D.C., Philadelphia, Boston, etc. all within close enough proximity to cause potential RF problems) the dial's apparently rather crowded already.
One thing here that is VERY different than in the UK -- when channels went digital here, there was very little reduction in the number of actual physical channels being run (there are just about as many digital multiplexes as there were analog channels). The channels here went digital, put the old analog channel (but in HD) on the digital channel, and added maybe one additional channel (one of RTV, MeTV, AntennaTV, ThisTV, which play older movies and TV shows; these all appeared after the digital transition). I haven't heard of a single case where two separate (analog-era) channels have combined onto one multiplex, although it seems like a good way to save loads on the ol' power bill.
2 points.... plus a few more
1) The 386 got an MMU and in fact supports all the memory-protection features of a VAX. It is definitely possible to keep process address spaces separate and to use separate kernel and user modes, and these are all done by Linux, Windows, and Mac. (Android uses both the MMU and Java-style sandboxing... which is probably unnecessary since the MMU would already keep everything separate AFAIK.)
2) The VAX. I think, if you look objectively, you'll find these had plenty of security problems despite use of the MMU hardware. Note I'm not distinguishing between VMS, BSD for VAX, and Ultrix (UNIX for VAX) bugs here, just saying the VAX software had plenty of security flaws over the years. Among these, they shipped with a field service account, which for years was username: FIELD, password: SERVICE. Yup, walk up and log in and you've got superuser access. FTP with anonymous turned on, but able to read/write where you shouldn't, not just a special ftp directory. Doing a "cd .." with FTP or other utilities to escape the "top level directory" you were supposed to be restricted to. World-readable /etc/passwd files. Network utilities that would allow you to send a system a file, THEN ASK IT TO EXECUTE IT -- sometimes this utility would run as root, not user nobody! UUCP (UNIX to UNIX CoPy) had minimal to no security; you could request (for example) /etc/passwd off a system with this. On some systems, a user could submit cron jobs which would be run as root. This ignores the havoc packet sniffers could cause with everything unencrypted (encryption wouldn't have been feasible at the processor speeds of these systems). Seriously, though, the list goes on and on. See the Morris worm of 1988 (I'll get to that below).
I think you'll find the reason that *cough* certain OSes... are not as good security-wise is simply design and history. Quite simply, UNIX was designed to be multi-user (almost) from the start -- and programming practices since like the 1970s reflect this. Windows still supports methodologies from Windows 95 and older which assumed a single user account or complete system access. I'm quite sure there's some real messy code in there to support this. The big factor, though, I think: Microsoft didn't start to take security that seriously until the Nimda worm or so -- about 2001. UNIX vendors didn't usually take security all that seriously up through the 1980s either -- but they had their "Nimda moment" with the Morris worm back in 1988. Quite simply, they had a 13-year head start.
The problem is that programs in absolute isolation are just not that useful. Let's say you want to download a file, edit it, and print it. With perfect isolation... first, the browser or FTP utility or whatever would not be able to get anything on to the screen, since after all that would break isolation of both the utility and the display software. Let's say you could download the file. Then it'd be in the browser or FTP utility's secured area; the word processor would be unable to access it. Let's say you get the word processor to open it, and you edit and save it. Now the print driver or utility (if isolated) would not be able to get any information to print. It's these needed points of overlap that can be hacked and exploited to pwn a person's (let's face it, probably Windows...) system.
Do not piss off fans
Or they will piss off leaving you with no money.
Indeed. Major League Baseball has been apparently trying to figure out why viewership has dropped off... it was already dropping like a rock 15 years ago when I was in college. Why did it drop off? They make it so quite a few games are only available with expensive sports packages. With an antenna you certainly will not get enough games to bother watching it. And no online coverage without paying like $150 for a MLB online package (not even streaming radio.) Result? A lot of younger people don't have a TV (they watch videos on the computer if at all), and even fewer spend big money on some cable or dish package; and they aren't about to pay $150 to watch. So they don't.
Re: Fixing at the wrong layer
"I was thinking the same - why is the wifi base station not able to cache a half second or so of recent packets and re-transmit them as needed as part of a wifi protocol?"
It does. The default is 7 retransmissions of unacknowledged packets. Wifi is a harsh mistress, despite the retransmission mechanism it still has some degree of packet loss, particularly under poor RF conditions.
No sympathy from me!
"Despite the administrative incompetence, you have to have some sympathy with the councils."
I don't have to have any sympathy for them. Being transferred from paper to electronic records ***IS NOT*** an excuse to ignore a person's privacy. They should have either 1) Respected people's preferences; those on open registry could go on the open registry in electronic form, and those not on open registry would stay off open registry. Or 2) Transfer nobody to the electronic registry, if you have to register every year anyway then why wouldn't they turn on the electronic system and let it fill up with registrations as people actually register?
So, do you run into the problem there where companies sell, re-sell, and re-re-sell the same info? Or do these companies jettison your info once it's unavailable from the original source? Here, I think I could end up with some numpties putting my name on a list, and by the time I'd get the mail saying they'd done it, at least one company would already have my name from that list and would re-sell it for all eternity. Boy, would I ever sue my city council if they did this to me!!! My parents still get occasional junk mail for me, and I haven't lived there for over 15 years.
Thing that bothers me...
The thing that bothers me is how it appears Samsung was unaware a business partner of theirs went broke. Like (for the sake of argument; I don't think either is "on the ropes"...), if UPS or FedEx closed up shop, would Samsung US just keep trucking over pallets of stuff ready to ship and leave it at their doorstep? I do hope this works out for everyone with their Samsung stuff off in limbo.
Anonymous vs. Anonymous
Yeah I saw this; then a second Anonymous account is like (paraphrasing here) "Yeah we have a name too, and it's not the name those Anons released."
The good news, I think, the report of the name found being the wrong one seems to have spread far more widely than the wrong person's name and address.
I do think the PD (Police Department)'s behavior has been quite irresponsible though. Hiding the name of the (allegedly...) responsible party. The head of the PD pledged (I think Monday?) to take any lawsuit against the PD through the courts as far as possible (which is irresponsible to say before they had a chance to even investigate this properly). Indiscriminate use of tear gas and rubber bullets. And arresting and harassing the media as well as protesters.
It sounds like at this point, due to the level of misconduct the last few days in... umm... "crowd control" (plus building animosity against the local PDs at this point), the Ferguson PD (and probably other local PDs) are basically being kicked out of Ferguson, with the Missouri Highway Patrol taking over.
OpenOffice compatibility and cloud services
"The data in those documents needs to be liberated and ODF is seen as the way to do this – modern-day editions of Office also support ODF. Just don’t expect to install open-source OpenOffice on the desktop and open your old Microsoft Office docs. It won’t work – many documents won’t display properly"
Must call BS on this one. I have not heard of anyone having a problem with *many* documents not displaying properly. Don't get me wrong, some people found that the few documents of theirs that didn't display properly were absolutely mission-critical; but this is by no means some widespread issue. And, I would venture, neither are macros.
Secondly... if it were my gov't doing this, I would find it absolutely irresponsible for them to knowingly spend much more overall on a software subscription to save a bit up front. But, not surprising, gov'ts love to kick the can down the road when it comes to spending, even if they know it's going to screw them later.
To me, physical separation is the best way to go. Don't get me wrong, the rest is also important, but these systems should usually be totally separated.
If you have to connect them, use a very restrictive firewall. Remote diagnostics? Read-only access to the engine parameters. Some auto-park system or whatever that requires "write" capability to steer or brake? The firewall should allow only traffic from the CPU responsible for auto-park, and only the type of traffic the auto-park system actually uses. Most current exploits involve unusual traffic types, coming from ports and devices the traffic would normally never come from. Oh, and do make sure the firewall itself is secure; obviously it is not useful if an attacker can just change the firewall rules and then pass their traffic through.
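To illustrate the kind of default-deny filtering I mean, here's a toy sketch in Python. The source names and message IDs are invented for illustration; they are not from any real vehicle bus or ECU spec:

```python
# Toy allow-list filter for a hypothetical in-car gateway.
# Policy: every (source, message ID) pair is denied unless listed,
# and write access is only granted where explicitly stated.
ALLOWED = {
    ("telematics", 0x7E0): "read_only",   # remote diagnostics: reads only
    ("autopark",   0x1A0): "read_write",  # auto-park ECU may command actuators
}

def gateway_permits(source: str, msg_id: int, is_write: bool) -> bool:
    """Drop any frame not explicitly allowed for this source and ID."""
    policy = ALLOWED.get((source, msg_id))
    if policy is None:
        return False                      # default deny: unknown traffic
    if is_write and policy != "read_write":
        return False                      # writes only where granted
    return True

# Diagnostics can read but not write; unknown sources get nothing.
assert gateway_permits("telematics", 0x7E0, is_write=False)
assert not gateway_permits("telematics", 0x7E0, is_write=True)
assert not gateway_permits("infotainment", 0x1A0, is_write=True)
```

The design point is the default-deny: anything not on the list is dropped, so "unusual traffic from devices it would normally never come from" never reaches the safety-critical side.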
"Do I understand this correctly? Rimini provide some kind of oracle managed service but think that they can use other peoples licenses?"
Yeah basically... it looks like a customer would already be running Oracle; they didn't want to spend the big bucks on running it on their own hardware. So, Rimini would copy this software onto a system on their end instead.
So, first off, this is not "IP theft" in any sense of the words. Rimini and their clients in a few cases exceeded the letter of their license terms. But they were not exceeding the "spirit": they weren't exceeding number of users, or running excess copies of Oracle, or exceeding licensed hardware limits, or using the copy of Oracle licensed to one client to serve other clients, or really anything that should be any of Oracle's concern. But (just like Microsoft) Oracle has some extra-special clauses in their licenses which Rimini and some of their clients were violating.
Surprised he got off so light
I'm surprised he got off so light. I don't think he should get hard prison time and crippling fines, but I could get a $1000 fine off a couple of speeding tickets (well, not in my state...); I would think intentionally crashing a company's stock price* would warrant a *little* more than that.
*Well, crashing it through false information. Crashing it through true information should be fair game so long as you're not shorting their stock at the time.
Pop-ups are of course the most evil advertising ever invented; since the pop-up is actually not held within the page in any way, one can sometimes not even know which web page to blame for forcing pop-ups. May Zuckerman burn in hell for even thinking it was a good idea to use them.
I'm interested to see total tablet sales. Sales of expensive fondleslabs are dropping? No kidding. I can get a quad-core Android tablet at a local store for $50 (I knew tablets that cheap could be had by special-ordering from China, but I didn't expect to find them locally). Other stores locally have them marked up to more like $80 or $90. I'm sure people are thinking twice about spending $150+ (up to $500+) on a tablet when they can get a nicely spec'ed one for under $100.
We have the same thing here in the US... It's weird. I know there are usury laws (in Iowa these cap interest AFAIK at 29.99%... pretty bad still, but yeah). But payday loan places routinely end up with rates that amount to 300-400% APR (if not higher). It's pretty dumb for anyone to go for a payday loan, but these rates really are quite predatory; a lot of these people would not have trouble repaying even at the 29.99% rate, and they simply don't realize how badly that 25% (or whatever) per month builds up.
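The arithmetic behind "how badly that builds up" is easy to check. A simplified model, assuming the balance rolls over monthly with no extra fees:

```python
# Why a "25% per month" payday rate is far worse than a 29.99% APR cap.
monthly_rate = 0.25

# Simple annualisation, the number usually quoted:
simple_apr = monthly_rate * 12            # 3.0, i.e. 300%

# Effective annual rate if the balance rolls over and compounds each month:
effective_apr = (1 + monthly_rate) ** 12 - 1

print(f"simple APR:    {simple_apr:.0%}")     # 300%
print(f"effective APR: {effective_apr:.0%}")  # about 1355%
```

So a borrower who keeps rolling the loan over isn't paying 10x the usury cap, they're paying more like 45x once compounding kicks in.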
ipv6-literal.net not reserved.
" For this purpose, Microsoft registered and reserved the second-level domain ipv6-literal.net on the Internet."
Apparently not! Windows is just hard-wired to handle ipv6-literal.net addresses specially. The actual ipv6-literal.net domain is just owned by some kind of cybersquatter; in a browser, for instance, it goes to one of those generic ad pages with "IPv6" in the title.
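For reference, the transcription Windows uses (documented for UNC paths, which can't contain colons) is mechanical, so it's easy to sketch; the point is that the resolution is done inside Windows itself, never via DNS:

```python
def ipv6_literal(addr: str) -> str:
    """Windows' ipv6-literal.net transcription: colons become dashes,
    a '%' zone-index separator becomes 's', then the magic suffix is
    appended. Windows recognizes and resolves these names internally;
    no actual DNS lookup of ipv6-literal.net ever happens."""
    return addr.replace(":", "-").replace("%", "s") + ".ipv6-literal.net"

print(ipv6_literal("2001:db8::1"))
# -> 2001-db8--1.ipv6-literal.net
```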
My guess? Rolled out newer DB software. It corrupted something. On rolling out older DB software, DB still corrupted so it didn't come back up properly. Restored DB from backup and (possibly) brought backup up to date using transaction logs.
Obviously for reliability for a "cloud" DB, it should have been possible to upgrade a small portion of the DB machines (perhaps just 1 machine at first), and see it fail, without really impacting service since the rest would be running the old DB software.
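The canary idea above can be sketched in a few lines. This is purely illustrative: `upgrade` and `healthy` are hypothetical stand-ins for whatever the real deployment tooling provides.

```python
def canary_rollout(hosts, upgrade, healthy):
    """Upgrade hosts one at a time; stop at the first unhealthy one so
    the remaining hosts keep serving on the old software version."""
    done = []
    for host in hosts:
        upgrade(host)
        if not healthy(host):
            return done, host  # halt the rollout, report the bad host
        done.append(host)
    return done, None  # all hosts upgraded cleanly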
Is it a surprise that Sonos would no longer develop the app for older phones? No. Support it (i.e. provide tech support)? Maybe not (but I'd assume they'd have any bugs worked out by now for iOS 4 and 5, so it might not matter). But I'm surprised they'd simply be cut off; I would have assumed they'd run some audio protocol (even if proprietary), and have it stabilized enough by now not to make non-backwards-compatible changes.
Not even 3G? Wow.
Not even 3G? I'm just surprised, some networks already are going 3G-only.
3 of course was built out 3G-only; AT&T has areas where they had to turn off all GSM to run one 3G channel, so they did (in those areas); some of the Canadian carriers went directly from CDMA+EVDO to the "GSM path", but straight to HSPA (no GSM whatsoever). US and Canada use different bands, but excluding those I would hope for 1800/1900 GSM+3G and 2100 3G.
Otherwise, a phone that plays music and makes calls for a low price isn't glamorous, but it should sell.
Re: The higher the frequency,...
"As for range vs. speed, I wish they would give good range spectrum priority to voice"
VZW does this -- if they have 800MHz and 1900MHz spectrum, they run all their CDMA 1x (voice) at 800MHz. They only run any EVDO (3G data) on 800MHz if they have room; otherwise they run it all at 1900MHz. LTE did end up at 700MHz, but it doesn't really reach further than 800MHz CDMA (plus, who wants to deploy CDMA on brand-new bands in this day and age?)
The big problem: cognitive radio is another angle at trying to use "whitespace" radio spectrum. This is tricky! Intel and Microsoft went to demo a whitespace radio setup a few years ago that they were assuring everyone would do a great job of picking up signals and not stepping on them. It basically didn't work at all. Why? The relatively small antennas in the base and the mobile devices were not picking up a signal that was in fact there, and they stepped all over it. Thus the current solution of expecting all devices to do a database lookup before they can use a particular (potentially licensed) chunk of spectrum. I pick up my stations from about 60 miles away; the last thing I need is some phones not picking these up at all and stepping all over my shows. The mobiles do not have a directional, high-gain UHF antenna in them, so I don't expect they could detect there's a signal at all; the base may be able to if its reception threshold isn't set too low.
Well, I have an unlimited plan. Here's the info I've gathered on all this...
1) VZW does give an actual number. They say the "top 5%" cutoff is currently 4.7GB. They don't specify the throttle speed because they don't have one... these users will not be throttled to x KB/second on busy sites; they plan to allocate them some percentage of the channel while everyone else gets the rest, so the speed would vary depending on how many throttled users were on the site. Once this kicks in, people will I suppose report real-world speeds from this.
2) People were thinking the FCC's objection is due specifically to rules on the 700MHz C block that Verizon bought (introduced by Google) barring discriminatory network practices. However, this seems to really bar throttling *specific* services (i.e. if they were throttling streaming video or whatever) rather than barring throttling based on total usage. Now it seems (just in the last day or so) that the FCC is leaning on Sprint, T-Mobile, and AT&T about throttling practices as well, though, so this may not even be related to the C block provisions.
3) Personally I don't see the big problem. I'm paying $30/month for unlimited. VZW has a "promotional" 6GB for $30 ($5/GB) to try to get people off unlimited. Most plans run $7.50 to $10/GB, with the minimum being $30 for 2GB (nothing less available for someone who just wants to use wifi), plus about $40-50 of voice/text costs for unlimited voice and text (they don't give you a choice of getting less any more). Oh, ranging up to apparently 30GB for $300 or 50GB for $375 (plus voice plan). Ouch. I just don't see the issue of people paying this kind of price for each and every GB getting a little priority over someone like me grandfathered into a low flat rate.
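Point 1 above is worth a quick sketch: with proportional-share throttling there's no fixed cap, so a throttled user's speed depends on how many throttled users are on the same site. The 10% share below is a made-up number purely for illustration; VZW hasn't published one.

```python
def throttled_speed(cell_mbps: float, throttled_share: float,
                    n_throttled: int) -> float:
    """Throttled users split a fixed slice of a cell site's capacity
    among themselves, so their individual speed varies with load."""
    if n_throttled == 0:
        return 0.0
    return cell_mbps * throttled_share / n_throttled

# e.g. a 20 Mbps sector giving throttled users 10% between them:
print(throttled_speed(20.0, 0.10, 1))  # one throttled user: 2.0 Mbps
print(throttled_speed(20.0, 0.10, 4))  # four sharing it: 0.5 Mbps each
```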
You hack me? Hah I reverse hack you back. I take yo' photo.
"maybe less than stellar CPUs in Russia, could lead to advances in computer science instead"
Well, I do remember in the 1990s using a DOS disk cache that a Russian gentleman wrote. This sucker even did elevator sorting. Man, Windows 95 would start in less than 10 seconds with that cache on (versus about 40 seconds with it off). But the CD-ROM locked it solid with this cache on, so I had to take it off. Oh well. Linux got elevator sort a few years later, so I guess I have it back now 8-)
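For anyone who hasn't met it: "elevator sorting" (SCAN scheduling) services pending disk requests in sweep order across the platter rather than in arrival order, which slashes seek time. A minimal sketch of the ordering:

```python
def elevator_order(head: int, requests: list[int]) -> list[int]:
    """SCAN / 'elevator' ordering: sweep upward from the current head
    position servicing requests in track order, then sweep back down,
    instead of seeking back and forth in arrival order."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

print(elevator_order(50, [95, 180, 34, 119, 11, 123, 62, 64]))
# -> [62, 64, 95, 119, 123, 180, 34, 11]
```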
Anyway, I think ARMs could work pretty well for desktops and servers. There are tasks that are long-running and don't parallelize; for those you want a core that's as fast as possible. However, as long as a single core is fast enough that a desktop or server app is not sluggish, adding more cores and making each core much faster are equivalent in terms of adding more total processing power. Those ARMs could have plenty of cores and still save serious power in servers and portable PCs (and of course desktops, but people don't worry about that as much).